Shipped today
MQTT ingest uses QoS 1. The bridge deduplicates by client_request_id, writes a single inference_ledger row, and publishes the canonical reply when the bridge and ledger are reachable.
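The dedup path above can be sketched as follows. This is a minimal illustration, not the shipped bridge: `ledger` stands in for the real `inference_ledger` table, and `run_inference` and the `frame_id` field are hypothetical placeholders.

```python
import json

# Hypothetical in-memory stand-in for the inference_ledger table.
# The real bridge persists rows durably; keying by client_request_id
# is what makes QoS-1 redelivery idempotent.
ledger = {}

def run_inference(msg: dict) -> str:
    # Placeholder for the actual model call.
    return f"read for {msg['frame_id']}"

def handle_publish(payload: str) -> dict:
    """Process one QoS-1 MQTT publish. A duplicate delivery of the same
    client_request_id replays the original canonical reply instead of
    writing a second ledger row."""
    msg = json.loads(payload)
    rid = msg["client_request_id"]
    if rid in ledger:
        return ledger[rid]              # duplicate: replay canonical reply
    reply = {"client_request_id": rid, "result": run_inference(msg)}
    ledger[rid] = reply                 # exactly one ledger row per request id
    return reply
```

With QoS 1 the broker may deliver the same publish more than once; the sketch shows why that cannot produce two ledger rows.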
GPU mini-PC deployments share the same response schema as cloud and on-prem, and MQTT-based retry behavior is available today. Jetson appliance packaging remains on the roadmap, so the edge hardware shape is scoped per site during evaluation.
Hardware choices are scoped per site. Any latency expectation on this page is preliminary until we validate against your camera angle, lighting, and traffic pattern.
- Jetson appliance (roadmap): compact appliance profile. Latency expectations are preliminary until site-specific TensorRT validation is complete.
- GPU mini-PC: recommended edge shape when production throughput matters and customer sites already standardize on small-form-factor PCs.
- Lower-volume hardware: supported today. Expect slower reads and validate against the lane or camera workflow before rollout.
Edge deployments are designed for unreliable uplinks, but the repo currently proves some pieces and leaves others for the next phase.
The Python MQTT client and bridge are built for reconnect and redelivery, so duplicate publishes do not double-bill once the ledger path is available.
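The client-side half of that guarantee is minting the `client_request_id` once, before the first attempt, so every redelivery after a reconnect carries the same id. A minimal sketch, with a hypothetical `publish` callable standing in for the real MQTT client:

```python
import time
import uuid

def publish_with_retry(publish, payload: dict, attempts: int = 5,
                       base_delay: float = 0.1) -> str:
    """Retry a publish across reconnects. The client_request_id is generated
    once, so the bridge's dedup path sees every retry as the same request
    and no retry can double-bill."""
    payload = dict(payload, client_request_id=str(uuid.uuid4()))
    delay = base_delay
    for _ in range(attempts):
        try:
            publish(payload)            # e.g. a QoS-1 MQTT publish
            return payload["client_request_id"]
        except ConnectionError:
            time.sleep(delay)           # simple exponential backoff
            delay *= 2
    raise RuntimeError("uplink unavailable after retries")
```

The key design point is that the id is assigned outside the retry loop; regenerating it per attempt would defeat the ledger's deduplication.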
A packaged edge appliance with durable local-disk spooling while the uplink or ledger is unreachable is still roadmap. We will not describe that as shipped until it is wired end to end.
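For illustration only, the roadmap spooling behavior could look like the sketch below: append requests to a local file while offline, then drain once the uplink returns. None of this is shipped; the file path and function names are hypothetical, and upstream dedup by `client_request_id` is what makes replaying the spool safe.

```python
import json
import os

def spool(msg: dict, path: str) -> None:
    """Append one request to the local spool file, fsync'd so a power
    loss after return does not drop the record."""
    with open(path, "a") as f:
        f.write(json.dumps(msg) + "\n")
        f.flush()
        os.fsync(f.fileno())

def drain(publish, path: str) -> int:
    """Replay every spooled request once the uplink is back, then remove
    the spool. Replays are safe because the bridge deduplicates by
    client_request_id."""
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        msgs = [json.loads(line) for line in f if line.strip()]
    for m in msgs:
        publish(m)
    os.remove(path)
    return len(msgs)
```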
Send the camera model, lane geometry, connectivity profile, and read volume. We'll separate what ships today from what needs roadmap work.