4 steps, ≈ 1 minute end-to-end

Quickstart

The shortest path from zero to a live plate read. No SDK, no framework — just a curl you can paste into your shell.

1 · Issue an API key

Sign in, open `/app/keys`, click `Create API key`, pick the `live` environment, and name it something you'll recognise in the audit trail. Copy the raw secret into your keystore immediately — the server shows it exactly once.
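In client code, read the secret from the environment rather than hard-coding it. A minimal sketch; `PRAXISEDGE_API_KEY` is our naming convention here, not a variable the platform requires:

```python
import os

def load_api_key(var: str = "PRAXISEDGE_API_KEY") -> str:
    """Fetch the API key from the environment; fail loudly if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"set {var} to the pe_live_... secret you copied")
    return key
```

Export the key once in your shell profile and every example below can pick it up without the secret ever landing in source control.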

2 · Send your first read

The platform's front door is `POST /v1/read`. It takes an `X-API-Key` header and a multipart image field. A tight plate crop (JPEG or PNG, up to ~8 MB) gets you the best latency, but a full-frame image also works — the detector crops for you.

curl -X POST https://praxisedge.ai/v1/read \
  -H 'X-API-Key: pe_live_replace_with_your_key' \
  -F 'image=@plate.jpg'
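The same call from Python, standard library only so there is nothing to install. A sketch under two assumptions: the multipart field is sent as `image` (consistent with the "multipart image field" the endpoint expects), and the filename and timeout are placeholder choices:

```python
import json
import urllib.request
import uuid

def build_multipart(field: str, filename: str, data: bytes,
                    content_type: str = "image/jpeg") -> tuple[bytes, str]:
    """Encode one file as a multipart/form-data body.

    Returns (body bytes, value for the Content-Type header).
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, f"multipart/form-data; boundary={boundary}"

def read_plate(image_path: str, api_key: str,
               base_url: str = "https://praxisedge.ai") -> dict:
    """POST a plate crop to /v1/read and return the parsed JSON response."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("image", "plate.jpg", f.read())
    req = urllib.request.Request(
        f"{base_url}/v1/read",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": ctype},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

If `requests` is already in your stack, use it instead; the manual encoding above is only there to keep the sketch dependency-free.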

3 · Inspect the response

Every successful read returns the predicted plate, a tier decision (ACCEPT / RETRY / NO_READ) based on the calibrated confidence, per-character scores with alternative-hypothesis probabilities, grammar validation (which US state patterns the plate matches), and a `request_id` you can cross-reference with the audit trail. The shape matches the `ReadResponse` schema in `api/src/atc_lpr_api/schema.py`.

{
  "request_id": "95429505-20cf-407a-9473-aab4e06fb67a",
  "model_version": "ocr-brain-alpha",
  "calibration_version": "calibration-v1",
  "inference_latency_ms": 43.7,
  "plate": "bhb988",
  "decision": "RETRY",
  "tier_threshold": 0.5,
  "plate_confidence": {
    "calibrated": 0.0509,
    "raw_min_char": 0.0758,
    "raw_geometric_mean": 0.1026,
    "entropy": 3.5453
  },
  "per_character": [
    {
      "pos": 0,
      "char": "b",
      "confidence": 0.0523,
      "alternatives": [
        {
          "char": "r",
          "prob": 0.0451
        },
        {
          "char": "h",
          "prob": 0.0409
        }
      ]
    },
    {
      "pos": 1,
      "char": "h",
      "confidence": 0.044,
      "alternatives": [
        {
          "char": "m",
          "prob": 0.0437
        },
        {
          "char": "w",
          "prob": 0.0428
        }
      ]
    },
    {
      "pos": 2,
      "char": "b",
      "confidence": 0.0466,
      "alternatives": [
        {
          "char": "a",
          "prob": 0.0432
        }
      ]
    },
    {
      "pos": 3,
      "char": "9",
      "confidence": 0.0515,
      "alternatives": [
        {
          "char": "5",
          "prob": 0.0493
        }
      ]
    },
    {
      "pos": 4,
      "char": "8",
      "confidence": 0.0564,
      "alternatives": [
        {
          "char": "9",
          "prob": 0.0538
        }
      ]
    },
    {
      "pos": 5,
      "char": "8",
      "confidence": 0.0556,
      "alternatives": [
        {
          "char": "9",
          "prob": 0.0553
        }
      ]
    }
  ],
  "grammar": {
    "valid": true,
    "matched_pattern": "three letters then three digits",
    "matched_pattern_id": "3L3N",
    "matched_states": [
      "HI",
      "IA",
      "MS",
      "MT",
      "NM",
      "OK",
      "SD"
    ]
  }
}
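A sketch of acting on that payload. The field names come from the sample response above; the policy itself (surface the least confident character positions on a RETRY) is our assumption, not something the platform prescribes:

```python
def weakest_positions(read: dict, k: int = 2) -> list[int]:
    """Return the k character positions with the lowest per-character confidence."""
    ranked = sorted(read["per_character"], key=lambda c: c["confidence"])
    return [c["pos"] for c in ranked[:k]]

def triage(read: dict) -> tuple[str, object]:
    """Map a tier decision onto a next step for the calling application."""
    if read["decision"] == "ACCEPT":
        return "accept", read["plate"]
    if read["decision"] == "RETRY":
        # Low calibrated confidence: re-shoot, or route the shakiest
        # characters to human review before trusting the plate string.
        return "retry", weakest_positions(read)
    return "no_read", None
```

For the sample response above, `triage` returns `("retry", [1, 2])`: positions 1 and 2 carry the lowest per-character confidences, so they are the first candidates for review.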

4 · Tail the ledger

Open `/app/inferences` in the dashboard. Reads submitted through the streaming and dashboard-issued key paths land here with the plate, decision, status, latency, and full `request_id`; filter by camera, site, status, or time window to find a specific call later. (The direct `/v1/read` path ships on gpubox with a static dev key today; wiring up dashboard-issued auth and the ledger is the next PR on the demo-ready list.)

Ready for real traffic?

The in-app onboarding wizard walks you through sites, cameras, and key issuance — same flow as above, but with the dashboard doing the bookkeeping.