Concepts
Aeroza's surface is small on purpose. The two core domains — alerts and radar grids — support four query shapes: list, single detail, point sample, and polygon reduction; METAR, nowcasts, calibration, and webhooks build on the same patterns. This page explains each piece end-to-end.
NWS alerts
Alerts come from the National Weather Service public API and are normalised into a flat schema with a five-level severity ladder: Unknown → Minor → Moderate → Severe → Extreme. Every alert carries a polygon (or a fallback bbox) so geospatial filters work uniformly.
- List: `GET /v1/alerts` returns active alerts as a GeoJSON `FeatureCollection`, filterable by `point`, `bbox`, or minimum `severity` (see the request sketch after this list).
- Stream: `GET /v1/alerts/stream` is a Server-Sent Events feed re-emitting newly observed alerts published on the `aeroza.alerts.nws.new` NATS subject. Use this for real-time dashboards.
- Detail: `GET /v1/alerts/{id}` returns one alert with the long-form `description` and `instruction` fields the list endpoint omits.
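A minimal list-query sketch using Python's `requests`. The base URL is an assumption for a local deployment, and the exact value format for `point` (here `lng,lat`) is also an assumption; the parameter names come from the list above.

```python
import requests

BASE = "http://localhost:8080"  # assumed deployment URL

# Active alerts at a point, Severe or worse. Parameter names follow the
# list above; the "lng,lat" encoding for point is an assumption.
resp = requests.get(
    f"{BASE}/v1/alerts",
    params={"point": "-97.74,30.27", "severity": "Severe"},
    timeout=10,
)
resp.raise_for_status()
collection = resp.json()  # a GeoJSON FeatureCollection
for feature in collection["features"]:
    props = feature["properties"]
    print(props.get("id"), props.get("severity"))
```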
MRMS files (the catalog)
MRMS — Multi-Radar / Multi-Sensor — is NOAA's blended CONUS radar product, published as gzipped GRIB2 files on AWS Open Data every ~2 minutes. The aeroza-ingest-mrms worker lists the bucket and persists a row per file: key, product, level, validAt, sizeBytes, and etag. The catalog is the "what data is available right now" feed.
Why catalog before payload? The discovery step is cheap (one S3 list call) and never fails the way decoding can. Decoupling it from materialisation means a missing system library or a malformed GRIB doesn't silence the freshness signal.
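A sketch of the discovery tick's shape, to make the "one cheap list call" point concrete. The bucket name, prefix, and the `upsert_file_row` helper are illustrative stand-ins, not the real worker's code:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "noaa-mrms-pds"  # assumed: NOAA MRMS bucket on AWS Open Data

def catalog_tick(prefix: str, upsert_file_row) -> None:
    """One cheap S3 list call; no decoding, so it can't fail the way GRIB parsing can."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Persist one row per file. product / level / validAt are parsed
            # out of the key by the real worker; elided here.
            upsert_file_row(key=obj["Key"], sizeBytes=obj["Size"], etag=obj["ETag"])
```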
Materialised grids (the queryable layer)
The aeroza-materialise-mrms worker decodes each catalogued GRIB2 with cfgrib + eccodes, writes it to a Zarr store, and records the locator (URI, variable, shape, dtype, nbytes) in the mrms_grids table. It triggers two ways:
- Event: subscribes to `aeroza.mrms.files.new` and runs a tick per arriving file event — fresh data lands as a queryable grid in seconds.
- Backstop interval: a 60s scheduler also runs the same catalog-scan tick, so missed events / cold starts catch up on the next sweep.
Successful materialisations publish aeroza.mrms.grids.new, which downstream consumers (nowcasting, alerts, webhooks) can subscribe to.
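The decode-and-persist step in outline — a sketch, not the worker itself. `cfgrib` (backed by eccodes) does the GRIB2 decoding and `xarray` writes the Zarr store; the `record_locator` helper stands in for the `mrms_grids` insert:

```python
import gzip
import shutil
import tempfile

import xarray as xr

def materialise(grib_gz_path: str, zarr_uri: str, record_locator) -> None:
    # MRMS files arrive gzipped; cfgrib wants a plain GRIB2 file on disk.
    with tempfile.NamedTemporaryFile(suffix=".grib2") as tmp:
        with gzip.open(grib_gz_path, "rb") as src:
            shutil.copyfileobj(src, tmp)
        tmp.flush()
        ds = xr.open_dataset(tmp.name, engine="cfgrib")  # eccodes decodes here
        ds.to_zarr(zarr_uri, mode="w")
        var = next(iter(ds.data_vars))
        record_locator(  # illustrative stand-in for the mrms_grids row
            uri=zarr_uri,
            variable=var,
            shape=ds[var].shape,
            dtype=str(ds[var].dtype),
            nbytes=int(ds[var].nbytes),
        )
```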
Raster tiles (the map layer)
`GET /v1/mrms/tiles/{z}/{x}/{y}.png` renders a 256×256 Web-Mercator tile of the latest matching grid: nearest-neighbour sample from the Zarr store, NWS dBZ ramp, 86%-opaque so the basemap shows through where there's no echo. Tiles outside the grid extent (or when no grid has materialised yet) come back as a fully-transparent PNG so MapLibre / Leaflet don't spam 404 retries.

`fileKey` pins a specific grid — used by the timeline scrubber on /map to fetch historical tiles. The same `fileKey` mechanism powers the 1-hour radar auto-loop in /map's header: the page boots playing through every grid in the last hour at 2× by default, with a speed selector (1×/2×/4×/8×) for slowing down to inspect a developing storm cell. Scrubbing the timeline pauses the loop; pressing ▶ Loop 1h resumes it.
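Tile addresses follow the standard Web-Mercator (slippy-map) scheme. A quick sketch for computing the `{z}/{x}/{y}` covering a point; the URL template comes from above, the host is an assumption:

```python
import math

def tile_for(lat: float, lng: float, z: int) -> tuple[int, int]:
    """Standard Web-Mercator tile indices for the tile containing a point."""
    n = 2 ** z
    x = int((lng + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

x, y = tile_for(30.27, -97.74, 7)  # Austin at zoom 7
url = f"http://localhost:8080/v1/mrms/tiles/7/{x}/{y}.png"  # assumed host
```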
Point sample
GET /v1/mrms/grids/sample?lat=&lng= returns the nearest-cell value for a point against the latest grid (or one valid at-or-before at_time). Three things to know:
- Tolerance. By default the request 404s if no cell centre is within `0.05°` of the point — bare nearest-neighbour would happily return a value miles away if the request falls outside the grid. Tunable via `tolerance_deg` (see the sketch after this list).
- Longitude convention. MRMS publishes on `[0, 360)`; the API speaks `[-180, 180]` on the wire. The translation happens server-side; you never see it.
- Matched coords. The response carries both the requested `lat`/`lng` and the actual cell coords the value came from — useful for caching, deduping, or confirming "you asked for X, you got cell Y".
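A point-sample sketch. The parameter spellings (`lat`, `lng`, `tolerance_deg`) come from above; the host and the response field names are assumptions:

```python
import requests

resp = requests.get(
    "http://localhost:8080/v1/mrms/grids/sample",  # assumed host
    params={"lat": 30.27, "lng": -97.74, "tolerance_deg": 0.1},
    timeout=10,
)
if resp.status_code == 404:
    print("no cell centre within tolerance (or no grid yet)")
else:
    resp.raise_for_status()
    body = resp.json()
    # The response echoes the requested point and the matched cell coords;
    # these field names are assumptions.
    print(body.get("value"), body.get("matchedLat"), body.get("matchedLng"))
```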
Polygon reduction
GET /v1/mrms/grids/polygon applies a reducer over the cells of one grid whose centres fall inside a polygon. Vertices are flat lng,lat,lng,lat,... (GeoJSON / OGC ordering, same as bbox); the ring is implicitly closed. Four reducers:
| Reducer | Returns | Use case |
|---|---|---|
| `max` | Highest value among cells inside the polygon | Worst-case intensity over a region |
| `mean` | Arithmetic mean | Aggregate exposure |
| `min` | Lowest value | "All clear" threshold checks |
| `count_ge` | Number of cells with value ≥ threshold | "Is anything ≥ 40 dBZ in this polygon?" — geofencing |
The polygon's bounding box is used to slice the grid down before the ray-cast mask runs, so a small region over CONUS only loads a few kilobytes off Zarr instead of the full ~100 MB array.
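A geofencing-style call using `count_ge`. The flat `lng,lat,...` vertex ordering is as described above; the parameter names (`vertices`, `reducer`, `threshold`) and host are assumptions consistent with that description:

```python
import requests

# A small triangle over central Texas: flat lng,lat pairs, ring implicitly closed.
vertices = "-98.0,29.8,-97.0,29.8,-97.5,30.6"

resp = requests.get(
    "http://localhost:8080/v1/mrms/grids/polygon",  # assumed host
    params={"vertices": vertices, "reducer": "count_ge", "threshold": 40},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. how many cells are >= 40 dBZ inside the polygon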
METAR (surface observations)
METAR is the global standard for hourly surface weather reporting at airports. The aeroza-ingest-metar worker polls the Aviation Weather Center JSON API for a configurable list of ICAO stations (default: a CONUS top-20 sample) every 5 minutes. AWC returns already-parsed records, so there is no in-tree METAR text parser; the rawText column preserves the original string for callers who want their own.
Each row is keyed on (stationId, observationTime) — re-fetches that find no change are no-ops, and SPECI updates within a cycle update the row in place. Measurement fields are nullable (a station whose dewpoint sensor isn't reporting still gets a row, just with null in those columns).
- List: `GET /v1/metar` — filter by `station`, `since`/`until`, `bbox` (same convention as `/v1/alerts`), and `limit`. Newest first.
- Latest: `GET /v1/metar/{station}/latest` — most-recent observation for one airport. Case-insensitive on the path.
Useful as ground-truth point observations next to the MRMS gridded products: sanity-check a nowcast at a specific airport, or join METAR readings against forecast cells for station-resolved verification.
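A sketch of that pairing: fetch a station's latest METAR, then sample the MRMS grid at the same coordinates. The METAR response field names (`lat`, `lng`, `rawText` aside, which the text above mentions) are assumptions:

```python
import requests

BASE = "http://localhost:8080"  # assumed host

metar = requests.get(f"{BASE}/v1/metar/kaus/latest", timeout=10).json()
lat, lng = metar["lat"], metar["lng"]  # assumed field names

sample = requests.get(
    f"{BASE}/v1/mrms/grids/sample",
    params={"lat": lat, "lng": lng},
    timeout=10,
)
if sample.ok:
    print("station:", metar.get("rawText"))
    print("grid value at station:", sample.json().get("value"))
```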
Nowcasts
For each newly-materialised observation grid, the aeroza-nowcast-mrms worker generates predicted grids at 10-, 30-, and 60-minute horizons and persists them to mrms_nowcasts. The catalog surface is GET /v1/nowcasts — same shape as /v1/mrms/grids with two extra columns:
- `algorithm` — which forecaster produced this row. Two ship today: `persistence` (the §7 baseline) and `pysteps` (Lucas–Kanade dense optical flow + semi-Lagrangian extrapolation). NowcastNet / ensemble pySTEPS land later.
- `forecastHorizonMinutes` — lead time. The (algorithm, horizon) pair is the dimension we report verification numbers against.
The two algorithms are peers on the calibration page — their MAE / bias / RMSE rows trend side-by-side. Persistence is the trivial copy-forward baseline; pySTEPS computes a velocity field from the last few observations and advects the most recent frame along it. Run pysteps with aeroza-nowcast-mrms --algorithm pysteps (the worker fetches a small lookback window per tick from the catalog, so there's no separate state to manage). When the catalog has fewer than the required past frames, pySTEPS falls back to persistence rather than crashing.
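In outline, the pySTEPS path looks something like this — a sketch against the public pysteps API, where `frames` is whatever lookback window the worker fetched from the catalog:

```python
import numpy as np
from pysteps import motion, nowcasts

def pysteps_nowcast(frames: np.ndarray, n_steps: int) -> np.ndarray:
    """frames: (t, y, x) stack of recent observation grids, oldest first."""
    if frames.shape[0] < 2:
        # Not enough history for optical flow — fall back to persistence.
        return np.repeat(frames[-1][None, ...], n_steps, axis=0)
    lucas_kanade = motion.get_method("LK")        # dense Lucas-Kanade optical flow
    velocity = lucas_kanade(frames)               # (2, y, x) motion field
    extrapolate = nowcasts.get_method("extrapolation")  # semi-Lagrangian advection
    return extrapolate(frames[-1], velocity, n_steps)   # (n_steps, y, x)
```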
Newly-persisted nowcasts also publish aeroza.nowcast.grids.new on NATS. Webhook subscriptions that include this event in their events array receive a signed delivery per persisted forecast.
Calibration — the moat
The aeroza-verify-nowcasts worker scores every previously-issued forecast against the real observation that arrives at its validAt. Per-(forecast, observation) MAE / bias / RMSE rows live in nowcast_verifications; GET /v1/calibration aggregates them by algorithm × horizon over a window:
| Metric | Reads as | What it tells you |
|---|---|---|
| `maeMean` | Mean absolute error (dBZ) | How far off, on average, ignoring direction |
| `biasMean` | Mean signed error (dBZ) | Whether the algorithm runs hot or cold on average |
| `rmseMean` | Root-mean-square error (dBZ) | Like MAE but penalises big misses harder |
| `sampleCount` | Cells contributing to the means | The denominator — small numbers mean noisy aggregates |
| `pod` / `far` / `csi` | Categorical skill scores | How well the algorithm caught threshold crossings |
| `thresholdDbz` | The threshold the categorical metrics scored against | Default 35 dBZ — operational meteorology's "convective cell" cutoff; `null` if rows in the bucket disagreed |
Continuous means (`maeMean`, `biasMean`, `rmseMean`) are sample-weighted: a verification with 1M cells contributes 1M times to the bucket. Small-sample verifications shouldn't dominate the average just because they're more frequent.
Categorical scores (pod / far / csi) compute on a contingency table stored per verification — four counts of forecast/observed crossings of the threshold (hits, misses, false alarms, correct negatives). The aggregate sums the cells across rows then computes the ratio at the end; averaging POD/FAR/CSI across rows directly is wrong (the average of ratios isn't the ratio of averages). When a bucket has no contributing categorical rows or the denominator is zero, the route emits null rather than a misleading 0.
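The pooled computation, concretely — a sketch where each row carries the four contingency-table counts described above:

```python
def pooled_skill(rows):
    """rows: iterable of (hits, misses, false_alarms, correct_negatives).
    Sum the counts first, then take ratios — never average per-row ratios."""
    h = m = f = c = 0
    for hits, misses, false_alarms, correct_negs in rows:
        h += hits
        m += misses
        f += false_alarms
        c += correct_negs
    pod = h / (h + m) if (h + m) else None          # probability of detection
    far = f / (h + f) if (h + f) else None          # false-alarm ratio
    csi = h / (h + m + f) if (h + m + f) else None  # critical success index
    return pod, far, csi  # None where the denominator is zero, matching the route
```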
For trend-watching, GET /v1/calibration/series returns the same metrics time-bucketed (bucketSeconds from 5 min to 1 day). That's what the sparkline on /calibration charts: same Y-axis per row so a row's downward trend lines up with a peer's at a glance. The metric switcher above the matrix has six tabs — MAE (continuous error), POD / FAR / CSI (categorical skill at the configured threshold), and Brier / CRPS (probabilistic skill, ensemble rows only). Each non-baseline cell shows a small ↑/↓ N% vs persistence ribbon on the active metric so the question "did this algorithm beat the baseline?" answers itself at a glance.
Per the plan §3.3, calibration is the trust signal nobody else in the dev-API weather space publishes. The probabilistic complement to POD/FAR/CSI now ships too: when the source nowcast is an ensemble (e.g. --algorithm lagged-ensemble), the verifier scores Brier (mean squared error of event probability) and the fair-CRPS ensemble estimator (continuous ranked probability score) and the calibration aggregate exposes brierMean / crpsMean / ensembleSize alongside MAE. Reliability diagrams and a STEPS-perturbed ensemble are the next probabilistic-skill steps.
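For reference, the two probabilistic scores in sketch form — the standard Brier score over a threshold event and the fair ensemble CRPS estimator. Array shapes and the threshold default follow the text above; this is illustrative, not the verifier's code:

```python
import numpy as np

def brier(ensemble: np.ndarray, observed: np.ndarray, thr: float = 35.0) -> float:
    """ensemble: (members, cells); observed: (cells,).
    Mean squared error of the forecast event probability."""
    p = (ensemble >= thr).mean(axis=0)      # event probability per cell
    o = (observed >= thr).astype(float)     # observed event occurrence
    return float(np.mean((p - o) ** 2))

def fair_crps(ensemble: np.ndarray, observed: np.ndarray) -> float:
    """Fair (unbiased) ensemble CRPS estimator, averaged over cells:
    E|X - y| - (1 / (2m(m-1))) * sum_ij |x_i - x_j|."""
    m = ensemble.shape[0]
    skill = np.abs(ensemble - observed[None, :]).mean(axis=0)
    spread = np.abs(ensemble[:, None, :] - ensemble[None, :, :]).sum(axis=(0, 1))
    return float(np.mean(skill - spread / (2 * m * (m - 1))))
```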
Webhooks & alert rules
Every subject the platform publishes on NATS is also a webhook event. Subscribers register a target URL and an events array; the dispatcher translates each NATS message into an HTTP POST with an HMAC-SHA256 signature in the Aeroza-Signature header (Stripe-style v1=<hex>) and the publish time in Aeroza-Timestamp. Two subjects are wired so far:
- `aeroza.alerts.nws.new` — every newly-observed NWS alert (the same stream behind `/v1/alerts/stream`).
- `aeroza.nowcast.grids.new` — every persisted nowcast (one event per algorithm × horizon × valid_at).
Subscriptions can be filtered by an alert rule — a tiny DSL with two predicate kinds: point (alert polygon intersects a circle of radius radiusMeters around (lat, lng)) and polygon (alert intersects a caller-supplied polygon). Rules can also gate on a minimum severity and an optional event-name allowlist. One rule can back many subscriptions, so a "Texas storms ≥ Severe" rule is a first-class object you can attach to as many webhook targets as you need.
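A hypothetical rule-creation call. The predicate vocabulary (point, `radiusMeters`, lat/lng, minimum severity) comes from above, but the JSON field names and payload shape here are assumptions:

```python
import requests

rule = {
    "name": "Texas storms >= Severe",
    "predicate": {"kind": "point", "lat": 31.0, "lng": -99.0, "radiusMeters": 400_000},
    "minSeverity": "Severe",  # field names in this payload are assumptions
}
resp = requests.post("http://localhost:8080/v1/alert-rules", json=rule, timeout=10)
resp.raise_for_status()
print(resp.json())  # a first-class rule object, attachable to many subscriptions
```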
The dispatcher's retry queue records every attempt in webhook_deliveries: response status, latency, response-body excerpt for failures. A circuit breaker flips a subscription to disabled after repeated non-success — a visible, human-readable signal so a 4xx storm from a flaky receiver doesn't burn the queue. CRUD all of the above through /v1/webhooks and /v1/alert-rules.
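Receiver-side signature verification, in sketch form. The header names and the `v1=<hex>` shape come from above; the exact signed string is an assumption (Stripe-style `"{timestamp}.{body}"`):

```python
import hashlib
import hmac

def verify(secret: str, body: bytes, signature_header: str, timestamp: str) -> bool:
    """Check an Aeroza-Signature header of the form 'v1=<hex>'.
    Assumes the signed string is '<Aeroza-Timestamp>.<raw body>' (Stripe-style)."""
    expected = hmac.new(
        secret.encode(),
        f"{timestamp}.".encode() + body,
        hashlib.sha256,
    ).hexdigest()
    provided = signature_header.removeprefix("v1=")
    return hmac.compare_digest(expected, provided)  # constant-time comparison
```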
Stats snapshot
GET /v1/stats is a compact "what does the system know right now?" endpoint: alert counts (active, total, latest expiry), MRMS file/grid counts, and the freshest valid_at / materialised_at timestamps. Cheap aggregate queries — designed to be polled every 10–30 seconds by a dashboard.
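A dashboard-style poll loop, with the interval from the guidance above (host assumed):

```python
import time

import requests

while True:
    stats = requests.get("http://localhost:8080/v1/stats", timeout=5).json()
    print(stats)    # alert counts, file/grid counts, freshness timestamps
    time.sleep(15)  # cheap enough to poll every 10-30 s
```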
API keys & auth
Every route is anonymous by default. Bearer-token auth exists server-side and is opt-in per deployment via AEROZA_AUTH_REQUIRED=true. Tokens are minted with the aeroza-api-keys CLI and have the format aza_live_<random>; only the HMAC-SHA-256 hash is persisted, keyed by AEROZA_API_KEY_SALT for domain separation.
Pass the token as Authorization: Bearer <token> (or set apiKey on the SDK client). Currently the only gated route is GET /v1/me, which returns the calling key's metadata: name, owner, prefix (visible identifier), scopes, rate-limit class, and last-used timestamp. HTTP CRUD over /v1/api-keys arrives once we have an admin scope to gate it on; until then the CLI is the management plane.
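Passing a key, per the header name and token shape above (the token value is a placeholder — mint a real one with the CLI):

```python
import requests

token = "aza_live_..."  # placeholder — mint a real token with aeroza-api-keys
me = requests.get(
    "http://localhost:8080/v1/me",  # assumed host
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(me.json())  # name, owner, prefix, scopes, rate-limit class, last-used
```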
Ready to make queries? See the API reference or open the dev console to try them against live data.