
Architecture

Data Flow Architecture

How data moves from the official Helldivers 1 API through validation, storage, and normalization to the frontend. Click any node for details.

Interactive diagram: four data sources (live API, snapshots, seed files, force refresh) flow through worker threads and validation into a raw cache and normalized database tables, which are then consumed by frontend components.

Data Sources

- get_campaign_status: live war state + stats (polled every 5-15s)
- get_snapshots: historical time-series (polled roughly hourly)
- prisma/seed/seasons/*.json: bootstrap seed files (first deploy only)
- get_snapshots (forced): re-fetch any season on demand

Worker / Processing

- Worker Thread (cron.js): setTimeout loop, poll > validate > upsert
- Seed Script: prisma db seed / startup
- updateSeason(): fetch + validate + upsert

Database

- Raw Cache (rebroadcast): rebroadcast_status and rebroadcast_snapshot, one row per season, raw JSON
- Normalized (h1_*): h1_season, h1_live (campaigns + stats + map), h1_event, h1_introduction_order, h1_points_max

Frontend Components

- Live Dashboard: map + stats + players, reads h1_live (3 rows)
- Event Alerts: active defend/attack events

Key Concepts

Two-Table Strategy

Raw API responses are stored verbatim in rebroadcast tables (faithful JSON), while normalized h1_* tables support filtering, joining, and aggregation. Keeping both avoids a trade-off: the raw copy preserves the exact API response for auditing and re-normalization, while the normalized tables serve queries efficiently.
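The split can be sketched as one API payload feeding both tables. This is a minimal illustration, not project code: `splitPayload` and the field names inside it are assumptions.

```javascript
// Sketch: one API payload produces a raw row and normalized rows.
// Function and field names are illustrative, not from the codebase.
function splitPayload(season, payload) {
  const rawRow = {
    season,
    json: JSON.stringify(payload), // rebroadcast table: faithful copy
  };
  const normalizedRows = payload.campaigns.map((c) => ({
    season,                        // h1_*-style row: queryable columns
    planet: c.planet,
    players: c.players,
  }));
  return { rawRow, normalizedRows };
}
```

Because the raw row keeps the full response, the normalized tables can always be rebuilt from it if the schema changes.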

Worker Thread

A dedicated worker thread polls the official API every 5-15 seconds. It re-arms a setTimeout after each cycle completes (rather than using setInterval) so that a slow response can never cause overlapping requests. Every response is validated with Zod before any database write.
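The re-arming loop can be sketched as below. `startPollLoop` and its parameters are illustrative, and the real worker validates with Zod where the `onUpdate` callback sits here:

```javascript
// Sketch of the worker's poll loop (names are illustrative).
// setTimeout is re-armed only after the previous cycle finishes, so a
// slow API response can never overlap the next request -- unlike
// setInterval, which fires on a fixed clock regardless of progress.
function startPollLoop(fetchStatus, onUpdate, delayMs) {
  let timer = null;
  let stopped = false;

  async function cycle() {
    if (stopped) return;
    try {
      const payload = await fetchStatus(); // e.g. get_campaign_status
      onUpdate(payload);                   // validate + upsert happen here
    } catch (err) {
      // transient error: skip this cycle, the next one retries
    }
    if (!stopped) timer = setTimeout(cycle, delayMs); // re-arm after work
  }

  cycle();
  return () => { stopped = true; clearTimeout(timer); };
}
```

The returned function stops the loop cleanly, which also matters for graceful shutdown of the worker thread.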

Confirm Pattern

Each update cycle creates an unconfirmed season row, writes all child data, then confirms the cycle by setting last_updated. A row whose last_updated is still unset marks a write that was interrupted mid-cycle, so partial writes are detectable and can be retried.
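The three steps can be sketched with an in-memory map standing in for the Prisma calls; only the last_updated field name comes from the docs above, the rest is illustrative:

```javascript
// Sketch of the confirm pattern. The in-memory "db" stands in for
// Prisma; function names are illustrative, last_updated is the real
// confirmation field described above.
const db = { seasons: new Map() };

function beginSeasonUpdate(id) {
  // Step 1: create/reset the row with last_updated unset (unconfirmed).
  db.seasons.set(id, { id, events: [], last_updated: null });
}

function writeChildData(id, events) {
  db.seasons.get(id).events.push(...events); // step 2: child rows
}

function confirmSeason(id) {
  db.seasons.get(id).last_updated = new Date(); // step 3: confirm
}

// A row with last_updated === null signals an interrupted cycle.
function isPartial(id) {
  return db.seasons.get(id).last_updated === null;
}
```

A crash between steps 2 and 3 leaves last_updated unset, so the next cycle knows the row's child data cannot be trusted.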

On-Demand Fetching

Seasons missing from the database are fetched from the official API on first request. The /archives page derives the list of available seasons from the current season number rather than from a database query, so past wars can be listed before they have ever been fetched.
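Deriving the season list is simple enough to sketch in one function; `availableSeasons` is an illustrative name, and the assumption (not stated above) is that season numbers run contiguously from 1:

```javascript
// Sketch: /archives derives the selectable season list from the
// current season number instead of querying the database.
// Assumes seasons are numbered contiguously from 1 (an assumption).
function availableSeasons(currentSeason) {
  return Array.from({ length: currentSeason }, (_, i) => i + 1);
}
```

Any season in this list that is absent from the database triggers the on-demand fetch path on first request.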