At a Glance
- Client: Live event timing and scoring systems provider
- Challenge: Event data trapped at local sites — no centralized access, no real-time distribution, no API for broadcasters
- Timeline: Production-ready platform delivered in 4 weeks
- Solution: Secure, multi-tenant data broker with real-time WebSocket streaming
- Outcome: New data product capability, broadcast integration ecosystem, ready for customer deployment
The Client
A company specializing in live event timing and scoring systems. Their hardware is deployed at competitive events — capturing results, split times, and performance data in real time for athletes, officials, and spectators.
The Challenge
The client's hardware was powerful, but disconnected. Event data lived on local machines at individual venues with no centralized collection, no real-time distribution, and no API for third-party consumption. This created several pain points:
- Event organizers had no way to share live results with spectators or broadcasters without manual effort.
- Multi-venue customers running event series across multiple locations had no unified view of their data.
- Integration partners — broadcast graphics vendors, mobile app developers, analytics platforms — couldn't build on the data without direct hardware access.
- New revenue streams — live data feeds for broadcasters, real-time spectator apps, third-party integrations — were impossible without a data layer.
The client needed a secure, real-time data broker that could sit between their timing hardware and the digital world. It had to be lightweight enough for small events, yet robust enough for major competitions with data streaming to broadcast systems and thousands of spectators simultaneously.
The timeline was aggressive — they needed a working platform within 6 weeks to meet commitments to their own customers.
Our Solution
We designed and built PulseRelay — a production-grade API platform with three guiding principles:
1. Security First, Not Security Later
Device data is often sensitive — production metrics, environmental readings, operational states. Multi-tenant access control had to be foundational, not an afterthought.
What we built:
- Role-based API key system with three distinct roles: `admin` (full platform access), `publisher` (write access to specific channels), and `subscriber` (read-only access to specific channels).
- Channel-level permissions — every API key is scoped to specific channels. A publisher for one sensor group can't write to another, even with valid credentials.
- HMAC-SHA256 key hashing with a server-side pepper — API keys are never stored in plain text. Even a full database breach wouldn't expose usable credentials, protecting both the client and their customers.
- Secure admin portal with Argon2 password hashing, CSRF protection, session management, and anti-caching headers. The admin interface runs as a completely separate service, never exposed to the public internet — reducing attack surface significantly.
- Rate limiting with Redis-backed sliding windows — protects the platform from runaway devices or abuse, ensuring one misbehaving device can't impact other customers.
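The HMAC-with-pepper approach described above can be sketched in a few lines. This is an illustrative example, not the client's actual implementation — in production the pepper would be loaded from an environment variable or secret store, never hard-coded:

```python
import hashlib
import hmac
import secrets

# Server-side pepper — illustrative value only; in practice this comes
# from a secret store and is never committed to source control.
PEPPER = b"example-pepper-keep-out-of-source-control"

def generate_api_key() -> str:
    """Create a random API key, shown to the customer exactly once."""
    return secrets.token_urlsafe(32)

def hash_api_key(api_key: str) -> str:
    """HMAC-SHA256 the key with the pepper; only this digest is stored.

    A database breach alone doesn't expose usable credentials, because
    recomputing the digest also requires the server-side pepper.
    """
    return hmac.new(PEPPER, api_key.encode(), hashlib.sha256).hexdigest()

def verify_api_key(presented: str, stored_digest: str) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(hash_api_key(presented), stored_digest)
```

The constant-time `hmac.compare_digest` matters: a naive `==` comparison can leak how many leading characters matched, which an attacker can exploit.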
2. Real-Time as a Core Feature
Operators don't want data five minutes later. Monitoring systems need readings the instant a sensor triggers. We made real-time streaming a first-class capability.
What we built:
- WebSocket endpoint (`/v1/ws/stream`) that pushes events to connected clients the moment they arrive from devices.
- Channel-based filtering — WebSocket clients only receive data for channels they're authorized to see, enforced server-side. An admin sees everything; a subscriber only sees their permitted channels.
- Redis Pub/Sub backbone — events flow from ingestion through Redis to all connected subscribers. This enables horizontal scaling without complex custom logic, so the platform grows with demand.
- Automatic intersection of permissions — if a subscriber with access to three channels connects and requests only one, they get exactly one. No over-fetching, no data leakage.
- Connection management with automatic cleanup of dead sockets and concurrent-safe broadcasting.
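The permission-intersection rule above is simple to state precisely. A minimal sketch (a hypothetical helper, not the actual PulseRelay code) — the channels a client receives are the intersection of what they requested and what their key permits:

```python
def resolve_channels(requested: set[str],
                     permitted: set[str],
                     is_admin: bool = False) -> set[str]:
    """Return the channels a WebSocket client may subscribe to.

    Admins see whatever they ask for; everyone else gets the
    intersection of requested and permitted channels — no
    over-fetching, no data leakage.
    """
    if is_admin:
        return set(requested)
    if not requested:
        # No explicit request: default to everything the key permits.
        return set(permitted)
    return requested & permitted
```

So a subscriber permitted three channels who requests only one gets exactly one, and a request for an unauthorized channel is silently dropped rather than erroring the connection.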
3. Flexible Schema, Zero Friction
Every device type has different data — temperature sensors, motion detectors, industrial PLCs, custom hardware. We needed to accept any payload structure without forcing a rigid schema.
What we built:
- Generic event model — every event has a `channel`, `eventType`, `eventKey`, and flexible JSON `payload`. The platform routes and stores without caring what's inside the payload.
- Intelligent upsert logic — devices sometimes re-send corrected data. Our system detects duplicates via composite unique keys (`channel` + `eventKey`) and seamlessly updates existing records.
- Automatic timestamping — events use device timestamps when provided, server time when not.
- Configurable data retention — events auto-delete after a configurable period, keeping storage costs predictable.
- Search API with composable filters (channel, event type, time ranges) and pagination for historical queries.
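The upsert behaviour maps naturally onto SQL's `ON CONFLICT ... DO UPDATE` clause on the composite unique key. A minimal sketch of the idea — table and column names are illustrative, and SQLite stands in here for PostgreSQL (both support the same clause):

```python
import json
import sqlite3

# Illustrative schema: the composite unique key (channel, event_key)
# is what makes re-transmissions update rather than duplicate.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        channel   TEXT NOT NULL,
        event_key TEXT NOT NULL,
        payload   TEXT NOT NULL,
        UNIQUE (channel, event_key)
    )
""")

def upsert_event(channel: str, event_key: str, payload: dict) -> None:
    """Insert a new event, or update the payload if this
    (channel, event_key) pair was already seen — e.g. a device
    re-sending corrected data."""
    conn.execute(
        """
        INSERT INTO events (channel, event_key, payload)
        VALUES (?, ?, ?)
        ON CONFLICT (channel, event_key)
        DO UPDATE SET payload = excluded.payload
        """,
        (channel, event_key, json.dumps(payload)),
    )

upsert_event("race-1", "split-100m", {"time": 10.52})
upsert_event("race-1", "split-100m", {"time": 10.49})  # correction wins
```

After the second call there is still exactly one row for that key, holding the corrected payload — the device can safely retry or re-send without creating duplicates.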
Architecture
We chose a split-service architecture with Redis as the real-time backbone:
┌─────────────────────────────────────────────────────────────────────┐
│ Device / Sensor │
│ │ │
│ │ HTTP POST (event data) │
└──────────────────────────────┼──────────────────────────────────────┘
│
▼
┌────────────────────────────────────┐ ┌────────────────────────────┐
│ PUBLIC API (port 8000) │ │ ADMIN PORTAL │
│ │ │ (port 8001) │
│ • POST /v1/events │ │ │
│ • GET /v1/events/search │ │ • API key management │
│ • GET /v1/events/{id} │ │ • Create / revoke keys │
│ • WS /v1/ws/stream │ │ • Role assignment │
│ │ │ • Channel scoping │
│ Internet-facing │ │ VPN/SSH only │
└──────────┬─────────────────────────┘ └───────────┬────────────────┘
│ │
▼ ▼
┌────────────────┐ ┌─────────────────────────────┐
│ Redis │ │ PostgreSQL │
│ │ │ │
│ • Pub/Sub │◀────────────────▶│ • events │
│ • Rate limits │ │ • api_keys (HMAC hashed) │
│ • Key cache │ │ • admin_users │
└────────────────┘ └─────────────────────────────┘
Why this architecture?
- Split services — the public API faces the internet; the admin portal never does. A misconfiguration in one can't expose the other.
- Redis as the real-time layer — Pub/Sub enables instant event distribution to all connected WebSocket clients. Rate limiting and key validation caching keep the database load predictable.
- PostgreSQL for durability — events are stored for historical queries and retention management. Composite unique keys ensure data integrity.
Technology Stack
- Python 3.12 with FastAPI — async-first framework for high-throughput data ingestion
- PostgreSQL 16 with asyncpg — high-performance async PostgreSQL driver, native JSON support
- Redis 7 with hiredis — Pub/Sub for real-time streaming, rate limiting, key validation cache
- SQLAlchemy 2.0 — ORM with async support (enables easy database swapping to MariaDB/MySQL)
- Alembic — database migrations, auto-generated from models
- Docker Compose — reproducible deployment across development, staging, and production
- Traefik — reverse proxy for clean subdomain routing
- Argon2 + HMAC-SHA256 — industry-standard password and API key hashing
- Jinja2 + HTMX — server-rendered admin UI with dynamic interactions, no JavaScript framework overhead
What We Delivered
Core Platform
- Secure data ingestion endpoint that accepts events from any HTTP-capable device
- Real-time WebSocket streaming with channel-based access control
- Search API with composable filters and pagination for historical queries
- Three-role permission model (`admin`, `publisher`, `subscriber`) with channel-level granularity
- Intelligent duplicate handling (upsert) for device re-transmissions
- Configurable data retention with automatic cleanup
Admin & Operations
- Web-based admin portal for API key lifecycle management (create, scope, expire, revoke)
- Redis-backed rate limiting protecting against runaway devices
- Full Docker deployment with health checks and environment-driven configuration
- Automated test suite with CI pipeline for safe, confident deployments
Demo & Onboarding Tools
- Live Event Viewer — browser-based dashboard showing events in real time as they arrive. Ready for client demos or embedding in their own applications.
- Data Simulator — web UI that mimics device behavior, useful for demos and onboarding integration partners without physical hardware.
Business Capabilities Unlocked
- Live Data Displays — public screens, broadcast overlays, and digital signage can subscribe and show data instantly.
- Real-Time Monitoring — operations teams see device data the moment it's generated.
- Multi-Site Visibility — one API for data across all locations.
- Third-Party Integrations — partners build on the same API devices use.
- Data Products — device data becomes a licensable asset.
The Results
We delivered a production-ready platform within the client's timeline, enabling them to meet their customer commitments.
For the client:
- New revenue stream — the data platform becomes a value-add they can offer to hardware customers ("buy our equipment, get cloud data included").
- Partner ecosystem — integration partners can now build dashboards, mobile apps, and analytics tools on top of the API without direct hardware access.
- Multi-tenant ready — they can onboard multiple end-customers with confidence that data stays isolated between tenants.
- Operational simplicity — Docker-based deployment means they can spin up new environments without our involvement.
For their customers:
- Real-time visibility — site operators see device data the instant it's generated, from anywhere.
- Multi-site consolidation — organizations with equipment across multiple locations get a single API to query all their data.
- Integration flexibility — the same API that devices use to push data is available for third-party tools to consume it.
Why It Worked
The aggressive timeline required us to make smart architectural decisions upfront:
- Split services kept security concerns isolated — the admin portal never touches the public internet.
- Redis as the real-time backbone meant we didn't have to build custom WebSocket scaling logic.
- Generic event model avoided weeks of back-and-forth on schema design — the platform accepts whatever the hardware sends.
- Docker Compose gave us reproducible environments from day one, eliminating "works on my machine" delays.
The platform is now in the client's hands, ready for production deployment with their first customers.
About Us
We specialize in building purpose-built data platforms that connect physical systems to digital products. Whether you're dealing with IoT devices, industrial equipment, monitoring systems, or sensor networks — if your hardware generates valuable data that needs to be collected, secured, and distributed, we can help.
Our strengths:
- API design & development — RESTful APIs, WebSocket streaming, real-time data pipelines
- Security architecture — role-based access control, API key management, multi-tenant data isolation
- Hardware integration — bridging legacy systems and physical devices with modern cloud platforms
- Docker & cloud deployment — containerized applications ready for any hosting environment
- Full-stack web development — from backend services to admin interfaces and client-facing UIs
Let's Talk
If your business has data locked in hardware, spreadsheets, or legacy systems — and you want to turn it into a connected digital product — we'd love to hear from you.
[Get in touch on Upwork] | [Get in Touch Directly →]
The work described here formed part of a larger project delivered for a client. Details have been anonymised to protect confidentiality, but the technical implementation, results, and process reflect real work done.