Real-time Progress Updates with Go SSE and Redis Pub/Sub

Feb 23, 2026 · 5 min read

How I built a lightweight real-time notification system for long-running AI jobs

TL;DR:

  • I built a lightweight Go service (~300 lines) that streams real-time progress updates for long-running AI jobs to the browser using Server-Sent Events (SSE).
  • A Python/Celery worker publishes job status changes to Redis; the Go service subscribes and forwards updates only to clients watching that job.
  • A Hub pattern in Go manages thousands of concurrent SSE connections safely, using goroutines, channels, and non-blocking broadcasts.
  • This pattern avoids wasteful polling, is simpler than WebSockets for one-way updates, and is easy to deploy as a small independent service.

When I built the Meeting Intelligence Platform, one of the core UX problems was simple: long-running AI jobs (5–10 minutes) left users staring at spinners with no idea what was happening. I needed a way to push real-time progress updates from the backend pipeline (Python + Celery + Redis) to the browser, without adding a lot of complexity or overhead.

The options were:

  1. Polling — Client asks "is it done yet?" every few seconds. Simple but wasteful.
  2. WebSockets — Bidirectional, powerful, but overkill for one-way updates.
  3. Server-Sent Events (SSE) — One-way stream from server to client. Native browser support. Perfect fit.

In this post, I'll walk through the solution I implemented: a small Go service that bridges Redis pub/sub to browser clients using SSE, and how this pattern can be reused for any background job system that needs live status updates.

Why Go for SSE?

The Python API handles business logic—authentication, file uploads, database operations. But SSE connections are long-lived and concurrent. A single server might hold hundreds of open connections waiting for updates.

Go excels here:

  • Goroutines handle thousands of concurrent connections efficiently
  • Low memory footprint per connection
  • No GIL (Global Interpreter Lock) concerns
  • Fast startup time for containerized deployments

The SSE service is intentionally small (~300 lines) and does one thing: bridge Redis pub/sub to browser clients without client-side polling.

The Architecture

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Browser   │◀────│  Go SSE     │◀────│   Redis     │◀────┐
│   Client    │ SSE │  Service    │ Sub │   Pub/Sub   │     │
└─────────────┘     └─────────────┘     └─────────────┘     │
                                                             │
┌─────────────┐     ┌─────────────┐                          │
│   Python    │────▶│   Celery    │──────────────────────────┘
│   API       │     │   Worker    │ Publish
└─────────────┘     └─────────────┘

  1. Browser connects to SSE endpoint: GET /events/{job_id}
  2. Go service registers client, waits for updates
  3. Celery worker processes job, publishes status to Redis
  4. Go service receives Redis message, pushes to connected clients
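The updates flowing through this pipeline are small JSON payloads. Here's a minimal sketch of the Go type they might decode into—the field set is an assumption based on the Python publisher shown later in the post, and decodeUpdate is a stand-in for the service's parseJobUpdate:

```go
package main

import "encoding/json"

// JobUpdate mirrors the JSON payload published by the Celery worker.
// The field set here is an assumption matching the publisher code later
// in the post; the real struct may carry more fields.
type JobUpdate struct {
	JobID      string `json:"job_id"`
	Status     string `json:"status"`
	ResultsURL string `json:"results_url,omitempty"`
	Error      string `json:"error,omitempty"`
}

// decodeUpdate parses a raw Redis payload; a stand-in for parseJobUpdate.
func decodeUpdate(payload string) (JobUpdate, error) {
	var u JobUpdate
	err := json.Unmarshal([]byte(payload), &u)
	return u, err
}
```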

The Hub: Managing Concurrent Connections

The core challenge is tracking which clients are watching which jobs. I use a Hub pattern—a central registry that routes updates to the right connections.

For readers less familiar with Go: the Hub pattern is a common concurrency approach where a single goroutine owns the shared state and communicates with other goroutines via channels. This avoids locks in most code paths and makes reasoning about concurrency straightforward.

// from hub.go
type Hub struct {
	clients    map[string]map[*Client]bool // job_id -> set of clients
	register   chan *Client
	unregister chan *Client
	broadcast  chan JobUpdate
	mu         sync.RWMutex                // guards clients map access
}

The map structure is job_id -> set of clients. When an update arrives for job X, I only notify clients watching job X.

The Hub runs in a single goroutine, processing events sequentially to avoid race conditions:

// from hub.go
func (h *Hub) Run() {
	for {
		select {
		case client := <-h.register:
			h.addClient(client)

		case client := <-h.unregister:
			h.removeClient(client)

		case update := <-h.broadcast:
			h.broadcastUpdate(update)
		}
	}
}

This is Go's classic "share memory by communicating" pattern. External code sends messages via channels; the Hub processes them safely.
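The addClient and removeClient helpers aren't shown in the excerpt; here's a plausible sketch of the nested-map bookkeeping they'd do (types restated minimally so the sketch stands alone—the real versions in hub.go may differ):

```go
package main

import "sync"

// Minimal restatements; the real types live in hub.go.
type JobUpdate struct {
	JobID  string
	Status string
}

type Client struct {
	JobID   string
	Channel chan JobUpdate
}

type Hub struct {
	clients map[string]map[*Client]bool // job_id -> set of clients
	mu      sync.RWMutex
}

// addClient registers a client under its job ID, creating the set lazily.
func (h *Hub) addClient(c *Client) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.clients[c.JobID] == nil {
		h.clients[c.JobID] = make(map[*Client]bool)
	}
	h.clients[c.JobID][c] = true
}

// removeClient drops a client, closes its channel so the SSE handler's
// receive loop exits, and deletes the job entry once it is empty.
func (h *Hub) removeClient(c *Client) {
	h.mu.Lock()
	defer h.mu.Unlock()
	set, ok := h.clients[c.JobID]
	if !ok || !set[c] {
		return
	}
	delete(set, c)
	close(c.Channel)
	if len(set) == 0 {
		delete(h.clients, c.JobID)
	}
}
```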

Non-Blocking Broadcasts

When broadcasting, I don't want a slow client to block updates to others. The solution: buffered channels and non-blocking sends.

// from hub.go
func (h *Hub) broadcastUpdate(update JobUpdate) {
	h.mu.RLock()
	defer h.mu.RUnlock()

	clients, ok := h.clients[update.JobID]
	if !ok {
		return
	}

	for client := range clients {
		select {
		case client.Channel <- update:
		default:
			// Client buffer full, skip
		}
	}
}

Each client has a buffered channel (capacity 10). If it's full, I skip that client rather than blocking. This prevents one slow connection from affecting others.

Trade-off: A slow client may miss intermediate status updates. For this use case that's acceptable—they'll catch the next update or the final status. If your use case requires guaranteed delivery of every message, you'd need a different approach (e.g., persistent message queue with acknowledgments).
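Concretely, the client side of that trade-off could look like this sketch—the capacity of 10 matches the post, while the exact shape of Client and createClient is my assumption (types restated so it stands alone):

```go
package main

type JobUpdate struct {
	JobID  string
	Status string
}

// Client is the SSE handler's view of one connection.
type Client struct {
	JobID   string
	Channel chan JobUpdate
}

// createClient allocates the buffered channel; capacity 10 absorbs short
// bursts, and a full buffer marks the client as falling behind.
func createClient(jobID string) *Client {
	return &Client{JobID: jobID, Channel: make(chan JobUpdate, 10)}
}

// trySend is the non-blocking send from broadcastUpdate, extracted:
// it reports false instead of blocking when the buffer is full.
func trySend(c *Client, u JobUpdate) bool {
	select {
	case c.Channel <- u:
		return true
	default:
		return false
	}
}
```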

The SSE Handler

SSE is just HTTP with specific headers and a streaming response. The browser's EventSource API handles reconnection automatically.

// from handlers.go
func HandleSSE(hub *Hub) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		jobID := extractJobID(r.URL.Path)
		if jobID == "" {
			http.Error(w, "Missing job_id", http.StatusBadRequest)
			return
		}

		if !setupSSEHeaders(w) {
			return
		}

		flusher := w.(http.Flusher) // safe: setupSSEHeaders already verified the assertion
		client := createClient(jobID)

		hub.Register(client)
		defer hub.Unregister(client)

		sendConnectedEvent(w, flusher, jobID)
		listenForUpdates(w, flusher, r.Context(), client)
	}
}

Key points:

  • Extract job ID from URL path
  • Set SSE headers (Content-Type: text/event-stream, Cache-Control: no-cache)
  • Register client with Hub
  • Send initial "connected" event
  • Loop until client disconnects or job completes

The headers are critical:

// from handlers.go
func setupSSEHeaders(w http.ResponseWriter) bool {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "SSE not supported", http.StatusInternalServerError)
		return false
	}

	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("Connection", "keep-alive")
	return true
}
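sendConnectedEvent and sendUpdate aren't shown, but the wire format they emit is the whole trick: an event: name line, a data: payload line, and a blank line terminating the message. A sketch of the framing (the helper name formatSSE is mine):

```go
package main

import "fmt"

// formatSSE renders one SSE message. The trailing blank line tells the
// browser the message is complete; the event name routes it to the
// matching addEventListener call on the client.
func formatSSE(event, data string) string {
	return fmt.Sprintf("event: %s\ndata: %s\n\n", event, data)
}
```

The handler writes this string to the ResponseWriter and calls flusher.Flush() so the bytes leave immediately instead of sitting in a buffer.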

Listening for Updates

The update loop watches two things: client disconnection and incoming updates.

// from handlers.go
func listenForUpdates(w http.ResponseWriter, flusher http.Flusher, ctx context.Context, client *Client) {
	for {
		select {
		case <-ctx.Done():
			log.Printf("Client disconnected for job: %s", client.JobID)
			return

		case update, ok := <-client.Channel:
			if !ok {
				return
			}

			if err := sendUpdate(w, flusher, update); err != nil {
				// Write failed: the connection is almost certainly gone.
				return
			}

			if isTerminalStatus(update.Status) {
				return
			}
		}
	}
}

When ctx.Done() fires, the HTTP connection has closed—the client navigated away or lost connectivity. When the job reaches a terminal status (completed or failed), I close the connection server-side. The client has the final status; no need to keep the connection open.
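isTerminalStatus is small enough to show in full; which statuses count as terminal is an assumption based on the ones this post mentions:

```go
package main

// isTerminalStatus reports whether a job has finished, successfully or
// not. "completed" and "failed" are the statuses discussed in the post.
func isTerminalStatus(status string) bool {
	return status == "completed" || status == "failed"
}
```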

Redis Subscription

The Go service subscribes to a Redis pub/sub channel. The Python worker publishes; Go receives and broadcasts.

// from redis.go
func SubscribeToRedis(ctx context.Context, rdb *redis.Client, hub *Hub) {
	pubsub := rdb.Subscribe(ctx, "job_updates")
	defer pubsub.Close()

	for {
		msg, err := pubsub.ReceiveMessage(ctx)
		if err != nil {
			if ctx.Err() != nil {
				return // Context cancelled, shutdown
			}
			log.Printf("Redis receive error: %v", err)
			time.Sleep(time.Second)
			continue
		}

		update, err := parseJobUpdate(msg.Payload)
		if err != nil {
			continue
		}

		hub.Broadcast(update)
	}
}

Error handling is simple: log and retry after a brief pause. Redis is reliable; transient errors resolve themselves. One caveat worth knowing: pub/sub is fire-and-forget, so updates published while the subscriber is reconnecting are dropped—consistent with the at-most-once delivery this design accepts elsewhere.

The Python Side: Publishing Updates

The Celery worker publishes status updates at key points—when processing starts, completes, or fails.

# from tasks.py
def publish_job_update(job_id: str, status: str, results_url: str | None = None, error: str | None = None):
    """Publish job status update to Redis for SSE service."""
    payload = {
        "job_id": job_id,
        "status": status,
    }
    if results_url:
        payload["results_url"] = results_url
    if error:
        payload["error"] = error
    
    try:
        redis_client.publish("job_updates", json.dumps(payload))
    except Exception as e:
        logger.error(f"Failed to publish job update: {e}")

Called like this when a job completes:

# from tasks.py
publish_job_update(
    job_id=job_id,
    status="completed",
    results_url=f"/meeting-intelligence/meeting/{meeting.id}"
)

Browser Client

The browser uses the native EventSource API:

const eventSource = new EventSource(`/sse/events/${jobId}`);

eventSource.addEventListener('connected', (e) => {
    console.log('Connected, waiting for updates...');
});

eventSource.addEventListener('status', (e) => {
    const data = JSON.parse(e.data);
    updateProgressUI(data.status);
    
    if (data.status === 'completed') {
        window.location.href = data.results_url;
    }
});

eventSource.onerror = () => {
    // EventSource auto-reconnects
    console.log('Connection lost, reconnecting...');
};

EventSource handles reconnection automatically. If the connection drops, the browser retries after a short delay (about three seconds by default; the server can tune the interval with a retry: field in the stream). No additional code needed.

Deployment with Docker Compose

The SSE service runs as a separate container:

sse:
  build:
    context: ./meeting-intelligence/sse
    dockerfile: Dockerfile
  environment:
    - REDIS_URL=redis://redis:6379/0
    - PORT=8080
  depends_on:
    - redis

Traefik routes /sse/* to this service. The Python API and SSE service share Redis but are otherwise independent.

Lessons Learned

1. SSE over WebSockets for one-way updates

WebSockets add complexity—connection upgrades, ping/pong frames, reconnection logic. SSE is simpler and has native browser support. Use WebSockets when you need bidirectional communication.

2. Buffer channels to avoid blocking

A slow client shouldn't affect others. Buffered channels with non-blocking sends solve this elegantly.

3. Close connections on terminal status

Don't keep connections open indefinitely. When the job completes, close server-side. Reduces resource usage and signals to the client that updates are done.

4. Keep the service focused

The SSE service is ~300 lines. It does one thing. This makes it easy to reason about, test, and deploy independently.

Wrapping Up

This pattern—Redis pub/sub bridged to SSE via a lightweight Go service—works well for any scenario where backend processes need to push updates to browsers. It's simple, scalable, and uses battle-tested components.

You can see this pattern in action on the live demo — upload a meeting recording and watch the real-time progress updates.

Part of a series on building the Meeting Intelligence Platform. Next up: Secure Server-to-Server Authentication with RSA Signatures.

Building something similar? I help teams design and implement real-time systems, AI pipelines, and polyglot architectures. Get in touch to discuss your project.

[Hire Me on Upwork →] | [Get in Touch Directly →]