March 2026 / Engineering

Multi-Tenant Real-Time Events: Building Tenant-Scoped SSE with 75 Lines of Go

How we built a lightweight in-memory pub/sub broker that keeps real-time clinical events strictly isolated between practices.

James Morrison

CTO, Medelic

When a patient calls a GP surgery and our voice AI picks up, the reception staff need to see what is happening in real time. The call appears on their dashboard, updates stream in as the conversation progresses, and the triage result lands the moment the call ends. None of this can leak between practices. Here is how we solved both problems - live visibility and strict tenant isolation - with a single, small abstraction.

The Problem

Medelic is multi-tenant. Each GP practice is a tenant with its own patients, staff, and call data. When we stream events over Server-Sent Events (SSE), every subscriber must receive only the events that belong to their practice. A busy morning at one surgery must be invisible to every other surgery on the platform.

We also need the system to be resilient. A slow browser tab or a flaky mobile connection must never back up the event pipeline for other subscribers. And because these are clinical coordination events, not chat messages, we can tolerate the occasional dropped event - the UI can always poll for current state. What we cannot tolerate is a data leak across tenant boundaries.

The Broker

The core of the solution is an in-memory pub/sub broker that uses the tenant ID as the channel key. The entire implementation fits in 75 lines of Go:

type VoiceEventBroker struct {
    mu          sync.RWMutex
    subscribers map[string][]chan string // tenantID -> channels
}

func NewVoiceEventBroker() *VoiceEventBroker {
    return &VoiceEventBroker{subscribers: make(map[string][]chan string)}
}

func (b *VoiceEventBroker) Subscribe(tenantID string) (chan string, func()) {
    ch := make(chan string, 64)
    b.mu.Lock()
    b.subscribers[tenantID] = append(b.subscribers[tenantID], ch)
    b.mu.Unlock()
    return ch, func() {
        b.mu.Lock()
        defer b.mu.Unlock()
        subs := b.subscribers[tenantID]
        for i, c := range subs {
            if c == ch {
                b.subscribers[tenantID] = append(subs[:i], subs[i+1:]...)
                close(ch) // safe: Publish only sends while holding the read lock
                break
            }
        }
    }
}

func (b *VoiceEventBroker) Publish(tenantID string, event VoiceEvent) {
    data, err := json.Marshal(event)
    if err != nil {
        return // events are best-effort; skip anything that fails to marshal
    }
    b.mu.RLock()
    defer b.mu.RUnlock()
    for _, ch := range b.subscribers[tenantID] {
        select {
        case ch <- string(data):
        default: // drop for slow subscribers
        }
    }
}

Three design choices do most of the work here:

  • Tenant-keyed map - The subscriber map is keyed by tenant ID. A publish call only iterates subscribers for that tenant. There is no filtering step, no access-control check at delivery time - isolation is structural.
  • Buffered channels - Each subscriber gets a channel with a buffer of 64 events. This absorbs short bursts without blocking the publisher.
  • Non-blocking send - The select/default pattern drops events for subscribers whose buffers are full. A single slow client never blocks the pipeline for everyone else.

The SSE Handler

The HTTP handler that serves the SSE stream is equally minimal. It subscribes, loops, and cleans up on disconnect:

func (s *Service) VoiceCallEventsStreamSSE(
    ctx context.Context, tenantID, userID string,
    eventChan chan<- string, closeChan <-chan bool,
) {
    sub, unsubscribe := s.eventBroker.Subscribe(tenantID)
    defer unsubscribe()
    for {
        select {
        case event, ok := <-sub:
            if !ok {
                return
            }
            eventChan <- event
        case <-closeChan:
            return
        case <-ctx.Done():
            return
        }
    }
}

The framework handles the SSE framing, connection keepalive, and HTTP response flushing. The handler only needs to move events from the broker channel to the response channel and exit cleanly when the client disconnects.

Reusing the Broker Across Features

Once the broker exists, extending it to new event types costs almost nothing. Our patient portal - where patients track messages, appointment changes, and triage results - reuses the same broker and the same SSE infrastructure. Because events are already scoped by tenant, the portal stream is automatically isolated to the correct practice. Adding a new event type is a single Publish() call from whatever handler produces the event.

"The best infrastructure is the kind you extend by writing one line of application code. The broker has been running for months and we have never had to touch it."

Why Not Redis, NATS, or Kafka?

The obvious question. External message brokers are powerful, but they add operational complexity - another process to deploy, monitor, and secure. For our use case the tradeoffs favour simplicity:

  • Events are ephemeral. They coordinate the UI in real time; they are not a durable log. If a subscriber misses an event, the next poll or page refresh catches up.
  • We run a single application process per deployment. There is no cross-process fan-out requirement that would justify an external broker.
  • The subscriber count is bounded by the number of concurrent staff and patient sessions - hundreds, not millions.

If we ever scale to multiple application instances behind a load balancer, we can slot Redis pub/sub behind the same interface without changing any handler code. But until that day, fewer moving parts means fewer things to break at 2am.

The Takeaway

Multi-tenant real-time features sound complex, but the core problem - delivering events to the right subscribers without leaking across boundaries - has a small, elegant solution when you make the tenant ID structural rather than incidental. A map keyed by tenant, a buffered channel per subscriber, and a non-blocking send. That is the whole thing.