
Real-Time Architectures for Frontend Developers

Photo by erictompkins on Unsplash

Why real-time communication matters

In the modern web, users don't just prefer instant feedback, they expect it. Real-time communication has shifted from a "nice-to-have" to a core requirement, reducing the latency of the traditional request–response model to near zero.

Think about the experiences we rely on daily:

  • Collaborative Tools: Like editing a Google Doc simultaneously with a colleague.
  • Live Engagement: Seeing typing indicators or instant reactions in a chat.
  • Streaming: Video calls and live broadcasts that feel seamless.

The Problem: The Limits of Request–Response

The traditional HTTP model struggles here for one simple reason: communication is always client-initiated. The server cannot proactively push updates to the browser. This means if another user makes a change, your browser doesn't know about it until it asks.

The Cost of "Faking" It

To bridge this gap, developers historically relied on short polling, sending requests at fixed intervals (e.g., every 5 seconds) just to check for updates. This is inefficient for two reasons:

  1. High Overhead: Every single check triggers a full HTTP request/response cycle with heavy headers.
  2. Wasted Resources: Most of these requests return empty-handed, creating unnecessary load on both the server and the network.

The Solution

Real-time technologies like WebSockets and Server-Sent Events (SSE) solve this by establishing a persistent connection. Instead of repeatedly knocking on the door, the client opens the door once, allowing the server to push data the moment it becomes available.

What is Real-Time Communication on the Web?

At its core, Real-Time Communication (RTC) refers to the near-instant exchange of data where feedback feels immediate. It’s the magic that allows content to update dynamically on your screen without you ever hitting the "Refresh" button.

This marks a major shift from the early days of static web pages to the interactive applications we use today, where users are consuming and producing data simultaneously.

A Crucial Distinction: "Real-time" on the web is not "hard real-time" (like the microsecond precision needed for an airbag sensor). In web development, we aim for "soft real-time", where small delays are acceptable, provided the interaction feels instantaneous to the human eye.

The HTTP Bottleneck

The challenge lies in the foundation of the web itself: HTTP. By default, HTTP is a request–response protocol, meaning only the client (browser) can start a conversation. The server cannot speak unless spoken to. This raises the fundamental engineering problem: How does the server notify the client when something changes?

Framing the Problem

When choosing a real-time strategy, you aren't just picking a technology; you are answering three specific trade-off questions:

  • Initiation: Who starts the conversation?
  • Frequency: How often does the data actually change?
  • Cost: What is the overhead in terms of bandwidth and server processing?

These answers will dictate your architecture. Different strategies (like Polling, Server-Sent Events (SSE), or WebSockets) offer distinct solutions to these trade-offs, each with its own strengths and limitations.

Key real-time communication strategies

1. Polling: The "Are We There Yet?" Approach

Polling was the earliest workaround to bypass the passive nature of HTTP. Essentially, the client simulates a push by repeatedly asking the server for updates.

There are two main flavors:

A. Short Polling

The brute-force method. The client sets a timer (e.g., setInterval) and knocks on the server's door every few seconds.

  • How it works: Client requests → Server responds (with data or empty) → Repeat.
  • The Trade-off: While dead simple to implement, it is highly inefficient. Most requests return empty-handed, wasting bandwidth and server CPU on processing HTTP headers for no reason.
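
A short-polling loop fits in a few lines, which is exactly its appeal. This is a sketch: `fetchUpdates` stands in for whatever API call your app makes, and `startShortPolling` is an illustrative helper, not a library function.

```javascript
// Short polling: fire a full HTTP request on a fixed timer, whether or
// not anything changed. `fetchUpdates` is a hypothetical API call that
// resolves with new data, or null when there is nothing new.
function startShortPolling(fetchUpdates, onData, intervalMs = 5000) {
  const timer = setInterval(async () => {
    try {
      const data = await fetchUpdates(); // full request/response cycle each tick
      if (data) onData(data);            // most ticks come back empty-handed
    } catch (err) {
      console.error("poll failed:", err); // keep polling despite transient errors
    }
  }, intervalMs);
  return () => clearInterval(timer);      // caller stops the loop with this
}
```

Every tick pays the full HTTP cost, which is why this approach only makes sense when updates are rare and simplicity matters more than efficiency.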

B. Long Polling

A smarter approximation of a push.

  • How it works: The client sends a request, but the server doesn't close the connection immediately. It holds the line open until it has new data (or a timeout occurs). Once the client receives data, it immediately opens a new connection.
  • The Trade-off: This reduces empty responses and feels closer to real-time. However, keeping thousands of connections "hanging" consumes significant server memory and makes scaling difficult.
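
The client side of long polling is a loop that immediately re-requests after each response. In this sketch, `fetchHeld` is a hypothetical wrapper around a request the server holds open until data arrives or its timeout elapses:

```javascript
// Long polling: each request stays pending on the server until there is
// data (or a timeout). When it resolves, we process and immediately ask again.
async function longPoll(fetchHeld, onData, shouldStop = () => false) {
  while (!shouldStop()) {
    try {
      const data = await fetchHeld(); // resolves only when the server replies
      if (data) onData(data);         // null/undefined means the hold timed out
    } catch {
      // Network hiccup: back off briefly before reopening the connection
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```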

2. Server-Sent Events (SSE): The One-Way Street

Polling is often overkill when the client just needs to listen. If you are building a news feed or a stock ticker, you don't need to reply to the server; you just need to receive.

Server-Sent Events (SSE) standardize this by creating a single, long-lived HTTP connection where the server pushes data whenever it wants.

Why choose SSE?

  • Simplicity: It runs over standard HTTP. If you know how to handle a REST request, you can handle SSE.
  • Firewall Friendly: Since it's just HTTP/HTTPS, it rarely gets blocked by strict enterprise firewalls (unlike WebSockets).

⚠️ Note on Connections: In older HTTP/1.1 environments, browsers limit SSE connections to 6 per domain. This can be a bottleneck for users with multiple tabs open. Fortunately, HTTP/2 largely removes this limit thanks to connection multiplexing.
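
On the wire, SSE is just a text format: optional `id:` and `event:` lines, one or more `data:` lines, and a blank line terminating each event. A sketch of the server-side formatting (the helper name is illustrative); the browser side is the built-in EventSource API:

```javascript
// Build one SSE event in the text/event-stream wire format.
function formatSseEvent(data, { id, event } = {}) {
  let out = "";
  if (id !== undefined) out += `id: ${id}\n`;       // lets clients resume after reconnect
  if (event !== undefined) out += `event: ${event}\n`; // named event type
  for (const line of String(data).split("\n")) {
    out += `data: ${line}\n`;                       // multi-line payloads → multiple data: lines
  }
  return out + "\n";                                // blank line ends the event
}

// Browser side (sketch): EventSource parses this format and reconnects automatically.
// const es = new EventSource("/stream");
// es.onmessage = (e) => console.log(e.data);
```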

3. WebSockets: The Full-Duplex Highway

SSE is great for broadcasting, but modern apps (chats, multiplayer games, collaborative editing) are conversations, not monologues.

WebSockets (RFC 6455) solve this by upgrading the HTTP handshake into a persistent TCP connection that allows full-duplex communication.

  • Bidirectional: Both client and server can send data independently at any time.
  • Low Overhead: Once connected, data frames have minimal header weight compared to HTTP requests.
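
WebSockets give you raw frames, so most apps layer a small message protocol on top. A common (illustrative, not standardized) pattern is a JSON envelope with a `type` field for dispatching handlers; the `wss://` URL and handler names below are placeholders:

```javascript
// A tiny message envelope: both sides send JSON tagged with a "type"
// so incoming frames can be routed to the right handler.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload, ts: Date.now() });
}

function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== "string") throw new Error("missing message type");
  return msg;
}

// Browser usage (sketch):
// const ws = new WebSocket("wss://example.com/chat");
// ws.onopen = () => ws.send(encodeMessage("join", { room: "general" }));
// ws.onmessage = (e) => {
//   const { type, payload } = decodeMessage(e.data);
//   if (type === "chat") renderMessage(payload);
// };
```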

The Cost of Power

WebSockets are the gold standard for performance, but they come with a "complexity tax":

  1. Stateful: The server must track the state of every open connection.
  2. Infrastructure: Load balancing is harder (you need sticky sessions or a pub/sub Redis layer), and handling disconnections/reconnections requires robust custom logic.

Comparison & Trade-offs: Choosing Your Fighter

Short polling, long polling, SSE, and WebSockets all get the job done, but the "cost" of that job varies wildly. Choosing the right one is about balancing three main levers: Complexity, Performance, and Infrastructure.

1. The Complexity Tax

How hard is it to build and maintain?

  • Short Polling: The simplest approach. It relies on standard HTTP requests and works everywhere with minimal setup.
  • Long Polling: Slightly more complex. Backend logic must handle open requests and reconnections carefully to avoid missed updates.
  • Server-Sent Events (SSE): Medium complexity. The browser API (EventSource) is simple and handles auto-reconnection, but it requires specific server-side text formatting.
  • WebSockets: The most complex option. Developers must manually manage the connection lifecycle, reconnection logic, heartbeats, and error states.
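
To make that manual lifecycle work concrete: reconnection logic almost always ends up as exponential backoff with an upper bound. A minimal sketch (the base and cap values are illustrative defaults):

```javascript
// Delay grows 500ms, 1s, 2s, 4s, ... and is capped so a long outage
// doesn't push retries absurdly far apart.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Reconnect loop sketch (browser):
// let attempt = 0;
// function connect() {
//   const ws = new WebSocket(url);
//   ws.onopen = () => { attempt = 0; };            // healthy again: reset
//   ws.onclose = () => setTimeout(connect, backoffDelay(attempt++));
// }
```

In production you would usually add random jitter to the delay so thousands of clients don't reconnect in lockstep after a server restart.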

2. Latency

How quickly do updates actually reach the client?

  • Short Polling: High. Updates are delayed by the polling interval.
  • Long Polling: Medium. Near real-time, but brief delays occur during the "response → new request" cycle.
  • SSE & WebSockets: Low. Persistent connections allow immediate pushes. WebSockets generally offer the lowest overhead (no HTTP headers after handshake).

3. Scalability and Efficiency

Here is where the architecture significantly impacts your server costs:

  • Short Polling (Least Efficient): Generates high unnecessary load. Most requests return empty, wasting CPU and bandwidth on repeated HTTP headers.
  • Long Polling (Memory Heavy): Better than short polling, but still resource-intensive. Holding thousands of open HTTP connections consumes significant server memory.
  • SSE (Scales Well): Excellent for one-way streams. Crucially, with HTTP/2, multiple streams can be multiplexed over a single connection, bypassing the old browser limit of 6 connections per domain.
  • WebSockets (Complex Scaling): Highly efficient for data transfer, but difficult to scale horizontally. Because connections are stateful, adding more servers often requires a synchronization layer (like Redis Pub/Sub) to ensure messages reach users connected to different server nodes.
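
The shape of that synchronization layer can be modeled with a tiny in-memory bus. In production the bus would be Redis Pub/Sub or a message broker; the class below is purely illustrative of the fan-out pattern:

```javascript
// Fan-out bus: each server node subscribes to a channel, and a message
// published from any node reaches the handlers (sockets) on every node.
class MessageBus {
  constructor() {
    this.subscribers = new Map(); // channel → array of handlers
  }
  subscribe(channel, handler) {
    if (!this.subscribers.has(channel)) this.subscribers.set(channel, []);
    this.subscribers.get(channel).push(handler);
  }
  publish(channel, message) {
    for (const handler of this.subscribers.get(channel) ?? []) handler(message);
  }
}
```

Swap the in-memory map for Redis and each `subscribe` callback becomes "forward this message to the WebSockets connected to *this* node", which is how a chat message reaches users attached to different servers.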

4. Infrastructure Compatibility

  • Polling: Safe bet. Fully compatible with legacy systems and HTTP/1.1.
  • SSE: Works well with modern infrastructure (CDNs, Proxies) since it runs over standard HTTP.
  • WebSockets: Can face challenges with strict enterprise firewalls or proxies that don't support long-lived TCP upgrades.

5. Real-World Use Cases: When to Use What?

Short Polling

Best for: Low-frequency updates where simplicity outweighs performance.

  • Checking for new emails every few minutes.
  • Simple status checks on long-running background jobs.

Long Polling

Best for: Fallback scenarios.

  • Environments where WebSockets are blocked by firewalls.
  • Legacy applications that haven't been modernized yet.

Server-Sent Events (SSE)

Best for: One-way data streams (Server to Client).

  • Live news feeds and sports scores.
  • Stock market tickers.
  • Real-time system monitoring logs or dashboards.

WebSockets

Best for: High-frequency, bidirectional interaction.

  • Chat Applications: WhatsApp, Slack web.
  • Multiplayer Games: Where low latency is critical.
  • Collaborative Tools: Figma, Google Docs.
  • Live Proctoring: Streaming video/audio and data simultaneously.

Where to Use Each Strategy?

Forget the hype trends. Choosing the right strategy comes down to three pragmatic factors: your traffic patterns, latency tolerance, and infrastructure constraints.

1. WebSockets: The Heavy Lifter

The "Gold Standard" for interactive apps.

Use this when:

  • You need full 2-way conversations: Ideally suited for apps where client and server must exchange messages independently at any time.
  • Latency is critical: You need sub-100ms updates without the overhead of repeated HTTP handshakes.
  • You handle binary data: Unlike SSE, WebSockets natively handle images, audio, and video streams alongside text.

Typical Use Cases: Multiplayer games, collaborative editing (Figma/Google Docs), instant messaging, and high-frequency trading apps.

2. Server-Sent Events (SSE): The Broadcaster

The efficient specialist for one-way feeds.

Use this when:

  • Traffic is strictly one-way: You just need to push data from Server → Client.
  • You want "Batteries Included": You need automatic reconnection and event ID tracking out of the box (features you'd have to code manually with WebSockets).
  • You use HTTP/2 or HTTP/3: You can leverage multiplexing to bypass the old "6 connections per domain" limit, allowing multiple streams on a single connection.

Limitation: Remember that SSE supports text only. It does not handle binary data natively.
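
If you occasionally need to push a small binary payload over SSE anyway, one common workaround is base64 encoding it into the text stream. A sketch using Node's `Buffer` on the server (a browser client would decode with `atob` instead):

```javascript
// Encode raw bytes as base64 text so they can travel in an SSE data: line.
function encodeBinaryForSse(bytes) {
  return Buffer.from(bytes).toString("base64");
}

// Decode the base64 text back into bytes on the receiving side.
function decodeBinaryFromSse(text) {
  return new Uint8Array(Buffer.from(text, "base64"));
}
```

Base64 inflates the payload by roughly a third, so for anything beyond small blobs, WebSockets (which carry binary frames natively) remain the better fit.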

Typical Use Cases: Live news feeds, sports scores, social media notifications, server monitoring dashboards.

3. Long Polling: The Safety Net

The reliable fallback for tough environments.

Use this when:

  • Compatibility is king: You need to support legacy browsers or older HTTP/1.1 backends.
  • Firewalls are blocking you: Some strict enterprise proxies block WebSocket traffic but allow the standard HTTP requests used here.
  • Real-time is "nice to have": You can tolerate slightly higher latency and server overhead in exchange for universal access.

4. Specialized Protocols (Beyond the Basics)

Sometimes standard web protocols aren't the right fit. Consider these for specific niches:

  • gRPC: Best for Microservices. Ideal for high-performance backend communication using Protobufs and HTTP/2. Note: Requires gRPC-Web for browser support.
  • WebRTC: Best for Peer-to-Peer (P2P). The standard for direct audio/video calls between users. Crucial Note: You still need a signaling server (usually WebSockets) just to set up the initial handshake.
  • MQTT: Best for IoT. Optimized for low-bandwidth, unreliable networks where devices (sensors, smart home tech) need to talk efficiently.

What’s next?

In the next article, we’ll put these concepts into practice by building a small real-time chat using WebSockets. We’ll focus on connection lifecycle, message flow, and the trade-offs you only notice when working with real code.