
Real-Time or Nothing

Why polling fails in iGaming — and how SignalR plus RabbitMQ give operators genuine real-time visibility, not the illusion of it.

WebPrefer Engineering
December 2025
[Diagram: Polling vs Event-Driven Push. Polling model (30 s interval): balance changes → data is stale for up to 30 seconds → player sees the update late. Constant DB load, even with no changes. Event-driven push (SignalR + RabbitMQ): balance changes → RabbitMQ → SignalR → player sees the update in ~50 ms. Zero polling load; DB reads only on change.]
Real-time isn't a feature — it's a different class of product

A lot of gaming platforms describe themselves as real-time. What they usually mean is: we refresh the data every 30 seconds. That's not real-time. That's polling with a short interval — and in iGaming, the difference matters more than most people expect.

Why polling fails at scale

The intuitive model for keeping data fresh is polling: every N seconds, the client asks the server "has anything changed?" The server queries the database, returns the current state, and the client updates its view.

This works at low volume. It breaks in three ways as scale increases:

Performance cost. Every polling client generates constant read load on the database, regardless of whether anything changed. With thousands of concurrent sessions, this load is significant — and it peaks during exactly the moments when the system is already under most stress (large promotions, jackpot events, high-traffic periods).

Latency mismatch. A 30-second polling interval means a player's displayed balance can be 30 seconds stale. On a fast-paced slot session, that's many game rounds. On a withdrawal flow, it means the player sees the pre-withdrawal balance for up to 30 seconds after the transaction completes. Players notice.

Rule timing errors. The most serious failure mode involves the behavior engine. If behavioral rules fire on a schedule (every N minutes) rather than on events, the timing gap creates real operational problems.
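The performance point can be made concrete with a back-of-envelope calculation. This is a sketch under assumed numbers (10,000 concurrent sessions, a 30-second interval, roughly 2% of sessions seeing a balance change in a given minute), not figures from PAM:

```typescript
// Back-of-envelope comparison of database read load under polling vs
// event-driven push. All numbers are illustrative assumptions.

function pollingReadsPerMinute(sessions: number, intervalSeconds: number): number {
  // Every client queries on a timer, whether or not anything changed.
  return sessions * (60 / intervalSeconds);
}

function eventDrivenReadsPerMinute(sessions: number, changesPerSessionPerMinute: number): number {
  // Reads happen only in response to actual balance changes.
  return sessions * changesPerSessionPerMinute;
}

const sessions = 10_000;
console.log(pollingReadsPerMinute(sessions, 30));       // 20000 queries/min, constant
console.log(eventDrivenReadsPerMinute(sessions, 0.02)); // 200 reads/min, tracks activity
```

The polling load is fixed regardless of activity; the event-driven load is proportional to actual changes, which is why it stays low during quiet periods and degrades gracefully during busy ones.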

A concrete example

A player triggers a bonus threshold at 11:58pm. Their session ends at 11:59pm with no bonus in sight. The scheduled rule evaluation job runs at midnight, and the bonus credits at 12:00am. The player contacts support. The audit trail shows the bonus was applied correctly, two minutes after the threshold was hit. But the player's experience was that they triggered a condition and nothing happened. The platform did everything right, just too late.

The real-time architecture

PAM's real-time layer is built on two complementary technologies: SignalR for the player-facing connection, and RabbitMQ for internal event propagation.

They serve different purposes: SignalR maintains the persistent connection to each player's browser and pushes updates over it, while RabbitMQ carries events between backend services, so the wallet, the behavior engine, and the push layer all react to the same stream.

The combination means that the full path from "something happened" to "player sees it" involves no polling and no scheduled jobs. It's event-driven end to end.

The balance update path

When a game round completes and a payout is credited:

1. Game Provider: sends a payout callback to the PAM integration endpoint.
2. PAM.BL.Casino: processes the payout and updates the wallet balance atomically.
3. RabbitMQ: publishes a BalanceChanged event.
4. PAM.Service.Behave: evaluates behavior rules against the event (in parallel).
5. SignalR Hub: pushes the BalanceChanged event to the player's connection.
6. Player Browser: the balance updates without a page refresh or poll request.

The entire path — from provider callback to player seeing the updated balance — takes milliseconds. Not because the servers are fast, but because the architecture has no waiting: no poll timer to expire, no scheduled job to run, no batch to process.
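Stripped of the infrastructure, the path above can be sketched as plain function calls. This is a toy model under stated assumptions: the broker and hub become in-process calls, and none of the names are PAM's actual APIs:

```typescript
// Illustrative sketch of the payout path. The message broker and hub
// are modeled as direct in-process calls; names are hypothetical.

type BalanceChanged = { playerId: string; balance: number };

// "SignalR hub": one push callback per connected player.
const connections = new Map<string, (e: BalanceChanged) => void>();

// "RabbitMQ": fan the event out to every consumer of the stream.
function publishBalanceChanged(event: BalanceChanged): void {
  evaluateBehaviorRules(event);             // behavior engine consumes it
  connections.get(event.playerId)?.(event); // hub pushes it to the player
}

function evaluateBehaviorRules(event: BalanceChanged): void {
  // Rules are evaluated against the event itself, not on a schedule.
}

// Provider callback: update the wallet, then emit the event.
const wallet = new Map<string, number>();
function handlePayoutCallback(playerId: string, amount: number): void {
  const balance = (wallet.get(playerId) ?? 0) + amount;
  wallet.set(playerId, balance);
  publishBalanceChanged({ playerId, balance });
}
```

A browser that has registered a callback sees the new balance as a direct consequence of the payout: there is no timer anywhere in the path.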

SignalR at scale: the Redis backplane

By default, SignalR keeps connection state in the memory of the server that accepted the connection. When the player API is load-balanced across multiple instances, a push sent from one instance won't reach players connected to the others.

PAM solves this with a Redis backplane. When a balance update needs to be pushed, it's published to the Redis backplane. Every API instance subscribes to the backplane and forwards the message to any player connected to that instance. The player gets the update regardless of which instance they connected to.

This is also why Redis (Valkey/ElastiCache in production) is a hard dependency, not an optional performance enhancement. The real-time delivery model depends on it.
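The fan-out pattern can be sketched with an in-memory stand-in for the backplane. The classes below are illustrative of the pattern, not the real SignalR or Redis API:

```typescript
// Toy model of a backplane: each API instance knows only its local
// connections; publishing through the backplane rebroadcasts to every
// instance, which forwards to players connected locally.

class Backplane {
  private instances: ApiInstance[] = [];
  register(instance: ApiInstance): void {
    this.instances.push(instance);
  }
  publish(playerId: string, payload: string): void {
    for (const inst of this.instances) inst.deliverLocally(playerId, payload);
  }
}

class ApiInstance {
  private local = new Map<string, (payload: string) => void>();
  constructor(private backplane: Backplane) {
    backplane.register(this);
  }
  connect(playerId: string, onPush: (payload: string) => void): void {
    this.local.set(playerId, onPush);
  }
  // A push always goes via the backplane, never straight to local sockets.
  push(playerId: string, payload: string): void {
    this.backplane.publish(playerId, payload);
  }
  deliverLocally(playerId: string, payload: string): void {
    this.local.get(playerId)?.(payload);
  }
}
```

With two instances registered, a push initiated on instance A reaches a player whose WebSocket terminates on instance B, which is exactly the property the Redis backplane provides in production.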

Real-time in the back office

The same event stream that feeds player-facing real-time updates feeds the back-office portal. An operator watching a player's session in the back office sees balance changes, transaction events, and compliance flags as they happen — not after a page refresh.

This has a practical compliance value. When a suspicious pattern emerges — a sequence of deposits and withdrawals that suggests structuring, or a rapid escalation through bonus thresholds — the compliance team sees it as it develops, not after a batch report runs.

The compliance window

In regulated markets, the time between a compliance event and an operator's response to it matters. A suspicious pattern that's visible in real-time can be reviewed while the player is still active. The same pattern surfaced in a next-day report is historical — the intervention opportunity has passed. Real-time isn't a UX feature. For compliance, it's operationally meaningful.

Behavior rules fire on events, not schedules

BeAware, the behavior engine, consumes the same RabbitMQ event stream. This means behavioral rules fire the moment the triggering event occurs — not at the next job interval.

When a player crosses a deposit threshold that triggers a responsible gaming intervention, BeAware receives the DepositCompleted event and evaluates the rule immediately. The intervention is dispatched — a prompt, an email, a flag for review — while the player is still in session. Not the next morning. Not the next scheduled run. Now.

This changes what responsible gaming interventions can accomplish. An intervention that reaches a player while they're actively playing has a different opportunity for impact than one that arrives in their inbox the next day. The timing is not incidental — it's the point.
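The event-driven firing model can be sketched as follows. The event shape, threshold value, and function name are assumptions for illustration, not BeAware's actual contract:

```typescript
// Sketch of event-driven rule evaluation: the rule runs when the
// DepositCompleted event arrives, not at the next job interval.

type DepositCompleted = { playerId: string; amount: number };

const DEPOSIT_THRESHOLD = 500; // hypothetical responsible-gaming limit
const depositTotals = new Map<string, number>();

// Returns an intervention marker if this event crossed the threshold.
function onDepositCompleted(e: DepositCompleted): string | null {
  const before = depositTotals.get(e.playerId) ?? 0;
  const after = before + e.amount;
  depositTotals.set(e.playerId, after);
  // Fire exactly once, on the event that crosses the line,
  // while the player is still in session.
  if (before < DEPOSIT_THRESHOLD && after >= DEPOSIT_THRESHOLD) {
    return `intervention:${e.playerId}`;
  }
  return null;
}
```

Because the evaluation is a function of the event, the latency between threshold and intervention is the latency of the message broker, not the period of a cron schedule.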

Connection management and reliability

SignalR connections drop. Network conditions change. Mobile players move between WiFi and cellular. The real-time model needs to handle reconnection gracefully.

PAM's SignalR configuration includes automatic reconnect with exponential backoff, client-side connection state management, and a fallback to polling if WebSocket connections are unavailable in the player's environment. The polling fallback is significantly less efficient than the WebSocket path, but it ensures players on constrained networks still receive balance updates — slower, but reliably.

When a client reconnects after a dropped connection, it receives the current state as a fresh push — not a diff from the last known state, which may no longer be accurate.
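The reconnect schedule might look like the sketch below; the base delay and cap are illustrative values, not PAM's actual configuration:

```typescript
// Exponential backoff for reconnect attempts, with a cap so delays
// stop growing. Base and cap values are illustrative assumptions.

function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// First six attempts: 1s, 2s, 4s, 8s, 16s, then capped at 30s.
const schedule = Array.from({ length: 6 }, (_, i) => backoffDelayMs(i));
console.log(schedule); // [1000, 2000, 4000, 8000, 16000, 30000]
```

The cap matters: without it, a player on a flaky mobile connection would quickly reach delays long enough to defeat the point of reconnecting at all.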

Today

PAM's real-time layer handles continuous event streams across all active operators and brands simultaneously. Balance updates reach players within milliseconds of a transaction committing. Compliance events are visible to back-office operators as they occur. Behavioral rules fire on events, with no scheduling lag. The system generates no polling load — read queries happen in response to actual data changes, not on timers.

The constraint real-time imposes

Building a genuinely real-time system means accepting that your architecture has to be event-driven from the start. You can't retrofit real-time onto a polling-based system cleanly — you end up with a polling system that also has a WebSocket layer, and the two models conflict. The WebSocket pushes data that clients also poll for, creating inconsistency.

The event-driven model has to be the primary model. Polling — where it exists — is a fallback for degraded conditions, not the default path. That design decision shapes every layer: how the database is updated, how services communicate, how the back office is built, how the behavior engine fires rules.

In iGaming, real-time is not a feature. It's a constraint that shapes your architecture. The alternative isn't "slightly delayed real-time" — it's a different class of product.
