Back-office support agents in a gaming operation are information workers. Their job is to answer questions: Why was this withdrawal declined? What's the status of this KYC review? Why did this bonus not credit? How long has this player been depositing above the SOW threshold?
Every one of these questions has an answer somewhere in the platform. The problem is that "somewhere" means navigating eight different back-office screens, cross-referencing multiple modules, and knowing which audit trail to check. An experienced agent does this in three minutes. A newer agent does it in fifteen.
When we started thinking about AI integration, the first question was: what would an AI assistant actually need to know to be useful here? The answer was: the same things the agent needs to know — and exactly no more.
The obvious approach and why it fails
The obvious implementation is to give the AI assistant access to the database and let it query what it needs. Ask the assistant about a player's withdrawal status, and it runs a query against the banking tables. Ask about their KYC status, and it queries the KYC module.
This works right up until you ask who is allowed to see the answer. The back-office permission model exists for a reason. Different roles see different data. A support agent at level 1 doesn't have access to SOW review details. A KYC reviewer doesn't have access to the player's full payment history. An affiliate manager doesn't have access to player-level data at all. If the AI assistant has database access, it bypasses every one of these controls. An agent asking the AI about something they're not authorized to see would receive an answer the platform is specifically designed to prevent them from getting.
An AI that leaks context it shouldn't see isn't just a privacy problem. In a regulated gaming jurisdiction, it's a compliance failure — and potentially a licensing risk.
The permission-scoped context model
The approach we took inverts the usual model. Instead of giving the AI access to data and filtering after the fact, we construct the context the AI receives using the same permission resolver that governs the rest of the back office.
When an agent opens the AI assistant, the system builds a context object for that session. The context contains:
- The agent's identity and role
- Their current permission set (resolved from the RBAC model)
- The player they're currently viewing (if any), with data fields filtered to what the agent's role can see
- Recent relevant events from the platform — transactions, compliance events, communications — filtered by both recency and permission
The AI never receives data the agent can't see. It doesn't make queries. It receives a curated context that was built using the same authorization logic that governs the back-office UI. If a data field doesn't appear in the back-office view for that role, it doesn't appear in the AI context.
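To make the idea concrete, here is a minimal sketch of permission-scoped context construction. The role names, field sets, and function names are illustrative assumptions, not PAM's actual API; the point is that the same filter that shapes the UI shapes the AI context.

```python
# Hypothetical role-to-field mapping; a real system would resolve this
# from the RBAC model rather than a hard-coded table.
ROLE_VISIBLE_FIELDS = {
    "support_l1": {"player_id", "status", "recent_transactions"},
    "kyc_reviewer": {"player_id", "status", "kyc_documents"},
}

def filter_fields(record: dict, role: str) -> dict:
    """Keep only the fields this role is allowed to see."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

def build_ai_context(agent: dict, player_record: dict, recent_events: list) -> dict:
    """Build the session context with the same rules that govern the UI.

    Anything the agent's role cannot see never enters the context,
    so the model cannot leak it no matter what it is asked.
    """
    role = agent["role"]
    return {
        "agent": {"id": agent["id"], "role": role},
        "player": filter_fields(player_record, role),
        "events": [filter_fields(e, role) for e in recent_events],
    }
```

Note that filtering happens before the model is involved at all: there is no post-hoc redaction step to get wrong.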
The AI assistant can only answer questions about data that's in its context. Its context is built with the agent's permissions. Therefore the AI can only help with data the agent is already authorized to see. This isn't a limitation — it's the design. A tool that respects your authorization model is more useful than one that bypasses it, because operators can actually deploy it without compliance risk.
What the assistant can do
Within its permission-scoped context, the assistant handles multi-turn conversations naturally. An agent might ask:
"Why was this player's withdrawal declined last week?"
The assistant looks at the player's payment history in the context, finds the declined transaction, identifies the reason code (insufficient KYC documentation), and summarizes it in plain language. The agent follows up:
"Has their KYC been submitted since then?"
The assistant checks the KYC status in the context and responds. The conversation continues naturally, with the assistant maintaining awareness of what was discussed earlier in the session.
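A multi-turn session of this shape can be sketched as follows. The prompt format and class names are assumptions for illustration; the mechanism is simply that the scoped context plus the accumulated turn history are included in every model call.

```python
def build_prompt(context: dict, history: list, question: str) -> str:
    """Assemble one prompt: scoped context, prior turns, new question."""
    lines = [
        "You are a back-office assistant. Answer only from CONTEXT.",
        f"CONTEXT: {context}",
    ]
    for turn in history:
        lines.append(f"{turn['role'].upper()}: {turn['text']}")
    lines.append(f"AGENT: {question}")
    return "\n".join(lines)

class Session:
    """Holds the permission-scoped context and the conversation history."""

    def __init__(self, context: dict):
        self.context = context
        self.history = []

    def ask(self, question: str, model_call) -> str:
        # model_call stands in for the actual LLM API call.
        prompt = build_prompt(self.context, self.history, question)
        answer = model_call(prompt)
        self.history.append({"role": "agent", "text": question})
        self.history.append({"role": "assistant", "text": answer})
        return answer
```

Because the history rides along in each prompt, a follow-up like "Has their KYC been submitted since then?" resolves against what was already discussed.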
This is the practical value: an agent who would have spent five minutes navigating to four different screens can get the same information in thirty seconds. Multiply that by the volume of support queries a live operation handles, and the efficiency gain is material.
The model and its integration
PAM's AI assistant uses Google Gemini as the underlying model. The choice was driven by practical considerations: context window size (the player context objects can be large), API reliability, and the model's performance on structured data summarization tasks.
The integration is a dedicated module — PAM.AI — that owns the context construction logic, the prompt template, and the API interaction. The context is constructed fresh for each conversation start, using the agent's current session state and the player record they're viewing. Conversation history is maintained within the session and included in subsequent prompts to enable multi-turn reasoning.
The module is platform-aware but loosely coupled. It reads player state through the same service interfaces that the back-office UI uses — not directly from the database. This means the permission filtering happens in the service layer, exactly where it always happens. The AI module can't bypass it because it never goes around it.
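The layering can be sketched like this. The class names are assumptions (the source names the module PAM.AI but not its internals); what matters is that the AI module holds a reference to the service interface, never to the database, so the service layer's permission filtering applies to it automatically.

```python
class PlayerService:
    """Stand-in for the service interface the back-office UI uses.
    Permission filtering lives here, in the service layer."""

    VISIBLE = {"support_l1": {"player_id", "status"}}

    def __init__(self, store: dict):
        self._store = store  # underlying records, never exposed raw

    def get_player_view(self, agent: dict, player_id: int) -> dict:
        allowed = self.VISIBLE.get(agent["role"], set())
        record = self._store[player_id]
        return {k: v for k, v in record.items() if k in allowed}

class AIContextBuilder:
    """Illustrative AI-module shape: it reads through the service,
    not the database, so it cannot bypass permission checks."""

    def __init__(self, player_service: PlayerService):
        self._players = player_service  # no DB handle anywhere

    def context_for(self, agent: dict, player_id: int) -> dict:
        return {
            "agent": {"id": agent["id"], "role": agent["role"]},
            "player": self._players.get_player_view(agent, player_id),
        }
```

The design choice is that there is no second enforcement point to keep in sync: whatever the service withholds from the UI, it withholds from the AI.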
What the assistant cannot do
Scope matters. The assistant is explicitly not:
- An action-taking agent — it cannot modify player state, approve KYC documents, or process payments
- A data export tool — it cannot be prompted to produce bulk data summaries or player lists
- An escalation bypass — it cannot give agents access to information their role excludes
These constraints are enforced at the context construction level, not by prompting the model to "refuse" certain requests. Prompt-level refusals are not a security model. Structural context scoping is.
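A sketch of what "enforced at context construction" might look like in code. The specific checks and the event cap are assumed for illustration; the principle is that the model cannot be talked out of limits it never receives.

```python
MAX_EVENTS = 50  # illustrative cap, not a documented platform value

def enforce_scope(context: dict) -> dict:
    """Structural constraints applied when the context is built,
    not prompt-level refusals the model might ignore."""
    # Exactly one player per session: no player lists means no bulk export.
    if "player" not in context or isinstance(context["player"], list):
        raise ValueError("context covers exactly one player; no bulk data")
    # Bounded event history keeps the context from becoming a data dump.
    context["events"] = list(context.get("events", []))[:MAX_EVENTS]
    # No action-taking tools are ever registered, so there is nothing
    # the model could invoke to modify player state.
    context["tools"] = []
    return context
```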
Auditability
Every AI session is logged: the agent identity, the player context that was constructed (and therefore what data the AI had access to), the questions asked, and the responses given. If an agent asks the AI about a player who later raises a data access complaint, there's a complete record of what the AI knew, what it said, and who asked.
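An audit record of that shape might look like the sketch below. The field names are assumptions; the essential content is the agent identity, a snapshot (or verifiable hash) of the constructed context, and the question/answer pair, written once per exchange to an append-only log.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, context: dict, question: str, answer: str) -> dict:
    """One append-only log entry per exchange: who asked, what the
    model could see, and what it said."""
    context_json = json.dumps(context, sort_keys=True)
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        # Hash makes the snapshot tamper-evident; the snapshot itself
        # answers "what did the AI know?" after the fact.
        "context_sha256": hashlib.sha256(context_json.encode()).hexdigest(),
        "context": context_json,
        "question": question,
        "answer": answer,
    }
```

Storing the full context per session is a deliberate trade: it costs storage, but it is the only way to answer a data-access complaint with evidence rather than inference.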
This auditability requirement shaped the implementation more than the model choice or the prompt design. In regulated industries, "we have AI but we can't show you what it knew or said" is not a defensible position. The audit trail is part of the product.
The AI assistant is live in the PAM back office. Agents use it to surface player context faster, to understand the history behind a support case before speaking to the player, and to navigate the audit trail of complex multi-step events. The most common feedback is that it removes the "which screen do I need to check?" overhead that slows down newer agents. The most important fact about its deployment: compliance has no concerns about what data it can access, because it can only access what they already authorized.
The principle
AI in a regulated industry is not primarily a model problem. It's a context problem. The model is well-solved. The hard part is ensuring the context the model receives respects the same constraints as every other part of the system — and doing this structurally, not by hoping the model follows instructions it was told not to ignore.
If you're evaluating AI for a regulated back office, the questions to ask are not about model quality. They're about context scoping: Does the AI see only what the human it's assisting can see? Is that enforced in the authorization layer, or in a prompt? Is the session auditable? The answers to those questions determine whether the AI is deployable — not the benchmark scores.