Most AI features in back-office tools are pull-shaped. An operator opens an interface, asks a question or kicks off a task, watches the agent work, approves the side effects, and closes the session. The human is in the loop because the human started the loop.
Some work doesn't fit that shape. A weekly summary of at-risk players. A nightly sweep that flags KYC documents missing supporting evidence. A 3am pass over the day's failed withdrawals to draft a triage report for the morning shift. These tasks have no natural starting human. They have a clock.
The product question is whether the AI gets to participate in clock-shaped work at all — and if so, on what terms.
What changes when no operator is in the room
In an interactive AI Studio session, the operator is the safety layer. Plans are shown before execution. Write tools generate previews. Approval is required for each side-effecting step. The model can propose anything, but nothing happens that a human didn't see and authorize.
Strip the operator out, and three things the interactive session relied on are no longer there.
- No real-time approval gate. A preview shown to no one is just a log entry. If the agent generates a preview and waits for an approval that will never come, the schedule's purpose — running automatically — is defeated.
- No course correction. If the model misinterprets the goal at 03:00, no one is going to spot it and redirect the session. The misinterpretation just runs.
- No ad-hoc authority. An operator running a live session brings their permission set with them. A scheduled session has to inherit a permission set that someone configured days or weeks earlier — and trust that those permissions are still appropriate.
Each of these has to be answered at the design level, before the schedule is allowed to run.
The cron and the schedule record
The dispatch mechanism is plain: a cron task ticks every minute, queries the database for schedules whose NextRunAt has come due, and dispatches a session-requested event for each one. The event reaches an event listener that materializes a fresh AI Studio session with the schedule's stored goal as its starting prompt. From the session's perspective, this looks identical to an operator typing the goal — same planner, same tools, same persistence.
The schedule record itself is the interesting part. Stored alongside the goal and the cron expression are the user identity the session runs as, the skin scope, a label, and — crucially — a list of trusted actions: write-tool names that this schedule is pre-authorized to execute without per-step approval.
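In code, the two pieces might look something like this. A minimal Go sketch, not the platform's implementation: only NextRunAt and the list of fields above come from the system; the type names, the store and bus interfaces, and the event shape are assumptions for illustration.

```go
// Sketch of the schedule record and the per-minute dispatch tick.
// Field names beyond NextRunAt, and all interfaces, are illustrative.
package scheduler

import (
	"context"
	"time"
)

// Schedule is the stored record a dispatch tick reads.
type Schedule struct {
	ID             string
	Label          string
	Goal           string    // starting prompt for the materialized session
	CronExpression string    // when the schedule fires
	RunAsUserID    string    // identity the session runs as
	SkinScope      string    // skin the session is bound to
	TrustedActions []string  // write tools pre-authorized without per-step approval
	NextRunAt      time.Time // next due time, recomputed after every run
	Enabled        bool
}

// SessionRequested is the event the tick dispatches; a listener
// materializes a fresh AI Studio session from it.
type SessionRequested struct {
	ScheduleID string
	Goal       string
	RunAsUser  string
	SkinScope  string
}

type store interface {
	DueSchedules(ctx context.Context, now time.Time) ([]Schedule, error)
}

type bus interface {
	Publish(ctx context.Context, ev SessionRequested) error
}

// Tick runs once a minute: find schedules whose NextRunAt has come
// due and dispatch a session-requested event for each one.
func Tick(ctx context.Context, db store, events bus, now time.Time) error {
	due, err := db.DueSchedules(ctx, now)
	if err != nil {
		return err
	}
	for _, s := range due {
		if !s.Enabled {
			continue
		}
		ev := SessionRequested{
			ScheduleID: s.ID,
			Goal:       s.Goal,
			RunAsUser:  s.RunAsUserID,
			SkinScope:  s.SkinScope,
		}
		if err := events.Publish(ctx, ev); err != nil {
			return err
		}
	}
	return nil
}
```

The point of the sketch is the narrowness of the interface: the tick knows nothing about planning or tools. It only turns due rows into events.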
Everything else flows from those fields.
The trust list, in detail
A scheduled session can call any read-only tool the user identity is permitted to use. Reading data has no side effects — the worst case is a log entry that says "the agent looked at this." For writes, the rule is stricter: the only write tools a scheduled session can execute are those named explicitly in its trust list. Every other write tool is filtered out of the catalog the model sees.
This is a structural filter, not a prompt instruction. The scheduled session does not see GrantBonusTool in its tool list at all unless that tool was explicitly added to the schedule's trusted actions. The model can't decide to use it, can't infer that it might be useful, can't propose it. It simply doesn't exist from the session's point of view.
A scheduled session that says "you may not use the bonus tool" in its prompt is one prompt-injection or context-confusion away from doing it anyway. A scheduled session that has the bonus tool removed from its tool catalog cannot use it under any circumstances, because there is nothing for the model to call. The boundary is enforced at the catalog construction layer, where it cannot be talked out of.
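A sketch of that construction step, with invented names; the rule it encodes, reads pass through and writes must be named in the trust list, is the one described above:

```go
// Catalog construction for a scheduled session. The trust list is
// applied here, before the model sees anything. Names are illustrative.
package catalog

// Tool is a callable the planner can select.
type Tool struct {
	Name    string
	IsWrite bool // true for side-effecting tools
}

// BuildScheduledCatalog returns the tools a scheduled session may see:
// every read tool the run-as identity is permitted to use, plus only
// those write tools named in the schedule's trusted actions. A write
// tool absent from the trust list is not refused at call time; it is
// simply never in the catalog, so the model cannot propose it.
func BuildScheduledCatalog(permitted []Tool, trustedActions []string) []Tool {
	trusted := make(map[string]bool, len(trustedActions))
	for _, name := range trustedActions {
		trusted[name] = true
	}
	out := make([]Tool, 0, len(permitted))
	for _, t := range permitted {
		if t.IsWrite && !trusted[t.Name] {
			continue // e.g. GrantBonusTool vanishes unless explicitly trusted
		}
		out = append(out, t)
	}
	return out
}
```

Everything downstream of this function, the planner, the replanner, the executor, inherits the boundary for free.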
Even within trusted actions, the preview generation step still runs. The session generates the preview, records it, then executes the action; both records are persisted. The reason the preview is still generated, even when no one is watching, is that the audit trail of a scheduled run needs to contain the same information as an interactive run — what the action was, what it touched, whether it was reversible. A regulator reading the audit log a year later should not be able to tell, from the completeness of the record, whether a particular write was approved live or executed against a trust list.
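Sketched under the same assumptions as before, the trusted-write path is the interactive path with the approval gate removed and the record-keeping kept intact:

```go
// Execution of a trusted write in a scheduled session: the preview is
// generated and persisted even though no one approves it, then the
// action runs and its result is persisted too. Types are illustrative.
package run

import "context"

type WriteTool interface {
	Name() string
	Preview(ctx context.Context, args map[string]any) (Preview, error)
	Execute(ctx context.Context, args map[string]any) (Result, error)
}

type Preview struct {
	Summary    string // what the action is and what it touches
	Reversible bool   // recorded for the audit trail
}

type Result struct{ Detail string }

type audit interface {
	RecordPreview(ctx context.Context, tool string, p Preview) error
	RecordResult(ctx context.Context, tool string, r Result, err error) error
}

// RunTrustedWrite mirrors the interactive flow minus the approval gate:
// preview, persist, execute, persist.
func RunTrustedWrite(ctx context.Context, t WriteTool, args map[string]any, log audit) error {
	p, err := t.Preview(ctx, args)
	if err != nil {
		return err
	}
	if err := log.RecordPreview(ctx, t.Name(), p); err != nil {
		return err
	}
	r, err := t.Execute(ctx, args)
	if logErr := log.RecordResult(ctx, t.Name(), r, err); logErr != nil {
		return logErr
	}
	return err
}
```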
What schedules are actually used for
The pattern works best for tasks with three properties: the goal is stable (it doesn't depend on real-time context an operator would supply), the side effects are bounded (the trust list can enumerate them), and the value of running automatically beats the value of an operator approving each instance.
The schedules our early adopters configured are concentrated in a narrow band:
- Internal reporting. Compose a daily or weekly summary and post it to a back-office channel. Trust list: the message-posting tool, scoped to internal channels. No customer-facing communication.
- Triage flagging. Identify accounts that match a risk pattern (large unfilled SOW gaps, KYC documents about to expire, RG threshold proximity) and add an internal flag with a generated note. Trust list: the flagging tool. Flags are visible to operators on next login; nothing customer-facing happens automatically. (A configuration sketch of this pattern follows the list.)
- Cohort metrics. Compute a cohort metric (new-deposit retention, bonus utilization, churn rate by segment) and store the result in a dashboard table. Trust list: the metric-write tool. No player-level state changes.
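As a concrete illustration, the triage-flagging pattern might be configured like this, reusing the hypothetical Schedule record from the earlier sketch; the tool name AddAccountFlagTool is invented for the example:

```go
// A hypothetical triage-flagging schedule: nightly, one trusted write
// tool, nothing customer-facing. All values are illustrative.
var triage = Schedule{
	ID:             "sched-kyc-triage",
	Label:          "Nightly KYC expiry triage",
	Goal:           "Find accounts with KYC documents expiring within 14 days and add an internal flag with a short note for the morning shift.",
	CronExpression: "0 3 * * *", // 03:00 daily
	RunAsUserID:    "svc-risk-triage",
	SkinScope:      "skin-main",
	TrustedActions: []string{"AddAccountFlagTool"}, // the flagging tool, and nothing else
	Enabled:        true,
}
```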
What we have not seen — and what we don't recommend configuring — is a schedule that grants bonuses, approves payments, or sends customer-facing communication on its own. Those actions are within the AI's capabilities, but in a regulated operation they are not within the AI's autonomy budget. The operator-in-the-loop model exists for them, and it works.
It would be technically possible to add high-impact write tools to a trust list and let a 3am cron grant bonuses, send customer emails, or modify limits. The platform won't stop you. It also won't stop you from regretting it. The trust list is sized to what an operator would routinely approve without thinking — internal flags, internal reports, internal metrics. If a write would make an operator pause for thirty seconds during the day, it does not belong in a trust list at night.
What happens when a schedule misbehaves
The visibility model is the same as for interactive sessions. Every scheduled run produces a complete session record: the plan, the steps, the tool calls and their args, the previews generated for write steps, the side effects executed, and the result of each. Failures are recorded with their error context. The next-run timestamp is updated even if the run failed; schedules don't get into infinite-retry loops.
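A sketch of that completion step; the store interface and the cron helper are assumptions, the unconditional advance of NextRunAt is the behavior that prevents retry loops:

```go
// Completion handling for a scheduled run: persist the outcome, then
// advance NextRunAt whether or not the run succeeded, so a failing
// schedule fires again at its next slot instead of retrying in a loop.
package run

import (
	"context"
	"time"
)

type runStore interface {
	SaveRunRecord(ctx context.Context, scheduleID string, failed bool, errContext string) error
	SetNextRunAt(ctx context.Context, scheduleID string, next time.Time) error
}

// nextFire stands in for a real cron parser (any cron library offers
// an equivalent); this placeholder just advances one day.
func nextFire(cronExpr string, after time.Time) time.Time {
	_ = cronExpr // a real implementation would parse this
	return after.Add(24 * time.Hour)
}

func FinishRun(ctx context.Context, db runStore, scheduleID, cronExpr string, runErr error) error {
	failed := runErr != nil
	errCtx := ""
	if failed {
		errCtx = runErr.Error() // failures keep their error context
	}
	if err := db.SaveRunRecord(ctx, scheduleID, failed, errCtx); err != nil {
		return err
	}
	// Advance even on failure: no infinite-retry loops.
	return db.SetNextRunAt(ctx, scheduleID, nextFire(cronExpr, time.Now()))
}
```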
Operators see scheduled runs in the same audit interface as interactive runs. They can replay a session — re-execute it against current data to see what would happen now — or fork it into an interactive session if they want to take it in a different direction manually.
If a schedule is producing runs that look wrong, the response is to disable the schedule and either rewrite the goal or tighten the trust list. There's no concept of a "rogue agent" that needs to be intercepted in flight, because nothing the agent can do is outside its trust list, and the trust list was authored by a human.
The replan question, again
Replanning works in scheduled sessions, but conservatively. If a query step returns unexpected results — say, the cohort the schedule was supposed to summarize turns out to be empty — the agent can replan to produce a sensible result (a "no records this period" report, for example). What it cannot do is replan into a write tool that wasn't in its trust list. The replanning model is still bound by the catalog, and the catalog is still bound by the trust list.
This means scheduled agents can adapt to surprising data, but cannot adapt to surprising authority. That asymmetry is intentional. Adapting to data is the kind of flexibility that makes scheduled agents useful. Adapting to authority is the kind of flexibility that turns them into a compliance problem.
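In code terms the guard is small, because the heavy lifting already happened at catalog construction; a minimal sketch with invented names:

```go
// Replan validation: a revised plan is accepted only if every step's
// tool exists in the session's already trust-filtered catalog. The
// replanner never sees untrusted write tools, so this check is a
// belt-and-braces guard, not the primary boundary.
package plan

import "fmt"

type PlanStep struct {
	ToolName string
	Args     map[string]any
}

func ValidateReplan(steps []PlanStep, catalog map[string]bool) error {
	for _, s := range steps {
		if !catalog[s.ToolName] {
			// Adapt to surprising data, never to surprising authority.
			return fmt.Errorf("replan rejected: tool %q is not in this session's catalog", s.ToolName)
		}
	}
	return nil
}
```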
PAM's scheduled AI sessions run on a per-minute cron tick, dispatching against schedules whose next-run time has come due. Each schedule carries a goal, a user identity, a skin scope, and an explicit trust list of write tools. Sessions execute exactly the same planner-driven flow as interactive sessions — same tools, same previews, same audit trail — with the operator's role replaced by a structural filter on the tool catalog. The outputs flow into back-office channels, dashboards, and internal flags where operators see them on their next login. Customer-facing actions remain on the interactive path.
The principle
Autonomy and accountability are not opposites; they're a tradeoff with a tunable in the middle. The tunable is the trust list. A schedule with no trusted actions is a query-only agent — useful for surfacing patterns, but unable to act on them. A schedule with a small, well-chosen trust list is an autonomous worker for a specific kind of work. A schedule with an unbounded trust list is a way to bypass the controls that everything else in the system depends on.
The interesting design decision in scheduled AI is not the cron, the dispatcher, or the model. It's the answer to a single question: what is this schedule allowed to do, and how is that enforced? If the answer is "it depends what the operator approves at runtime," it isn't a schedule. If the answer is "whatever the model decides," it isn't safe. The right answer is a list, written in advance, that the system enforces structurally — and an audit trail that proves it did.