PRISM: Windshield, Not Rearview Mirror
Interactive demonstration of predictive adaptation and trajectory-aware behavior.
Content Note
This chapter demonstrates predictive safety in a relationship conflict context. It includes discussion of coercive control patterns, escalation indicators, and safety planning.
The scenario shows how PRISM detects risk trajectories before explicit danger is stated — and responds with calibrated, agency-preserving support.
Rearview Mirror vs. Windshield
What changes when a system can see where conversations are heading, not just where they've been.
Most AI adaptation works like a rearview mirror. It sees what just occurred and adjusts based on that. User seemed confused? Simplify. User seemed bored? Add complexity. User disengaged? Try something different.
The problem is timing. By the time confusion is visible, frustration has already set in. By the time boredom is obvious, attention has already wandered. By the time disengagement is clear, the user may already be gone.
Reactive adaptation is damage control. It responds to problems after they've become problems.
| Rearview Mirror | Windshield (PRISM) |
|---|---|
| Flags keywords | Models trajectory |
| Responds to explicit risk | Predicts implicit escalation |
| Treats turns independently | Connects turns across time |
| Adjusts after struggle | Adjusts before struggle |
| Waits for crisis | Intervenes before crisis |
| Same approach until failure | Adapts to predicted state |
| Optimization is hidden | All objectives inspectable |
PRISM tracks five behavioral dimensions simultaneously: engagement trajectory, energy patterns, cognitive load, emotional signals, and skill progression. Each dimension has its own prediction model, but the interactions between dimensions matter most.
A student with declining energy but stable engagement is different from a student with declining engagement but stable energy. The first needs a break. The second needs a different approach.
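The contrast above can be sketched as a small cross-dimension rule. This is a minimal illustration, not PRISM's actual model: the class names, thresholds, and action labels are all hypothetical, and the real system presumably uses learned prediction models per dimension rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class DimensionTrend:
    """Current level (0-1) and per-turn slope for one behavioral dimension."""
    level: float
    slope: float  # negative = declining

@dataclass
class LearnerState:
    """The five dimensions PRISM tracks (names from the text; fields hypothetical)."""
    engagement: DimensionTrend
    energy: DimensionTrend
    cognitive_load: DimensionTrend
    emotional: DimensionTrend
    skill: DimensionTrend

def recommend(state: LearnerState) -> str:
    """The interaction between dimensions matters more than any single trend."""
    def declining(d: DimensionTrend) -> bool:
        return d.slope < -0.05

    def stable(d: DimensionTrend) -> bool:
        return abs(d.slope) <= 0.05

    if declining(state.energy) and stable(state.engagement):
        return "suggest_break"    # tired but still interested
    if declining(state.engagement) and stable(state.energy):
        return "change_approach"  # rested but losing interest
    return "continue"
```

The point of the sketch: neither `energy.slope` nor `engagement.slope` alone determines the intervention; only their combination does.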
PRISM projects across three horizons: next turn (high confidence), three turns ahead (moderate confidence), and five turns ahead (lower confidence, wider uncertainty). Predictions become less certain over longer horizons, and the system quantifies that uncertainty rather than hiding it.
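One simple way to realize "wider uncertainty at longer horizons" is a linear projection whose interval grows with the horizon. This is an assumed sketch; the horizon values (1, 3, 5) come from the text, but the linear model and the `base_sigma` parameter are illustrative stand-ins.

```python
def project(level: float, slope: float,
            horizons: tuple[int, ...] = (1, 3, 5),
            base_sigma: float = 0.04) -> list[tuple[int, float, float]]:
    """Project one behavioral dimension forward.

    Returns (horizon, estimate, sigma) triples; sigma widens with the
    horizon, so a 5-turn prediction carries a larger interval than a
    next-turn prediction.
    """
    projections = []
    for h in horizons:
        estimate = max(0.0, min(1.0, level + slope * h))  # clamp to [0, 1]
        sigma = base_sigma * h  # uncertainty grows with lookahead
        projections.append((h, estimate, sigma))
    return projections
```

A system that reports `sigma` alongside each estimate "knows how confident it is" in the sense the text describes: downstream logic can weight the 5-turn forecast less than the next-turn one.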
Predictive systems without transparency become manipulation systems. If an AI is adjusting its behavior based on predictions about you, and you can't see what it's predicting or why it's adjusting, the power imbalance is dangerous.
PRISM's principle: every prediction is inspectable. Every adaptation is explainable. Every objective is declared. Users can always ask what PRISM is predicting, what objectives it's optimizing for, and why it made specific adjustments.
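The inspectability principle suggests that every adaptation should carry its own explanation as data. A minimal sketch of such a record, assuming a simple append-only log (the field names and `AdaptationLog` API are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Adaptation:
    prediction: str    # what the system expects, e.g. "engagement drop in ~3 turns"
    objective: str     # the declared objective being optimized
    action: str        # the adjustment actually made
    confidence: float  # how certain the prediction was, 0-1

class AdaptationLog:
    """Append-only record so users can always ask 'why did you adjust?'"""
    def __init__(self) -> None:
        self._records: list[Adaptation] = []

    def record(self, a: Adaptation) -> None:
        self._records.append(a)

    def explain(self) -> list[str]:
        return [
            f"Predicted {a.prediction}; optimizing for {a.objective}; "
            f"adjusted: {a.action} (confidence {a.confidence:.0%})"
            for a in self._records
        ]
```

The design choice worth noting: the explanation is generated from the same record that drove the adaptation, so it cannot silently diverge from what the system actually did.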
SafetyMesh has absolute veto authority over any adaptation PRISM recommends. Prediction doesn't override safety. Ever.
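Architecturally, an absolute veto is simplest to guarantee when the safety check sits between the proposal and its execution, with no confidence-based bypass. A hedged sketch of that shape (function names and the fallback label are illustrative, not PRISM's actual interface):

```python
from typing import Callable

def gated_adapt(proposed: str, confidence: float,
                safety_veto: Callable[[str], bool],
                fallback: str = "safe_default_response") -> str:
    """PRISM proposes; SafetyMesh disposes.

    The veto is checked unconditionally: note that `confidence` never
    appears in the veto branch, so no prediction, however confident,
    can override safety.
    """
    if safety_veto(proposed):
        return fallback
    return proposed
```

Keeping the veto as a separate callable also makes the guarantee testable in isolation: one can assert that a vetoing `safety_veto` blocks every proposal regardless of confidence.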