Our Approach to Trust
Trust in AI systems can’t be demanded. It has to be earned through architecture - through systems that can be examined, audited, and verified.
The Cognitive OS is built on a simple principle: if you can inspect everything, you can trust appropriately.
This page explains how we handle the things that matter most: your data, AI memory, safety decisions, and the transparency of AI reasoning.
The Cognitive OS is built on four commitments: rules that hold, transparency on demand, memory that persists, and identity that endures. Here is how we deliver on each.
Data & Privacy
What We Store
The Cognitive OS stores conversation transcripts and structured data. This is required for memory, adaptation, and context to work. Here is what that means:
Conversation history - We store full conversation transcripts so you can access your current and past conversations. This is your chat history.
Memory - Chronicle, a separate system, builds structured memory over time by tracking what is significant across your interactions. This is how the Cognitive OS remembers what matters, not just what was said most recently.
Profile data - ProfileForge stores structured inferences about your preferences, communication style, and expertise level based on your interactions.
What we do not do - We do not share your data across users. We do not sell your data. We do not use your data for advertising.
For educational deployments involving minors, memory and data handling are governed by institutional agreements and applicable student privacy laws.
How data is stored depends on how the Cognitive OS is used:
Multi-turn conversations - Conversation transcripts are stored so users can access their current and past conversations. Separately, Chronicle and ProfileForge build structured memory and profile data from these interactions, tracking what is significant over time. This is what enables the Cognitive OS to maintain context, build memory, and deliver on its four commitments.
Single API calls - For one-time requests, no conversation data is retained beyond the response unless explicitly configured.
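The retention distinction above can be sketched as a small configuration check. This is illustrative only: the field names and defaults here are assumptions, not the actual deployment configuration keys.

```python
from dataclasses import dataclass

# Hypothetical retention settings; real configuration keys are
# deployment-specific and not part of any public API.
@dataclass
class RetentionConfig:
    mode: str = "conversation"          # multi-turn transcripts are stored
    retain_single_calls: bool = False   # one-off API calls kept only if True

def should_retain(config: RetentionConfig, is_single_call: bool) -> bool:
    """Return True when the interaction's data is stored after the response."""
    if is_single_call:
        return config.retain_single_calls
    return config.mode == "conversation"

cfg = RetentionConfig()
print(should_retain(cfg, is_single_call=True))   # single call: not retained
print(should_retain(cfg, is_single_call=False))  # multi-turn: retained
```

The point of the sketch: retention is a property of the interaction mode, and single API calls default to no retention unless explicitly configured otherwise.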
Your Rights
You can always:
- View what the system knows about you (/profile)
- Edit any stored information (/profile edit)
- Delete your data (/profile delete)
- Export your data (/profile export)
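To make the rights above concrete, here is a minimal sketch of how such commands could map onto a stored profile. The dispatcher, handlers, and storage shape are invented for illustration; the real implementation is internal to each deployment.

```python
import json

# Illustrative only: maps the /profile commands listed above onto a
# simple in-memory dict standing in for stored profile data.
def handle_profile_command(command: str, store: dict) -> str:
    if command == "/profile":
        return f"Stored fields: {sorted(store)}"
    if command == "/profile delete":
        store.clear()
        return "All profile data deleted."
    if command == "/profile export":
        return json.dumps(store)
    return "Unknown command."

store = {"style": "concise", "expertise": "advanced"}
print(handle_profile_command("/profile", store))
print(handle_profile_command("/profile delete", store))
print(len(store))  # deletion leaves nothing behind
```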
Memory & Chronicle
The section above explains what we store. Here is how memory works and how you control it.
How Memory Works
Chronicle stores structured significance, not raw transcripts. This means:
- What matters gets remembered (goals, decisions, preferences)
- What doesn’t matter fades (casual exchanges, routine interactions)
- Memory is prioritized by importance, not just recency
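The prioritization described above can be sketched as importance-weighted retention. The scoring and capacity model here are assumptions for illustration; Chronicle's actual significance model is internal.

```python
import heapq

# Sketch of significance-based retention: each memory carries an
# importance score, and when capacity is exceeded, the least
# important entries fade first, regardless of recency.
def consolidate(memories: list[tuple[float, str]], capacity: int) -> list[str]:
    """Keep the `capacity` most important memories."""
    return [text for _, text in heapq.nlargest(capacity, memories)]

memories = [
    (0.9, "User's goal: ship v2 by March"),
    (0.2, "Said 'thanks' after a reply"),
    (0.8, "Prefers code examples over prose"),
    (0.1, "Asked about the weather"),
]
print(consolidate(memories, capacity=2))
```

Note what survives: the goal and the preference, not the most recent exchange. That is the difference between remembering what matters and remembering what was said last.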
Memory Controls
You decide what is remembered. Commands available in all Cognitive OS solutions:
| Command | What It Does |
|---|---|
| /memory | See what the system remembers |
| /memory clear | Start fresh (clear session memory) |
| /profile delete | Remove all stored profile data |
Exact controls may vary by deployment and solution configuration, but the underlying capabilities are always present.
The “Non-Creepy” Line
We design memory to be useful without being invasive. The test: if you can examine everything the system knows about you, and delete it, it is not creepy.
That is why /profile exists. Full transparency into what is stored. Always.
Safety & SafetyMesh
How Safety Decisions Work
SafetyMesh provides graduated, context-aware protection - not binary blocking.
160 states, not 2. Traditional safety is block/allow. SafetyMesh operates across 16 risk domains, each with 10 sensitivity levels. This enables nuanced responses: educational discussions about difficult topics are supported; actual harmful requests are contained.
Context matters. The same content might be appropriate in one context and not another. SafetyMesh evaluates full context, not keywords.
Guidance over blocking. When possible, SafetyMesh guides toward safe alternatives rather than simply refusing. “I can’t help with that” is a last resort, not a first response.
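A toy model of graduated evaluation, assuming one of the 16 domains with a sensitivity level from 1 to 10. The thresholds, the educational-context adjustment, and the three outcomes are invented for this example; SafetyMesh's actual evaluation is richer and internal.

```python
# Illustrative graduated evaluation: sensitivity levels map to a
# spectrum of responses rather than a binary block/allow.
def evaluate(domain_level: int, context_is_educational: bool) -> str:
    """Map a domain's sensitivity level to a graduated response."""
    # Assumed adjustment: educational context lowers effective risk.
    effective = domain_level - (2 if context_is_educational else 0)
    if effective <= 3:
        return "proceed"
    if effective <= 7:
        return "guide"    # steer toward a safe alternative
    return "contain"      # refusal is the last resort

print(evaluate(5, context_is_educational=True))   # educational framing helps
print(evaluate(9, context_is_educational=False))  # genuinely high risk
```

The structure is the point: context shifts the outcome, and "guide" sits between "proceed" and "contain" so that refusal really is the last resort.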
Safety decisions are made architecturally through SafetyMesh. As we develop additional transparency tooling, safety inspection capabilities will be documented here.
The Never-Abandon Commitment
If you’re ever in genuine crisis, the system will never just cut off the conversation. SafetyMesh includes a never-abandon protocol: stay present, provide resources, maintain connection until appropriate support is available.
Safety means protection, not abandonment.
Transparency & AuditLens
Every Decision Is Explainable
The Cognitive OS includes AuditLens - a self-reporting introspection layer that makes AI reasoning visible.
Ask “why did you respond that way?” and get a real answer:
- What the system understood about your request
- What options it considered
- Why it chose the approach it did
- What tradeoffs it navigated
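The four elements above suggest the shape of an introspection record. AuditLens's actual schema is not public, so the field names below are assumptions used to show what "a real answer" contains.

```python
from dataclasses import dataclass, field

# Hypothetical decision record: one entry per response, capturing the
# four elements a user can ask about.
@dataclass
class DecisionRecord:
    understood: str
    options: list[str] = field(default_factory=list)
    chosen: str = ""
    tradeoffs: list[str] = field(default_factory=list)

    def explain(self) -> str:
        return (
            f"Understood: {self.understood}\n"
            f"Considered: {', '.join(self.options)}\n"
            f"Chose: {self.chosen}\n"
            f"Tradeoffs: {', '.join(self.tradeoffs)}"
        )

record = DecisionRecord(
    understood="a request for a concise summary",
    options=["bullet list", "narrative paragraph"],
    chosen="bullet list",
    tradeoffs=["brevity over nuance"],
)
print(record.explain())
```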
Transparency by Architecture
This isn’t logging bolted on after the fact. Transparency is architectural - built into how the system reasons, not added as an afterthought.
Governance Hierarchy
When systems interact, clear hierarchy prevents conflicts:
- SafetyMesh - Safety can override any other system. Non-negotiable.
- Truth Verification - No system can fabricate data or claim false certainty.
- KnowledgeKernel - Positions constrain claims before expression.
- ORCHESTRA - Coordination operates within safety and truth bounds.
- ProfileForge + PRISM - Personalization advises but doesn’t override governance.
PersonaForge enforces rules architecturally: they are built into the persona itself, not bolted onto the prompt, so they hold as structure rather than suggestion.
This hierarchy is structural, not policy. It can’t be prompted around.
These governance mechanisms operate independently of the underlying LLM and apply consistently across model providers.
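The hierarchy above can be sketched as a fixed-priority chain where the first veto wins. The check functions below are placeholders standing in for the real systems, not actual APIs.

```python
# Illustrative fixed-priority governance chain, ordered as in the
# hierarchy above: earlier layers can override everything after them.
def govern(response: str, checks: list) -> str:
    """Apply governance layers in priority order; the first veto wins."""
    for name, check in checks:
        verdict = check(response)  # None means "no objection"
        if verdict is not None:
            return f"{name}: {verdict}"
    return response  # all layers passed; response goes through unchanged

# Placeholder checks standing in for the real layers.
checks = [
    ("SafetyMesh", lambda r: "contained" if "harmful" in r else None),
    ("TruthVerification", lambda r: "flagged" if "certainly" in r else None),
]
print(govern("harmful content", checks))
print(govern("a normal answer", checks))
```

Because the chain is ordered and the first veto terminates evaluation, lower layers (coordination, personalization) never get a chance to override a safety or truth decision. That is what "structural, not policy" means in code terms.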
Questions?
If you have questions about how we handle trust, data, or governance, we’re happy to discuss.
For our formal privacy policy (cookies, site data, legal requirements), see Privacy Policy.