ProfileForge: You Own Your Profile
Interactive demonstration of transparent user modeling and profile control.
This is a representative profile panel showing the kinds of information ProfileForge tracks and how it presents them to users. Everything shown was either declared by the student directly or inferred from their behavior during sessions, with confidence levels visible on every field.
Nothing is hidden. Nothing was scraped from outside the conversation. Nothing was purchased from a data broker.
If you can examine everything the system knows about you, and delete it, it's not creepy. It's a tool.
Notice what the profile does not contain: no medical diagnoses, no political beliefs, no religious views, no financial assessments, no psychological labels. These categories are architecturally prohibited. The system cannot infer them, even if adjacent signals are present.
A student who mentions anxiety about tests does not receive a mental health classification. They receive a preference note: "benefits from low-pressure framing during assessments."
Some things must be stated, not guessed. ProfileForge draws a line and enforces it by architecture, not policy.
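As a sketch of what "enforced by architecture, not policy" could mean in practice: the denylist lives in the only write path to the profile store, so no inference engine upstream can persist a protected trait, whatever it concluded. Everything below (PROTECTED_CATEGORIES, ProfileStore, the method names) is illustrative, not ProfileForge's actual code:

```python
# Hypothetical sketch: protected categories blocked at the data layer.
PROTECTED_CATEGORIES = frozenset({
    "medical", "political", "religious", "financial", "psychological",
})

class ProtectedCategoryError(ValueError):
    """Raised before any write: protected inferences cannot be stored."""

class ProfileStore:
    def __init__(self):
        self._fields = {}

    def write_inference(self, category: str, key: str, value: str) -> None:
        # The check sits in the single write path, so no code path can
        # store a protected trait, regardless of adjacent signals.
        if category in PROTECTED_CATEGORIES:
            raise ProtectedCategoryError(
                f"category '{category}' is architecturally prohibited"
            )
        self._fields[(category, key)] = value

store = ProfileStore()
store.write_inference("learning", "framing", "low-pressure assessments")
# store.write_inference("psychological", "anxiety", "high")  # -> raises
```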
Three sources of profile data, in order of priority:
Declared: Things the student said directly. "I prefer step-by-step explanations." "I'm a first-generation college student." These have the highest confidence because the user stated them.
Demonstrated: Patterns the system noticed. The student consistently asks for examples before abstractions. They engage more in the morning. They prefer written over audio. These are inferred from behavior, not questionnaires.
Corrected: The student said "I actually prefer more challenge, not less." The system's inference was overridden. The correction has confidence 1.0 because the user explicitly stated it.
User statements always win. ProfileForge learns from corrections.
Every field has a confidence score. High confidence means the system is fairly certain, based on repeated observation or explicit declaration. Lower confidence means a tentative inference from limited data.
The system knows how uncertain it is. And it tells you.
Your profile is configuration, not prediction. It describes what you've shown, not what the system assumes.
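A minimal sketch of what such a field could look like, with source and confidence made explicit and the priority rule encoded. The Source enum, field names, and resolution logic are assumptions drawn from the description above, not ProfileForge's real schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class Source(IntEnum):
    # Higher value wins when the same trait has multiple entries.
    DEMONSTRATED = 1   # inferred from observed behavior
    DECLARED = 2       # stated directly by the user
    CORRECTED = 3      # explicit user override of a prior inference

@dataclass
class ProfileField:
    key: str
    value: str
    source: Source
    confidence: float  # 0.0 tentative inference .. 1.0 explicit statement

def resolve(entries: list[ProfileField]) -> ProfileField:
    """User statements always win: pick the highest-priority source."""
    return max(entries, key=lambda f: f.source)

entries = [
    ProfileField("challenge", "low", Source.DEMONSTRATED, 0.6),
    ProfileField("challenge", "high", Source.CORRECTED, 1.0),
]
print(resolve(entries).value)  # -> "high"
```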
[Profile panel: each field appears with its source tag, e.g. Declared, Demonstrated (12 sessions), Demonstrated (pattern), Corrected by user, Active sessions, Default.]
The same student profile travels across every Cognitive OS application. The profile doesn't change. But how each application uses it does.
Below: the same student asks "Can you explain this concept?" in three different contexts. Watch how the profile shapes each response without the student having to re-configure anything.
Student asks: "Can you explain completing the square?"
System response: Starts with a concrete example (x² + 6x + 5 = 0), walks through each step visually, then names the general pattern. Pushes to a harder example immediately because the profile says "higher challenge."
Student asks: "Can you explain assertive communication?"
System response: Opens with a real scenario (saying no to a friend), walks through the components step by step, then offers a practice round. Written format, no role-play audio. Tone is warm but structured.
Student asks: "Can you explain the causes of the Civil War?"
System response: Starts with a specific event (John Brown's raid), uses it to illustrate the larger tensions, then builds the structural analysis. Connects to first-generation college goal: "This kind of analysis is exactly what college essays ask you to do."
Three different applications. Three different subjects. Three different interaction styles. But the same student was recognized across all of them.
She didn't fill out a preference form three times. She didn't re-train each system. She didn't adjust settings in each app. Her profile traveled with her.
Identity travels with you. Configuration, not repetition.
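A rough sketch of the idea: one shared profile, application-specific adapters. The two functions below are hypothetical stand-ins for the apps named in this demo, not their actual interfaces:

```python
# One profile, read by multiple applications; each adapts differently.
profile = {
    "explanation_style": "example-first, step-by-step",
    "challenge": "high",
    "format": "written",
}

def math_bridge(topic: str, p: dict) -> str:
    # Math-specific use of the same fields: worked example, then escalate.
    return (f"{topic}: start with a worked example "
            f"({p['explanation_style']}), then push to a harder problem "
            f"({p['challenge']} challenge).")

def conversation_craft(topic: str, p: dict) -> str:
    # Communication-specific use: real scenario first, written practice.
    return (f"{topic}: open with a real scenario, then a practice round "
            f"in {p['format']} form, no audio role-play.")

print(math_bridge("completing the square", profile))
print(conversation_craft("assertive communication", profile))
```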
The profile did not manipulate the student toward engagement targets. It did not optimize for time-on-platform. It did not hide its influence.
Every adaptation above can be traced back to a specific profile field that the student can see, edit, or delete. The system adapts to serve declared goals, not hidden objectives.
Personalization is only ethical when the person being personalized can see and control it.
Your Data, Your Control
Five actions every user can take: inspect, correct, delete, export, scope. No exceptions. No fine print.
Inspect: every inference, every confidence score, every source. The student can see their complete profile at any time. Nothing is hidden in a backend the user cannot access.
This includes the reasoning: not just "prefers step-by-step" but "inferred from 12 sessions where step-by-step explanations correlated with higher engagement and faster comprehension."
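As an illustration, an inspection payload might look like the following. The structure is an assumption drawn from the example above, not an actual ProfileForge response:

```python
import json

# Hypothetical inspection record: value plus the reasoning behind it.
field = {
    "key": "explanation_style",
    "value": "prefers step-by-step",
    "source": "demonstrated",
    "confidence": 0.85,
    "provenance": (
        "inferred from 12 sessions where step-by-step explanations "
        "correlated with higher engagement and faster comprehension"
    ),
}

# Inspection returns the full record, provenance included; nothing is
# held back in an inaccessible backend.
print(json.dumps(field, indent=2))
```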
Correct: the system inferred "prefers low challenge" from early sessions. The student disagrees: "I actually want harder problems, I was just getting used to the system." One correction. Confidence set to 1.0. The system updates immediately.
User statements always override system inference. Always.
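The correction path could be as small as this sketch; all names here are illustrative only:

```python
# Hypothetical correction handler: one user statement replaces the
# inference and takes effect immediately.
def correct(profile: dict, key: str, stated_value: str) -> None:
    profile[key] = {
        "value": stated_value,
        "source": "corrected",
        "confidence": 1.0,   # explicit user statement
    }

profile = {"challenge": {"value": "low", "source": "demonstrated",
                         "confidence": 0.6}}
correct(profile, "challenge", "high")  # "I actually want harder problems"
assert profile["challenge"]["confidence"] == 1.0
```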
Delete: a single field, a category, or the entire profile. Deletion takes effect immediately at the application layer: the system stops using deleted data the moment you remove it. Permanent removal from storage follows the retention windows configured by the deployment.
After deletion, the system starts fresh. It doesn't adapt based on what was deleted. Audit trails may record that a deletion occurred (for compliance), but the content of what was deleted is not retained.
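A sketch of these deletion semantics, assuming a hypothetical audit-log shape:

```python
from datetime import datetime, timezone

def delete_field(profile: dict, audit_log: list, key: str) -> None:
    profile.pop(key, None)  # immediately unavailable to applications
    audit_log.append({
        "event": "field_deleted",  # the fact of deletion, for compliance
        "at": datetime.now(timezone.utc).isoformat(),
        # deliberately no copy of the deleted content
    })

profile = {"challenge": "high"}
audit_log: list = []
delete_field(profile, audit_log, "challenge")
assert "challenge" not in profile
assert audit_log[0]["event"] == "field_deleted"
```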
Export: download your complete profile in a portable format. Take it with you. Use it to inform a new system, a different platform, or your own records.
This is real data portability, not a PDF summary. The exported profile contains every field, every confidence score, every source, in a structured format another system could use.
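For illustration, a structured export could look like this; the schema is assumed, not ProfileForge's documented format:

```python
import json

def export_profile(fields: list[dict]) -> str:
    # Every field, confidence score, and source in a machine-readable
    # form another system could ingest.
    return json.dumps({"version": 1, "fields": fields}, indent=2)

fields = [
    {"key": "explanation_style", "value": "step-by-step",
     "source": "demonstrated", "confidence": 0.85},
    {"key": "challenge", "value": "high",
     "source": "corrected", "confidence": 1.0},
]
portable = export_profile(fields)  # save it, move it, or feed it elsewhere
```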
Scope: the student decides which applications see which parts of their profile. MathBridge can see learning preferences but not career goals. ConversationCraft can see communication style but not academic performance.
Field-level permissions. Not all-or-nothing. And an emergency lockdown that revokes all external access with a single action.
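A sketch of field-level permissions with a lockdown switch; the grant table mirrors the example above, and the API is hypothetical:

```python
class PermissionGate:
    def __init__(self):
        # Per-application grants at the field-category level.
        self.grants = {
            "MathBridge": {"learning_preferences"},
            "ConversationCraft": {"communication_style"},
        }
        self.lockdown = False

    def can_read(self, app: str, category: str) -> bool:
        if self.lockdown:          # one action revokes all external access
            return False
        return category in self.grants.get(app, set())

gate = PermissionGate()
assert gate.can_read("MathBridge", "learning_preferences")
assert not gate.can_read("MathBridge", "career_goals")
gate.lockdown = True
assert not gate.can_read("ConversationCraft", "communication_style")
```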
Attention, Not Surveillance
The line between personalization that serves you and profiling that controls you.
Most AI personalization follows the advertising model: collect behavioral data silently, build profiles users can't see, optimize for platform goals (engagement, retention, conversion), and make the resulting profiles nearly impossible to delete.
The creepy feeling isn't irrational. It comes from asymmetry: the system knows things about you that you don't know it knows. The power runs one direction.
| Surveillance Systems | ProfileForge |
|---|---|
| Infer traits silently | Declare traits explicitly or infer with transparency |
| Optimize for engagement | Optimize for stated goals |
| Store behavioral exhaust | Store meaningful configuration |
| Share across systems invisibly | Share with explicit permission |
| Difficult to delete | Immediately and fully revocable |
| Monetize attention | Protect agency |
| Hidden models | Everything inspectable |
1. Protected categories are never inferred. Medical, political, religious, financial, psychological. These boundaries are architectural, not policy. The system cannot cross them, regardless of what adjacent signals suggest.
2. User statements always override system inference. If the system thinks you prefer low challenge and you say you want more, the system updates. Immediately. Without argument.
3. Data sharing requires explicit consent. By default, profile data is not aggregated across users, not used for model training, and not shared with third parties. If an operator configures broader use, it requires explicit, auditable decisions and appropriate consent mechanisms. Silent defaults don't exist.
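As a closing sketch, default-off consent could be encoded directly in configuration, with every broadening recorded. The field names are illustrative, not an actual ProfileForge config:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentConfig:
    aggregate_across_users: bool = False    # off by default
    use_for_training: bool = False          # off by default
    share_with_third_parties: bool = False  # off by default
    decisions: list = field(default_factory=list)  # auditable record

    def enable(self, option: str, operator: str, justification: str) -> None:
        # Broader use requires an explicit, recorded operator decision.
        setattr(self, option, True)
        self.decisions.append({
            "option": option,
            "operator": operator,
            "justification": justification,
        })

config = ConsentConfig()  # silent defaults don't exist: everything is off
```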