What You See

This is a representative profile panel showing the kinds of information ProfileForge tracks and how it presents them to users. Everything shown was either declared by the student directly or inferred from their behavior during sessions, with confidence levels visible on every field.

Nothing is hidden. Nothing was scraped from outside the conversation. Nothing was purchased from a data broker.

If you can examine everything the system knows about you, and delete it, it's not creepy. It's a tool.

What's Not Here

Notice what the profile does not contain: no medical diagnoses, no political beliefs, no religious views, no financial assessments, no psychological labels. These categories are architecturally prohibited. The system cannot infer them, even if adjacent signals are present.

A student who mentions anxiety about tests does not receive a mental health classification. They receive a preference note: "benefits from low-pressure framing during assessments."

Some things must be stated, not guessed. ProfileForge draws a line and enforces it by architecture, not policy.
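The "architecture, not policy" distinction can be made concrete with a small sketch: if the storage layer only accepts fields from a fixed allowlist, and rejects anything matching a protected category, then no inference pipeline can persist a prohibited trait even if it tries. All names here (`ALLOWED_FIELDS`, `write_field`, `ProtectedCategoryError`) are illustrative, not ProfileForge's actual API.

```python
# Illustrative sketch: protected categories blocked at the storage layer,
# not filtered by a policy check that could be bypassed.

ALLOWED_FIELDS = {
    "explanation_style", "examples_first", "challenge_level",
    "primary_goal", "response_format", "assessment_comfort",
}

# No allowlist entry exists for these, so they can never be written.
PROTECTED_KEYWORDS = ("medical", "political", "religious", "financial", "psychological")

class ProtectedCategoryError(Exception):
    pass

def write_field(profile: dict, field: str, value: str) -> None:
    """Persist a profile field, or refuse if it is outside the schema."""
    if any(keyword in field for keyword in PROTECTED_KEYWORDS):
        raise ProtectedCategoryError(f"{field!r} is architecturally prohibited")
    if field not in ALLOWED_FIELDS:
        raise KeyError(f"{field!r} is not a declared profile field")
    profile[field] = value
```

Because the write path itself has no vocabulary for protected categories, "cannot infer them" is enforced by construction rather than by a rule someone could relax.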

How It Got Here

Three sources of profile data, in order of priority:

Declared: Things the student said directly. "I prefer step-by-step explanations." "I'm a first-generation college student." These have the highest confidence because the user stated them.

Demonstrated: Patterns the system noticed. The student consistently asks for examples before abstractions. They engage more in the morning. They prefer written over audio. These are inferred from behavior, not questionnaires.

Corrected: The student said "I actually prefer more challenge, not less." The system's inference was overridden. The correction has confidence 1.0 because the user explicitly stated it.

User statements always win. ProfileForge learns from corrections.
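The precedence above (corrected over declared over demonstrated) could be sketched as a simple merge, where each observation carries a source and a confidence, and a correction always wins at confidence 1.0. This is an illustrative data model under assumed names, not ProfileForge's actual schema.

```python
# Illustrative precedence: corrected > declared > demonstrated.
from dataclasses import dataclass

PRIORITY = {"demonstrated": 0, "declared": 1, "corrected": 2}

@dataclass
class Observation:
    field: str
    value: str
    source: str        # "declared" | "demonstrated" | "corrected"
    confidence: float  # corrections are always 1.0

def resolve(observations: list[Observation]) -> dict[str, Observation]:
    """Keep, per field, the observation from the highest-priority source."""
    profile: dict[str, Observation] = {}
    for obs in observations:
        current = profile.get(obs.field)
        if current is None or PRIORITY[obs.source] >= PRIORITY[current.source]:
            profile[obs.field] = obs
    return profile
```

In this sketch a demonstrated pattern can never displace a correction, no matter how many later sessions suggest otherwise, which is exactly the "user statements always win" rule.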

Confidence Transparency

Every field has a confidence score. High confidence means the system is fairly certain, based on repeated observation or explicit declaration. Lower confidence means a tentative inference from limited data.

The system knows how uncertain it is. And it tells you.

Your profile is configuration, not prediction. It describes what you've shown, not what the system assumes.

Design Trace
Hidden personalization systems never tell you how certain they are. That's how you end up with confident wrong assumptions driving your experience. By showing confidence, ProfileForge invites correction. A user who sees "prefers concise answers (moderate confidence)" can say "actually, I want depth on technical topics." The system updates. Without visible confidence, the user never knows the system is guessing.
Student Profile (User-Owned)

🎓 Learning Preferences
- Explanation style: Step-by-step (Declared)
- Examples first: Yes, before abstractions (Demonstrated, 12 sessions)
- Challenge level: Higher than default (Corrected by user)

📋 Goals & Context
- Primary goal: First-generation college prep (Declared)
- Current focus: Algebra II, college essays (Active sessions)

🧠 Interaction Patterns
- Peak engagement: Mornings (Demonstrated, pattern)
- Response format: Written, not audio (Declared)
- Assessment comfort: Low-pressure framing (Demonstrated)

🔒 Accessibility & Safety
- Pronouns: she/her (Declared)
- Safety preferences: Standard (Default)
Same Student, Three Solutions

The same student profile travels across every Cognitive OS application. The profile doesn't change. But how each application uses it does.

Below: the same student asks "Can you explain this concept?" in three different contexts. Watch how the profile shapes each response without the student having to re-configure anything.

Profile Active: Step-by-step · Examples first · Higher challenge · Written format · Low-pressure assessments · First-gen college
MathBridge (Math Tutoring)

Student asks: "Can you explain completing the square?"

System response: Starts with a concrete example (x² + 6x + 5 = 0), walks through each step visually, then names the general pattern. Pushes to a harder example immediately because the profile says "higher challenge."

Profile applied: Examples before abstraction. Step-by-step scaffolding. Challenge level elevated. Assessment framing will be low-pressure when quiz prep begins.
ConversationCraft (Communication Skills)

Student asks: "Can you explain assertive communication?"

System response: Opens with a real scenario (saying no to a friend), walks through the components step by step, then offers a practice round. Written format, no role-play audio. Tone is warm but structured.

Profile applied: Examples before theory. Step-by-step structure. Written format preference respected. Practice offered but not pressured (low-pressure framing).
Quest (Research & Study)

Student asks: "Can you explain the causes of the Civil War?"

System response: Starts with a specific event (John Brown's raid), uses it to illustrate the larger tensions, then builds the structural analysis. Connects to first-generation college goal: "This kind of analysis is exactly what college essays ask you to do."

Profile applied: Concrete example first. Step-by-step layering. Goal-aligned framing (college prep). Challenge level appropriate for content.
What Just Happened

Three different applications. Three different subjects. Three different interaction styles. But the same student was recognized across all of them.

She didn't fill out a preference form three times. She didn't re-train each system. She didn't adjust settings in each app. Her profile traveled with her.

Identity travels with you. Configuration, not repetition.

What the Profile Did Not Do

The profile did not manipulate the student toward engagement targets. It did not optimize for time-on-platform. It did not hide its influence.

Every adaptation above can be traced back to a specific profile field that the student can see, edit, or delete. The system adapts to serve declared goals, not hidden objectives.

Personalization is only ethical when the person being personalized can see and control it.

Design Trace
Most cross-platform personalization works by tracking behavioral exhaust: clicks, dwell time, scroll depth, mouse movements. This data is collected invisibly, shared across systems without user knowledge, and optimized for platform goals (engagement, conversion, revenue). ProfileForge inverts this entirely: the data is user-declared or user-visible, shared with explicit awareness, and optimized for user-declared goals. The difference isn't technical. It's philosophical: who does personalization serve?

Your Data, Your Control

Five actions every user can take. No exceptions. No fine print.

👁 View Everything

Every inference, every confidence score, every source. The student can see their complete profile at any time. Nothing is hidden in a backend the user cannot access.

This includes the reasoning: not just "prefers step-by-step" but "inferred from 12 sessions where step-by-step explanations correlated with higher engagement and faster comprehension."

The test: If you can examine everything the system knows about you, it's a tool working for you. If you can't, it's a system working on you.
Correct Any Inference

The system inferred "prefers low challenge" from early sessions. The student disagrees: "I actually want harder problems, I was just getting used to the system." One correction. Confidence set to 1.0. The system updates immediately.

User statements always override system inference. Always.

Why this matters: Systems that can't be corrected become prisons of their own assumptions. A student labeled "low confidence" by an algorithm carries that label into every future interaction. ProfileForge lets the student rewrite the label.
🗑 Delete Anything, or Everything

Delete a single field. Delete a category. Delete the entire profile. Deletion takes effect immediately at the application layer. The system stops using deleted data the moment you remove it. Permanent removal from storage follows defined retention windows configured by the deployment.

After deletion, the system starts fresh. It doesn't adapt based on what was deleted. Audit trails may record that a deletion occurred (for compliance), but the content of what was deleted is not retained.

The principle: If you can't delete it, it isn't yours. ProfileForge treats user data as user property, not platform assets. Operators configure retention and compliance requirements for their deployment context.
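The deletion behavior described above, immediate removal at the application layer plus an audit record that a deletion occurred without retaining its content, might look roughly like this. `AUDIT_LOG` and `delete_field` are hypothetical names for illustration.

```python
# Illustrative deletion: the field is gone immediately; the audit trail
# records only that a deletion happened, never what was deleted.
import time

AUDIT_LOG: list[dict] = []

def delete_field(profile: dict, field: str) -> None:
    profile.pop(field, None)  # effective immediately at the application layer
    AUDIT_LOG.append({
        "event": "field_deleted",
        "field": field,           # which field, for compliance
        "timestamp": time.time()  # when it happened, but never the value
    })
```

The key design point is what the audit entry omits: a compliance reviewer can see that a deletion occurred and when, while the deleted value itself leaves no trace in the log.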
📦 Export Your Profile

Download your complete profile in a portable format. Take it with you. Use it to inform a new system, a different platform, or your own records.

This is real data portability, not a PDF summary. The exported profile contains every field, every confidence score, every source, in a structured format another system could use.

Why portability matters: If your profile only works inside one ecosystem, the ecosystem owns it, not you. True ownership means you can leave and take yourself with you.
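A portable export like the one described might serialize each field together with its value, confidence score, and source, so another system can parse it mechanically. The JSON shape and function name below are illustrative assumptions, not ProfileForge's actual format.

```python
# Illustrative export: every field, confidence score, and source,
# in structured JSON rather than a flat summary.
import json

def export_profile(fields: list[dict]) -> str:
    """Serialize the full profile to a versioned, machine-readable document."""
    return json.dumps({"version": 1, "fields": fields}, indent=2)

example = export_profile([
    {"field": "explanation_style", "value": "step-by-step",
     "confidence": 1.0, "source": "declared"},
    {"field": "examples_first", "value": True,
     "confidence": 0.8, "source": "demonstrated (12 sessions)"},
])
```

Carrying the source and confidence alongside each value is what makes the export usable elsewhere: a receiving system can decide to trust declared fields fully while re-verifying inferred ones.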
🔒 Control Cross-Solution Sharing

The student can decide which applications see which parts of their profile. MathBridge can see learning preferences but not career goals. ConversationCraft can see communication style but not academic performance.

Field-level permissions. Not all-or-nothing. And an emergency lockdown that revokes all external access with a single action.

The architecture: Other systems request profile access. Users preview what would be shared. Users grant or deny. All access is logged. All access is revocable. Consent is not a checkbox buried in a terms of service. It's a visible, active, ongoing choice.
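The request, grant, log, and revoke flow could be sketched as field-level grants keyed by application, with every access attempt logged and a single lockdown switch that cuts off all external reads. All names here are hypothetical.

```python
# Illustrative field-level sharing: grants per (app, field), every access
# attempt logged, and a lockdown that revokes all external access at once.
GRANTS: set[tuple[str, str]] = set()          # (app, field) pairs the user approved
ACCESS_LOG: list[tuple[str, str, bool]] = []  # (app, field, was_allowed)
LOCKDOWN = False

def grant(app: str, field: str) -> None:
    GRANTS.add((app, field))

def lockdown() -> None:
    global LOCKDOWN
    LOCKDOWN = True  # one action revokes all external access

def read_field(app: str, profile: dict, field: str):
    allowed = not LOCKDOWN and (app, field) in GRANTS
    ACCESS_LOG.append((app, field, allowed))  # every attempt is logged
    if not allowed:
        return None
    return profile.get(field)
```

Note the default: an application with no grant sees nothing, so sharing is opt-in per field rather than opt-out, matching the "not all-or-nothing" design above.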

Attention, Not Surveillance

The line between personalization that serves you and profiling that controls you.

🚨 How Personalization Usually Works

Most AI personalization follows the advertising model: collect behavioral data silently, build profiles users can't see, optimize for platform goals (engagement, retention, conversion), and make it nearly impossible to delete.

The creepy feeling isn't irrational. It comes from asymmetry: the system knows things about you that you don't know it knows. The power runs one direction.

The standard defense is "but we're helping you." That's the same defense used by every surveillance system in history. The question isn't whether the output is helpful. It's whether the person being profiled has agency over the profiling.
🔍 The Contrast

Surveillance Systems | ProfileForge
Infer traits silently | Declare traits explicitly or infer with transparency
Optimize for engagement | Optimize for stated goals
Store behavioral exhaust | Store meaningful configuration
Share across systems invisibly | Share with explicit permission
Difficult to delete | Immediately and fully revocable
Monetize attention | Protect agency
Hidden models | Everything inspectable
This is not a feature comparison. It's a values statement. ProfileForge exists because we believe the person being personalized should have more power than the system doing the personalizing. That's not a technical decision. It's an ethical one.
🛡 Three Boundaries That Never Move

1. Protected categories are never inferred. Medical, political, religious, financial, psychological. These boundaries are architectural, not policy. The system cannot cross them, regardless of what adjacent signals suggest.

2. User statements always override system inference. If the system thinks you prefer low challenge and you say you want more, the system updates. Immediately. Without argument.

3. Data sharing requires explicit consent. By default, profile data is not aggregated across users, not used for model training, and not shared with third parties. If an operator configures broader use, it requires explicit, auditable decisions and appropriate consent mechanisms. Silent defaults don't exist.

We don't infer who you are. We ask. And if we notice something, we show you what we noticed and let you decide whether it's accurate. That's the difference between attention and surveillance.