On the naive user expectation of what AI assistants will know in context
For companies building AI into existing products and platforms, there’s a special hurdle to overcome. Users have come to expect that your product is whole and coherent: data that one part of the product has should be available in other parts, and changes made at one moment are expected to hold over time. When you add AI, such as an assistant or Copilot, as a deeply, visually integrated experience, users will naturally assume the same of the AI assistant.
When users engage with an AI assistant embedded in a product, especially a brand-bound, account-aware platform, they do so with a set of expectations that aren’t naïve. They’re reasonable, intuitive, and symbolically consistent with how the product appears to work.
Users Aren’t Wrong, We’re Misleading Them
We’ve created the illusion of an intelligent, integrated, contextually aware assistant. But we haven’t delivered the substance behind that illusion.
Where Expectations Outpace Reality
Here’s what users (rightly) assume:
The AI understands the screen.
It lives in the same visual container as the data. If a human assistant were looking at the same dashboard, they’d see what I see. So the AI must too.
The AI remembers what I just said.
Conversation implies continuity. If we talked earlier in this session, that thread should persist. If we talked yesterday, and it’s “still me,” that context should carry forward. What I said before is retained and mutually referenceable.
The AI knows what the company knows.
If it’s branded as your product’s assistant, embedded in your branded UI, using your product’s APIs, it should know what your product knows. My account, my customers, my data. All of it.
These aren’t edge cases. They’re base cases. And they’re being violated regularly by most current implementations. Context that is fast, comprehensive, coherent, nuanced, and accurate is surprisingly hard to deliver in applied AI products, especially those with lots of legacy code and data.
What This Reveals: A Doctrine of Situated AI
To resolve the dissonance, we need to shift our framing.
Not “how do we teach users what AI can’t do?”
But “how do we build AI that matches what its embodiment implies it can do?”
Three principles follow:
1. Situated AI → Must perceive the interface
Inject on-screen context. Surface active record data. Include navigation state. If it’s visible to the user, it should be available to the assistant.
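In practice, this can be as simple as serializing the current view into the prompt on every turn. Here’s a minimal sketch in TypeScript, assuming the app can expose its view state as a typed object; the ScreenContext shape and buildContextBlock name are illustrative, not any particular framework’s API.

```typescript
// Minimal sketch: serialize what the user currently sees into the prompt.
// ScreenContext and buildContextBlock are hypothetical names for illustration.

interface ScreenContext {
  route: string;                           // navigation state, e.g. "/customers/1042"
  activeRecord?: Record<string, unknown>;  // the record the user is viewing
  visibleFields?: string[];                // fields actually rendered on screen
}

function buildContextBlock(ctx: ScreenContext): string {
  return [
    `Current view: ${ctx.route}`,
    ctx.activeRecord
      ? `Active record: ${JSON.stringify(ctx.activeRecord)}`
      : "Active record: none",
    ctx.visibleFields?.length
      ? `Visible fields: ${ctx.visibleFields.join(", ")}`
      : "Visible fields: unknown",
  ].join("\n");
}

// Prepended to the system prompt each turn, so the model answers from the
// same view the user is looking at rather than from nothing.
const systemPrompt = [
  "You are the in-product assistant. Use the on-screen context below.",
  buildContextBlock({
    route: "/customers/1042",
    activeRecord: { id: 1042, name: "Acme Corp", plan: "Enterprise" },
    visibleFields: ["name", "plan", "renewalDate"],
  }),
].join("\n\n");
```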
2. Persistent AI → Must remember conversational and account state
Use memory (session and long-term) to create continuity. Track user preferences, past interactions, prior corrections.
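A hedged sketch of what that layering might look like, assuming an account-scoped key/value store; the MemoryStore interface and InMemoryStore class are placeholders for whatever persistence the product already has.

```typescript
// Minimal sketch: session memory (this conversation) layered over long-term,
// account-bound memory (preferences, prior corrections). Names are hypothetical.

interface MemoryStore {
  read(accountId: string, key: string): Promise<string | undefined>;
  write(accountId: string, key: string, value: string): Promise<void>;
}

class InMemoryStore implements MemoryStore {
  private data = new Map<string, string>();
  async read(accountId: string, key: string) {
    return this.data.get(`${accountId}:${key}`);
  }
  async write(accountId: string, key: string, value: string) {
    this.data.set(`${accountId}:${key}`, value);
  }
}

async function buildMemoryBlock(
  store: MemoryStore,
  accountId: string,
  sessionTurns: string[],   // the running transcript for this session
): Promise<string> {
  const preferences = (await store.read(accountId, "preferences")) ?? "none recorded";
  const corrections = (await store.read(accountId, "corrections")) ?? "none recorded";
  return [
    `Known preferences: ${preferences}`,
    `Prior corrections: ${corrections}`,
    `This session so far:\n${sessionTurns.join("\n")}`,
  ].join("\n");
}
```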
3. Embodied AI → Must match brand, context, and capability
The assistant is the product in the user’s mind. Don’t skin a generic model in brand colors and call it done. Bridge the gap between symbol and function.
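One concrete way to keep symbol and function aligned is to let the assistant advertise only the capabilities that are actually wired up and permissioned. A minimal sketch, with hypothetical names; the point is the gate, not the particular shape.

```typescript
// Minimal sketch: only surface capabilities that have a real, permissioned
// backend path, so the branded shell never implies functions the assistant
// cannot perform. Capability and advertisedCapabilities are hypothetical names.

interface Capability {
  name: string;        // what the UI offers, e.g. "Update the renewal date"
  available: boolean;  // whether an executable, permissioned path exists
}

function advertisedCapabilities(capabilities: Capability[]): string[] {
  return capabilities.filter((c) => c.available).map((c) => c.name);
}

const offered = advertisedCapabilities([
  { name: "Summarize this account", available: true },
  { name: "Update the renewal date", available: false }, // not wired up: don't offer it
]);
// offered === ["Summarize this account"]
```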
Design Response
This isn’t a roadmap. It’s a reality check. But it implies workstreams that may include:
Context injection pipelines (UI, data, navigation)
Identity and memory linkage (account-bound, permissioned)
Feedback mechanisms (model uncertainty, user correction)
Surface affordance tuning (signal limits of awareness), as sketched together with the feedback workstream below
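As a small illustration of those last two workstreams, here is a hedged sketch of capturing user corrections against the turn they correct and surfacing the model’s uncertainty as a visible affordance; the types and names are hypothetical.

```typescript
// Minimal sketch: store user corrections keyed to the assistant turn they fix,
// and expose a confidence signal the UI can render. All names are hypothetical.

interface AssistantTurn {
  id: string;
  answer: string;
  confidence: "high" | "medium" | "low"; // from the model or a separate verifier
}

interface Correction {
  turnId: string;
  correctedText: string;
  timestamp: Date;
}

const corrections: Correction[] = [];

function recordCorrection(turn: AssistantTurn, correctedText: string): void {
  // Corrections feed the long-term memory layer and future evaluation.
  corrections.push({ turnId: turn.id, correctedText, timestamp: new Date() });
}

function awarenessBanner(turn: AssistantTurn): string {
  // A visible affordance that signals the limits of the assistant's awareness.
  return turn.confidence === "low"
    ? "Heads up: I may not have full context for this view."
    : "";
}
```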
Users are not wrong. They are pattern-matching, expectation-modeling, semiotic readers of what we’ve built.
We can either close the gap between symbol and substance—or keep disappointing them with polite hallucinations.