How ainywhere Keeps Your Secrets Out of the Group Chat
Think about a great real-world executive assistant. They sit in on your meetings. They read your emails. They know your schedule, your preferences, your ongoing projects. Over time, they build up a rich picture of who you are and what you need.
Now imagine that assistant is CC’d on a group email thread with your whole team. You’d expect them to follow the conversation, remember what was discussed, and help out if someone asks them a direct question. But you’d never expect them to blurt out something from a private conversation you had earlier — your salary negotiations, your doctor’s appointment, that awkward question you asked last Tuesday.
That intuition — that the assistant should be helpful in groups but discreet about your personal life — is exactly what we built into ainywhere.
The problem with telling the AI to “just be careful”
The obvious approach is to give the AI strict instructions: “Don’t share personal information in group conversations.” We do include rules like this in our group prompts. But if you’ve used AI for any length of time, you know that relying on instructions alone is a fragile strategy.
LLMs make mistakes. They hallucinate. They get confused by clever prompting. They occasionally ignore their own rules. If the AI has access to your private data during a group conversation, there’s always a non-zero chance it slips up — no matter how carefully you word the instructions.
That’s why we don’t rely on instructions alone. The real protection is architectural.
The information isn’t there to begin with
When ainywhere processes a group message, it doesn’t start with your full personal history and then try to filter out the sensitive parts. Instead, we scope the context before the AI even sees it. It’s not that we’ve told the AI to forget your personal details — it’s that part of its brain has been surgically removed for this conversation.
Here’s how it works at a technical level.
Conversation history: omnipotent vs. scoped
Every message ainywhere processes is stored with metadata about who was involved and which conversation thread (or “namespace”) it belongs to. When the AI needs context to respond, we fetch recent conversation history — but how much we fetch depends on the conversation type:
- In a 1:1 conversation, the AI operates in what we call omnipotent mode. It can see messages from all of your conversations, across every channel — your emails, your texts, your Slack DMs, everything. This is what lets you say “remember that restaurant I asked about in our email yesterday?” from a text message, and the AI knows exactly what you mean.
- In a group conversation, the AI is scoped to only that group thread’s messages. It has zero visibility into your private conversations, your other group threads, or messages from any other channel. As far as the AI is concerned, the group thread is the only conversation that has ever happened.
This isn’t a filter applied on top of a full dataset. It’s a completely different database query. The private messages never make it into the AI’s context window — they’re not retrieved, not summarized, not referenced. They simply don’t exist in that context.
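To make the distinction concrete, here is a minimal sketch of the two retrieval modes. This is illustrative only, not ainywhere’s actual code: the names (`Message`, `MESSAGE_STORE`, `fetch_history`) are invented, and a real system would run these as database queries rather than list comprehensions. The point is structural: the scoped branch never even looks at messages outside the group namespace.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    namespace: str   # conversation thread id, e.g. "dm:alice" or "group:team"
    user_id: str     # who sent it
    text: str

# Hypothetical in-memory stand-in for the message database.
MESSAGE_STORE: List[Message] = []

def fetch_history(user_id: str, namespace: str, is_group: bool) -> List[Message]:
    if is_group:
        # Scoped mode: only this group thread's messages are retrieved.
        # Private messages are never queried, so there is nothing to filter
        # out and nothing to leak.
        return [m for m in MESSAGE_STORE if m.namespace == namespace]
    # Omnipotent mode: every conversation this user participates in,
    # across all channels.
    return [m for m in MESSAGE_STORE if m.user_id == user_id]
```

Note that the group branch is a different query, not a redaction pass over the full result set, which is exactly the property the paragraph above describes.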
Memory works the same way
ainywhere remembers facts about you over time — your preferences, your routines, things you’ve mentioned in passing. This long-term memory is powered by a fact extraction system that automatically picks up on details you share.
In a 1:1 conversation, the AI recalls facts keyed to you — your personal memory store. But in a group conversation, facts are keyed to the group. The AI can recall things that were said in that group thread, but it can’t access your personal fact store.
Here’s where it gets interesting: information flows one way.
When you say something in a group — “I’m on PTO next week” or “I prefer the Tuesday meeting slot” — ainywhere captures that fact under both the group’s memory and your personal memory. This means your 1:1 conversations benefit from things you mentioned in groups.
But the reverse never happens. Facts from your private 1:1 conversations never flow down into a group context. The personal stuff you share when it’s just you and the AI stays between the two of you.
Think of it like a one-way valve: group → personal is open, personal → group is closed.
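The one-way valve can be sketched in a few lines. Again, this is an illustration under assumed names (`record_fact`, `recall_facts`, `FACTS`), not the real fact-extraction system: writes from a group land in both stores, but reads in a group context never touch the personal store.

```python
from collections import defaultdict
from typing import Optional, Set

# Hypothetical fact store, keyed by owner: "user:<id>" or "group:<id>".
FACTS = defaultdict(set)

def record_fact(fact: str, user_id: str, group_id: Optional[str] = None) -> None:
    # Facts always flow into the speaker's personal store...
    FACTS[f"user:{user_id}"].add(fact)
    # ...and also into the group's store if said in a group.
    # This is the open direction of the valve: group -> personal.
    if group_id is not None:
        FACTS[f"group:{group_id}"].add(fact)

def recall_facts(user_id: str, group_id: Optional[str] = None) -> Set[str]:
    # The closed direction: in a group context, only the group's facts
    # are readable. The personal store is never consulted.
    if group_id is not None:
        return FACTS[f"group:{group_id}"]
    return FACTS[f"user:{user_id}"]
```

Because `recall_facts` branches on context before touching any store, a private fact cannot surface in a group even if the model asks for it.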
Tools are restricted too
The isolation goes beyond just memory. Certain capabilities are disabled entirely in group settings:
- Login links and account management are blocked — these contain authentication tokens that should never appear in a shared conversation.
- Third-party app integrations (Gmail, Calendar, etc.) are disabled — your connected apps are personal, and actions taken through them shouldn’t be visible to or triggered by other group participants.
These tools aren’t just hidden from the AI’s instructions — they’re not registered at all. The AI can’t call them even if it wanted to, because they don’t exist in the group context.
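A sketch of what “not registered at all” means in practice. The tool names and the `build_tool_registry` helper are assumptions for illustration; the real registry is more involved. The key property is that the group registry is built without the sensitive tools, rather than built in full and then hidden from the prompt.

```python
from typing import Set

# Hypothetical tool names. Sensitive tools carry auth tokens or act on
# personal integrations, so they must never exist in a group context.
SENSITIVE_TOOLS = {"send_login_link", "gmail", "calendar"}
ALL_TOOLS = SENSITIVE_TOOLS | {"web_search", "reminders"}

def build_tool_registry(is_group: bool) -> Set[str]:
    if is_group:
        # Not hidden by a prompt instruction: simply absent from the
        # registry, so the model has no way to call them.
        return ALL_TOOLS - SENSITIVE_TOOLS
    return ALL_TOOLS
```

Any call the model attempts against an unregistered tool fails at dispatch, independent of what its instructions say.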
Why this matters more than prompt engineering
There’s a growing conversation in the AI safety world about the difference between behavioral guardrails (telling the AI what not to do) and structural guardrails (making it impossible for the AI to do the wrong thing). We firmly believe in the structural approach.
Prompt-based rules are necessary — they guide the AI’s tone, behavior, and judgment in ambiguous situations. But for privacy-critical decisions like “should this personal fact be accessible in a group chat?”, the answer can’t depend on the AI making the right call every time. It needs to be structurally enforced.
Our approach is simple: if the information shouldn’t be available, don’t make it available. Don’t rely on the AI to exercise discretion over data it can see. Remove the data from the equation entirely.
This philosophy is consistent with how we approach encryption with Vault — we don’t promise not to read your data; we make ourselves unable to read it. Context isolation follows the same principle one layer up: we don’t ask the AI to keep secrets; we make sure it doesn’t have secrets to keep.
The real-world result
In practice, this means you can confidently use ainywhere in both private and shared settings:
- In a 1:1 conversation, the AI knows everything — your history across all channels, your preferences, your facts. It’s your full-context personal assistant.
- In a group thread, the AI is a helpful but appropriately ignorant participant. It knows what’s been said in the group, it follows the conversation, it responds when addressed — but it has no access to anyone’s private information.
- Information flows naturally upward. Things you mention in groups become part of your personal context, so your 1:1 conversations stay informed. But the reverse is structurally impossible.
You get the best of both worlds: an AI that’s deeply personalized when it’s just the two of you, and appropriately discreet when others are in the room. Not because it’s being careful — because it genuinely doesn’t know.