Your Digital Self — unified context for all AI agents
Create a persistent identity layer that any MCP-compatible AI agent can access. Manage your preferences, context, and secrets with granular permission controls.
This service lets you create a shared "Digital Memory" and "Identity Layer" across different AI agents (Claude, ChatGPT, Windsurf, etc.). Here's what you can do with this system:
A project you describe to one agent is automatically known by others. No need to say "where were we?" — all agents can access your current context (Layer 1).
Instead of giving your API keys or passwords to each agent separately, store them in the Vault (Layer 3). When an agent needs access, you get a consent request and approve it with a temporary token.
Your language preference, communication style (formal/casual), and timezone are consistent across all AI agents. No need to reconfigure these settings in every new conversation.
See which agent accessed which data and when through the Audit Log. You have full control over your data.
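For example, an agent holding a key scoped to the public layer can read that context with a single HTTPS request; the same endpoint appears again in the testing section below:

# Read the Layer 0 (public) context
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://cereb.run/api/v0/context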
Go to the API Keys tab and create a new key: give it a name (e.g., "Claude Desktop") and select the layers you want to grant access to. Important: copy the key immediately; you won't see it again.
To connect an agent like Claude Desktop, add this to your claude_desktop_config.json:
{
  "mcpServers": {
    "user-context": {
      "command": "curl",
      "args": [
        "-X", "POST",
        "-H", "Authorization: Bearer YOUR_API_KEY_HERE",
        "-H", "Content-Type: application/json",
        "-d", "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\"}",
        "https://cereb.run/mcp"
      ]
    }
  }
}
Note: most MCP clients expect a local stdio command or an SSE endpoint, whereas this server accepts JSON-RPC over HTTP POST; check your client's documentation for how it handles remote HTTP servers.
Replace YOUR_API_KEY_HERE with the key you created in the first step.
You can also use curl to test it directly from your terminal:
# List available tools
curl -X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' \
https://cereb.run/mcp
# Get Layer 0 (Public) Context
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://cereb.run/api/v0/context
When an agent needs sensitive data, it will send a request. You'll see this in the Vault Requests tab. Once approved, the agent gets a temporary token to read the specific fields you allowed.
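A rough sketch of that flow from the agent's side, assuming hypothetical routes and payload shapes (/api/vault/requests and /api/vault/read are illustrative names, not documented endpoints):

# 1. Agent asks for a vault field (hypothetical route and payload shape)
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"fields":["openai_api_key"],"reason":"Deploy script needs it"}' \
  https://cereb.run/api/vault/requests

# 2. You approve the request in the Vault Requests tab.

# 3. Agent reads only the approved fields with the short-lived token it received
curl -H "Authorization: Bearer TEMPORARY_TOKEN" \
  https://cereb.run/api/vault/read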
Windsurf, Claude Desktop, or any MCP client can access the LLM Gateway using the tools below. This way, when you say "the project I worked on in Gemini yesterday," the agent knows what you're referring to.
Lists all your LLM conversations. Can filter by provider. The agent can see which conversations you have.
Returns the full message history of a specific conversation. Answers the question "What did I discuss with Gemini yesterday?"
Searches across all conversation history by keyword. For example: "Find my conversation about Rust macros."
Sends a message to another LLM through the gateway. When you say "Ask Gemini about this," the agent uses this tool.
Forks a conversation at a specific point and transfers it to a different LLM.
Returns token usage and cost metrics. Answers the question "How much have I spent?"
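Each of these tools is invoked through the same JSON-RPC tools/call pattern shown in the examples below. As an illustration, here is what listing conversations filtered by provider might look like; the tool name list_conversations and the provider argument are assumptions, since only search_conversations and chat_with_llm are named explicitly on this page:

# List conversations from one provider (tool and argument names are illustrative)
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"list_conversations","arguments":{"provider":"gemini"}}}' \
  https://cereb.run/mcp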
While working on a project in Windsurf, you say: "Recall the database architecture I discussed with Gemini yesterday."
Windsurf calls the search_conversations tool via MCP, finds your conversation, and automatically loads the context.
# Search conversations via MCP
curl -X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"search_conversations","arguments":{"query":"database architecture"}}}' \
https://YOUR_SERVER_URL/mcp
# Send a message to Gemini via MCP
curl -X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"chat_with_llm","arguments":{"message":"Analyze this code","provider":"gemini","model":"gemini-2.0-flash"}}}' \
https://YOUR_SERVER_URL/mcp
In the dashboard, the API Keys tab lists each key you've created:

| Name | Permissions | Status | Last Used | Actions |
|---|---|---|---|---|
The Audit Log shows which agent accessed which data and when:

| Time | Action | Layer | Granted |
|---|---|---|---|
Project updates, decisions, learnings, and notes stored by your AI agents are listed here.
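As a rough illustration of how an agent might write one of these entries over the same /mcp endpoint, assuming a hypothetical store_memory tool (the tool name and argument shape are not documented on this page):

# Store a project note (store_memory and its arguments are illustrative)
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"store_memory","arguments":{"type":"decision","content":"Switched the API layer to FastAPI"}}}' \
  https://cereb.run/mcp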
Download all your data in JSON format.
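If you prefer to script the export instead of using the dashboard, something along these lines should work, assuming an export route exists; /api/export is an illustrative path, not a documented endpoint:

# Export all data to a local JSON file (endpoint path is an assumption)
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://cereb.run/api/export -o my-data-export.json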
Permanently delete your account and all associated data.