Your Digital Self — unified memory for all AI agents
A Model Context Protocol (MCP) server that creates a persistent identity and memory layer shared across every AI model you use. No more starting from scratch in every conversation.
For Humans

Your persistent AI memory

  • Set your preferences once — every AI agent picks them up automatically
  • Manage active projects, goals, and pinned memories in one place
  • Store sensitive secrets in an AES-256-GCM encrypted Vault with per-request consent
  • Use your own API keys (BYOK) across OpenAI, Anthropic, Gemini, and Ollama
  • Chat with any LLM, fork threads, compare models side-by-side, and stream responses
  • Browse the Knowledge Base built up by your agents across sessions
4-Layer Architecture
Layer 0 — Public
Language, timezone, preferences. Auto-injected into every conversation.
Layer 1 — Work
Active projects, goals, pinned memories. Requires layer1 permission.
Layer 2 — Personal
Name, location, interests. Requires layer2 permission.
Layer 3 — Vault
Encrypted secrets. Explicit per-request user consent required.
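The layer gating above can be sketched as a simple permission check. This is an illustrative model only: the layer names and permission strings (`layer1`, `layer2`) come from the docs, but the data structures and function names are assumptions, not the server's actual implementation.

```python
# Illustrative sketch of the 4-layer gating model described above.
# Layer names match the docs; everything else is hypothetical.

LAYERS = {
    0: {"name": "Public",   "requires": None},       # auto-injected everywhere
    1: {"name": "Work",     "requires": "layer1"},   # needs layer1 permission
    2: {"name": "Personal", "requires": "layer2"},   # needs layer2 permission
    3: {"name": "Vault",    "requires": "consent"},  # explicit per-request consent
}

def accessible_layers(granted_permissions, vault_consent=False):
    """Return the layer numbers an agent may read, given its grants."""
    visible = []
    for num, layer in LAYERS.items():
        req = layer["requires"]
        if req is None:
            visible.append(num)                 # Layer 0 is always visible
        elif req == "consent":
            if vault_consent:
                visible.append(num)             # Vault needs fresh user consent
        elif req in granted_permissions:
            visible.append(num)
    return visible
```

Under this model, an agent holding only `layer1` sees layers `[0, 1]`, and the Vault never opens on a standing grant, only on a per-request consent flag.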
For AI Agents

Connect via MCP

MCP Configuration
"mcpServers": { "cerebrun": { "url": "https://cereb.run/mcp", "headers": { "Authorization": "Bearer YOUR_API_KEY" } } }
14 Available Tools
`get_context`, `update_context`, `search_context`, `request_vault_access`, `push_knowledge`, `query_knowledge`, `list_knowledge_categories`, `chat_with_llm`, `fork_conversation`, `list_available_providers`, `list_conversations`, `get_conversation`, `search_conversations`, `get_llm_usage`
  • Semantic vector search via search_context — prevents context over-injection
  • Cross-conversation memory — recall relevant history across all threads
  • Inter-model communication — delegate tasks to other LLMs via chat_with_llm
  • Store structured knowledge with push_knowledge, retrievable across sessions
  • Secret detection guard — API keys and passwords are automatically blocked from being stored in context or knowledge
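A guard like the one in the last bullet can be approximated with pattern matching before any write to context or knowledge. The patterns and function names below are illustrative assumptions; the server's actual detection rules are not documented here.

```python
import re

# Illustrative secret-detection guard in the spirit of the bullet above.
# These patterns are examples only, not the server's real rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline password assignment
]

def contains_secret(text):
    """Return True if the text matches any known secret pattern."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def guard_store(text):
    """Refuse writes that appear to contain a secret, as the docs describe."""
    if contains_secret(text):
        raise ValueError("refusing to store: possible secret detected")
    return text
```

So `guard_store("remind me to water the plants")` passes through, while a string containing `password: hunter2` or an `sk-...` key is rejected before it ever reaches stored context.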