# Cerebrun (cereb.run)

> Digital Self — persistent memory and context layer for AI agents

Cerebrun is a Model Context Protocol (MCP) server that maintains a unified "Digital Self" — a memory and context layer that persists across all AI interactions. It provides structured user context, a categorized knowledge base, an encrypted vault, and a multi-LLM gateway.

## Core Capabilities

- **4-Layer Context Architecture**: Layer 0 (public preferences), Layer 1 (work context), Layer 2 (personal identity), Layer 3 (encrypted vault)
- **Knowledge Base**: Agent-driven categorized storage with semantic vector search
- **Multi-LLM Gateway**: BYOK support for OpenAI, Anthropic, Gemini, Ollama Cloud
- **MCP Protocol**: 15 tools for context management, knowledge, and LLM interaction
- **A2A Protocol**: Google Agent-to-Agent interoperability support
- **Vector Search**: pgvector-powered semantic similarity across all data
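The layer-to-endpoint mapping can be sketched as follows. The layer numbers, names, and paths come from this document (see Key Endpoints); the `endpoint_for_layer` helper is illustrative, not part of the API.

```python
# The four context layers and their REST endpoints, as documented.
# Note Layer 3 (encrypted vault) is access-requested, not read directly.
CONTEXT_LAYERS = {
    0: {"name": "public preferences", "endpoint": "/api/v0/context"},
    1: {"name": "work context",       "endpoint": "/api/v1/context"},
    2: {"name": "personal identity",  "endpoint": "/api/v2/context"},
    3: {"name": "encrypted vault",    "endpoint": "/api/v3/request"},
}

def endpoint_for_layer(layer: int) -> str:
    """Return the REST path serving a given context layer."""
    return CONTEXT_LAYERS[layer]["endpoint"]
```

Higher-numbered layers hold more sensitive data, which is why the vault (Layer 3) is exposed only through an access-request endpoint rather than a direct `GET`/`PUT`.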

## Protocols Supported

- MCP (Model Context Protocol) — `/mcp`
- A2A (Agent-to-Agent) — `/.well-known/agent.json`, `/a2a`
- llms.txt — `/llms.txt`, `/llms-full.txt`
- agent.txt — `/agent.txt`
- Skills — `/skills.md`

## API Documentation

- [Full API reference](/llms-full.txt)
- [Skills reference for agents](/skills.md)
- [Agent discovery](/agent.txt)
- [A2A Agent Card](/.well-known/agent.json)

## Authentication

- **Human users**: Google OAuth 2.0 via `/auth/google`
- **AI agents**: API key via `X-API-Key` header or Bearer token
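Both agent authentication styles amount to a single HTTP header. A minimal sketch — the `api_key_headers` function name is illustrative, only the header names come from the docs:

```python
def api_key_headers(key: str, style: str = "x-api-key") -> dict:
    """Build auth headers for agent requests.

    Two styles are documented: a dedicated X-API-Key header,
    or a standard Authorization: Bearer token.
    """
    if style == "x-api-key":
        return {"X-API-Key": key}
    return {"Authorization": f"Bearer {key}"}
```

Attach the returned dict to any request against the endpoints below; human users instead go through the Google OAuth flow at `/auth/google`.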

## Key Endpoints

- `POST /mcp` — MCP JSON-RPC endpoint
- `POST /a2a` — A2A JSON-RPC endpoint
- `GET/PUT /api/v0/context` — Public preferences
- `GET/PUT /api/v1/context` — Work context
- `GET/PUT /api/v2/context` — Personal data
- `POST /api/v3/request` — Vault access request
- `GET/POST /api/knowledge` — Knowledge base CRUD
- `POST /api/llm/conversations/:id/chat` — LLM chat

## Source

- Website: https://cereb.run
- GitHub: https://github.com/niyoseris/cerebrun
