Memcone MCP
Add Memcone to your editor once. Your AI can then call memcone.context, memcone.remember, and memcone.recall automatically.
Memcone MCP is simply the delivery layer for the same memory primitives the REST API exposes. It gives AI tools access to your persistent memory without changing how Memcone works.
Why MCP
Your IDE already talks HTTP. MCP exposes Memcone as named tools (remember, recall, context) so agents discover memory operations without embedding URLs or rolling custom plugins. One URL (https://memcone.com/api/mcp), one Bearer token — scope resolution and compiled context stay server-side.
How it flows at runtime:
```
Authorization: Bearer … → resolve active project → compile identity + rules + memory → JSON-RPC tool result
```
See also Concepts for how Projects, global memory, and API scopes relate.
MCP tools (RPC)
These are registered by the Memcone MCP server (tools/list). Hosts display them to the agent; users rarely invoke them manually.
| Tool | Purpose |
|---|---|
| memcone.context | Session bootstrap — identity, rules, decisions, architecture map, dual-lane memory summaries. Call once at conversation start with an optional task string so retrieval stays relevant. |
| memcone.remember | Persist a fact or session summary (event text). Writes semantic memory for the resolved scope (aligned with POST /v1/remember). |
| memcone.recall | Search stored beliefs (query string) when memcone.context is not enough — aligned with POST /v1/recall. |
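Under the hood these are standard MCP `tools/call` requests. As a sketch, this is roughly the JSON-RPC 2.0 payload an MCP host sends for `memcone.context` (the `task` argument follows the table above; the authoritative argument schema comes from the server's `tools/list` response):

```python
import json

def tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as an MCP host would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Session bootstrap: request context relevant to the current task.
payload = tool_call("memcone.context", {"task": "add email verification"})
```

The host POSTs this body to `https://memcone.com/api/mcp` with the `Authorization: Bearer` header from your MCP config; you never write this yourself.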
Example prompts
These steer agents toward continuity without exposing RPC names as user jargon:
- “Load project context before we start.”
- “Save that decision before we move on.”
- “Checkpoint — summarize what we built and what’s next.”
- “What did we decide last time about auth?”
Setup
Run this in your project directory:
```shell
npx @memcone/cli link
```
That's it. `link` detects your stack, creates a cloud project, and writes the MCP config directly into whichever IDEs it finds (Cursor, Claude Code, Windsurf, VS Code). Restart your IDE and memory is live.
You need:
- A Memcone API key from dashboard → API keys
- Node.js, so `npx` can run the CLI
How it works
```
CLI                        Cloud              AI Tool
────                       ─────              ───────
npx @memcone/cli link  →   project state  →   compiled context
                           stored safely      injected silently
```
CLI is where truth is created. It detects your repo, infers your stack, and pushes your project state to the cloud. It also writes the MCP config into your IDE automatically.
MCP is where truth is consumed. At the start of every AI session, Memcone compiles your project state into a structured context block and injects it automatically — before your AI writes a single line.
What link does
```shell
npx @memcone/cli link
```
- Prompts for your API key (saved to `~/.memcone/credentials`, so you are never asked again)
- Detects your stack, conventions, and tooling
- Creates a project in your Memcone account and sets it as active
- Uploads your identity, rules (AGENTS.md / CLAUDE.md / .cursorrules), local skills, and architecture map
- Asks which IDE(s) should get MCP (interactive picker), then writes the matching files (e.g. `.vscode/mcp.json`, `.cursor/mcp.json`, `.mcp.json` for Claude Code, …). Use `MEMCONE_MCP=…` / `--mcp=…` in scripts to skip the prompt.
Restart your IDE after running. All MCP calls are automatically scoped to this project.
Manual config
If your IDE wasn't detected, or you prefer to configure manually, all IDEs use the same Memcone MCP endpoint:
Cursor / Claude Code / Windsurf — mcp.json in the IDE config dir:
```json
{
  "mcpServers": {
    "memcone": {
      "type": "http",
      "url": "https://memcone.com/api/mcp",
      "headers": {
        "Authorization": "Bearer mem_live_YOUR_KEY"
      }
    }
  }
}
```
VS Code — `.vscode/mcp.json` in your project root (requires VS Code 1.99+):
```json
{
  "servers": {
    "memcone": {
      "type": "http",
      "url": "https://memcone.com/api/mcp",
      "headers": {
        "Authorization": "Bearer mem_live_YOUR_KEY"
      }
    }
  }
}
```
See the per-IDE guides in the sidebar for exact file paths.
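If you script your own setup instead of using the interactive picker, writing the VS Code variant of this config is only a few lines. A minimal sketch, assuming the `.vscode/mcp.json` shape shown on this page (`project_root` and the key value are placeholders):

```python
import json
from pathlib import Path

def write_vscode_mcp(project_root: str, api_key: str) -> Path:
    """Write .vscode/mcp.json pointing at the Memcone MCP endpoint."""
    config = {
        "servers": {
            "memcone": {
                "type": "http",
                "url": "https://memcone.com/api/mcp",
                "headers": {"Authorization": f"Bearer {api_key}"},
            }
        }
    }
    path = Path(project_root) / ".vscode" / "mcp.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2) + "\n")
    return path
```

`npx @memcone/cli link` does this (and the Cursor/Claude Code equivalents) for you; this is only useful for custom provisioning scripts.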
What gets injected
On every AI session, memcone.context is called automatically. The AI receives a compiled context block in this order:
```
## Project Identity
framework: Next.js App Router
packageManager: pnpm
auth: Better Auth
orm: Drizzle ORM
semicolons: false
strictMode: true

## Hard Rules
[verbatim contents of AGENTS.md / CLAUDE.md / .cursorrules]

## Architecture
[enabled pack best-practice sections, filtered to current task]

## Key Decisions
[architectural decisions stored via CLI or dashboard]

## Working Memory
[retrieved facts from past sessions relevant to the current task]
```
Your AI sees this before your first message. You never have to explain your stack again.
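To make the assembly concrete, here is a hypothetical sketch of compiling named sections into one block in the fixed order above. This is an illustration, not Memcone's actual implementation; empty sections are simply omitted:

```python
def compile_context(sections: dict) -> str:
    """Join non-empty sections into one context block, in a fixed order."""
    order = ["Project Identity", "Hard Rules", "Architecture",
             "Key Decisions", "Working Memory"]
    parts = [f"## {name}\n{sections[name].strip()}"
             for name in order if sections.get(name)]
    return "\n\n".join(parts)

block = compile_context({
    "Project Identity": "framework: Next.js App Router\npackageManager: pnpm",
    "Hard Rules": "semicolons: false",
})
```

The server does this compilation; your IDE only ever sees the finished block.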
What Memcone does NOT do
- It does not require you to write any code or configure prompts
- It is not a chat interface — it runs silently in the background
- It does not capture what happened mid-session by itself; the AI has to store it
The distinction: your project identity, rules, and decisions are always injected automatically — you set those up once with the CLI. Working memory (what you were doing last session) requires your AI to have stored it. The tool descriptions tell the AI to do this automatically at session end, but it only has what was actually stored.
Session continuity — how it actually works
Memcone injects context at session start. But that context is only as good as what was stored in previous sessions.
There are two layers of persistent state:
| Layer | What it is | How it gets stored |
|---|---|---|
| Project state | Stack, rules, decisions | npx @memcone/cli link / sync |
| Working memory | What happened last session | AI calls memcone.remember |
Working memory is the gap. Your AI stores working memory automatically when:
- You share a preference or decision mid-session ("I want to use Redis for this")
- The session wraps up — the AI stores a session summary before you close the chat
If you close a chat without the AI having stored anything, the next session will have project state but no working memory of what you were doing.
To ensure continuity, end your sessions explicitly:
```
you: "save our progress before I close this"
AI:  calls memcone.remember → "Built auth flow. Next: email verification."
```
Or just say: "checkpoint" — the AI will store what was built, what's in progress, and what comes next.
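A checkpoint is just a `memcone.remember` call with a structured summary as the event text. A hypothetical sketch of the arguments the AI might assemble (the `event` argument name is an assumption based on the tool description; the real schema comes from `tools/list`):

```python
def checkpoint(built: str, in_progress: str, next_step: str) -> dict:
    """Shape a session summary as memcone.remember tool-call arguments.

    The "event" key is a guess at the argument name; the actual schema
    is published by the server via tools/list.
    """
    event = f"Built: {built}. In progress: {in_progress}. Next: {next_step}."
    return {"name": "memcone.remember", "arguments": {"event": event}}

args = checkpoint("auth flow", "session middleware", "email verification")
```

Structured summaries like this retrieve better next session than free-form chat fragments.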
Keeping project state up to date
Run npx @memcone/cli sync after significant changes — new dependencies, architecture decisions, tooling changes:
```shell
npx @memcone/cli sync
```
This rescans your repo, diffs against the last known state, and pushes only what changed.
That includes:
- identity and stack changes
- rules and planning files
- local skills from agent skill folders
- architecture map updates
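The diff-then-push behavior can be sketched as comparing the last known state to a fresh scan and keeping only changed or added keys. A simplified illustration, not the CLI's actual code:

```python
def diff_state(last: dict, current: dict) -> dict:
    """Return only keys whose values changed or were added since last sync."""
    return {k: v for k, v in current.items() if last.get(k) != v}

changes = diff_state(
    {"orm": "Prisma", "packageManager": "pnpm"},
    {"orm": "Drizzle ORM", "packageManager": "pnpm", "auth": "Better Auth"},
)
# Only "orm" and "auth" would be pushed; "packageManager" is unchanged.
```

Pushing only the delta keeps `sync` fast and avoids clobbering state you edited in the dashboard.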
Active project
memcone.context, memcone.remember, and memcone.recall require an active project on your account (set when you run npx @memcone/cli link and pick a project). Without one, tools return JSON-RPC -32002. Open Projects in the dashboard, create or select a project, then retry. Semantic memory scopes to that project id so it lines up with API scopes.
Rate limits
MCP tool calls (memcone.context, memcone.remember, memcone.recall) are limited to 60 requests per minute per API key. Going over returns JSON-RPC error -32029 with HTTP 429 and a short retry hint in the message.
Plan agent loops accordingly: bursts of tool calls from multiple chats share the same key. See the rate limit strip on API Keys in the dashboard.
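Agent loops should back off when they hit the limit. A minimal retry sketch — the `-32029` error code and HTTP 429 pairing are from this page, but the backoff policy itself is an assumption, not a Memcone recommendation:

```python
import time

RATE_LIMITED = -32029  # JSON-RPC error code paired with HTTP 429

def call_with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry a JSON-RPC tool call on rate-limit errors, doubling the delay."""
    result = {}
    for attempt in range(max_attempts):
        result = call()
        error = result.get("error")
        if not error or error.get("code") != RATE_LIMITED:
            return result  # success, or a non-rate-limit error
        time.sleep(base_delay * 2 ** attempt)
    return result  # still rate-limited after max_attempts
```

Since all chats on one machine typically share a key, a shared limiter in front of the calls is safer than per-chat retries.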
Supported tools
| Tool | Guide |
|---|---|
| Cursor | Cursor setup → |
| VS Code (Copilot) | VS Code setup → |
| Claude Code | Claude Code setup → |
| Windsurf | Windsurf setup → |
| Cline | Cline setup → |
| JetBrains AI | JetBrains setup → |
| Hermes Agent | Hermes setup → |
| Antigravity | Antigravity setup → |
| OpenCode | OpenCode setup → |
| OpenClaw | OpenClaw setup → |
| Codex CLI | Codex CLI setup → |
| Zed | Zed setup → |
Product split
Memcone has two different integration surfaces:
| Surface | Best for | What the user configures |
|---|---|---|
| REST API | your app or backend | API key + POST /v1/remember / recall / context |
| CLI + MCP | IDEs and coding agents | link / sync once, then one MCP server config |
If you are adding memory to your product, start with the API docs.
If you want your IDE or coding agent to open every session with project context already loaded, stay on the CLI + MCP path.