Model Context Protocol

Your AI forgets you
the moment the session ends.

Add Memcone as an MCP server in one command. Your editor gets persistent memory with the same three primitives: context, remember, and recall.

Get API key · Read the docs

One command. Your AI remembers everything from here on.


Get started

Run this in your project directory
npx @memcone/cli link
01 Detects your stack and links the project to your account
02 Writes MCP config into Cursor, Claude Code, Windsurf, or VS Code automatically
03 Restart your IDE — memory is live
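After linking, the MCP config written in step 02 typically looks something like the entry below. This is a sketch, not the exact file Memcone writes: the `serve` subcommand and the `MEMCONE_API_KEY` variable name are assumptions for illustration.

```json
{
  "mcpServers": {
    "memcone": {
      "command": "npx",
      "args": ["-y", "@memcone/cli", "serve"],
      "env": { "MEMCONE_API_KEY": "<your key>" }
    }
  }
}
```

Cursor, Claude Code, and Windsurf all read a `mcpServers`-style map like this; the CLI's job is just to add the entry in the right file for your editor.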

Exposed tools

memcone.context

Retrieve the most relevant memory for this scope as a ready-to-inject string. Called before each response — one call, no prompt engineering required.

Before every response
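On the wire, a context fetch is a standard MCP `tools/call` JSON-RPC request. A minimal sketch of what the host sends — the `scope` argument is a hypothetical parameter for illustration, not confirmed by Memcone's docs:

```typescript
// Hypothetical JSON-RPC payload an MCP host might send before a response.
// The "scope" argument name is an assumption, not Memcone's actual schema.
const contextRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "memcone.context",
    arguments: { scope: "project" }, // assumed argument
  },
};

// The server replies with a ready-to-inject text block, shaped like:
// { content: [{ type: "text", text: "<relevant memory>" }] }
console.log(JSON.stringify(contextRequest));
```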

memcone.remember

Store a fact or event in memory. Extracts, deduplicates, and embeds automatically. Contradictions resolve without manual cleanup.

When user reveals something
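Storing a fact uses the same `tools/call` shape. A sketch, assuming a single free-text `fact` argument (the argument name is hypothetical):

```typescript
// Hypothetical tools/call for memcone.remember. The model passes raw text;
// extraction, deduplication, and embedding happen server-side.
const rememberRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "memcone.remember",
    arguments: { fact: "User prefers TypeScript strict mode" }, // assumed argument
  },
};

console.log(JSON.stringify(rememberRequest));
```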

memcone.recall

Semantic search over stored memory. Returns ranked results matching a specific query.

When user references the past
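Recall is a query, so the call carries search parameters instead of raw text. A sketch — the `query` and `limit` argument names are assumptions for illustration:

```typescript
// Hypothetical tools/call for memcone.recall: semantic search over memory.
const recallRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "memcone.recall",
    arguments: {
      query: "what did we decide about the database migration", // assumed
      limit: 5, // assumed: cap on ranked results
    },
  },
};

console.log(JSON.stringify(recallRequest));
```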

How it works

01

IDE fetches tool manifest

Your agent runtime reads /.well-known/mcp.json and registers the three Memcone tools automatically.
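What the runtime ends up registering is one entry per tool. There is no single fixed schema for this manifest, so the shape below is a sketch; the tool names and descriptions are taken from this page, everything else is assumed:

```json
{
  "name": "memcone",
  "tools": [
    { "name": "memcone.context", "description": "Retrieve the most relevant memory for this scope as a ready-to-inject string." },
    { "name": "memcone.remember", "description": "Store a fact or event in memory." },
    { "name": "memcone.recall", "description": "Semantic search over stored memory." }
  ]
}
```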

02

Model decides when to call

The LLM reads the tool descriptions and decides autonomously when to fetch context, store facts, or recall memory — you don't prompt it.
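The description is what makes this work: in MCP, each tool advertises a `description` and an `inputSchema` (JSON Schema), and the model uses the description to decide when calling is appropriate. A sketch of what one Memcone tool definition might look like — the exact schema is an assumption:

```json
{
  "name": "memcone.recall",
  "description": "Semantic search over stored memory. Call when the user references past conversations or facts.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "What to search for" }
    },
    "required": ["query"]
  }
}
```

A well-written description like this one is the only "prompting" involved: the host injects it into the model's tool list, and the model does the rest.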

03

Memcone handles everything else

Storage, embedding, caching — all transparent to the model. It just gets clean memory text back.