# POST /v1/context

Retrieve a compressed, LLM-ready context string for a given task.

Call this once before your LLM completion. Inject the result into your system prompt. The model responds with continuity.

## Request

```bash
POST https://api.memcone.com/v1/context
Authorization: Bearer mem_live_...
Content-Type: application/json
```
```json
{
  "scopeId": "user_123",
  "task": "help user debug their Next.js build error"
}
```

### Parameters

| Field | Type | Required | Description |
|---|---|---|---|
| `scopeId` | string | ✓ | Scope to retrieve memory from |
| `task` | string | ✓ | What the AI is about to do; shapes retrieval |

### Query parameters

| Param | Values | Default | Description |
|---|---|---|---|
| `mode` | `fast`, `fresh` | `fast` | See Turbo mode |
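The mode override can be applied when building the request URL. A minimal sketch (the `buildContextUrl` helper is illustrative, not part of any SDK):

```typescript
// Build the /v1/context URL, appending ?mode= only when an override is given.
// Omitting the param falls back to the server default of `fast`.
function buildContextUrl(mode?: 'fast' | 'fresh'): string {
  const url = new URL('https://api.memcone.com/v1/context')
  if (mode) url.searchParams.set('mode', mode)
  return url.toString()
}

// buildContextUrl()        → 'https://api.memcone.com/v1/context'
// buildContextUrl('fresh') → 'https://api.memcone.com/v1/context?mode=fresh'
```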

### Request headers

| Header | Value | Description |
|---|---|---|
| `X-Memcone-Mode` | `fast`, `fresh` | Alternative to the `?mode=` query param |
| `X-Memcone-Debug` | `true` | Include sources and scores in the response |

## Response

```json
{
  "result": "The user is a TypeScript developer who prefers strict mode. They use Next.js App Router and deploy to Railway. They dislike verbose error messages.",
  "tokens_saved": 84,
  "cache_hit": true
}
```

| Field | Type | Description |
|---|---|---|
| `result` | string | Ready-to-inject context string. Empty string if no memory was found. |
| `tokens_saved` | number | Tokens saved versus raw fact retrieval |
| `cache_hit` | boolean | Whether the response was served from the Redis cache |

### Response headers

| Header | Description |
|---|---|
| `X-Memcone-Cache` | `HIT` or `MISS` |
| `X-Memcone-Latency-Ms` | Total server latency in milliseconds |
| `X-Memcone-Tokens-Saved` | Token compression delta |
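These headers are handy for logging and dashboards. A minimal sketch of surfacing them after a call (the `summarizeContextResponse` helper and its log format are illustrative):

```typescript
// Summarize the observability headers from a /v1/context response.
// `Headers` is the standard Fetch API type; lookups are case-insensitive.
function summarizeContextResponse(headers: Headers): string {
  const cache = headers.get('X-Memcone-Cache') ?? 'MISS'
  const latency = headers.get('X-Memcone-Latency-Ms') ?? '?'
  const saved = headers.get('X-Memcone-Tokens-Saved') ?? '0'
  return `cache=${cache} latency=${latency}ms tokens_saved=${saved}`
}

// e.g. const res = await fetch(...); console.log(summarizeContextResponse(res.headers))
```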

### Debug response

With `X-Memcone-Debug: true`:

```json
{
  "result": "The user prefers TypeScript with strict mode.",
  "tokens_saved": 42,
  "cache_hit": false,
  "sources": [
    { "text": "user prefers TypeScript over JavaScript", "strength": 2.7, "similarity": 0.91 },
    { "text": "user always uses strict mode", "strength": 1.9, "similarity": 0.85 }
  ]
}
```
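The `sources` array makes it easy to see why a given context string was produced. A minimal sketch of ranking sources for inspection (the `DebugSource` type and `rankSources` helper are illustrative, mirroring the example payload above):

```typescript
// Shape of one entry in the debug `sources` array, as shown in the example.
interface DebugSource {
  text: string
  strength: number
  similarity: number
}

// Sort a copy of the sources by similarity, strongest match first,
// and format each as "0.91  <memory text>" for quick eyeballing.
function rankSources(sources: DebugSource[]): string[] {
  return [...sources]
    .sort((a, b) => b.similarity - a.similarity)
    .map(s => `${s.similarity.toFixed(2)}  ${s.text}`)
}
```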

## Usage pattern

```typescript
const { result: memory } = await fetch('https://api.memcone.com/v1/context', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.MEMCONE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    scopeId: userId,
    task: 'help the user with their current coding task',
  }),
}).then(r => r.json())

// Inject into the system prompt. `result` is an empty string when no
// memory is found, so the falsy check falls back cleanly.
const messages = [
  { role: 'system', content: memory ? `User context: ${memory}` : 'No prior context.' },
  { role: 'user', content: userMessage },
]
```
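Because this call sits on the critical path of every completion, it is worth wrapping so a memory outage never blocks the user-facing request. A minimal sketch under that assumption (the `getContextSafe` wrapper and its 2s timeout are illustrative choices, not part of the API):

```typescript
// Fetch context, but degrade to an empty string on any error or timeout
// so the LLM call can proceed without memory instead of failing.
async function getContextSafe(scopeId: string, task: string): Promise<string> {
  try {
    const res = await fetch('https://api.memcone.com/v1/context', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.MEMCONE_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ scopeId, task }),
      signal: AbortSignal.timeout(2000), // don't stall the user-facing request
    })
    if (!res.ok) return ''
    const { result } = await res.json()
    return result ?? ''
  } catch {
    return '' // degrade to no-memory rather than surfacing the failure
  }
}
```

An empty return value flows naturally into the falsy check in the usage pattern above, so the "No prior context." branch covers both "no memory" and "memory unavailable".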