MemoryNode integration

After the quickstart, most applications only need two HTTPS calls to start delivering value:

  1. Write memory: POST /v1/memories
  2. Retrieve: POST /v1/search

Base URL: https://api.memorynode.ai (or your self-hosted Worker URL).

Authentication

Send the API key from the console (Settings → API keys):

Authorization: Bearer mn_…your_key…

(or x-api-key: mn_…).
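Repeating the auth headers on every call gets tedious, so a small wrapper can centralize them. The `mn_api` function below is a hypothetical helper, not shipped tooling; it echoes the composed curl command instead of executing it, so the sketch runs without network access (drop the `echo` to make real calls):

```shell
# Hypothetical helper that centralizes the base URL and auth headers.
# `echo` keeps this sketch offline; remove it to actually send requests.
MEMORYNODE_API_KEY="mn_example_key"   # placeholder; use your console key

mn_api() {   # usage: mn_api <path> <json-body>
  echo curl -sS -X POST "https://api.memorynode.ai$1" \
    -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$2"
}

mn_api /v1/memories '{"text":"hello"}'
```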

Minimal write

curl -sS -X POST "https://api.memorynode.ai/v1/memories" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text":"Customer prefers email digests on Mondays.","extract":false}'

Responses include x-request-id on success and request_id inside JSON errors — log both.
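One way to capture x-request-id is to dump response headers with `curl -D` and pull the value out. The header text below is illustrative sample data so the sketch runs offline; a real call via `curl -sS -D headers.txt …` writes the same format to `headers.txt`:

```shell
# Illustrative header dump; a real call would produce this via:
#   curl -sS -D headers.txt -X POST "https://api.memorynode.ai/v1/memories" ...
headers=$(printf 'HTTP/2 200\r\ncontent-type: application/json\r\nx-request-id: req_sample_123\r\n')

# Strip carriage returns, match the header name case-insensitively, print the value.
request_id=$(printf '%s' "$headers" | tr -d '\r' \
  | awk -F': ' 'tolower($1) == "x-request-id" { print $2 }')

echo "request_id=$request_id"   # log this alongside your own correlation id
```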

Minimal search

curl -sS -X POST "https://api.memorynode.ai/v1/search" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query":"email digest Monday preference","top_k":5,"namespace":"default","search_mode":"hybrid"}'

Save the response header x-request-id — you need it for support and for explicit retrieval feedback (below).
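Assuming the search response carries a results array with text fields (the exact shape is in the API reference; the sample below is made up), a quick way to eyeball the top hit without jq:

```shell
# Made-up sample search response; consult API_USAGE.md for the real shape.
response='{"results":[{"memory_id":"mem_1","text":"Customer prefers email digests on Mondays."}]}'

# Pull the first "text" field with sed; fine for eyeballing, use jq in production.
top_hit=$(printf '%s' "$response" | sed -n 's/.*"text":"\([^"]*\)".*/\1/p')
echo "top hit: $top_hit"
```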

What you get without extra code

These run on every write/search; you do not configure separate services for them.

  - Lifecycle intelligence: Confidence, volatility, expiry, and supersession are applied on ingest and reflected in ranking — not a separate “lifecycle API”.
  - Semantic dedupe: Near-duplicate text or embeddings can return { deduped: true, memory_id, dedupe_kind } instead of a second row.
  - Contradiction handling: Conflicting facts are resolved on write; check intelligence.conflict_state on new memories.
  - SQL recall + Worker ranking: Postgres returns candidates; business ranking (lifecycle, learning, freshness) runs in the Worker after fusion.
  - Retrieval learning: Search and context improve from usage over time; send thumbs up/down with POST /v1/feedback and the search x-request-id.
  - Model-agnostic chat: Extraction and evolution use CHAT_PROVIDER (OpenAI, Anthropic, or Gemini on the Worker). Embeddings stay OpenAI/stub with per-chunk version metadata — no bulk re-embed when you change chat vendor.

Details: API_USAGE.md §4 (lifecycle, providers, retrieval architecture).
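The dedupe behaviour is worth handling explicitly: a write can come back with deduped: true instead of creating a new row. A minimal branch, using an illustrative response (the memory_id and dedupe_kind values here are made up) and grep/sed so it works without jq:

```shell
# Illustrative dedupe response; field values are made up for the sketch.
response='{"deduped":true,"memory_id":"mem_123","dedupe_kind":"embedding"}'

if printf '%s' "$response" | grep -q '"deduped":true'; then
  existing=$(printf '%s' "$response" | sed -n 's/.*"memory_id":"\([^"]*\)".*/\1/p')
  echo "near-duplicate of $existing; no new row written"
else
  echo "new memory stored"
fi
```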

Explicit retrieval feedback

curl -sS -X POST "https://api.memorynode.ai/v1/feedback" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"feedback":"positive","request_id":"'"$REQUEST_ID_FROM_SEARCH"'"}'
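Splicing a shell variable into single-quoted JSON, as above, is easy to get wrong; building the payload with printf keeps the quoting in one place. The request id below is a placeholder standing in for a captured x-request-id:

```shell
# Placeholder id; in practice this comes from the x-request-id header
# of the search response you are giving feedback on.
REQUEST_ID_FROM_SEARCH="req_sample_123"

# printf keeps the JSON quoting in one place instead of nesting shell quotes.
payload=$(printf '{"feedback":"positive","request_id":"%s"}' "$REQUEST_ID_FROM_SEARCH")
echo "$payload"

# Then send it:
#   curl -sS -X POST "https://api.memorynode.ai/v1/feedback" \
#     -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
#     -H "Content-Type: application/json" -d "$payload"
```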

TypeScript SDK

@memorynodeai/sdk wraps the same routes with retries and typed errors.

Deeper reference