# MemoryNode integration
After the quickstart, most applications only need two HTTPS calls to start delivering value:

- Write memory — `POST /v1/memories`
- Retrieve — `POST /v1/search`
Base URL: `https://api.memorynode.ai` (or your self-hosted Worker URL).
## Authentication
Send the API key from the console (Settings → API keys):
`Authorization: Bearer mn_…your_key…` (or `x-api-key: mn_…`).
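A minimal sketch of assembling these headers in Python; the `build_headers` helper name is illustrative, not part of the API:

```python
import os

def build_headers(api_key: str) -> dict:
    """Return the headers MemoryNode expects for JSON requests.

    Bearer auth is shown; the x-api-key header form works the same way.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Typically the key comes from the environment:
# headers = build_headers(os.environ["MEMORYNODE_API_KEY"])
```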
## Minimal write
```shell
curl -sS -X POST "https://api.memorynode.ai/v1/memories" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text":"Customer prefers email digests on Mondays.","extract":false}'
```

Responses include `x-request-id` on success and `request_id` inside JSON errors — log both.
## Minimal search
```shell
curl -sS -X POST "https://api.memorynode.ai/v1/search" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query":"email digest Monday preference","top_k":5,"namespace":"default","search_mode":"hybrid"}'
```

Save the response header `x-request-id` — you need it for support and for explicit retrieval feedback (below).
## What you get without extra code
These run on every write/search; you do not configure separate services for them.
| Capability | What it means for your app |
|---|---|
| Lifecycle intelligence | Confidence, volatility, expiry, and supersession are applied on ingest and reflected in ranking — not a separate “lifecycle API”. |
| Semantic dedupe | Near-duplicate text or embeddings can return `{ deduped: true, memory_id, dedupe_kind }` instead of a second row. |
| Contradiction handling | Conflicting facts are resolved on write; check `intelligence.conflict_state` on new memories. |
| SQL recall + Worker ranking | Postgres returns candidates; business ranking (lifecycle, learning, freshness) runs in the Worker after fusion. |
| Retrieval learning | Search and context improve from usage over time; send thumbs up/down with `POST /v1/feedback` and the search `x-request-id`. |
| Model-agnostic chat | Extraction and evolution use `CHAT_PROVIDER` (OpenAI, Anthropic, or Gemini on the Worker). Embeddings stay OpenAI/stub with per-chunk version metadata — no bulk re-embed when you change chat vendor. |
Details: API_USAGE.md §4 (lifecycle, providers, retrieval architecture).
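As a concrete example of the dedupe and contradiction rows above, a write response can be branched on like this. The exact response envelope is an assumption; the `deduped`, `memory_id`, `dedupe_kind`, and `intelligence.conflict_state` field names come from the table, while the sample values are hypothetical:

```python
def describe_write(resp: dict) -> str:
    """Summarize a /v1/memories response: dedupe hit vs. new memory.

    Field names follow the capability table; the envelope shape is assumed.
    """
    if resp.get("deduped"):
        return f"deduped ({resp['dedupe_kind']}) onto {resp['memory_id']}"
    state = resp.get("intelligence", {}).get("conflict_state")
    label = f"new memory {resp.get('memory_id')}"
    return f"{label}, conflict_state={state}" if state else label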
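As a concrete example of the dedupe and contradiction rows above, a write response can be branched on like this. The exact response envelope is an assumption; the `deduped`, `memory_id`, `dedupe_kind`, and `intelligence.conflict_state` field names come from the table, while the sample values are hypothetical:

```python
def describe_write(resp: dict) -> str:
    """Summarize a /v1/memories response: dedupe hit vs. new memory.

    Field names follow the capability table; the envelope shape is assumed.
    """
    if resp.get("deduped"):
        return f"deduped ({resp['dedupe_kind']}) onto {resp['memory_id']}"
    state = resp.get("intelligence", {}).get("conflict_state")
    label = f"new memory {resp.get('memory_id')}"
    return f"{label}, conflict_state={state}" if state else label
```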
## Explicit retrieval feedback
```shell
curl -sS -X POST "https://api.memorynode.ai/v1/feedback" \
  -H "Authorization: Bearer $MEMORYNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"feedback":"positive","request_id":"'"$REQUEST_ID_FROM_SEARCH"'"}'
```
## TypeScript SDK

`@memorynodeai/sdk` wraps the same routes with retries and typed errors.
## Deeper reference

- `docs/external/openapi.yaml` — product OpenAPI surface (generated; excludes operator and advanced routes).
- `docs/external/API_USAGE.md` — prose HTTP reference including routes not duplicated in OpenAPI.
- `docs/MCP_SERVER.md` — hosted and stdio MCP.