Add Memory
Ingest unstructured data into the Memory Cluster.
Operational Pipeline
The ingestion API operates on an asynchronous, event-driven model. Requests are validated and acknowledged immediately (202 Accepted), then offloaded to a background worker queue. The data undergoes a specific transformation sequence before becoming searchable:
- Shift-Left Temporal Resolution: The pipeline scans `source.content` for relative time references (e.g., “next Friday”, “in three days”). These are resolved into absolute ISO 8601 timestamps at the moment of ingestion, creating a deterministic temporal index.
- Semantic Enrichment: The system applies bidirectional expansion, injecting implicit context (themes, synonyms) to maximize vector overlap.
- Dual-Write Persistence: Data is synchronously committed to both the Metadata Store and Vector Database.
- Centroid Calibration: Finally, the user’s semantic centroid is recalculated to reflect the new memory distribution, influencing future adaptive search modes.
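The Shift-Left step can be pictured with a minimal resolver. This is an illustrative sketch, not the pipeline's actual implementation: it handles only the two phrase shapes mentioned above, and it treats “next Friday” as the nearest upcoming Friday.

```python
from datetime import datetime, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def resolve_relative(phrase: str, now: datetime) -> str:
    """Resolve a handful of relative time phrases to ISO 8601 dates."""
    p = phrase.lower().strip()
    if p.startswith("next ") and p[5:] in WEEKDAYS:
        # Nearest upcoming occurrence of that weekday (never "today").
        days_ahead = (WEEKDAYS.index(p[5:]) - now.weekday()) % 7 or 7
        return (now + timedelta(days=days_ahead)).date().isoformat()
    if p.startswith("in ") and p.endswith(" days"):
        return (now + timedelta(days=WORDS[p[3:-5]])).date().isoformat()
    raise ValueError(f"unsupported phrase: {phrase!r}")

now = datetime(2023, 11, 14)  # a Tuesday
print(resolve_relative("next Friday", now))    # 2023-11-17
print(resolve_relative("in three days", now))  # 2023-11-17
```

Because resolution happens at ingestion time, the stored index stays deterministic: searching later for events around 2023-11-17 finds the memory even though the original text never contained an absolute date.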

Text ingest (API)
Submit a text-based memory payload.
Payload schema (text)
| Field | Type | Status | Description |
|---|---|---|---|
| `source.type` | String | REQUIRED | Must be `text`. |
| `source.content` | String | REQUIRED | The raw text to ingest. This field is parsed for automatic relative date resolution (Shift-Left). |
| `userContext` | String | OPTIONAL | Additional meta-context (e.g., “Slack Message”, “Email from CEO”) used to guide the enrichment engine. |
```shell
curl -X POST "https://api.memorymodel.dev/v1/memory" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "x-end-user-id: user_123" \
  -d '{
    "source": {
      "type": "text",
      "content": "The project deadline has been moved to next Friday."
    },
    "userContext": "Slack message from Project Manager"
  }'
```

```typescript
// npm install @memorymodel/client
import { MemoryClient } from '@memorymodel/client';

const client = new MemoryClient({
  apiKey: 'sk_live_...',
  defaultEndUserId: 'user_123'
});

const response = await client.add(
  "The project deadline has been moved to next Friday.",
  { userContext: "Slack message from Project Manager" }
);

console.log(response.jobId);
```

```python
# pip install memory-model
from memory_model import MemoryClient

client = MemoryClient(
    api_key="sk_live_...",
    default_end_user_id="user_123"
)

response = client.add(
    "The project deadline has been moved to next Friday.",
    user_context="Slack message from Project Manager"
)
print(response.job_id)
```

Response (text)

```json
{
  "status": "Accepted",
  "jobId": "job_1700000000000_user_123"
}
```

Image ingest (API)
Submit an image for multimodal processing. The system extracts visual features and semantic context, integrating them into the memory graph alongside text nodes.
Payload schema (image)
| Field | Type | Status | Description |
|---|---|---|---|
| `imageData` | String | REQUIRED | Base64-encoded image string. |
| `userContext` | String | OPTIONAL | Context to anchor the visual analysis (e.g., “Screenshot of error logs”). |
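The `imageData` field expects the image's raw bytes encoded as base64. With Python's standard library (the bytes below are just the eight-byte PNG file signature, standing in for a real image):

```python
import base64

# PNG file signature; substitute the bytes of your actual image file,
# e.g. open("screenshot.png", "rb").read()
png_bytes = b"\x89PNG\r\n\x1a\n"

image_data = base64.b64encode(png_bytes).decode("ascii")
print(image_data)  # iVBORw0KGgo=
```

The `iVBOR...` prefix in the examples below is exactly this signature after encoding, which is a quick sanity check that a payload really starts with PNG data.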
```shell
curl -X POST "https://api.memorymodel.dev/v1/memory/image" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "imageData": "iVBORw0KGgoAAAANSUhEUgAAAAE...",
    "userContext": "Screenshot of the error message"
  }'
```

```typescript
// npm install @memorymodel/client
import { MemoryClient } from '@memorymodel/client';

const client = new MemoryClient({ apiKey: 'sk_live_...' });

const response = await client.addImage({
  imageData: "iVBORw0KGgoAAAANSUhEUgAAAAE...",
  userContext: "Screenshot of the error message"
});
```

```python
# pip install memory-model
import base64

from memory_model import MemoryClient

client = MemoryClient(api_key="sk_live_...")

# A 1x1 placeholder PNG; encode your own file with
# base64.b64encode(open(path, "rb").read()).decode()
img_b64 = "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNk+A8AAQUBAScY42YAAAAASUVORK5CYII="

response = client.add_image(img_b64, user_context="Screenshot of the error message")
```

Response (image)
```json
{
  "status": "Accepted",
  "jobId": "img_job_1700000000000_user_123",
  "previewUrl": "https://storage.googleapis.com/..."
}
```

Document Ingest (API)
Submit a PDF document for automatic processing and ingestion.
The system accepts PDF files via `multipart/form-data`, queues them, extracts text from each page, and links them logically within the Memory Graph.
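The `multipart/form-data` contract can be exercised directly with any HTTP client. Below is a hedged sketch using only the Python standard library; the endpoint and header names come from the curl example in this section, the PDF bytes are a placeholder, and the request is built but not sent.

```python
import urllib.request
import uuid

def build_multipart(field: str, filename: str, content: bytes, content_type: str):
    """Hand-assemble a multipart/form-data body (what `curl -F` does automatically)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n"
        "\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart("file", "document.pdf",
                              b"%PDF-1.4 placeholder", "application/pdf")

# Mirrors the curl example in this section; call urlopen(req) to actually send.
req = urllib.request.Request(
    "https://api.memorymodel.dev/v1/ingest/document/upload",
    data=body,
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "x-end-user-id": "user_123",
        "Content-Type": ctype,
    },
    method="POST",
)
print(ctype)
```

In practice the official clients handle this framing for you; the sketch is mainly useful for debugging rejected uploads, since a missing boundary or a `Content-Type` other than `application/pdf` on the file part are common failure causes.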
Payload schema (document upload)
| Field | Type | Status | Description |
|---|---|---|---|
| `file` | File (Binary) | REQUIRED | The PDF file to be uploaded. Must be `application/pdf`. |
```shell
curl -X POST "https://api.memorymodel.dev/v1/ingest/document/upload" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "x-end-user-id: user_123" \
  -F "file=@/path/to/your/document.pdf"
```

```typescript
// npm install @memorymodel/client@0.5.0
import { MemoryClient } from '@memorymodel/client';
import * as fs from 'fs';

const client = new MemoryClient({
  apiKey: "sk_live_...",
  defaultEndUserId: "user_123"
});

const fileData = fs.readFileSync('/path/to/document.pdf');

// Requires client v0.5.0+
const response = await client.uploadDocument(fileData, {
  fileName: 'document.pdf'
});

console.log(response.jobId, response.storagePath);
```

```python
# pip install memory-model==0.5.0
from memory_model import MemoryClient

client = MemoryClient(
    api_key="sk_live_...",
    default_end_user_id="user_123"
)

with open('/path/to/document.pdf', 'rb') as f:
    file_data = f.read()

# Requires client v0.5.0+
response = client.upload_document(
    file_data=file_data,
    file_name="document.pdf"
)
print(response.job_id, response.storage_path)
```

Response (document)
```json
{
  "status": "Accepted",
  "jobId": "doc_job_1700000000000_user_123",
  "storagePath": "projects/memorymodel/docs/user_123/document.pdf"
}
```
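All three ingest endpoints acknowledge with 202 and a `jobId` before processing completes, so callers that need the memory to be searchable must wait for the job to finish. No job-status endpoint is documented on this page, so the `fetch_status` callable below is a stub you would replace with a real status check; the backoff loop itself is the reusable part.

```python
import time

def wait_for_job(fetch_status, job_id, timeout=60.0, interval=0.5):
    """Poll fetch_status(job_id) with exponential backoff until a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_status(job_id)
        if state in ("Completed", "Failed"):
            return state
        time.sleep(interval)
        interval = min(interval * 2, 5.0)  # cap the backoff at 5 seconds
    raise TimeoutError(f"job {job_id} still pending after {timeout}s")

# Stub standing in for a real status check (hypothetical states shown).
states = iter(["Pending", "Pending", "Completed"])
result = wait_for_job(lambda _id: next(states),
                      "doc_job_1700000000000_user_123", interval=0.01)
print(result)  # Completed
```

Exponential backoff keeps polling cheap for long-running document jobs while staying responsive for small text ingests that complete in under a second.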