Migrating From Mem0 / Letta / Zep to Ujex Recall
All three systems support JSON export. For each entry, write a .md file to a Cloud Storage bucket prefix matching your namespace: YAML frontmatter for metadata, body for content. Re-embed with Vertex AI if you want vector search. Budget roughly 30 minutes to export and an hour to transform and ingest.
Migration between agent memory layers is doable but not magical. The shapes differ; the transform isn't identity. This post is the recipe for moving to Ujex Recall.
Prerequisites
- Source: Mem0 / Letta / Zep — production data
- Target: Ujex project (Firebase) with Recall enabled
- Your agent's namespace plan (e.g. `user_facts/`, `conversations/<date>/`, `episodes/`)
Step 1 — Export
From Mem0
```python
import json

from mem0 import Memory

m = Memory()
all_memories = m.get_all(user_id='your-user')
# Returns a list of {id, memory, created_at, updated_at, metadata}

with open('mem0-export.json', 'w') as f:
    json.dump(all_memories, f)
```
From Letta
Letta's archival memory exports via the SDK:
```python
import json

from letta import create_client

client = create_client()
agent = client.get_agent(agent_name='my-agent')
archival = client.get_archival_memory(agent_id=agent.id)

with open('letta-export.json', 'w') as f:
    json.dump([{'text': p.text, 'metadata': p.metadata} for p in archival], f)
```
From Zep
Zep has a documented export API: `GET /api/v2/memories?session_id=...` returns JSON.
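A minimal fetch sketch using only the standard library. The `Authorization: Api-Key` header, the host, and the session ID are assumptions — check your Zep deployment's auth scheme before using this:

```python
import json
import urllib.parse
import urllib.request

def zep_memories_url(base, session_id):
    """Build the v2 export URL for one session."""
    return f'{base}/api/v2/memories?' + urllib.parse.urlencode({'session_id': session_id})

def fetch_session(base, session_id, api_key):
    """Fetch one session's memories as parsed JSON. Auth header name is an assumption."""
    req = urllib.request.Request(
        zep_memories_url(base, session_id),
        headers={'Authorization': f'Api-Key {api_key}'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Then e.g.:
#   memories = fetch_session('https://your-zep-host', 'your-session-id', api_key)
#   json.dump(memories, open('zep-export.json', 'w'))
```

Loop this over your session IDs to build one export file per session.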
Step 2 — Transform to Markdown
```python
import hashlib
import json
import os

import yaml

os.makedirs('export', exist_ok=True)

with open('mem0-export.json') as f:
    rows = json.load(f)

for row in rows:
    # Stable filename: source ID if present, else a short content hash
    name = row.get('id') or hashlib.sha256(row['memory'].encode()).hexdigest()[:12]
    fm = {
        'type': 'memory',
        'created': row.get('created_at'),
        'updated': row.get('updated_at'),
        # `or {}` guards against an explicit null metadata field
        'tags': (row.get('metadata') or {}).get('tags', []),
    }
    md = '---\n' + yaml.dump(fm) + '---\n\n' + row['memory']
    with open(f'export/{name}.md', 'w') as out:
        out.write(md)
```
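Before uploading, it's worth a sanity check that each file splits cleanly back into frontmatter and body. A minimal round-trip sketch with no YAML dependency — it only splits on the `---` delimiters:

```python
def split_frontmatter(md: str):
    """Split a frontmatter-bearing .md string into (frontmatter_text, body)."""
    if not md.startswith('---\n'):
        return None, md
    end = md.index('\n---\n', 4)  # locate the closing delimiter
    return md[4:end], md[end + 5:].lstrip('\n')

fm, body = split_frontmatter('---\ntype: memory\n---\n\nUser prefers dark mode.')
assert fm == 'type: memory'
assert body == 'User prefers dark mode.'
```

Run it over every file in `export/` and fail the batch on the first file that raises.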
Step 3 — Upload to your bucket
```shell
gcloud storage cp -r export/ gs://your-ujex-bucket/agents/your-agent/memories/
```
Recall's index updates within a minute (Cloud Function trigger on bucket write).
Step 4 — Re-embed (if you want vector search)
Vertex AI embeddings differ from OpenAI / Cohere — the vectors don't transfer. Trigger re-embed:
```python
import os

from ujex_recall import RecallStore

store = RecallStore(api_key=os.environ['UJEX_API_KEY'], agent_id='your-agent')
store.reindex(namespace='memories')  # walks the bucket prefix, embeds, writes the Firestore index
```
Namespace mapping
| Source concept | Recall path |
|---|---|
| Mem0 user memories | `user_facts/<name>.md` |
| Letta archival | `archival/<name>.md` |
| Zep summarized session | `conversations/<date>/<session-id>.md` |
| Vector facts | `facts/<namespace>/<key>.md` |
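The table above can be encoded as a small router so the transform script has one place to decide paths. A sketch — the route names are hypothetical; adjust them to your own namespace plan:

```python
def recall_path(source: str, **kw) -> str:
    """Map a source-system concept to its Recall bucket path (per the table above)."""
    routes = {
        'mem0_user_memory': lambda: f"user_facts/{kw['name']}.md",
        'letta_archival': lambda: f"archival/{kw['name']}.md",
        'zep_session': lambda: f"conversations/{kw['date']}/{kw['session_id']}.md",
        'vector_fact': lambda: f"facts/{kw['namespace']}/{kw['key']}.md",
    }
    return routes[source]()

assert recall_path('zep_session', date='2024-05-01', session_id='abc') == 'conversations/2024-05-01/abc.md'
```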
What to keep vs drop
- Keep: text content; metadata that's still meaningful (tags, type, created/updated); source IDs in frontmatter for back-traceability.
- Drop: source-specific embeddings (re-compute them); TTL fields (handle these in Recall's lifecycle); proprietary fields with no equivalent.
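One way to apply the keep/drop rules mechanically is an allowlist over the metadata dict. A sketch — extend `KEEP` with whatever source-specific fields you decide still matter:

```python
KEEP = {'id', 'type', 'tags', 'created_at', 'updated_at'}  # 'id' kept for back-traceability

def filter_metadata(meta: dict) -> dict:
    """Keep only allowlisted keys; embeddings, TTLs, and proprietary fields fall away."""
    return {k: v for k, v in meta.items() if k in KEEP}

row = {'id': 'm1', 'tags': ['prefs'], 'embedding': [0.1, 0.2], 'ttl': 3600}
assert filter_metadata(row) == {'id': 'm1', 'tags': ['prefs']}
```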
Validation
After ingest:
```python
from ujex_recall import RecallStore

store = RecallStore(api_key=...)
results = store.search(('memories',), query='something I know is in there', limit=5)
assert len(results) > 0
```
Rollback
The bucket is the source of truth. Deleting the migrated prefix removes everything Recall ingested; your old store is untouched:

```shell
gcloud storage rm -r gs://your-ujex-bucket/agents/your-agent/memories/
```
FAQ
Will my agent's behavior change after migration?
Embeddings change; retrieval results may differ. Run an A/B with a few representative queries and compare top-5 to validate behavior.
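A quick way to quantify the drift is top-k overlap between the old and new result lists. A sketch, assuming each result can be reduced to a stable ID (e.g. the source ID you kept in frontmatter):

```python
def topk_overlap(old_ids, new_ids, k=5):
    """Fraction of the old top-k that survives in the new top-k (1.0 = identical sets)."""
    a, b = set(old_ids[:k]), set(new_ids[:k])
    return len(a & b) / max(len(a), 1)

old = ['m1', 'm2', 'm3', 'm4', 'm5']  # old store's top-5 for one query
new = ['m2', 'm1', 'm9', 'm4', 'm7']  # Recall's top-5 for the same query
assert topk_overlap(old, new) == 0.6
```

Run it over your representative queries and eyeball any query whose overlap is low.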
How long does ingest take?
Roughly 1 second per .md file written to bucket (Cloud Function trigger). 1000 memories = ~17 minutes.
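The estimate is just linear arithmetic; a one-liner you can adjust if your per-file trigger latency differs:

```python
def ingest_minutes(n_files: int, sec_per_file: float = 1.0) -> float:
    """Rough wall-clock estimate for bucket-triggered ingest, in minutes."""
    return n_files * sec_per_file / 60

assert round(ingest_minutes(1000)) == 17
```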
Can I migrate gradually?
Yes — write to both stores during transition. Read from Recall after ingest validates.