Migration Guide

If you’re currently using Anthropic’s MCP Memory server, your knowledge graph lives in a single JSONL file. mcp-memory stores the same data in SQLite with proper indexing, concurrency support, and vector embeddings, but the data model is identical.

The migrate tool reads your existing JSONL file and imports every entity, observation, and relation into the SQLite database. It’s idempotent, fault-tolerant, and safe to run multiple times.

The Anthropic MCP Memory server stores data in JSONL format — one JSON object per line. There are exactly two record types:

An entity represents a node in the knowledge graph with a name, type, and a list of observations:

{"type": "entity", "name": "Session 2026-03-21", "entityType": "Session", "observations": ["Decision: build MCP Memory v2", "Uses FastMCP framework"]}

Fields:

| Field | Type | Description |
| --- | --- | --- |
| type | string | Must be "entity" |
| name | string | Unique identifier for the entity |
| entityType | string | Entity type (defaults to "Generic" if omitted) |
| observations | string[] | List of observation strings (can be empty) |

A relation connects two entities with a typed edge:

{"type": "relation", "from": "MCP Memory v2", "to": "FastMCP", "relationType": "uses"}

Required fields:

| Field | Type | Description |
| --- | --- | --- |
| type | string | Must be "relation" |
| from | string | Name of the source entity |
| to | string | Name of the target entity |
| relationType | string | Type of the relationship |

A typical file mixes both record types. Entities should appear before the relations that reference them:

{"type": "entity", "name": "Project Alpha", "entityType": "Project", "observations": ["Started in March 2026", "Uses Python 3.12"]}
{"type": "entity", "name": "SQLite", "entityType": "Technology", "observations": ["Used for persistence"]}
{"type": "entity", "name": "FastMCP", "entityType": "Framework", "observations": ["MCP server framework"]}
{"type": "relation", "from": "Project Alpha", "to": "SQLite", "relationType": "uses"}
{"type": "relation", "from": "Project Alpha", "to": "FastMCP", "relationType": "built_with"}

The migration runs in four sequential phases:

The JSONL file is read line by line. Each line is parsed as a standalone JSON object:

  • If the line parses successfully, the record is classified as an entity or relation by its type field.
  • If JSON parsing fails (corrupt line, encoding issue), a warning is logged and the line is skipped. The line is counted in the errors total.
  • Blank lines are ignored.

This phase is purely in-memory — no database writes happen yet. The parsed records are queued for the next phases.
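The parse phase can be sketched in a few lines of plain Python. This is an illustrative simplification, not the actual code in mcp_memory.migrate (which, among other things, also counts records with an unknown type as skipped):

```python
import json

def parse_jsonl(path):
    """Classify each line of a JSONL export as an entity or a relation.

    Simplified sketch of the parse phase: blank lines are ignored,
    unparseable lines are counted as errors, and nothing is written
    to the database yet.
    """
    entities, relations, errors = [], [], 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:  # blank lines are ignored
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                print(f"warning: skipping corrupt line {lineno}")
                errors += 1
                continue
            if record.get("type") == "entity":
                entities.append(record)
            elif record.get("type") == "relation":
                relations.append(record)
    return entities, relations, errors
```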

Each entity record is processed through upsert_entity, which uses SQLite’s ON CONFLICT(name) DO UPDATE:

  • New entity: inserted into the entities table.
  • Existing entity: observations are merged — only observations that don’t already exist are added. The entity type is updated if it differs.

This means you can safely run migration on a file that partially overlaps with data already in the database.
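The upsert pattern itself can be demonstrated with the standard-library sqlite3 module. The table and column names below (entities, entity_type) are assumptions for the sketch, and observation merging is omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical minimal schema; the real entities table has more columns.
conn.execute("CREATE TABLE entities (name TEXT PRIMARY KEY, entity_type TEXT NOT NULL)")

def upsert_entity(conn, name, entity_type):
    # ON CONFLICT(name) DO UPDATE: insert a new row, or refresh the
    # type of an existing row, without raising on duplicate names.
    conn.execute(
        """
        INSERT INTO entities (name, entity_type) VALUES (?, ?)
        ON CONFLICT(name) DO UPDATE SET entity_type = excluded.entity_type
        """,
        (name, entity_type),
    )

upsert_entity(conn, "Project Alpha", "Project")
upsert_entity(conn, "Project Alpha", "Initiative")  # same name: type is updated
rows = conn.execute("SELECT name, entity_type FROM entities").fetchall()
# rows == [("Project Alpha", "Initiative")]
```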

Each relation record is validated before insertion:

  1. Both the from and to entities must exist in the database.
  2. All required fields (from, to, relationType) must be present.
  3. The relation must not already exist (enforced by a UNIQUE constraint on (from_entity, to_entity, relation_type)).

If any check fails, the relation is skipped and counted as skipped. If the relation is valid, it’s inserted via create_relation, which catches IntegrityError on duplicate constraints for safety.
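The three checks map naturally onto a small helper. The schema below follows the UNIQUE constraint named above, but the helper itself is an illustrative sketch, not the library’s create_relation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entities (name TEXT PRIMARY KEY);
    CREATE TABLE relations (
        from_entity TEXT NOT NULL,
        to_entity TEXT NOT NULL,
        relation_type TEXT NOT NULL,
        UNIQUE (from_entity, to_entity, relation_type)
    );
""")
conn.execute("INSERT INTO entities VALUES ('Project Alpha'), ('SQLite')")

def import_relation(conn, record):
    """Return True if the relation was created, False if skipped."""
    frm, to, rtype = record.get("from"), record.get("to"), record.get("relationType")
    if not (frm and to and rtype):  # check 2: all required fields present
        return False
    # check 1: both endpoints must already exist (distinct names assumed)
    found = conn.execute(
        "SELECT COUNT(*) FROM entities WHERE name IN (?, ?)", (frm, to)
    ).fetchone()[0]
    if found < 2:
        return False
    try:
        conn.execute("INSERT INTO relations VALUES (?, ?, ?)", (frm, to, rtype))
        return True
    except sqlite3.IntegrityError:  # check 3: duplicate relation
        return False

rel = {"from": "Project Alpha", "to": "SQLite", "relationType": "uses"}
```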

If the embedding engine is available (model downloaded — see Getting Started), embeddings are generated for all imported entities in a single batch at the end of the migration.

This is significantly more efficient than generating embeddings one by one during import. The batch approach:

  • Groups all entity text into a single ONNX inference pass.
  • Uses INSERT OR REPLACE on the rowid, so existing embeddings are overwritten with fresh vectors.

If the embedding engine is not available, this phase is silently skipped. The migration still succeeds, and embeddings are generated later when entities are accessed via search_semantic.
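The INSERT OR REPLACE behaviour can be demonstrated with plain sqlite3. The embeddings table layout below is a hypothetical stand-in for the real schema, and the ONNX inference itself is elided:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical layout: one embedding blob keyed by the entity's rowid
# (an INTEGER PRIMARY KEY column is an alias for the rowid in SQLite).
conn.execute("CREATE TABLE embeddings (entity_rowid INTEGER PRIMARY KEY, vector BLOB)")

def store_embeddings(conn, batch):
    # batch: list of (entity_rowid, vector_bytes) pairs, e.g. produced
    # by one inference pass over all entity texts.
    conn.executemany(
        "INSERT OR REPLACE INTO embeddings (entity_rowid, vector) VALUES (?, ?)",
        batch,
    )

store_embeddings(conn, [(1, b"\x00\x01"), (2, b"\x02\x03")])
store_embeddings(conn, [(1, b"\xff\xfe")])  # rerun: row 1 is overwritten, not duplicated
count = conn.execute("SELECT COUNT(*) FROM embeddings").fetchone()[0]
# count == 2
```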

You have two options: the migrate MCP tool (recommended for most users) or a Python script (for programmatic access).

If you’re using an MCP client (OpenCode, Claude Desktop, etc.), call the migrate tool with the path to your JSONL file:

{
  "source_path": "~/.config/opencode/mcp-memory.jsonl"
}

The tool expands ~ to your home directory automatically. The path must point to an existing, readable file.

The tool returns a JSON result (see Result format below).

For scripting or CI pipelines, import the migration function directly:

from mcp_memory.storage import MemoryStore
from mcp_memory.migrate import migrate_jsonl

store = MemoryStore()
store.init_db()
result = migrate_jsonl(store, "~/.config/opencode/mcp-memory.jsonl")
print(result)

The migrate_jsonl function accepts the same path with ~ expansion. It returns a dictionary with the same fields as the MCP tool response.
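In a CI pipeline you will usually want a nonzero exit status when the file contained corrupt lines. A small helper, assuming only the four result fields described under Result format below:

```python
import sys

def exit_code(result):
    """Map a migration result dict to a process exit status.

    Parse errors fail the build; skipped records are tolerated,
    since overlaps with existing data are expected on repeated runs.
    """
    return 1 if result["errors"] > 0 else 0

# Typical CI usage, where result comes from migrate_jsonl:
#     sys.exit(exit_code(result))
```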

You can also run it as a one-liner from the repository root:

uv run python -c "
from mcp_memory.storage import MemoryStore
from mcp_memory.migrate import migrate_jsonl
store = MemoryStore()
store.init_db()
result = migrate_jsonl(store, '~/.config/opencode/mcp-memory.jsonl')
print(result)
"

The migration is idempotent: running it multiple times produces the same result without duplicating data. This is safe because every write operation includes a conflict resolution mechanism:

| Operation | Mechanism | Effect on repeated runs |
| --- | --- | --- |
| Insert entity | ON CONFLICT(name) DO UPDATE | Existing entities are updated; new observations are merged |
| Add observation | Existence check before insert | Duplicate observations are discarded silently |
| Create relation | IntegrityError caught on UNIQUE constraint | Duplicate relations are ignored |
| Store embedding | INSERT OR REPLACE on rowid | Existing embedding vector is overwritten with a fresh one |

The migration is designed to be fault-tolerant — it processes as many records as possible rather than failing on the first error:

| Error condition | Behavior | Counted as |
| --- | --- | --- |
| Corrupt line (invalid JSON) | Skipped with warning | errors |
| Entity without name field | Skipped | skipped |
| Relation with missing fields (from, to, or relationType) | Skipped | skipped |
| Relation referencing non-existent entity | Skipped | skipped |
| Unknown record type (not "entity" or "relation") | Skipped | skipped |
| Individual embedding failure | Logged as warning, migration continues | Not counted |

The migration never raises an exception for individual record failures. All issues are captured in the result counts.

The migrate tool returns a JSON object with four fields:

{
  "entities_imported": 32,
  "relations_imported": 37,
  "errors": 0,
  "skipped": 2
}

| Field | Type | Description |
| --- | --- | --- |
| entities_imported | int | Entity records successfully processed (inserted or updated) |
| relations_imported | int | Relations actually created in the database |
| errors | int | Lines that failed JSON parsing or threw unexpected exceptions |
| skipped | int | Records skipped due to missing fields, non-existent target entities, or unknown type |
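Since each non-blank line should land in exactly one of the four counters, a quick sanity check is to compare their sum against the number of non-blank lines in the source file. A sketch, assuming that accounting holds:

```python
def non_blank_lines(path):
    # Count the records the migration would attempt to process.
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

def totals_match(result, path):
    # Every non-blank line is counted once: as an imported entity,
    # an imported relation, a parse error, or a skipped record.
    total = (result["entities_imported"] + result["relations_imported"]
             + result["errors"] + result["skipped"])
    return total == non_blank_lines(path)
```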

After migration completes, verify your data was imported correctly:

Use search_nodes to confirm the entities are present:

{
  "query": ""
}

If you downloaded the embedding model, verify that embeddings were generated by running a semantic query:

{
  "query": "your search term here",
  "limit": 10
}

If results come back with limbic_score and distance fields, embeddings are working. If you get an error about the model, see Getting Started for the model download instructions.

Use open_nodes to verify individual entities by name:

{
  "names": ["Session 2026-03-21", "Project Alpha"]
}

Confirm that observations were merged correctly and no data was lost.

Migrating from a default Anthropic install

The default location for the Anthropic MCP Memory JSONL file depends on your setup:

# OpenCode
~/.config/opencode/mcp-memory.jsonl
# Claude Desktop (macOS)
~/Library/Application Support/Claude/claude_memory.jsonl
# Claude Desktop (Linux)
~/.config/Claude/claude_memory.jsonl

Pass the correct path to the migrate tool or Python function.

If your JSONL file is still being written to (e.g., you’re running both servers in parallel during a transition period), you can run migration periodically. Because it’s idempotent, each run only imports new entities and observations that weren’t already present.

The migration processes the file line by line — it never loads the entire file into memory. A file with thousands of entities and relations will import without issue. The batch embedding generation phase is the slowest part, but it processes embeddings in chunks rather than all at once.


  • Next: Tools Reference — parameters, responses, and edge cases for all 10 tools
  • Also see: Getting Started — installation, configuration, and first steps