Tools Reference

mcp-memory exposes 19 tools via the MCP protocol. The first 8 are fully compatible with Anthropic's MCP Memory API, so mcp-memory works as a drop-in replacement. The remaining 11 extend functionality with semantic search, hybrid retrieval, entity management, and maintenance operations.

| Tool | Description | Status |
| --- | --- | --- |
| create_entities | Create or update entities (merge observations) | Anthropic |
| create_relations | Create typed relations between entities | Anthropic |
| add_observations | Add observations to an existing entity | Anthropic |
| delete_entities | Delete entities (cascades to observations + relations) | Anthropic |
| delete_observations | Delete specific observations from an entity | Anthropic |
| delete_relations | Delete specific relations | Anthropic |
| search_nodes | Search by substring (name, type, observation content) | Anthropic |
| open_nodes | Retrieve entities by exact name | Anthropic |
| search_semantic | Semantic search via vector embeddings + limbic re-ranking | 🆕 New |
| migrate | Import from Anthropic's JSONL format (idempotent) | 🆕 New |
| analyze_entity_split | Analyze whether an entity needs splitting (thresholds: Sesion=15, Proyecto=25, others=20) | 🆕 New |
| propose_entity_split | Analyze and propose a split using TF-IDF topic grouping | 🆕 New |
| execute_entity_split | Execute an approved split (creates entities, moves observations, establishes relations) | 🆕 New |
| find_split_candidates | Find all entities in the knowledge graph that need splitting | 🆕 New |
| find_duplicate_observations | Find semantically duplicated observations within an entity (cosine + containment) | 🆕 New |
| consolidation_report | Generate a read-only consolidation report (split candidates, flagged obs, stale entities) | 🆕 New |
| end_relation | Expire an active relation by setting active=0 and ended_at=now | 🆕 New |
| add_reflection | Add a narrative reflection to an entity, session, relation, or global | 🆕 New |
| search_reflections | Search reflections via semantic (KNN) and text (FTS5) hybrid with RRF | 🆕 New |

Create or update entities in the knowledge graph. If an entity already exists, observations are merged — existing data is never overwritten.

Signature: create_entities(entities: list[dict[str, Any]]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| entities | list[dict] | Yes | List of entities to create or update |

Each entity dict is validated against the EntityInput Pydantic model:

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| name | str | Yes | — | Entity name (min 1 character). Acts as unique identifier. |
| entityType | str | No | "Generic" | Entity category (e.g. "Project", "Task", "Component"). |
| observations | list[str] | No | [] | Observations to attach to the entity. |
Request:

{
"entities": [
{
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space"
]
}
]
}
Response:

{
"entities": [
{
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space"
]
}
]
}
Error response (missing required field):

{
"error": "1 validation error for EntityInput\nname\n Field required [type=missing, input_value={'entityType': 'Project'}, input_type=dict]"
}
  • Implements upsert via INSERT … ON CONFLICT(name) DO UPDATE. If the entity already exists, updates entity_type and updated_at without deleting previous observations.
  • Observations are merged: new ones are appended, and exact duplicates are discarded silently.
  • Generates an embedding with the complete entity snapshot (name + type + all observations). If the embedding engine is unavailable, the operation completes without error.
  • You can create multiple entities in a single call by passing multiple dicts in the entities array.
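The upsert-and-merge behavior described above can be sketched with plain sqlite3. The schema here is a minimal stand-in, not the project's actual DDL:

```python
# Sketch only: minimal stand-in schema, not the project's actual DDL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,
    entity_type TEXT NOT NULL DEFAULT 'Generic',
    updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE observations (
    entity_id INTEGER REFERENCES entities(id),
    content TEXT NOT NULL,
    UNIQUE(entity_id, content)  -- makes exact duplicates detectable
);
""")

def upsert_entity(name, entity_type="Generic", observations=()):
    # INSERT ... ON CONFLICT(name) DO UPDATE: never deletes prior observations
    conn.execute(
        """INSERT INTO entities (name, entity_type) VALUES (?, ?)
           ON CONFLICT(name) DO UPDATE SET
               entity_type = excluded.entity_type,
               updated_at = datetime('now')""",
        (name, entity_type),
    )
    (eid,) = conn.execute("SELECT id FROM entities WHERE name = ?", (name,)).fetchone()
    for obs in observations:
        # OR IGNORE silently discards exact duplicate observations
        conn.execute("INSERT OR IGNORE INTO observations VALUES (?, ?)", (eid, obs))

upsert_entity("CachorroSpace", "Project", ["Stack: Astro 6.x"])
upsert_entity("CachorroSpace", "Project", ["Stack: Astro 6.x", "Deployed on Vercel"])
merged = [c for (c,) in conn.execute("SELECT content FROM observations ORDER BY rowid")]
```

Running the second upsert appends only the new observation; the duplicate is dropped and the entity row is updated in place.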

Create relations between entities. Both the source and target entities must exist in the knowledge graph before creating a relation.

Signature: create_relations(relations: list[dict[str, Any]]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| relations | list[dict] | Yes | List of relations to create |

Each relation dict is validated against the RelationInput Pydantic model:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| from | str | Yes | Source entity name. Must exist in the graph. |
| to | str | Yes | Target entity name. Must exist in the graph. |
| relationType | str | Yes | Type of relation (e.g. "contains", "depends_on", "uses"). |
Request:

{
"relations": [
{
"from": "CachorroSpace",
"to": "Astro",
"relationType": "uses"
}
]
}
Response:

{
"relations": [
{
"from": "CachorroSpace",
"to": "Astro",
"relationType": "uses"
}
]
}
Error response (entity not found):

{
"relations": [
{ "error": "Entity not found: NonExistentEntity" }
],
"errors": [
"Entity not found: NonExistentEntity"
]
}
Error response (duplicate relation):

{
"relations": [
{
"from": "CachorroSpace",
"to": "Astro",
"relationType": "uses",
"error": "Relation already exists"
}
]
}
  • Both entities (from and to) must exist before creating the relation. If either is missing, the relation is not created and an error is returned.
  • The relations table has a UNIQUE(from_entity, to_entity, relation_type) constraint. If the relation already exists, the dict is returned with an "error" key.
  • Does not touch embeddings — relations are structural metadata and do not participate in semantic search.
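The duplicate-relation behavior can be sketched like this. The schema is hypothetical, and the entity-existence check is omitted:

```python
# Sketch only: hypothetical schema; the entity-existence check is omitted.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE relations (
    from_entity TEXT NOT NULL,
    to_entity TEXT NOT NULL,
    relation_type TEXT NOT NULL,
    UNIQUE(from_entity, to_entity, relation_type))""")

def create_relation(frm, to, relation_type):
    result = {"from": frm, "to": to, "relationType": relation_type}
    try:
        conn.execute("INSERT INTO relations VALUES (?, ?, ?)", (frm, to, relation_type))
    except sqlite3.IntegrityError:
        # UNIQUE constraint violation: report it instead of raising
        result["error"] = "Relation already exists"
    return result

first = create_relation("CachorroSpace", "Astro", "uses")
second = create_relation("CachorroSpace", "Astro", "uses")
```

The first call succeeds; the second returns the same dict with an "error" key, matching the behavior described in the notes above.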

Add observations to an existing entity. The entity must already exist — use create_entities to create new entities.

Signature: add_observations(name: str, observations: list[str]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Exact entity name. Must exist. |
| observations | list[str] | Yes | Observations to add. Exact duplicates are discarded. |
{
"name": "CachorroSpace",
"observations": [
"ECharts 6.x chosen for visualizations",
"Accent color: teal (#2dd4bf)"
]
}
{
"entity": {
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space",
"ECharts 6.x chosen for visualizations",
"Accent color: teal (#2dd4bf)"
]
}
}
{
"error": "Entity not found: NonExistentEntity"
}
  • The entity must exist. This tool never creates new entities.
  • Exact duplicate observations are discarded silently — no error is raised for duplicates.
  • Regenerates the embedding with the updated entity snapshot (all existing observations + new ones).
  • Use this tool when you want to append data to an entity without respecifying its type or existing observations.

Delete entities and all their associated observations and relations.

Signature: delete_entities(entityNames: list[str]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| entityNames | list[str] | Yes | Entity names to delete. |
{
"entityNames": ["Old Project", "Deprecated Component"]
}
{
"deleted": ["Old Project", "Deprecated Component"]
}
{
"deleted": ["Old Project"],
"errors": ["Entity not found: Already Deleted Entity"]
}
  • Cascade deletion: deleting an entity automatically removes all its observations and relations via ON DELETE CASCADE in SQLite.
  • Critical — embeddings: the vec0 virtual table from sqlite-vec does not support CASCADE. The code manually deletes embeddings before deleting entities: (1) look up entity rowids, (2) delete from entity_embeddings by rowid, (3) delete from entities. All within an implicit SQLite transaction.
  • May intermittently fail with "cannot start a transaction" under high concurrency in WAL mode. Retrying usually resolves it.
  • Entities that don’t exist are reported in the errors array but don’t prevent other valid deletions.
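The manual embedding cleanup can be sketched like this; a plain table stands in for the vec0 virtual table, and the schema is hypothetical:

```python
# Sketch only: a plain table stands in for the vec0 virtual table,
# which has no CASCADE support and must be cleaned up by hand.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE entity_embeddings (rowid INTEGER PRIMARY KEY, vec BLOB);
""")
conn.execute("INSERT INTO entities VALUES (1, 'Old Project')")
conn.execute("INSERT INTO entity_embeddings VALUES (1, x'00')")

def delete_entities(names):
    deleted, errors = [], []
    for name in names:
        row = conn.execute("SELECT id FROM entities WHERE name = ?", (name,)).fetchone()
        if row is None:
            errors.append(f"Entity not found: {name}")
            continue
        # (1) rowid looked up above; (2) embeddings first; (3) then the entity
        conn.execute("DELETE FROM entity_embeddings WHERE rowid = ?", (row[0],))
        conn.execute("DELETE FROM entities WHERE id = ?", (row[0],))
        deleted.append(name)
    return {"deleted": deleted, "errors": errors}

result = delete_entities(["Old Project", "Missing"])
```

Missing entities land in the errors array without blocking the valid deletions, matching the response shape shown above.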

Delete specific observations from an entity. The entity itself is preserved — only the matched observations are removed.

Signature: delete_observations(name: str, observations: list[str]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Exact entity name. Must exist. |
| observations | list[str] | Yes | Observations to delete (exact content match). |
{
"name": "CachorroSpace",
"observations": [
"Accent color: teal (#2dd4bf)"
]
}
{
"entity": {
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space",
"ECharts 6.x chosen for visualizations"
]
}
}
{
"error": "Entity not found: NonExistentEntity"
}
  • Deletion is by exact content match. No patterns, wildcards, or substring matching.
  • If an observation doesn’t exist for that entity, nothing happens — no error is raised for unmatched observations.
  • Regenerates the embedding with the remaining observations after deletion.
  • Useful for removing outdated or incorrect observations without recreating the entire entity.

Delete relations between entities. Requires the full triple to identify the relation.

Signature: delete_relations(relations: list[dict[str, Any]]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| relations | list[dict] | Yes | Relations to delete (same format as create_relations) |

Each relation dict:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| from | str | Yes | Source entity name |
| to | str | Yes | Target entity name |
| relationType | str | Yes | Relation type |
{
"relations": [
{
"from": "CachorroSpace",
"to": "Astro",
"relationType": "uses"
}
]
}
{
"deleted": [
{
"from": "CachorroSpace",
"to": "Astro",
"relationType": "uses"
}
]
}
{
"deleted": [],
"errors": [
"Relation not found: A -> B (contains)"
]
}
  • Requires the full triple (from + to + relationType) to identify the relation. Partial matches don’t work.
  • If either entity doesn’t exist: "Entity not found: X or Y".
  • If the relation doesn’t exist: "Relation not found: X -> Y (relationType)".
  • Does not touch embeddings — relations are not part of the semantic index.

Search for nodes by name, entity type, or observation content using substring matching. This is a lightweight search that does not require the embedding model.

Signature: search_nodes(query: str) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| query | str | Yes | Search term. Applied as a LIKE pattern across multiple fields. |
{ "query": "astro" }
{
"entities": [
{
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space"
]
},
{
"name": "CachorroInk",
"entityType": "Project",
"observations": [
"Blog built with Astro 5.x"
]
}
]
}
  • Uses LIKE with the %query% pattern applied simultaneously to three fields: name, entity_type, and observation content.
  • Uses SELECT DISTINCT to avoid returning the same entity multiple times when it matches on multiple fields.
  • Does not require the ONNX embedding model — works out of the box with no additional setup.
  • Matching is case-insensitive for ASCII letters by default (standard SQLite LIKE behavior); non-ASCII characters, including accented letters, match case-sensitively. For accent-robust matching, use semantic search instead.
  • On large graphs, prefer search_nodes or search_semantic for exploratory queries, and open_nodes when you already know exact entity names.
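The three-field LIKE query with DISTINCT can be sketched like this (hypothetical schema, not the project's actual DDL):

```python
# Sketch only: hypothetical schema illustrating the three-field LIKE search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT, entity_type TEXT);
CREATE TABLE observations (entity_id INTEGER, content TEXT);
""")
conn.executemany("INSERT INTO entities VALUES (?, ?, ?)",
                 [(1, "CachorroSpace", "Project"), (2, "Astro", "Framework")])
conn.execute("INSERT INTO observations VALUES (1, 'Stack: Astro 6.x')")

def search_nodes(query):
    pattern = f"%{query}%"
    rows = conn.execute(
        """SELECT DISTINCT e.name FROM entities e
           LEFT JOIN observations o ON o.entity_id = e.id
           WHERE e.name LIKE ? OR e.entity_type LIKE ? OR o.content LIKE ?
           ORDER BY e.id""",
        (pattern, pattern, pattern))
    return [name for (name,) in rows]

names = search_nodes("astro")
```

"astro" matches CachorroSpace through its observation content and Astro through its name; DISTINCT keeps each entity from appearing twice when multiple fields match.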

Retrieve specific entities by exact name. Returns full entity data with all observations.

Signature: open_nodes(names: list[str]) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| names | list[str] | Yes | Entity names to retrieve. |
{ "names": ["CachorroSpace", "Astro"] }
{
"entities": [
{
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Deployed on Vercel at cachorro.space"
]
}
]
}
  • Search by exact name (WHERE name = ?). No patterns, wildcards, or LIKE matching.
  • If a name doesn’t match any entity, it is silently omitted from results — no error is raised. In the example above, if "Astro" doesn’t exist, only "CachorroSpace" is returned.
  • Does not include relations in the response. To get relations, use search_nodes with a relation-relevant query or search_semantic.
  • Ideal for quick lookups when you know the exact entity name.

Semantic search using vector embeddings with optional full-text hybrid search. Combines semantic (KNN) and text (FTS5) results via Reciprocal Rank Fusion, then applies limbic re-ranking based on access patterns and co-occurrence.

Requires the embedding model — run download_model.py first. See Semantic Search for the full pipeline details and Hybrid Search for the RRF fusion mechanism.

Signature: search_semantic(query: str, limit: int = 10) → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| query | str | Yes | — | Query text. Encoded as an embedding and compared against stored vectors. |
| limit | int | No | 10 | Maximum number of results to return. |
{
"query": "web framework for documentation sites",
"limit": 5
}
{
"results": [
{
"name": "CachorroSpace",
"entityType": "Project",
"observations": [
"Stack: Astro 6.x + Starlight 0.38.x",
"Documentation site for open-source repos"
],
"limbic_score": 0.6742,
"scoring": {
"importance": 0.8512,
"temporal_factor": 0.9923,
"cooc_boost": 1.2341
},
"distance": 0.1234,
"rrf_score": 0.018542
},
{
"name": "CachorroInk",
"entityType": "Project",
"observations": [
"Blog built with Astro 5.x"
],
"limbic_score": 0.4521,
"scoring": {
"importance": 0.6234,
"temporal_factor": 0.8756,
"cooc_boost": 0.5123
},
"distance": 0.3591,
"rrf_score": 0.012341
}
]
}
{
"error": "Embedding model not available. Run 'python scripts/download_model.py' to download the model first."
}
  • Requires the ONNX embedding model to be downloaded. If unavailable, returns a descriptive error. All other tools work fine without it.
  • Uses hybrid search: KNN (sqlite-vec cosine distance) runs in parallel with FTS5 (BM25 full-text). Results are merged via Reciprocal Rank Fusion (k=60).
  • If FTS5 returns no results or is unavailable, falls back to pure semantic mode (KNN + limbic re-ranking only).
  • Each result includes:
    • limbic_score — final ranking score (see Limbic System)
    • scoring — breakdown of importance, temporal decay, and co-occurrence boost
    • distance — cosine distance from the query vector
    • rrf_score — Reciprocal Rank Fusion score (only present in hybrid mode; absent in pure semantic)
  • Post-response tracking: after returning results, the engine records access events and co-occurrences for top-K entities. This improves future rankings. Best-effort — does not affect the current response.
  • Lower distance = higher similarity. Formula: d = 1 - cos(A, B), range [0, 2].
  • Entity text is encoded using a Head+Tail+Diversity selection strategy with a budget of 480 tokens (not simple concatenation). See Semantic Search for details.
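The Reciprocal Rank Fusion step can be sketched in a few lines. The rankings below are made up for illustration:

```python
# Reciprocal Rank Fusion with k=60, as used by the hybrid pipeline.
# The example rankings below are made up for illustration.
def rrf(rankings, k=60):
    scores = {}
    for ranked in rankings:
        for rank, name in enumerate(ranked, start=1):
            # each list contributes 1/(k + rank) for every entity it ranks
            scores[name] = scores.get(name, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

knn_ranking = ["CachorroSpace", "CachorroInk", "Astro"]  # cosine-distance order
fts_ranking = ["CachorroSpace", "Astro"]                 # BM25 order
fused = rrf([knn_ranking, fts_ranking])
```

Astro appears in both lists, so after fusion it overtakes CachorroInk even though CachorroInk ranked higher in the KNN list alone.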

Import data from Anthropic MCP Memory JSONL format to SQLite. Idempotent — running it multiple times won’t duplicate data. See Migration for a step-by-step guide.

Signature: migrate(source_path: str = "") → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| source_path | str | No | "" | Path to the Anthropic JSONL source file. Must exist. |
{
"source_path": "~/.config/opencode/mcp-memory.jsonl"
}
{
"entities_imported": 32,
"relations_imported": 37,
"errors": 0,
"skipped": 2
}
  • Idempotent: entities are upserted, relations are created only if they don’t already exist, and duplicate observations are discarded. Safe to run repeatedly.
  • Relations are imported only if both entities already exist in the graph at the time the line is processed. This means the JSONL file should list entities before their relations.
  • Batch embedding generation: if the embedding engine is available, embeddings are generated for all imported entities at the end of the migration process.
  • The skipped count includes lines that couldn’t be processed (malformed JSON, missing required fields, etc.).
  • Also available as a standalone script — see Getting Started for the CLI invocation.
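The idempotent import loop can be sketched like this. The per-line record shape (a "type" field distinguishing entities from relations) is an assumption about the JSONL format, and a dict stands in for SQLite storage:

```python
# Sketch only: the "type" field on each record is an assumption about the
# JSONL format; a dict stands in for the SQLite store.
import json

lines = [
    '{"type": "entity", "name": "CachorroSpace", "entityType": "Project", "observations": ["Stack: Astro 6.x"]}',
    '{"type": "relation", "from": "CachorroSpace", "to": "Astro", "relationType": "uses"}',
    'not valid json',
]

entities, relations, skipped = {}, [], 0
for line in lines:
    try:
        rec = json.loads(line)
    except json.JSONDecodeError:
        skipped += 1  # malformed lines are counted, not fatal
        continue
    if rec.get("type") == "entity":
        entities[rec["name"]] = rec  # keyed by name, so re-runs upsert
    elif rec.get("type") == "relation":
        if rec["from"] in entities and rec["to"] in entities:
            relations.append(rec)
        else:
            skipped += 1  # both endpoints must already exist at this line

stats = {"entities_imported": 1 * len(entities),
         "relations_imported": len(relations),
         "skipped": skipped}
```

The relation line is skipped here because "Astro" has not appeared as an entity yet, illustrating why the JSONL file should list entities before their relations.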

Signature: analyze_entity_split(entity_name: str) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| entity_name | str | Yes | Name of the entity to analyze |
{
"entity_name": "Proyecto cachorritos"
}
{
"analysis": {
"entity_name": "Proyecto cachorritos",
"entity_type": "Proyecto",
"observation_count": 28,
"threshold": 25,
"needs_split": true,
"topics": {
"Arquitectura": ["Stack: FastMCP + SQLite", "MCP Memory v2"],
"Implementacion": ["97 tests passing", "Query routing implementado"],
"Despliegue": ["Deploy: Vercel", "URL: cachorro.space"]
},
"split_score": 1.12
}
}
  • Analyzes whether an entity exceeds its type-specific threshold (Sesion=15, Proyecto=25, others=20)
  • Uses TF-IDF to group observations into topics
  • needs_split: true when split_score > 1.0 AND observation_count > threshold
  • Does NOT modify the entity — only analyzes
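The decision rule above can be expressed as a tiny sketch:

```python
# The needs_split rule: type-specific threshold AND topic-diversity score.
THRESHOLDS = {"Sesion": 15, "Proyecto": 25}
DEFAULT_THRESHOLD = 20  # all other entity types

def needs_split(entity_type, observation_count, split_score):
    threshold = THRESHOLDS.get(entity_type, DEFAULT_THRESHOLD)
    # both conditions must hold: diverse topics AND over the size threshold
    return split_score > 1.0 and observation_count > threshold

over_and_diverse = needs_split("Proyecto", 28, 1.12)
over_but_cohesive = needs_split("Proyecto", 28, 0.80)
diverse_but_small = needs_split("Sesion", 12, 1.50)
```

Only the first case triggers a split: an entity that is both over its threshold and scored as topically diverse.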

Signature: propose_entity_split(entity_name: str) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| entity_name | str | Yes | Name of the entity to analyze and split |
{
"proposal": {
"original_entity": {
"name": "Proyecto cachorritos",
"entity_type": "Proyecto"
},
"suggested_splits": [
{
"name": "Proyecto cachorritos - Arquitectura",
"entity_type": "Proyecto",
"observations": ["Stack: FastMCP + SQLite", "MCP Memory v2"]
},
{
"name": "Proyecto cachorritos - Implementacion",
"entity_type": "Proyecto",
"observations": ["97 tests passing", "Query routing implementado"]
}
],
"relations_to_create": [
{"from": "Proyecto cachorritos", "to": "Proyecto cachorritos - Arquitectura", "type": "contiene"},
{"from": "Proyecto cachorritos - Arquitectura", "to": "Proyecto cachorritos", "type": "parte_de"},
{"from": "Proyecto cachorritos", "to": "Proyecto cachorritos - Implementacion", "type": "contiene"},
{"from": "Proyecto cachorritos - Implementacion", "to": "Proyecto cachorritos", "type": "parte_de"}
],
"analysis": {
"observation_count": 28,
"threshold": 25,
"split_score": 1.12,
"num_topics": 3
}
}
}
  • Returns proposal: null if the entity doesn’t need splitting
  • Creates contiene (parent→child) and parte_de (child→parent) relations
  • Topic extraction uses TF-IDF with Spanish stop words
  • Entity names preserve accents; entityType values are written without accents

Signature: execute_entity_split(entity_name: str, approved_splits: list[dict[str, Any]], parent_entity_name: str | None = None) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| entity_name | str | Yes | Name of the original entity to split |
| approved_splits | list[dict] | Yes | List of approved split definitions |
| parent_entity_name | str | No | Optional explicit parent name |

Each approved_split dict must have:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | str | Yes | Name for the new sub-entity |
| entity_type | str | Yes | Entity type (typically same as original) |
| observations | list[str] | Yes | Observations to move to this sub-entity |
{
"entity_name": "Proyecto cachorritos",
"approved_splits": [
{
"name": "Proyecto cachorritos - Arquitectura",
"entity_type": "Proyecto",
"observations": ["Stack: FastMCP + SQLite", "MCP Memory v2"]
}
]
}
{
"result": {
"new_entities": ["Proyecto cachorritos - Arquitectura"],
"moved_observations": 2,
"relations_created": 2,
"original_observations_remaining": 26
}
}
  • Creates new entities from approved splits
  • Moves specified observations from original to new entities
  • Establishes contiene/parte_de relations
  • Uses atomic transaction (all or nothing)
  • Regenerates embeddings for new entities

Signature: find_split_candidates() → dict[str, Any]

{
"candidates": [
{
"entity_name": "Proyecto cachorritos",
"entity_type": "Proyecto",
"observation_count": 28,
"threshold": 25,
"needs_split": true,
"topics": {...},
"split_score": 1.12
},
{
"entity_name": "Sesión 2026-03-31",
"entity_type": "Sesion",
"observation_count": 18,
"threshold": 15,
"needs_split": true,
"topics": {...},
"split_score": 1.2
}
]
}
  • Scans ALL entities in the knowledge graph
  • Returns empty list if no candidates found
  • Does NOT modify any entities — only analyzes

Find observations that may be semantically duplicated within an entity. Returns pairs of observations with similarity scores and match type.

Requires the embedding model — run download_model.py first.

Signature: find_duplicate_observations(entity_name: str, threshold: float = 0.85, containment_threshold: float = 0.7) → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| entity_name | str | Yes | — | Name of the entity to check for duplicates |
| threshold | float | No | 0.85 | Minimum cosine similarity to consider two observations as duplicates |
| containment_threshold | float | No | 0.7 | Minimum containment score for asymmetric text pairs (length ratio >= 2.0) |
{
"entity_name": "My Project",
"threshold": 0.85
}
{
"entity_name": "My Project",
"total_observations": 12,
"duplicate_pairs": [
{
"obs_text_a": "Deployed on Vercel with custom domain",
"obs_text_b": "Deployment: Vercel, custom domain configured",
"similarity_score": 0.91,
"match_type": "cosine"
},
{
"obs_text_a": "Stack: FastMCP + SQLite + ONNX Runtime for embeddings",
"obs_text_b": "FastMCP",
"similarity_score": 0.73,
"match_type": "containment"
}
]
}
  • Uses combined similarity: cosine similarity >= threshold OR containment score >= containment_threshold when one text is 2x+ longer than the other (asymmetric length).
  • Union-Find clustering groups observations into duplicate clusters — each pair appears once.
  • The match_type field indicates which similarity metric triggered the match: "cosine" for standard cosine similarity, "containment" for asymmetric text pairs.
  • Read-only — no modifications are made to the knowledge graph. Review the results and use delete_observations to manually consolidate.
  • Requires the embedding model. Returns an error if the model is not available.
  • See Maintenance & Operations for the full deduplication workflow.
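The cosine-or-containment rule can be sketched with bag-of-words vectors standing in for the real ONNX embeddings (thresholds match the documented defaults):

```python
# Sketch only: bag-of-words vectors stand in for the real ONNX embeddings.
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def containment(short, long_text):
    # fraction of the short text's tokens that also appear in the long text
    cs, cl = Counter(short.lower().split()), Counter(long_text.lower().split())
    return sum(min(cs[t], cl[t]) for t in cs) / sum(cs.values())

def match_type(a, b, threshold=0.85, containment_threshold=0.7):
    if cosine(a, b) >= threshold:
        return "cosine"
    short, long_text = sorted((a, b), key=len)
    # containment only applies when one text is 2x+ longer than the other
    if len(long_text) >= 2 * len(short) and \
            containment(short, long_text) >= containment_threshold:
        return "containment"
    return None

near_dupe = match_type("Deployed on Vercel", "Deployed on Vercel today")
contained = match_type("FastMCP", "Stack: FastMCP plus SQLite plus ONNX Runtime for embeddings")
```

The first pair is caught by cosine similarity; the second is too asymmetric for cosine but the short text is fully contained in the long one.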

Generate a read-only consolidation report analyzing the knowledge graph health across four dimensions: split candidates, flagged observations, stale entities, and large entities.

Signature: consolidation_report(stale_days: float = 90.0) → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| stale_days | float | No | 90.0 | Number of days of inactivity to consider an entity as stale |
{
"stale_days": 90
}
{
"summary": {
"total_entities": 47,
"total_observations": 312,
"split_candidates_count": 3,
"flagged_observations_count": 8,
"stale_entities_count": 12,
"large_entities_count": 5
},
"split_candidates": [
{
"entity_name": "Sesión 2026-03-31",
"entity_type": "Sesion",
"observation_count": 18,
"threshold": 15,
"split_score": 1.2
}
],
"flagged_observations": [
{
"entity_name": "My Project",
"observation_text": "Deployed on Vercel",
"similarity_flag": 1
}
],
"stale_entities": [
{
"entity_name": "Old Experiment",
"entity_type": "Generic",
"last_accessed_days_ago": 145,
"access_count": 2,
"observation_count": 3
}
],
"large_entities": [
{
"entity_name": "Proyecto cachorritos",
"entity_type": "Proyecto",
"observation_count": 35
}
]
}
  • Read-only — generates a report without modifying any data. sofia reviews the report and decides what actions to take.
  • Checks 4 areas:
    1. Split candidates: entities exceeding type-specific observation thresholds (Sesion=15, Proyecto=25, others=20) with sufficient topic diversity
    2. Flagged observations: observations where similarity_flag=1 in the observations table (potential semantic duplicates detected by add_observations)
    3. Stale entities: entities not accessed in stale_days days with low access count
    4. Large entities: entities exceeding size thresholds that may need splitting
  • Does not require the embedding model for most checks (only split candidates analysis uses TF-IDF, which is built-in).
  • See Maintenance & Operations for the full consolidation workflow.
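The staleness check can be sketched like this; the access-count cutoff is a hypothetical value, since the report above documents only the stale_days window:

```python
# Sketch only: MAX_ACCESS is a hypothetical "low access count" cutoff,
# not a documented value.
import datetime

MAX_ACCESS = 5

def is_stale(last_accessed, access_count, stale_days=90.0,
             today=datetime.date(2026, 4, 14)):
    # stale = idle longer than the window AND rarely accessed
    return (today - last_accessed).days > stale_days and access_count <= MAX_ACCESS

old_entity = is_stale(datetime.date(2025, 11, 20), access_count=2)   # 145 days idle
recent_entity = is_stale(datetime.date(2026, 4, 1), access_count=2)  # 13 days idle
```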

Expire an active relation by setting active=0 and ended_at=now. For inverse pairs (contiene/parte_de), also expires the inverse relation.

Signature: end_relation(relation_id: int) → dict[str, Any]

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| relation_id | int | Yes | The ID of the relation to expire |
{
"relation_id": 42
}
{
"result": "Relation 42 expired successfully"
}
  • Sets active=0 and ended_at=now on the specified relation
  • For contiene/parte_de pairs, both directions are expired automatically
  • The relation is not deleted — it remains in the database with active=0 for historical reference
  • If the relation is already expired (active=0), the operation is a no-op
  • Use this instead of delete_relations when you want to preserve the historical fact that a relation existed
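The expiry update can be sketched in SQL. The schema is hypothetical, and the inverse-pair handling for contiene/parte_de is omitted:

```python
# Sketch only: hypothetical schema; inverse-pair handling is omitted.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE relations (
    id INTEGER PRIMARY KEY,
    relation_type TEXT,
    active INTEGER NOT NULL DEFAULT 1,
    ended_at TEXT)""")
conn.execute("INSERT INTO relations (id, relation_type) VALUES (42, 'uses')")

def end_relation(relation_id):
    # WHERE active = 1 makes a second call on the same id a no-op
    conn.execute(
        """UPDATE relations SET active = 0, ended_at = datetime('now')
           WHERE id = ? AND active = 1""",
        (relation_id,))

end_relation(42)
active, ended_at = conn.execute(
    "SELECT active, ended_at FROM relations WHERE id = 42").fetchone()
```

The row survives with active=0 and a timestamp, preserving the historical fact of the relation.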

Add a narrative reflection to give context and meaning to a memory. Reflections are free-form prose attached to entities, sessions, relations, or globally, with author and mood metadata.

Signature: add_reflection(target_type: str, content: str, author: str = "sofia", mood: str | None = None, target_id: int | None = None) → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| target_type | str | Yes | — | What this reflection targets: 'entity', 'session', 'relation', or 'global' |
| content | str | Yes | — | The reflection text (free prose, no prefixes) |
| author | str | No | "sofia" | Who wrote this: 'nolan' or 'sofia' |
| mood | str or null | No | null | Optional mood: 'frustracion', 'satisfaccion', 'curiosidad', 'duda', 'insight' |
| target_id | int or null | No | null | ID of the target entity/relation. Required for entity and relation types; null for session and global. |
{
"target_type": "entity",
"target_id": 15,
"content": "This entity represents the core architectural decision that shaped the entire project.",
"author": "nolan",
"mood": "insight"
}
{
"result": {
"id": 5,
"target_type": "entity",
"target_id": 15,
"author": "nolan",
"content": "This entity represents the core architectural decision that shaped the entire project.",
"mood": "insight",
"created_at": "2026-04-14 22:47:23"
}
}
  • Reflections are stored in a separate reflections table — independent from observations
  • Each reflection is indexed by parallel FTS5 (reflection_fts) and vector embeddings (reflection_embeddings) for hybrid search
  • target_type='entity' or 'relation' requires a valid target_id. target_type='session' or 'global' uses target_id=null
  • The content field is free-form prose — no structured format required
  • The mood field is optional and uses Spanish terms (consistent with the project’s conventions)
  • Embeddings for reflections use the same ONNX model as entities, stored in a separate vec0 table

Search reflections by semantic similarity and optional filters. Combines semantic (KNN) and text (FTS5) results via Reciprocal Rank Fusion — the same hybrid pipeline used for entity search.

Signature: search_reflections(query: str, author: str | None = None, mood: str | None = None, target_type: str | None = None, limit: int = 10) → dict[str, Any]

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| query | str | Yes | — | Search text |
| author | str or null | No | null | Filter by author ('nolan' or 'sofia') |
| mood | str or null | No | null | Filter by mood |
| target_type | str or null | No | null | Filter by target type ('entity', 'session', 'relation', 'global') |
| limit | int | No | 10 | Max results |
{
"query": "architecture decisions that shaped the project",
"author": "nolan",
"limit": 5
}
{
"results": [
{
"id": 5,
"target_type": "entity",
"target_id": 15,
"author": "nolan",
"content": "This entity represents the core architectural decision that shaped the entire project.",
"mood": "insight",
"created_at": "2026-04-14 22:47:23",
"score": 0.8456
}
]
}
  • Uses the same RRF hybrid pipeline as entity search: KNN (semantic) + FTS5 (text) merged via Reciprocal Rank Fusion
  • Searches the reflections index only — does not search entity observations
  • All filters (author, mood, target_type) are optional — combine them for targeted queries
  • Requires the embedding model (same as search_semantic)
  • Returns a score field representing the combined RRF + semantic similarity

Every write operation that changes entity content triggers an embedding update. The following table summarizes how each tool interacts with the embedding system:

| Operation | Embedding Action | Detail |
| --- | --- | --- |
| create_entities | ✅ Generate/Update | Full snapshot: name + type + all observations |
| add_observations | ✅ Regenerate | Regenerates with all observations (old + new) |
| delete_observations | ✅ Regenerate | Regenerates without the deleted observations |
| delete_entities | 🗑️ Delete | Manual deletion before CASCADE (vec0 limitation) |
| create_relations | ❌ None | Relations don’t participate in semantic search |
| delete_relations | ❌ None | Same as above |
| search_nodes | ❌ None | LIKE-based search, no embeddings needed |
| open_nodes | ❌ None | Direct lookup by exact name |
| search_semantic | 📖 Read-only | Encodes query, searches by cosine distance, re-ranks with Limbic Scoring. Records access + co-occurrences post-response. |
| migrate | ✅ Batch | Generates embeddings for all imported entities at the end of migration |
| analyze_entity_split | ❌ None | Analysis only, no modifications |
| propose_entity_split | ❌ None | Returns proposal, no modifications |
| execute_entity_split | ✅ Regenerate | Creates new entities with fresh embeddings |
| find_split_candidates | ❌ None | Scans without modifying |
| find_duplicate_observations | 📖 Read-only | Encodes observations, computes pairwise cosine + containment similarity. Requires embedding model. |
| consolidation_report | ❌ None | Analysis only, no modifications |
| end_relation | ❌ None | Updates relation metadata only |
| add_reflection | ✅ Generate | Embedding stored in parallel reflection_embeddings vec0 table |
| search_reflections | 📖 Read-only | Encodes query, searches reflection embeddings via KNN + FTS5 RRF |