Tools Reference
Overview
mcp-memory exposes 19 tools via the MCP protocol. The first 8 are fully compatible with Anthropic’s MCP Memory API, so the server can be used as a drop-in replacement. The remaining 11 extend functionality with semantic search, hybrid retrieval, entity management, and maintenance operations.
Compatibility Summary
| Tool | Description | Anthropic | New |
|---|---|---|---|
create_entities | Create or update entities (merge observations) | ✅ | |
create_relations | Create typed relations between entities | ✅ | |
add_observations | Add observations to an existing entity | ✅ | |
delete_entities | Delete entities (cascades to observations + relations) | ✅ | |
delete_observations | Delete specific observations from an entity | ✅ | |
delete_relations | Delete specific relations | ✅ | |
search_nodes | Search by substring (name, type, observation content) | ✅ | |
open_nodes | Retrieve entities by exact name | ✅ | |
search_semantic | Semantic search via vector embeddings + limbic re-ranking | | 🆕 |
migrate | Import from Anthropic’s JSONL format (idempotent) | | 🆕 |
analyze_entity_split | Analyze whether an entity needs splitting (thresholds: Sesion=15, Proyecto=25, others=20) | | 🆕 |
propose_entity_split | Analyze and propose a split using TF-IDF topic grouping | | 🆕 |
execute_entity_split | Execute an approved split (creates entities, moves observations, establishes relations) | | 🆕 |
find_split_candidates | Find all entities in the knowledge graph that need splitting | | 🆕 |
find_duplicate_observations | Find semantically duplicated observations within an entity (cosine + containment) | | 🆕 |
consolidation_report | Generate a read-only consolidation report (split candidates, flagged observations, stale entities) | | 🆕 |
end_relation | Expire an active relation by setting active=0 and ended_at=now | | 🆕 |
add_reflection | Add a narrative reflection to an entity, session, relation, or global scope | | 🆕 |
search_reflections | Search reflections via semantic (KNN) and text (FTS5) hybrid with RRF | | 🆕 |
1. create_entities
Create or update entities in the knowledge graph. If an entity already exists, observations are merged — existing data is never overwritten.
Signature: create_entities(entities: list[dict[str, Any]]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
entities | list[dict] | Yes | List of entities to create or update |
Each entity dict is validated against the EntityInput Pydantic model:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
name | str | Yes | — | Entity name (min 1 character). Acts as unique identifier. |
entityType | str | No | "Generic" | Entity category (e.g. "Project", "Task", "Component"). |
observations | list[str] | No | [] | Observations to attach to the entity. |
Example Request
```json
{
  "entities": [
    {
      "name": "CachorroSpace",
      "entityType": "Project",
      "observations": [
        "Stack: Astro 6.x + Starlight 0.38.x",
        "Deployed on Vercel at cachorro.space"
      ]
    }
  ]
}
```

Example Response

```json
{
  "entities": [
    {
      "name": "CachorroSpace",
      "entityType": "Project",
      "observations": [
        "Stack: Astro 6.x + Starlight 0.38.x",
        "Deployed on Vercel at cachorro.space"
      ]
    }
  ]
}
```

Error Response

```json
{
  "error": "1 validation error for EntityInput\nname\n  Field required [type=missing, input_value={'entityType': 'Project'}, input_type=dict]"
}
```

Behavior Notes
- Implements upsert via `INSERT … ON CONFLICT(name) DO UPDATE`. If the entity already exists, updates `entity_type` and `updated_at` without deleting previous observations.
- Observations are merged: new ones are appended, and exact duplicates are discarded silently.
- Generates an embedding from the complete entity snapshot (name + type + all observations). If the embedding engine is unavailable, the operation completes without error.
- You can create multiple entities in a single call by passing multiple dicts in the `entities` array.
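The upsert described above can be sketched with Python’s sqlite3 module. This is a minimal illustration, not the server’s actual DDL — the table layout is an assumption:

```python
import sqlite3

# In-memory sketch of the upsert: a repeated name updates the row in place
# instead of failing or duplicating, and never touches observations stored
# in a separate table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entities (name TEXT PRIMARY KEY, entity_type TEXT, "
    "updated_at TEXT DEFAULT CURRENT_TIMESTAMP)"
)

def upsert_entity(name: str, entity_type: str) -> None:
    # ON CONFLICT refreshes the type and timestamp on the existing row.
    conn.execute(
        "INSERT INTO entities (name, entity_type) VALUES (?, ?) "
        "ON CONFLICT(name) DO UPDATE SET "
        "entity_type = excluded.entity_type, updated_at = CURRENT_TIMESTAMP",
        (name, entity_type),
    )

upsert_entity("CachorroSpace", "Generic")
upsert_entity("CachorroSpace", "Project")  # second call updates, no duplicate row
rows = conn.execute("SELECT name, entity_type FROM entities").fetchall()
```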
2. create_relations
Create relations between entities. Both the source and target entities must exist in the knowledge graph before creating a relation.
Signature: create_relations(relations: list[dict[str, Any]]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
relations | list[dict] | Yes | List of relations to create |
Each relation dict is validated against the RelationInput Pydantic model:
| Field | Type | Required | Description |
|---|---|---|---|
from | str | Yes | Source entity name. Must exist in the graph. |
to | str | Yes | Target entity name. Must exist in the graph. |
relationType | str | Yes | Type of relation (e.g. "contains", "depends_on", "uses"). |
Example Request
```json
{
  "relations": [
    { "from": "CachorroSpace", "to": "Astro", "relationType": "uses" }
  ]
}
```

Example Response (success)

```json
{
  "relations": [
    { "from": "CachorroSpace", "to": "Astro", "relationType": "uses" }
  ]
}
```

Error Response (entity not found)

```json
{
  "relations": [
    { "error": "Entity not found: NonExistentEntity" }
  ],
  "errors": [ "Entity not found: NonExistentEntity" ]
}
```

Error Response (duplicate relation)

```json
{
  "relations": [
    {
      "from": "CachorroSpace",
      "to": "Astro",
      "relationType": "uses",
      "error": "Relation already exists"
    }
  ]
}
```

Behavior Notes
- Both entities (`from` and `to`) must exist before creating the relation. If either is missing, the relation is not created and an error is returned.
- The `relations` table has a `UNIQUE(from_entity, to_entity, relation_type)` constraint. If the relation already exists, the dict is returned with an `"error"` key.
- Does not touch embeddings — relations are structural metadata and do not participate in semantic search.
3. add_observations
Add observations to an existing entity. The entity must already exist — use `create_entities` to create new entities.
Signature: add_observations(name: str, observations: list[str]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | str | Yes | Exact entity name. Must exist. |
observations | list[str] | Yes | Observations to add. Exact duplicates are discarded. |
Example Request
```json
{
  "name": "CachorroSpace",
  "observations": [
    "ECharts 6.x chosen for visualizations",
    "Accent color: teal (#2dd4bf)"
  ]
}
```

Example Response

```json
{
  "entity": {
    "name": "CachorroSpace",
    "entityType": "Project",
    "observations": [
      "Stack: Astro 6.x + Starlight 0.38.x",
      "Deployed on Vercel at cachorro.space",
      "ECharts 6.x chosen for visualizations",
      "Accent color: teal (#2dd4bf)"
    ]
  }
}
```

Error Response (entity not found)

```json
{ "error": "Entity not found: NonExistentEntity" }
```

Behavior Notes
- The entity must exist. This tool never creates new entities.
- Exact duplicate observations are discarded silently — no error is raised for duplicates.
- Regenerates the embedding with the updated entity snapshot (all existing observations + new ones).
- Use this tool when you want to append data to an entity without respecifying its type or existing observations.
4. delete_entities
Delete entities and all their associated observations and relations.
Signature: delete_entities(entityNames: list[str]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
entityNames | list[str] | Yes | Entity names to delete. |
Example Request
```json
{ "entityNames": ["Old Project", "Deprecated Component"] }
```

Example Response (success)

```json
{ "deleted": ["Old Project", "Deprecated Component"] }
```

Error Response (partial)

```json
{
  "deleted": ["Old Project"],
  "errors": ["Entity not found: Already Deleted Entity"]
}
```

Behavior Notes
- Cascade deletion: deleting an entity automatically removes all its observations and relations via `ON DELETE CASCADE` in SQLite.
- Critical — embeddings: the `vec0` virtual table from sqlite-vec does not support CASCADE. The code manually deletes embeddings before deleting entities: (1) look up entity rowids, (2) delete from `entity_embeddings` by `rowid`, (3) delete from `entities`. All within an implicit SQLite transaction.
- May intermittently fail with `"cannot start a transaction"` under high concurrency in WAL mode. Retrying usually resolves it.
- Entities that don’t exist are reported in the `errors` array but don’t prevent other valid deletions.
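The embeddings-before-entities deletion order can be sketched as follows. A plain table stands in for the `vec0` virtual table, and the schema is illustrative:

```python
import sqlite3

# Sketch: look up the rowid, delete the embedding row first, then the entity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE entity_embeddings (rowid INTEGER PRIMARY KEY, vec BLOB);
""")
conn.execute("INSERT INTO entities (id, name) VALUES (1, 'Old Project')")
conn.execute("INSERT INTO entity_embeddings (rowid, vec) VALUES (1, x'00')")

def delete_entity(name: str) -> bool:
    row = conn.execute("SELECT id FROM entities WHERE name = ?", (name,)).fetchone()
    if row is None:
        return False  # the real tool reports this name in the "errors" array
    # Embedding first: vec0 rows are not covered by ON DELETE CASCADE.
    conn.execute("DELETE FROM entity_embeddings WHERE rowid = ?", (row[0],))
    conn.execute("DELETE FROM entities WHERE id = ?", (row[0],))
    return True

ok = delete_entity("Old Project")
left = conn.execute("SELECT COUNT(*) FROM entity_embeddings").fetchone()[0]
```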
5. delete_observations
Delete specific observations from an entity. The entity itself is preserved — only the matched observations are removed.
Signature: delete_observations(name: str, observations: list[str]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
name | str | Yes | Exact entity name. Must exist. |
observations | list[str] | Yes | Observations to delete (exact content match). |
Example Request
```json
{
  "name": "CachorroSpace",
  "observations": [ "Accent color: teal (#2dd4bf)" ]
}
```

Example Response

```json
{
  "entity": {
    "name": "CachorroSpace",
    "entityType": "Project",
    "observations": [
      "Stack: Astro 6.x + Starlight 0.38.x",
      "Deployed on Vercel at cachorro.space",
      "ECharts 6.x chosen for visualizations"
    ]
  }
}
```

Error Response (entity not found)

```json
{ "error": "Entity not found: NonExistentEntity" }
```

Behavior Notes
- Deletion is by exact content match. No patterns, wildcards, or substring matching.
- If an observation doesn’t exist for that entity, nothing happens — no error is raised for unmatched observations.
- Regenerates the embedding with the remaining observations after deletion.
- Useful for removing outdated or incorrect observations without recreating the entire entity.
6. delete_relations
Delete relations between entities. Requires the full triple to identify the relation.
Signature: delete_relations(relations: list[dict[str, Any]]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
relations | list[dict] | Yes | Relations to delete (same format as create_relations) |
Each relation dict:
| Field | Type | Required | Description |
|---|---|---|---|
from | str | Yes | Source entity name |
to | str | Yes | Target entity name |
relationType | str | Yes | Relation type |
Example Request
```json
{
  "relations": [
    { "from": "CachorroSpace", "to": "Astro", "relationType": "uses" }
  ]
}
```

Example Response (success)

```json
{
  "deleted": [
    { "from": "CachorroSpace", "to": "Astro", "relationType": "uses" }
  ]
}
```

Error Response

```json
{
  "deleted": [],
  "errors": [ "Relation not found: A -> B (contains)" ]
}
```

Behavior Notes
- Requires the full triple (`from` + `to` + `relationType`) to identify the relation. Partial matches don’t work.
- If either entity doesn’t exist: `"Entity not found: X or Y"`.
- If the relation doesn’t exist: `"Relation not found: X -> Y (relationType)"`.
- Does not touch embeddings — relations are not part of the semantic index.
7. search_nodes
Search for nodes by name, entity type, or observation content using substring matching. This is a lightweight search that does not require the embedding model.
Signature: search_nodes(query: str) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
query | str | Yes | Search term. Applied as a LIKE pattern across multiple fields. |
Example Request
```json
{ "query": "astro" }
```

Example Response

```json
{
  "entities": [
    {
      "name": "CachorroSpace",
      "entityType": "Project",
      "observations": [
        "Stack: Astro 6.x + Starlight 0.38.x",
        "Deployed on Vercel at cachorro.space"
      ]
    },
    {
      "name": "CachorroInk",
      "entityType": "Project",
      "observations": [ "Blog built with Astro 5.x" ]
    }
  ]
}
```

Behavior Notes
- Uses LIKE with the `%query%` pattern applied simultaneously to three fields: `name`, `entity_type`, and observation `content`.
- Uses `SELECT DISTINCT` to avoid returning the same entity multiple times when it matches on multiple fields.
- Does not require the ONNX embedding model — works out of the box with no additional setup.
- Matching follows standard SQLite `LIKE` semantics: case-insensitive for ASCII characters, case-sensitive for non-ASCII (unless `case_sensitive_like` is enabled). For fuzzier matching, use semantic search instead.
- For targeted queries on large graphs, prefer `search_nodes` or semantic search over `open_nodes`.
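The three-field LIKE search can be sketched as follows. The two-table schema and column names are simplified assumptions for illustration:

```python
import sqlite3

# Sketch: one %query% pattern applied to name, entity_type, and observation
# content; DISTINCT collapses multi-field matches into one row per entity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities (name TEXT PRIMARY KEY, entity_type TEXT);
CREATE TABLE observations (entity_name TEXT, content TEXT);
""")
conn.execute("INSERT INTO entities VALUES ('CachorroSpace', 'Project')")
conn.execute("INSERT INTO observations VALUES ('CachorroSpace', 'Stack: Astro 6.x')")

def search_nodes(query: str) -> list[str]:
    pattern = f"%{query}%"
    rows = conn.execute(
        "SELECT DISTINCT e.name FROM entities e "
        "LEFT JOIN observations o ON o.entity_name = e.name "
        "WHERE e.name LIKE ? OR e.entity_type LIKE ? OR o.content LIKE ?",
        (pattern, pattern, pattern),
    ).fetchall()
    return [r[0] for r in rows]

hits = search_nodes("Astro")  # matched via observation content
```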
8. open_nodes
Retrieve specific entities by exact name. Returns full entity data with all observations.
Signature: open_nodes(names: list[str]) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
names | list[str] | Yes | Entity names to retrieve. |
Example Request
```json
{ "names": ["CachorroSpace", "Astro"] }
```

Example Response

```json
{
  "entities": [
    {
      "name": "CachorroSpace",
      "entityType": "Project",
      "observations": [
        "Stack: Astro 6.x + Starlight 0.38.x",
        "Deployed on Vercel at cachorro.space"
      ]
    }
  ]
}
```

Behavior Notes
- Search by exact name (`WHERE name = ?`). No patterns, wildcards, or LIKE matching.
- If a name doesn’t match any entity, it is silently omitted from results — no error is raised. In the example above, if `"Astro"` doesn’t exist, only `"CachorroSpace"` is returned.
- Does not include relations in the response. To get relations, use `search_nodes` with a relation-relevant query or `search_semantic`.
- Ideal for quick lookups when you know the exact entity name.
9. search_semantic
Semantic search using vector embeddings with optional full-text hybrid search. Combines semantic (KNN) and text (FTS5) results via Reciprocal Rank Fusion, then applies limbic re-ranking based on access patterns and co-occurrence.
Requires the embedding model — run download_model.py first. See Semantic Search for the full pipeline details and Hybrid Search for the RRF fusion mechanism.
Signature: search_semantic(query: str, limit: int = 10) → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
query | str | Yes | — | Query text. Encoded as an embedding and compared against stored vectors. |
limit | int | No | 10 | Maximum number of results to return. |
Example Request
```json
{
  "query": "web framework for documentation sites",
  "limit": 5
}
```

Example Response

```json
{
  "results": [
    {
      "name": "CachorroSpace",
      "entityType": "Project",
      "observations": [
        "Stack: Astro 6.x + Starlight 0.38.x",
        "Documentation site for open-source repos"
      ],
      "limbic_score": 0.6742,
      "scoring": {
        "importance": 0.8512,
        "temporal_factor": 0.9923,
        "cooc_boost": 1.2341
      },
      "distance": 0.1234,
      "rrf_score": 0.018542
    },
    {
      "name": "CachorroInk",
      "entityType": "Project",
      "observations": [ "Blog built with Astro 5.x" ],
      "limbic_score": 0.4521,
      "scoring": {
        "importance": 0.6234,
        "temporal_factor": 0.8756,
        "cooc_boost": 0.5123
      },
      "distance": 0.3591,
      "rrf_score": 0.012341
    }
  ]
}
```

Error Response (model not available)

```json
{
  "error": "Embedding model not available. Run 'python scripts/download_model.py' to download the model first."
}
```

Behavior Notes
- Requires the ONNX embedding model to be downloaded. If unavailable, returns a descriptive error. All other tools work fine without it.
- Uses hybrid search: KNN (sqlite-vec cosine distance) runs in parallel with FTS5 (BM25 full-text). Results are merged via Reciprocal Rank Fusion (k=60).
- If FTS5 returns no results or is unavailable, falls back to pure semantic mode (KNN + limbic re-ranking only).
- Each result includes:
  - `limbic_score` — final ranking score (see Limbic System)
  - `scoring` — breakdown of importance, temporal decay, and co-occurrence boost
  - `distance` — cosine distance from the query vector
  - `rrf_score` — Reciprocal Rank Fusion score (only present in hybrid mode; absent in pure semantic)
- Post-response tracking: after returning results, the engine records access events and co-occurrences for top-K entities. This improves future rankings. Best-effort — does not affect the current response.
- Lower distance = higher similarity. Formula: `d = 1 - cos(A, B)`, range `[0, 2]`.
- Entity text is encoded using a Head+Tail+Diversity selection strategy with a budget of 480 tokens (not simple concatenation). See Semantic Search for details.
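The RRF merge step can be sketched as below, with k=60 as noted above. The function name and the two toy ranked lists are illustrative:

```python
# Reciprocal Rank Fusion: each list contributes 1/(k + rank) per entity,
# so an entity ranked by both KNN and FTS5 outscores single-list hits.
def rrf_merge(
    knn_ranked: list[str], fts_ranked: list[str], k: int = 60
) -> list[tuple[str, float]]:
    scores: dict[str, float] = {}
    for ranked in (knn_ranked, fts_ranked):
        for rank, name in enumerate(ranked, start=1):
            scores[name] = scores.get(name, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# "CachorroInk" appears in both lists, so fusion ranks it first even though
# neither list ranks it first on its own.
merged = rrf_merge(["CachorroSpace", "CachorroInk"], ["CachorroInk", "Docs"])
```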
10. migrate
Import data from Anthropic MCP Memory JSONL format to SQLite. Idempotent — running it multiple times won’t duplicate data. See Migration for a step-by-step guide.
Signature: migrate(source_path: str = "") → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
source_path | str | Yes | "" | Path to the Anthropic JSONL source file. Must exist. |
Example Request
```json
{ "source_path": "~/.config/opencode/mcp-memory.jsonl" }
```

Example Response

```json
{
  "entities_imported": 32,
  "relations_imported": 37,
  "errors": 0,
  "skipped": 2
}
```

Behavior Notes
- Idempotent: entities are upserted, relations are created only if they don’t already exist, and duplicate observations are discarded. Safe to run repeatedly.
- Relations are imported only if both entities already exist in the graph at the time the line is processed. This means the JSONL file should list entities before their relations.
- Batch embedding generation: if the embedding engine is available, embeddings are generated for all imported entities at the end of the migration process.
- The `skipped` count includes lines that couldn’t be processed (malformed JSON, missing required fields, etc.).
- Also available as a standalone script — see Getting Started for the CLI invocation.
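The import loop can be sketched as below. The record shape with a `"type"` discriminator follows Anthropic’s memory JSONL format, but the helper name and the in-memory stores are illustrative:

```python
import json

# Sketch: entity lines upsert into a set; relation lines only apply when both
# endpoints are already known, which is why line order matters; bad lines are
# counted as skipped instead of aborting the import.
def migrate_lines(lines: list[str]) -> dict[str, int]:
    entities: set[str] = set()
    relations: set[tuple[str, str, str]] = set()
    stats = {"entities_imported": 0, "relations_imported": 0, "skipped": 0}
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            stats["skipped"] += 1
            continue
        if rec.get("type") == "entity":
            if rec["name"] not in entities:
                entities.add(rec["name"])
                stats["entities_imported"] += 1
        elif rec.get("type") == "relation":
            triple = (rec["from"], rec["to"], rec["relationType"])
            if rec["from"] in entities and rec["to"] in entities and triple not in relations:
                relations.add(triple)
                stats["relations_imported"] += 1
        else:
            stats["skipped"] += 1
    return stats

lines = [
    '{"type": "entity", "name": "A"}',
    '{"type": "entity", "name": "B"}',
    '{"type": "relation", "from": "A", "to": "B", "relationType": "uses"}',
    "not json",
]
stats = migrate_lines(lines)
```

Running the same lines twice leaves the counts for a fresh run unchanged, which is the idempotence property described above.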
11. analyze_entity_split
Analyze whether an entity needs splitting, based on type-specific observation thresholds and TF-IDF topic grouping. Read-only — the entity is never modified.
Signature: analyze_entity_split(entity_name: str) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
entity_name | str | Yes | Name of the entity to analyze |
Example Request
```json
{ "entity_name": "Proyecto cachorritos" }
```

Example Response

```json
{
  "analysis": {
    "entity_name": "Proyecto cachorritos",
    "entity_type": "Proyecto",
    "observation_count": 28,
    "threshold": 25,
    "needs_split": true,
    "topics": {
      "Arquitectura": ["Stack: FastMCP + SQLite", "MCP Memory v2"],
      "Implementacion": ["97 tests passing", "Query routing implementado"],
      "Despliegue": ["Deploy: Vercel", "URL: cachorro.space"]
    },
    "split_score": 1.12
  }
}
```

Behavior Notes
- Analyzes whether an entity exceeds its type-specific threshold (Sesion=15, Proyecto=25, others=20)
- Uses TF-IDF to group observations into topics
- `needs_split: true` when `split_score > 1.0` AND `observation_count > threshold`
- Does NOT modify the entity — only analyzes
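The decision rule above can be sketched as follows. The TF-IDF computation of `split_score` is omitted here and taken as an input:

```python
# Type-specific thresholds from the docs; anything else defaults to 20.
THRESHOLDS = {"Sesion": 15, "Proyecto": 25}
DEFAULT_THRESHOLD = 20

def needs_split(entity_type: str, observation_count: int, split_score: float) -> bool:
    # Both conditions must hold: enough topic diversity AND enough observations.
    threshold = THRESHOLDS.get(entity_type, DEFAULT_THRESHOLD)
    return split_score > 1.0 and observation_count > threshold

result = needs_split("Proyecto", 28, 1.12)  # matches the example response above
```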
12. propose_entity_split
Analyze an entity and propose a concrete split using TF-IDF topic grouping. Returns a proposal only — nothing is modified until the split is executed.
Signature: propose_entity_split_tool(entity_name: str) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
entity_name | str | Yes | Name of the entity to analyze and split |
Example Response
```json
{
  "proposal": {
    "original_entity": { "name": "Proyecto cachorritos", "entity_type": "Proyecto" },
    "suggested_splits": [
      {
        "name": "Proyecto cachorritos - Arquitectura",
        "entity_type": "Proyecto",
        "observations": ["Stack: FastMCP + SQLite", "MCP Memory v2"]
      },
      {
        "name": "Proyecto cachorritos - Implementacion",
        "entity_type": "Proyecto",
        "observations": ["97 tests passing", "Query routing implementado"]
      }
    ],
    "relations_to_create": [
      {"from": "Proyecto cachorritos", "to": "Proyecto cachorritos - Arquitectura", "type": "contiene"},
      {"from": "Proyecto cachorritos - Arquitectura", "to": "Proyecto cachorritos", "type": "parte_de"},
      {"from": "Proyecto cachorritos", "to": "Proyecto cachorritos - Implementacion", "type": "contiene"},
      {"from": "Proyecto cachorritos - Implementacion", "to": "Proyecto cachorritos", "type": "parte_de"}
    ],
    "analysis": {
      "observation_count": 28,
      "threshold": 25,
      "split_score": 1.12,
      "num_topics": 3
    }
  }
}
```

Behavior Notes
- Returns `proposal: null` if the entity doesn’t need splitting
- Creates `contiene` (parent→child) and `parte_de` (child→parent) relations
- Topic extraction uses TF-IDF with Spanish stop words
- Entity names preserve accents; the entityType has no accent
13. execute_entity_split
Execute an approved split: creates the new sub-entities, moves the specified observations, and establishes parent/child relations.
Signature: execute_entity_split_tool(entity_name: str, approved_splits: list[dict[str, Any]], parent_entity_name: str | None = None) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
entity_name | str | Yes | Name of the original entity to split |
approved_splits | list[dict] | Yes | List of approved split definitions |
parent_entity_name | str | No | Optional explicit parent name |
Each approved_split dict must have:
| Field | Type | Required | Description |
|---|---|---|---|
name | str | Yes | Name for the new sub-entity |
entity_type | str | Yes | Entity type (typically same as original) |
observations | list[str] | Yes | Observations to move to this sub-entity |
Example Request
```json
{
  "entity_name": "Proyecto cachorritos",
  "approved_splits": [
    {
      "name": "Proyecto cachorritos - Arquitectura",
      "entity_type": "Proyecto",
      "observations": ["Stack: FastMCP + SQLite", "MCP Memory v2"]
    }
  ]
}
```

Example Response

```json
{
  "result": {
    "new_entities": ["Proyecto cachorritos - Arquitectura"],
    "moved_observations": 2,
    "relations_created": 2,
    "original_observations_remaining": 26
  }
}
```

Behavior Notes
- Creates new entities from the approved splits
- Moves the specified observations from the original to the new entities
- Establishes `contiene` / `parte_de` relations
- Uses an atomic transaction (all or nothing)
- Regenerates embeddings for new entities
14. find_split_candidates
Scan the entire knowledge graph for entities that exceed their split thresholds.
Signature: find_split_candidates() → dict[str, Any]
Example Response
```json
{
  "candidates": [
    {
      "entity_name": "Proyecto cachorritos",
      "entity_type": "Proyecto",
      "observation_count": 28,
      "threshold": 25,
      "needs_split": true,
      "topics": { ... },
      "split_score": 1.12
    },
    {
      "entity_name": "Sesión 2026-03-31",
      "entity_type": "Sesion",
      "observation_count": 18,
      "threshold": 15,
      "needs_split": true,
      "topics": { ... },
      "split_score": 1.2
    }
  ]
}
```

Behavior Notes
- Scans ALL entities in the knowledge graph
- Returns an empty list if no candidates are found
- Does NOT modify any entities — only analyzes
15. find_duplicate_observations
Find observations that may be semantically duplicated within an entity. Returns pairs of observations with similarity scores and match type.
Requires the embedding model — run download_model.py first.
Signature: find_duplicate_observations(entity_name: str, threshold: float = 0.85, containment_threshold: float = 0.7) → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
entity_name | str | Yes | — | Name of the entity to check for duplicates |
threshold | float | No | 0.85 | Minimum cosine similarity to consider two observations as duplicates |
containment_threshold | float | No | 0.7 | Minimum containment score for asymmetric text pairs (length ratio >= 2.0) |
Example Request
```json
{ "entity_name": "My Project", "threshold": 0.85 }
```

Example Response

```json
{
  "entity_name": "My Project",
  "total_observations": 12,
  "duplicate_pairs": [
    {
      "obs_text_a": "Deployed on Vercel with custom domain",
      "obs_text_b": "Deployment: Vercel, custom domain configured",
      "similarity_score": 0.91,
      "match_type": "cosine"
    },
    {
      "obs_text_a": "Stack: FastMCP + SQLite + ONNX Runtime for embeddings",
      "obs_text_b": "FastMCP",
      "similarity_score": 0.73,
      "match_type": "containment"
    }
  ]
}
```

Behavior Notes
- Uses combined similarity: cosine similarity >= `threshold`, OR containment score >= `containment_threshold` when one text is 2x+ longer than the other (asymmetric length).
- Union-Find clustering groups observations into duplicate clusters — each pair appears once.
- The `match_type` field indicates which similarity metric triggered the match: `"cosine"` for standard cosine similarity, `"containment"` for asymmetric text pairs.
- Read-only — no modifications are made to the knowledge graph. Review the results and use `delete_observations` to manually consolidate.
- Requires the embedding model. Returns an error if the model is not available.
- See Maintenance & Operations for the full deduplication workflow.
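The Union-Find clustering step can be sketched as follows. Observation indices stand in for texts, and the pair-detection (embedding) step is omitted:

```python
# Union-Find with path compression: merge each similar pair, then group
# observations by their root so every duplicate appears in exactly one cluster.
def cluster_pairs(n: int, pairs: list[tuple[int, int]]) -> list[set[int]]:
    parent = list(range(n))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)

    clusters: dict[int, set[int]] = {}
    for i in range(n):
        clusters.setdefault(find(i), set()).add(i)
    # Singletons are not duplicates, so only multi-member clusters are reported.
    return [c for c in clusters.values() if len(c) > 1]

# Pairs (0,1) and (1,2) chain into one cluster {0, 1, 2}; obs 3 and 4 stay out.
clusters = cluster_pairs(5, [(0, 1), (1, 2)])
```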
16. consolidation_report
Generate a read-only consolidation report on the knowledge graph’s health across four dimensions: split candidates, flagged observations, stale entities, and large entities.
Signature: consolidation_report(stale_days: float = 90.0) → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
stale_days | float | No | 90.0 | Number of days of inactivity to consider an entity as stale |
Example Request
```json
{ "stale_days": 90 }
```

Example Response

```json
{
  "summary": {
    "total_entities": 47,
    "total_observations": 312,
    "split_candidates_count": 3,
    "flagged_observations_count": 8,
    "stale_entities_count": 12,
    "large_entities_count": 5
  },
  "split_candidates": [
    {
      "entity_name": "Sesión 2026-03-31",
      "entity_type": "Sesion",
      "observation_count": 18,
      "threshold": 15,
      "split_score": 1.2
    }
  ],
  "flagged_observations": [
    {
      "entity_name": "My Project",
      "observation_text": "Deployed on Vercel",
      "similarity_flag": 1
    }
  ],
  "stale_entities": [
    {
      "entity_name": "Old Experiment",
      "entity_type": "Generic",
      "last_accessed_days_ago": 145,
      "access_count": 2,
      "observation_count": 3
    }
  ],
  "large_entities": [
    {
      "entity_name": "Proyecto cachorritos",
      "entity_type": "Proyecto",
      "observation_count": 35
    }
  ]
}
```

Behavior Notes
- Read-only — generates a report without modifying any data. sofia reviews the report and decides what actions to take.
- Checks 4 areas:
  - Split candidates: entities exceeding type-specific observation thresholds (Sesion=15, Proyecto=25, others=20) with sufficient topic diversity
  - Flagged observations: observations where `similarity_flag=1` in the observations table (potential semantic duplicates detected by `add_observations`)
  - Stale entities: entities not accessed in `stale_days` days with a low access count
  - Large entities: entities exceeding size thresholds that may need splitting
- Does not require the embedding model for most checks (only the split-candidates analysis uses TF-IDF, which is built in).
- See Maintenance & Operations for the full consolidation workflow.
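The stale-entity check can be sketched with a `julianday` comparison. The column names and access-tracking layout are illustrative assumptions:

```python
import sqlite3

# Sketch: an entity is stale when its last access is more than stale_days
# in the past, computed directly in SQL via julianday date arithmetic.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entities (name TEXT PRIMARY KEY, last_accessed TEXT, "
    "access_count INTEGER)"
)
conn.execute("INSERT INTO entities VALUES ('Old Experiment', '2025-11-01', 2)")
conn.execute("INSERT INTO entities VALUES ('Fresh', '2026-04-10', 50)")

def stale_entities(now: str, stale_days: float) -> list[str]:
    rows = conn.execute(
        "SELECT name FROM entities "
        "WHERE julianday(?) - julianday(last_accessed) > ?",
        (now, stale_days),
    ).fetchall()
    return [r[0] for r in rows]

stale = stale_entities("2026-04-14", 90.0)  # 'Old Experiment' is ~164 days old
```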
17. end_relation
Expire an active relation by setting `active=0` and `ended_at=now`. For inverse pairs (`contiene`/`parte_de`), the inverse relation is also expired.
Signature: end_relation(relation_id: int) → dict[str, Any]
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
relation_id | int | Yes | The ID of the relation to expire |
Example Request
```json
{ "relation_id": 42 }
```

Example Response

```json
{ "result": "Relation 42 expired successfully" }
```

Behavior Notes
- Sets `active=0` and `ended_at=now` on the specified relation
- For `contiene`/`parte_de` pairs, both directions are expired automatically
- The relation is not deleted — it remains in the database with `active=0` for historical reference
- If the relation is already expired (`active=0`), the operation is a no-op
- Use this instead of `delete_relations` when you want to preserve the historical fact that a relation existed
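The expiry logic can be sketched as follows. The schema is simplified, and the way the mirrored relation is looked up here is an assumption about the real implementation:

```python
import sqlite3

# Sketch: expire by id, then expire the mirrored contiene/parte_de relation
# in the opposite direction. Rows are updated, never deleted.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE relations (id INTEGER PRIMARY KEY, from_entity TEXT, "
    "to_entity TEXT, relation_type TEXT, active INTEGER DEFAULT 1, ended_at TEXT)"
)
conn.execute("INSERT INTO relations VALUES (42, 'A', 'B', 'contiene', 1, NULL)")
conn.execute("INSERT INTO relations VALUES (43, 'B', 'A', 'parte_de', 1, NULL)")

def end_relation(relation_id: int) -> None:
    row = conn.execute(
        "SELECT from_entity, to_entity, relation_type FROM relations WHERE id = ?",
        (relation_id,),
    ).fetchone()
    conn.execute(
        "UPDATE relations SET active = 0, ended_at = CURRENT_TIMESTAMP WHERE id = ?",
        (relation_id,),
    )
    inverse = {"contiene": "parte_de", "parte_de": "contiene"}.get(row[2])
    if inverse:
        # Expire the mirrored relation in the opposite direction too.
        conn.execute(
            "UPDATE relations SET active = 0, ended_at = CURRENT_TIMESTAMP "
            "WHERE from_entity = ? AND to_entity = ? AND relation_type = ?",
            (row[1], row[0], inverse),
        )

end_relation(42)  # expires both 42 (contiene) and 43 (parte_de)
active = conn.execute("SELECT COUNT(*) FROM relations WHERE active = 1").fetchone()[0]
```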
18. add_reflection
Add a narrative reflection to give context and meaning to a memory. Reflections are free-form prose attached to entities, sessions, relations, or globally, with author and mood metadata.
Signature: add_reflection(target_type: str, content: str, author: str = "sofia", mood: str | None = None, target_id: int | None = None) → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
target_type | str | Yes | — | What this reflection targets — 'entity', 'session', 'relation', or 'global' |
content | str | Yes | — | The reflection text (free prose, no prefixes) |
author | str | No | "sofia" | Who wrote this — 'nolan' or 'sofia' |
mood | str or null | No | null | Optional mood — 'frustracion', 'satisfaccion', 'curiosidad', 'duda', 'insight' |
target_id | int or null | No | null | ID of the target entity/relation. Required for entity and relation types. null for session and global. |
Example Request
```json
{
  "target_type": "entity",
  "target_id": 15,
  "content": "This entity represents the core architectural decision that shaped the entire project.",
  "author": "nolan",
  "mood": "insight"
}
```

Example Response

```json
{
  "result": {
    "id": 5,
    "target_type": "entity",
    "target_id": 15,
    "author": "nolan",
    "content": "This entity represents the core architectural decision that shaped the entire project.",
    "mood": "insight",
    "created_at": "2026-04-14 22:47:23"
  }
}
```

Behavior Notes
- Reflections are stored in a separate `reflections` table — independent from observations
- Each reflection is indexed in parallel by FTS5 (`reflection_fts`) and vector embeddings (`reflection_embeddings`) for hybrid search
- `target_type='entity'` or `'relation'` requires a valid `target_id`; `target_type='session'` or `'global'` uses `target_id=null`
- The `content` field is free-form prose — no structured format required
- The `mood` field is optional and uses Spanish terms (consistent with the project’s conventions)
- Embeddings for reflections use the same ONNX model as entities, stored in a separate `vec0` table
19. search_reflections
Search reflections by semantic similarity and optional filters. Combines semantic (KNN) and text (FTS5) results via Reciprocal Rank Fusion — the same hybrid pipeline used for entity search.
Signature: search_reflections(query: str, author: str | None = None, mood: str | None = None, target_type: str | None = None, limit: int = 10) → dict[str, Any]
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
query | str | Yes | — | Search text |
author | str or null | No | null | Filter by author ('nolan' or 'sofia') |
mood | str or null | No | null | Filter by mood |
target_type | str or null | No | null | Filter by target type ('entity', 'session', 'relation', 'global') |
limit | int | No | 10 | Max results (default 10) |
Example Request
```json
{
  "query": "architecture decisions that shaped the project",
  "author": "nolan",
  "limit": 5
}
```

Example Response

```json
{
  "results": [
    {
      "id": 5,
      "target_type": "entity",
      "target_id": 15,
      "author": "nolan",
      "content": "This entity represents the core architectural decision that shaped the entire project.",
      "mood": "insight",
      "created_at": "2026-04-14 22:47:23",
      "score": 0.8456
    }
  ]
}
```

Behavior Notes
- Uses the same RRF hybrid pipeline as entity search: KNN (semantic) + FTS5 (text) merged via Reciprocal Rank Fusion
- Searches the reflections index only — does not search entity observations
- All filters (`author`, `mood`, `target_type`) are optional — combine them for targeted queries
- Requires the embedding model (same as `search_semantic`)
- Returns a `score` field representing the combined RRF + semantic similarity
Embedding Behavior by Operation
Every write operation that changes entity content triggers an embedding update. The following table summarizes how each tool interacts with the embedding system:
| Operation | Embedding Action | Detail |
|---|---|---|
create_entities | ✅ Generate/Update | Full snapshot: name + type + all observations |
add_observations | ✅ Regenerate | Regenerates with all observations (old + new) |
delete_observations | ✅ Regenerate | Regenerates without the deleted observations |
delete_entities | 🗑️ Delete | Manual deletion before CASCADE (vec0 limitation) |
create_relations | ❌ None | Relations don’t participate in semantic search |
delete_relations | ❌ None | Same as above |
search_nodes | ❌ None | LIKE-based search, no embeddings needed |
open_nodes | ❌ None | Direct lookup by exact name |
search_semantic | 📖 Read-only | Encodes query, searches by cosine distance, re-ranks with Limbic Scoring. Records access + co-occurrences post-response. |
migrate | ✅ Batch | Generates embeddings for all imported entities at the end of migration |
analyze_entity_split | ❌ None | Analysis only, no modifications |
propose_entity_split | ❌ None | Returns proposal, no modifications |
execute_entity_split | ✅ Regenerate | Creates new entities with fresh embeddings |
find_split_candidates | ❌ None | Scans without modifying |
find_duplicate_observations | 📖 Read-only | Encodes observations, computes pairwise cosine + containment similarity. Requires embedding model. |
consolidation_report | ❌ None | Analysis only, no modifications |
end_relation | ❌ None | Updates relation metadata only |
add_reflection | ✅ Generate | Embedding stored in parallel reflection_embeddings vec0 table |
search_reflections | 📖 Read-only | Encodes query, searches reflection embeddings via KNN + FTS5 RRF |