Open Source

Memory for
AI agents

AI agents forget everything between sessions. Engram fixes that. Local-first persistent memory with semantic search and knowledge graphs.

You

"Remember that I prefer TypeScript over JavaScript"

Engram extracts, stores, and connects

Next session

"I know you prefer TypeScript. Should I use it for this project?"

Why Engram over mem0?

Built on mem0, extended for agentic workflows

Feature           mem0                 Engram
API               REST only            REST + MCP
Semantic Links    Basic                pgvector
Hosting           Cloud or self-host   Local-first
Capture           Manual               Auto-hooks
Multi-agent       Single user          Squad isolation
Observability     Limited              Langfuse

How it works

Four stages from conversation to connected knowledge

1. Capture: Hooks record your conversations automatically.
2. Extract: An LLM identifies key facts from the raw text.
3. Store: Facts are embedded and indexed for semantic search.
4. Link: A knowledge graph connects related memories.
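The four stages above can be sketched end to end. This is an illustrative Python sketch, not Engram's actual API; the function names and data shapes are assumptions, and the "extraction" step is faked with a simple rule where the real system would call an LLM.

```python
# Illustrative capture -> extract -> store -> link pipeline.
# Names and data shapes are assumptions, not Engram's real API.

def capture(transcript: str) -> str:
    """Stage 1: a hook hands the raw conversation text to the pipeline."""
    return transcript

def extract(raw: str) -> list[str]:
    """Stage 2: an LLM would identify key facts; here a rule stands in."""
    return [ln.strip() for ln in raw.splitlines() if ln.strip().startswith("Remember")]

def store(facts: list[str], index: dict[int, str]) -> list[int]:
    """Stage 3: embed and index each fact (here: just assign IDs)."""
    ids = []
    for fact in facts:
        fid = len(index)
        index[fid] = fact
        ids.append(fid)
    return ids

def link(ids: list[int], graph: dict[int, set[int]]) -> None:
    """Stage 4: connect memories stored together in the knowledge graph."""
    for a in ids:
        graph.setdefault(a, set()).update(b for b in ids if b != a)

index: dict[int, str] = {}
graph: dict[int, set[int]] = {}
facts = extract(capture("Remember that I prefer TypeScript over JavaScript"))
link(store(facts, index), graph)
print(index)  # {0: 'Remember that I prefer TypeScript over JavaScript'}
```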

Built for privacy and power

Everything runs locally. Your memories never leave your machine.

Local-first

Your memories stay on your machine. No cloud dependency, no data leaving your network.

Semantic search

Find memories by meaning, not keywords. Ask "what did we discuss about auth?" and get relevant context.
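Search by meaning works by comparing embedding vectors instead of keywords. A toy illustration with hand-made 3-d vectors (in Engram, real embeddings would come from a model and be compared by pgvector):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: closer to 1.0 means more similar in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings"; real ones are produced by an embedding model.
memories = {
    "we chose JWT for auth":        [0.9, 0.1, 0.0],
    "the logo should be dark blue": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # vector for "what did we discuss about auth?"

best = max(memories, key=lambda m: cosine(query, memories[m]))
print(best)  # we chose JWT for auth
```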

Semantic connections

Vector similarity finds related memories automatically. Build understanding that compounds over time.

Auto-capture

Hooks automatically record conversations. No manual saving; memories just accumulate.

Claude Code native

Built-in MCP integration. Claude just knows what you worked on before.

Squad isolation

Each squad gets isolated memory with optional cross-squad sharing. Built for teams.

Observability

Langfuse integration for monitoring memory operations, tracking costs, and analyzing behavior.

MemoryML

Declarative memory modeling language. Define schemas in YAML, validate before storage.

Simple architecture

Docker-based stack with PostgreSQL + pgvector for semantic search, and a REST API that integrates directly with Claude Code via MCP.

PostgreSQL + pgvector — Vector storage and semantic search
REST API — Memory operations
MCP Server — Claude Code integration
Langfuse — Observability and cost tracking
```yaml
# docker-compose.yml
services:
  postgres:   # pgvector
  api:        # REST endpoints
  mcp:        # Claude integration
  langfuse:   # observability
```
```yaml
# memory.yaml
schema: project_context
version: 1.0
fields:
  name: string
  stack: string[]
  decisions: relation[decision]
retrieval:
  vector: 0.4
  graph: 0.4
  recency: 0.2
```
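The retrieval weights above blend three signals into one ranking. A minimal sketch of how such weighted scoring might work; the weighted-sum formula is an assumption inferred from the config, not taken from the spec:

```python
WEIGHTS = {"vector": 0.4, "graph": 0.4, "recency": 0.2}  # from memory.yaml

def hybrid_score(signals: dict[str, float]) -> float:
    """Weighted sum of per-signal scores, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# Two candidate memories with hypothetical per-signal scores.
candidates = {
    "prefers TypeScript": {"vector": 0.9, "graph": 0.3, "recency": 0.8},
    "logo is dark blue":  {"vector": 0.2, "graph": 0.9, "recency": 0.1},
}
ranked = sorted(candidates, key=lambda m: hybrid_score(candidates[m]), reverse=True)
print(ranked[0])  # prefers TypeScript
```

Tuning the weights shifts the balance: raising `recency` favors fresh memories, raising `graph` favors well-connected ones.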
Coming Soon

MemoryML

A declarative memory modeling language. Define how your agent remembers in YAML, validate before storage, and export portable JSON-LD.

Schema validation — Catch errors before they become memories
Backend agnostic — pgvector, SQLite, or Redis
Hybrid retrieval — Tune vector, graph, and recency weights
Portable — Export/import as JSON-LD
Read the MemoryML spec →
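JSON-LD export pairs each record with an `@context`, so a memory stays self-describing outside Engram. A rough sketch of what a portable record could look like; the vocabulary and field names here are assumptions, not the format defined in the MemoryML spec:

```python
import json

def to_jsonld(memory_id: str, fact: str, schema: str) -> str:
    """Serialize a memory as a self-describing JSON-LD document."""
    doc = {
        "@context": {"fact": "https://schema.org/text"},  # assumed vocabulary
        "@id": memory_id,
        "@type": schema,
        "fact": fact,
    }
    return json.dumps(doc, indent=2)

record = to_jsonld("mem:0", "prefers TypeScript over JavaScript", "project_context")
print(record)
```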

Get started in minutes

Requires Docker, Docker Compose, and Ollama

1. Clone the repo: git clone https://github.com/agents-squads/engram
2. Start services: cd engram && ./scripts/start.sh
3. Generate an auth token: ./scripts/generate-token.sh
4. Add to Claude Code: claude mcp add engram

Give your agents memory

Open source. Self-hosted. Your data stays yours.