Your private AI assistant with offline memory

It runs offline models entirely on your device, remembers what you share, and helps you find it later. Memory is saved as Markdown and organized into a clear, connected graph that you and your agents can navigate.
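Because memory is plain Markdown, a saved note might look like the sketch below. The filename, front-matter fields, and wiki-style links are illustrative assumptions, not the product's actual schema; the `[[...]]` links are what connect notes into a navigable graph:

```markdown
<!-- hypothetical memory note: field names and links are illustrative -->
---
created: 2025-03-01
source: chat
---

# Trip planning with Dana

- Prefers window seats and morning flights.
- Comparing [[Lisbon]] and [[Porto]] for May.
- Related: [[Travel preferences]], [[Dana]]
```

Each `[[...]]` link becomes an edge in the graph, so related memories cluster together when browsed visually.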

Available for macOS 14+ with Apple Silicon (M1 or better).

Run your private AI with the CLI

Everything stays local, and you can start a memory-aware chat session instantly from the terminal. The experience is powered by open-weights models fine-tuned specifically for memory.

Explore your memory graph in Obsidian

An interactive graph lets you explore your memories and agent skills in a clean, connected view.

Download memory models and extensions

Cataloged in the Registry, these models are trained with reinforcement learning for memory operations on Markdown files, so the agent can maintain and refine persistent knowledge across sessions. Optional LoRA memory extensions let users and organizations bring their own data and shape the model's personality.


Powered by Tilekit

Modelfile implementation

Explore the design of our Rust-based Modelfile SDK for private, cross-platform model customization and access.

Learn more