On-device AI memory runtime for continual learning
One of the most important frontiers in aligning large language models with human-like cognition is enabling true episodic memory: the ability to recall and adapt to temporally grounded, personal experiences. While LLMs have made progress on working memory via large context windows and on long-term memory through static weights, they still lack the capacity to update and retain knowledge from lived interactions.
In humans, episodic memory is what allows us to remember past conversations and learn continuously, but LLMs remain stateless and amnesiac between sessions. Recent work on fast modular weight updates in federated heterogeneous compute environments suggests a promising direction.
We envision a future shaped by many small, task-specific models, each finely tuned to its purpose and context. Tiles is our step toward enabling this vision. We are building on-device AI infrastructure designed for continual learning through model chaining and parameter-efficient fine-tuning. Continual learning only makes sense when it runs on edge and consumer devices, where the model can adapt to each user privately, contextually, and with additional redundancy layers that safeguard their data. With advances like the MatFormer architecture, per-layer embedding (PLE) caching, and Mixture of Experts (MoE), this is now practical on consumer devices.
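To give a sense of why parameter-efficient fine-tuning makes on-device continual learning feasible, here is a minimal sketch of the LoRA parameter arithmetic. The layer dimensions and rank below are illustrative assumptions, not Gemma's actual configuration:

```python
# Sketch: trainable-parameter counts for LoRA-style adapters vs. full fine-tuning.
# Dimensions and rank are hypothetical, chosen only to illustrate the ratio.

def full_params(d_in: int, d_out: int) -> int:
    """Weights updated when fine-tuning a dense d_in x d_out projection directly."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Weights in a LoRA adapter pair: A (d_in x r) and B (r x d_out)."""
    return d_in * rank + rank * d_out

# One hypothetical 4096 x 4096 attention projection at rank 16:
full = full_params(4096, 4096)      # 16,777,216 trainable weights
lora = lora_params(4096, 4096, 16)  # 131,072 trainable weights
print(f"full: {full:,}  lora: {lora:,}  reduction: {full // lora}x")
# prints "full: 16,777,216  lora: 131,072  reduction: 128x"
```

At rank 16 the adapter touches roughly 0.8% of the layer's weights, which is what keeps per-user updates small enough to train and store on consumer hardware.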
For our first-generation Rust-based SDK prototype, we are fine-tuning an on-device language model (Gemma 3n E4B) on a reasoning dataset built from ~15k curated highlights exported from a read-later app. For reasoning, we are using Unternet Kernel, with mistral.rs as the inference engine, Unsloth for fine-tuning, and Iroh as the networking infrastructure.
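As a rough illustration of how exported highlights might be shaped into a supervised fine-tuning dataset, here is a hedged sketch. The field names, prompt template, and output format are assumptions for illustration, not the actual Tiles pipeline:

```python
import json

def highlight_to_example(highlight: dict) -> dict:
    """Turn one exported highlight into a prompt/completion training pair.
    Keys ("title", "text", "note") are hypothetical export fields."""
    return {
        "prompt": (
            f"Article: {highlight['title']}\n"
            f"Highlight: {highlight['text']}\n"
            "Explain why this passage matters to the reader."
        ),
        "completion": highlight.get("note", ""),
    }

# A tiny stand-in for the exported highlights.
highlights = [
    {
        "title": "On Memory",
        "text": "Episodic memory is temporally grounded.",
        "note": "Core claim: recall is tied to when it happened.",
    },
]

# Write one JSON object per line (JSONL), a common format for fine-tuning data.
with open("highlights.jsonl", "w") as f:
    for h in highlights:
        f.write(json.dumps(highlight_to_example(h)) + "\n")
```

A dataset in this shape can then be tokenized and fed to a trainer; the interesting part for continual learning is that the same conversion can run incrementally on-device as new highlights arrive.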
We’re in the prototype stage and seeking design partners among early-stage companies. Chat with us in the #tiles channel on the User & Agents Discord, or email hello@tiles.run. Subscribe to our blog, Neuron, for updates on on-device AI and personalization research, and explore additional resources on our GitHub.