Memori Labs has surpassed 13,000 GitHub stars, signaling rapid adoption of its agent-native memory infrastructure. At the same time, the company is setting a new industry standard with leading LoCoMo benchmark performance—delivering higher accuracy at a fraction of the cost. With a new release focused on memory from agent execution (not just conversation), Memori is redefining how AI systems retain and use context.
SAN FRANCISCO, April 7, 2026 /PRNewswire-PRWeb/ -- Memori Labs today announced that its open-source memory infrastructure platform has surpassed 13,000 stars on GitHub, marking a major milestone in the rapid adoption of agent-native memory systems.
The achievement reflects a broader shift among developers building production AI systems: away from stateless, prompt-based architectures and toward persistent, structured memory that lets agents retain and evolve context across sessions.
"Reaching 13,000 stars is a testament to the power of community-driven innovation," said Adam B. Struck, CEO and Co-Founder of Memori Labs. "By combining our SQL-native ease of use with benchmark-breaking performance, we are proving that developers do not have to choose between simplicity and power. We are very grateful to every developer who has pushed us to reach this milestone."
In parallel with its open-source growth, Memori continues to demonstrate industry-leading performance on the LoCoMo benchmark, the most widely cited evaluation for long-context memory systems. Memori achieved 81.95% overall accuracy, outperforming competing systems including Zep (~79%), LangMem (~78%), and Mem0 (~62%).
Critically, Memori achieved this while using only ~1,294 tokens per query, approximately 4.98% of the cost of full-context prompting (a more than 20× reduction in context cost). Compared to alternatives, Memori also reduced token usage by roughly 67% versus Zep, demonstrating that higher accuracy does not require larger context windows.
These results highlight a fundamental shift in how memory systems are built. Rather than scaling context size, Memori structures memory into semantic representations—enabling more precise retrieval, lower latency, and improved reasoning across multi-session interactions.
As AI systems become more complex, traditional approaches—such as flat file storage, static embeddings, and prompt stuffing—begin to break down under production workloads. Memori addresses these challenges by creating structured memory that persists across sessions and adapts over time, enabling agents to operate with greater consistency, efficiency, and reliability.
Looking ahead, Memori Labs is preparing a major product update that expands its capabilities beyond conversational memory. The upcoming release introduces structured memory derived not only from agent interactions but also from agent traces and execution, capturing decisions, tool usage, and outcomes to create a more complete and durable representation of state.
This advancement is expected to unlock a new generation of agent-native applications, where memory is built from what agents do—not just what they say.
About Memori Labs
Memori is agent-native memory infrastructure—a SQL-native, LLM-agnostic layer that turns agent execution and interactions into structured, persistent state for production systems. The platform continuously captures activity, extracts structured knowledge, and intelligently retrieves relevant memory, enabling agents to operate with durable, evolving context across sessions. Memori offers both Memori Cloud (fully managed) and flexible enterprise deployment options, including BYODB, VPC, and on-premises configurations.
Media Contact
Press Relations, Memori Labs, +1 561-289-0486, [email protected], https://memorilabs.ai/
SOURCE Memori Labs