Lumetra Launches Engram, an MCP-Native Memory Layer Scoring 91.6% on LongMemEval
TelAve News/10896222
Memory that shows its work. Every recall traces back to a stored memory and a knowledge-graph edge. Bring your own model. Plug in via MCP, REST, or official SDKs.
SEATTLE - TelAve -- Lumetra today announced the general availability of Engram, a memory layer for AI agents. After a year of invitation-only beta access, Engram is opening its doors to all developers.
TL;DR
- 91.6% on LongMemEval-S (458/500) out of the box. Methodology and results published openly.
- Every recall is auditable: semantic retrieval plus an automatically maintained knowledge graph, so you can see which memory and which edge produced an answer.
- BYOM by default: frontier, open-source, or self-hosted models. No inference lock-in.
- Plug in three ways: MCP server, REST API, or official TypeScript, Python, and Go SDKs.
What Engram Does
An agent that talked with a user last week recalls their preferences today, and surfaces the exact stored memory and graph edge that produced the answer. Engram ingests conversation, extracts atomic facts and relationships, and stores them where they can be retrieved semantically and explained structurally.
Retrieval fuses three signals: keyword (BM25), semantic vector search, and traversal of the knowledge graph. Recall doesn't fail when a question is rephrased or when the answer depends on an implicit connection between memories.
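The three-signal fusion described above can be sketched in a few lines. Everything below is illustrative: Engram's actual BM25 implementation, embedding model, fusion weights, and graph-traversal logic are not public, so the toy scorers, the 0.5 weights, and the sample memories are assumptions, not the shipped algorithm.

```python
# Illustrative three-signal retrieval fusion: keyword + "semantic" + graph.
# All scoring functions and weights here are stand-ins, not Engram's.
import math
from collections import Counter

# Hypothetical stored memories (memory_id -> text).
MEMORIES = {
    "m1": "user prefers dark mode in every editor",
    "m2": "user works primarily in Python and Go",
    "m3": "user's editor of choice is Neovim",
}
# Hypothetical knowledge-graph edges: (source memory, relation, target memory).
EDGES = [("m3", "configures", "m1")]

def keyword_score(query, text):
    """Crude token-overlap score (a stand-in for real BM25)."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values()) / max(len(query.split()), 1)

def vector_score(query, text):
    """Toy 'semantic' signal: cosine similarity over character bigrams."""
    def bigrams(s):
        s = s.lower()
        return Counter(s[i:i + 2] for i in range(len(s) - 1))
    a, b = bigrams(query), bigrams(text)
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def graph_boost(mem_id, base_scores):
    """Propagate half of a neighbor's base score along graph edges, so a
    memory can rank well via an implicit connection, not just text match."""
    boost = 0.0
    for src, _rel, dst in EDGES:
        if dst == mem_id:
            boost += 0.5 * base_scores.get(src, 0.0)
        if src == mem_id:
            boost += 0.5 * base_scores.get(dst, 0.0)
    return boost

def retrieve(query):
    base = {mid: 0.5 * keyword_score(query, txt) + 0.5 * vector_score(query, txt)
            for mid, txt in MEMORIES.items()}
    fused = {mid: s + graph_boost(mid, base) for mid, s in base.items()}
    return sorted(fused.items(), key=lambda kv: -kv[1])

ranked = retrieve("which editor does the user prefer")
```

The graph hop is what keeps recall alive when the answer depends on a connection between memories rather than on the wording of any single one.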
For developers, that means memory stops being a black box. The recall path is inspectable end-to-end: every retrieved fact is grounded in a stored memory; every connection is grounded in a graph edge. If a recall is wrong, you can see why.
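To make the grounding claim concrete, here is a sketch of what an auditable recall payload and its invariant check might look like. The field names and shapes are hypothetical (Engram's real response schema is in its documentation); the point is the invariant itself: every edge endpoint must resolve to a retrieved memory.

```python
# Hypothetical shape of an auditable recall result. Field names are
# illustrative, not Engram's actual API schema.
from dataclasses import dataclass

@dataclass
class MemoryRef:
    memory_id: str
    text: str

@dataclass
class GraphEdge:
    source: str
    relation: str
    target: str

@dataclass
class Recall:
    answer: str
    supporting_memories: list  # every fact traces to a stored memory
    supporting_edges: list     # every connection traces to a graph edge

recall = Recall(
    answer="The user edits in Neovim with dark mode enabled.",
    supporting_memories=[
        MemoryRef("m3", "user's editor of choice is Neovim"),
        MemoryRef("m1", "user prefers dark mode in every editor"),
    ],
    supporting_edges=[GraphEdge("m3", "configures", "m1")],
)

def audit(r):
    """Grounding invariant: each edge endpoint is among the retrieved
    memories, so a wrong recall can be traced to a specific fact or edge."""
    ids = {m.memory_id for m in r.supporting_memories}
    return all(e.source in ids and e.target in ids for e in r.supporting_edges)
```

If `audit` fails, you know the break is in the graph layer rather than the retrieval layer, which is exactly the kind of debugging a black-box memory product can't offer.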
Memory That Shows Its Work
Memory products that bundle inference and hide the retrieval path ask you to take the vendor's word for what the system did and why. Engram's design choice is the opposite: the system shows its work. This is the difference between a memory product and memory infrastructure.
Bring Your Own Model
Engram is BYOM by default. Developers connect their preferred LLM (frontier, open-source, or self-hosted), and Engram handles extraction, storage, and retrieval. No inference lock-in, no markup on tokens you could have bought directly.
Plug In Anywhere
Engram launches with three integration paths:
- The MCP server works with Claude.ai web, Claude Desktop, Claude Code, Cursor, Windsurf, Codex, ChatGPT, and OpenClaw out of the box.
- The REST API provides standard HTTP endpoints for ingest, query, memory management, and usage stats.
- Official SDKs are available for TypeScript (@lumetra/engram on npm), Python (lumetra-engram on PyPI), and Go.
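For the MCP path, wiring Engram into a client that reads the common `mcpServers` config shape (used in variants by Claude Desktop, Cursor, and others) would look roughly like the fragment below. The package name, key names, and environment variable are hypothetical placeholders; the official invocation is in Lumetra's documentation.

```json
{
  "mcpServers": {
    "engram": {
      "command": "npx",
      "args": ["-y", "@lumetra/engram-mcp"],
      "env": { "ENGRAM_API_KEY": "<your-api-key>" }
    }
  }
}
```

This is what the quote below means by "a config change instead of a rewrite": the client gains Engram's memory tools without any application code changing.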
Pricing
Engram launches with usage-based pricing. No per-seat, no per-project surcharges:
- Free is for evaluation and hobby projects.
- $29 per month covers indie developers and small teams.
- $99 per month covers production workloads.
- Enterprise is custom and includes dedicated support.
Paid tiers meter only on memories stored and retrievals served. There are no per-call inference fees that scale with your success, and no per-token surcharges layered on top of the model you already pay for.
A Year Behind Closed Doors
Engram spent the past year in invitation-only beta with design partner NeonBay (neonbay.ai), running in production while the team hardened the retrieval pipeline, ingest path, and recall quality. Today's launch opens that same system to every developer.
Quotes
"MCP changed the math on memory. Once a client speaks MCP, adding long-term memory is a config change instead of a rewrite. We built Engram MCP-native from day one because we think that's where the ecosystem is going."
Ben Meyerson, Co-Founder, Lumetra
"Most memory products are black boxes. You hand over your data and trust that the right thing comes back. Engram is built so every recall points to a stored memory and a graph edge. If you don't like an answer, you can see exactly where it came from."
Jacob Davis, Co-Founder, Lumetra
About Lumetra
Lumetra builds memory infrastructure for AI agents. Founded in 2025 by Ben Meyerson and Jacob Davis (previously on AWS IoT at Amazon Web Services), the company is headquartered in Seattle, WA. Engram is its first product.
Start free: https://lumetra.io
Documentation: https://lumetra.io/docs
LongMemEval methodology and results: https://lumetra.io/engram-on-longmemeval
Source: Lumetra, LLC
Filed Under: Software