Thesis

Every AI Remembers.
None of Them Follow Through.

Last Tuesday I was three hours into a refactor across ten worktrees. Claude Code held the plan in one. ChatGPT held the API spec in another. Cursor had the test stubs. Then I switched contexts and every tool forgot what it promised to do next.

The models remembered facts. They recalled my preferences. But the commitment I made at 9 a.m. to ship the onboarding flow by Friday? Gone. No tool tracked it. No tool surfaced it when I got pulled into something else. The recall was perfect. The follow-through was zero.

That gap is what 3ngram exists to close.

The landscape

Memory tools are everywhere. Follow-through is nowhere.

The memory layer for AI is a crowded space. Every approach solves recall. None of them solve accountability.

CLAUDE.md and project files

What it solves: Stores instructions and context per-repo. Works well for a single session in a single tool.

What it misses: Static. Does not sync across tools, track deadlines, or surface what fell through the cracks. Maintenance burden scales linearly with repo count.

Mem0

What it solves: Production-grade memory API with strong retrieval. Largest developer community in the space.

What it misses: Flat memory. No lifecycle semantics. A remembered commitment and a remembered preference look identical.

Zep

What it solves: Temporal knowledge graphs with excellent benchmark scores. Sophisticated entity extraction.

What it misses: Pure infrastructure. Knows what happened, not what should happen next. No accountability layer.

Native AI memory

What it solves: Built into Claude, ChatGPT, Gemini. Zero-config recall of facts and preferences.

What it misses: Locked to one provider. No structured types. No deadlines, no nudges, no cross-tool continuity.

These tools are good. I use some of them. But “good recall” and “follow-through” are different problems. Recall answers the question “what did I say?” Follow-through answers “did I do it?”

Positioning

Build on labs, not against them.

Frontier models improve every quarter. Building another chat interface means racing Anthropic, OpenAI, and Google on UX, speed, and model quality. That race is unwinnable.

3ngram takes the other path. It plugs into Claude, ChatGPT, Cursor, Windsurf, and any MCP-compatible client as a memory and accountability layer. It speaks the Model Context Protocol (MCP): one connection, 29 tools, provider-agnostic by design. When a lab ships a better model, you get that upgrade for free. 3ngram handles what they do not build: structured persistence and follow-through.
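In practice, an MCP-compatible client picks up a remote server from a single config entry. A hypothetical sketch is below; the exact key names vary by client, and the URL is a placeholder, not 3ngram's real endpoint:

```json
{
  "mcpServers": {
    "3ngram": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

One entry per client, and every tool that reads it shares the same memory layer.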

The feature

What follow-through actually means.

Most memory systems store flat text. A commitment and a preference are the same blob. 3ngram gives memories types: commitments carry deadlines, decisions accumulate rationale, blockers track resolution, context captures background.

Types create lifecycle. A commitment can be overdue. A blocker can be resolved. A decision can be referenced six months later with its full reasoning chain. Flat memory cannot do this because there is nothing to hang lifecycle on.
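The lifecycle idea can be sketched in a few lines of TypeScript. The type and field names here are illustrative assumptions, not 3ngram's actual schema:

```typescript
// Illustrative typed-memory sketch — field names are assumptions,
// not 3ngram's real schema.
type Memory =
  | { kind: "commitment"; content: string; due: Date; done: boolean }
  | { kind: "decision"; content: string; rationale: string[] }
  | { kind: "blocker"; content: string; resolved: boolean }
  | { kind: "context"; content: string };

// "Overdue" is meaningless for a flat text blob, but well-defined
// once a memory carries a type and a deadline.
function overdue(memories: Memory[], now: Date): Memory[] {
  return memories.filter(
    (m) => m.kind === "commitment" && !m.done && m.due < now
  );
}

const memories: Memory[] = [
  {
    kind: "commitment",
    content: "Ship the onboarding flow",
    due: new Date("2026-04-18"),
    done: false,
  },
  { kind: "context", content: "Onboarding targets new workspace users" },
];

console.log(overdue(memories, new Date("2026-04-20")).length); // 1
```

The discriminated union is the whole point: a query like "what's overdue?" falls out of the type system instead of requiring the model to guess which remembered strings were promises.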

The result: your AI does not just recall what you said. It knows what you owe, what is blocked, and what slipped.

Claude Code · connected

remember --type=commitment
  content: "Ship the onboarding flow by Friday"
  due: 2026-04-18
  scope: work

id: 3042 · commitment saved

Later, in ChatGPT

“What's overdue?”

3ngram

1 overdue commitment: Ship the onboarding flow (due: Apr 18, 2d overdue)

3ngram MCP · 29 tools · streamable HTTP
Division of labor

Labs own the model. 3ngram owns the memory.

Each layer does what it does best. No overlap, no competition.

| Capability | AI Labs | 3ngram |
|---|---|---|
| Reasoning, vision, code gen | World-class, improving quarterly | Use theirs directly |
| Conversation UI | Polished, native apps | Use theirs directly |
| Memory across sessions | Basic facts and preferences | Typed memories with lifecycle |
| Accountability | None | Commitments, deadlines, alerts |
| Cross-provider continuity | Locked to one provider | Any MCP client |
| Document search | Limited or enterprise only | GitHub, markdown, semantic search |
| Proactive follow-through | None | Digests, nudges, blocker surfacing |
Objection

The CLAUDE.md test.

If a single CLAUDE.md file in a single repo gives you everything you need, you probably do not need 3ngram yet. Static project files work for one person, one repo, one tool.

But that setup breaks when the surface area grows. Ten worktrees running in parallel. Three AI providers touching the same project. Commitments with deadlines that no static file will enforce. Context that needs to follow you from Claude Code to ChatGPT to Cursor without manual copy-paste.

The question is not whether project files are useful. They are. The question is whether they are sufficient when work spans multiple tools, multiple sessions, and multiple weeks. In my experience, they are not. That is where typed, persistent, cross-provider memory with accountability starts to matter.

Memory that follows through.

Connect via MCP. Keep your tools. Add the layer they are missing.
