Sovereign Depth-Adaptive Memory for AI
Every conversation you have with an AI begins from zero. submarine changes that: a persistent memory layer that runs on your machine, works with any model, and costs nothing per recall.
Founding edition · v1.0.5 · Open source under AGPL-3.0
The problem
Your preferences, your patterns, the residue of every unguarded conversation — all of it sits on infrastructure you do not control. Terms change. Policies shift. Breaches surface months late. The most personal dataset of your life is a guest in another company's house.
Switch vendors and the "you" that was learned is gone. The industry calls this personalization. A more honest word is gravity. Memory is the gravity well that keeps you from leaving. It is not a side effect. It is the business model.
Cloud memory systems invoke a large language model for every read, every write, every reconciliation. Each recall is a billed inference. Each inference is another chance to misremember you — at your expense.
A memory layer that is expensive, unreliable, vendor-owned, and model-locked is not a memory layer. It is a liability wearing a product's clothing.
The shift
Frontier models have commoditized raw knowledge. The new advantage is depth of context about you — and depth is bought only with time.
Two people ask the same model the same question. One gets a generic answer. The other gets an answer shaped by six months of accumulated decisions, reversals, and evolving principles. Same model. Same prompt. Radically different output.
The gap does not close. It widens every day. Whoever started earlier is deeper. Time is the one currency no fund can front you.
The answer
Sovereign. Model-agnostic. Self-hosted. Yours.
Your data lives on your hardware. Not "encrypted in our cloud." On your machine. The most personal dataset of your life should have exactly one set of keys, and they should be in your pocket.
Not a feature of a model. A layer above models. Switch frontier labs tomorrow and your context comes with you, untouched. Any memory tied to a single vendor is a memory you are renting.
No LLM calls power the memory. Embeddings run locally. Context generation costs $0 per invocation. You are not metered on remembering yourself.
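The zero-cost claim is mechanical, not magical: recall is local vector arithmetic. The sketch below is illustrative only. It swaps a real on-device embedding model for a toy hashed bag-of-words vector, but the shape of the loop is the point: no network call, no billed inference, anywhere.

```python
import hashlib
import math

DIM = 4096  # toy embedding dimension, chosen for the example

def embed(text: str) -> list[float]:
    """Toy local embedding: a hashed bag-of-words vector.
    A stand-in for a real on-device embedding model; the point
    is that nothing here leaves your machine."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def recall(query: str, memories: list[str], k: int = 2) -> list[str]:
    """Rank stored memories by cosine similarity to the query.
    Pure local arithmetic: $0 per recall."""
    q = embed(query)
    return sorted(
        memories,
        key=lambda m: -sum(a * b for a, b in zip(q, embed(m))),
    )[:k]

memories = [
    "Prefers concise answers with code examples",
    "Working on a Rust migration of the billing service",
    "Decided against microservices in March",
]
print(recall("what am I working on right now", memories, k=1))
```

A production system would use a proper local embedding model instead of the hash trick, but the cost structure is identical: embedding and ranking are computation you already own.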
Not trying to store more. Trying to understand more deeply. That is a different engineering problem, and it has a different answer. We solved it.
The silhouette
Who you are
Identity, principles, values. Persistent by design. The architectural foundation everything else rests on.
How you work
Active decisions, open projects, evolving strategies. Managed lifecycle.
What is happening
Operational facts of this week, this hour. Naturally cycles through relevance.
One generated document. Distilled, structured context about you — who you are, what you are focused on, what you have decided, the threads you are still pulling. Hand it to any model. Any model. Full depth.
Every decision is linked to where it came from. A memory that understands you, not just quotes you.
Notices when you change your mind. Treats it as signal, not collision.
Protects load-bearing truths from careless overwrites and hallucinated updates.
The irrelevant fades. The important consolidates. Like a healthy mind.
The proof
Models are the engine — they will come and go. Your deep, adaptive memory stays with you.
Those who adopt this standard first turn their data into an asset. Every day without a sovereign layer is a day of depth someone else is accumulating. Not metaphorically. Literally.
The landscape
The industry solves memory as search. submarine solves what search does not see.
| Category | Typical limitation | submarine |
|---|---|---|
| Cloud-native memory | Vendor-hosted by default, model-locked | Sovereign, model-agnostic, runs on your machine |
| LLM-per-operation memory | Every read and write triggers a billed inference | Zero LLM calls in the memory loop |
| Vector-store wrappers | Search without structure, no causal layer | Causal ranking, contradiction detection, immune protection |
| Vendor-specific memory features | Tied to one model — context dies on switch | Layer above models — your depth travels with you |
No widely available system combines causal ranking, contradiction detection, immune protection, purposeful forgetting, model-agnostic sovereignty, and $0 per recall.
Two exits. One clock.
Begin accumulating your sovereign layer today. In three months, you will feel it. In a year, you will not remember how you worked without it.
Or build your own. Honorable instinct. But while you are architecting your layers, the people who started are getting deeper. By months. By the kind of distance that does not close.
```shell
git clone https://github.com/UseSubmarine/submarine.git
```
Clone the repository, follow docs/QUICKSTART.md, and paste the generated Crystal into any model's system prompt. Watch what happens when an intelligence finally meets a version of you that has been allowed to accumulate.
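Assuming the quickstart writes the Crystal to a plain-text file (the name `crystal.md` below is a guess; the real path is documented in docs/QUICKSTART.md), handing it to a model is a one-line prepend:

```python
from pathlib import Path

def build_system_prompt(crystal_path: str, base_prompt: str) -> str:
    """Prepend a generated Crystal to any model's system prompt.
    Hypothetical glue code: the file name and helper are illustrative."""
    crystal = Path(crystal_path).read_text(encoding="utf-8")
    return f"{crystal.strip()}\n\n{base_prompt}"

# Stand-in Crystal so the sketch runs end to end:
Path("crystal.md").write_text(
    "# Who you are\nValues privacy over convenience.\n", encoding="utf-8"
)
prompt = build_system_prompt("crystal.md", "You are a helpful assistant.")
print(prompt)
```

Because the Crystal is plain text, this works identically for any vendor's chat API: the model layer changes, the memory layer does not.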
Not merely ahead of the industry frontier. Revealing a new standard.
The information age is over.
The depth age has started.
The current is already moving.
In the beginning was the word — and your data deserves to be the beginning of something greater.
The origin
submarine was built in the Netherlands on a single machine — no research lab, no cloud budget, no pitch deck. Just an open-source codebase, a specification anyone can audit, and a conviction that the industry was solving the wrong problem at the wrong layer.
A small team grew around the work, not around funding. What came out of that process is now live, tested, and running. This is the founding edition.
D. Ashford — Founder, KordinapsLab