Sovereign Depth-Adaptive Memory for AI

The depth of your AI's
memory decides
everything.

Every conversation you have with an AI begins from zero. submarine changes that: a persistent memory layer that runs on your machine, works with any model, and costs nothing per recall.

Founding edition · v1.0.5 · Open source under AGPL-3.0

The problem

Three structural failures.
Not bugs. Architecture.

01

Your context is a tenant

Your preferences, your patterns, the residue of every unguarded conversation — all of it sits on infrastructure you do not control. Terms change. Policies shift. Breaches surface months late. The most personal dataset of your life is a guest in another company's house.

02

Your memory is chained

Switch vendors and the "you" your model learned is gone. The industry calls this personalization. A more honest word is gravity. Memory is the gravity well that keeps you from leaving. It is not a side effect. It is the business model.

03

Recall that hallucinates

Cloud memory systems invoke a large language model for every read, every write, every reconciliation. Each recall is a billed inference. Each inference is another chance to misremember you — at your expense.

A memory layer that is expensive, unreliable, vendor-owned, and model-locked is not a memory layer. It is a liability wearing a product's clothing.

The shift

Information is no longer the leader.

Frontier models have commoditized raw knowledge. The new advantage is depth of context about you — and depth is bought only with time.

Two people ask the same model the same question. One gets a generic answer. The other gets an answer shaped by six months of accumulated decisions, reversals, and evolving principles. Same model. Same prompt. Radically different output.

The gap does not close. It widens every day. Whoever started earlier is deeper. Time is the one currency no fund can front you.

The answer

submarine

Sovereign. Model-agnostic. Self-hosted. Yours.

Sovereignty

Your data lives on your hardware. Not "encrypted in our cloud." On your machine. The most personal dataset of your life should have exactly one set of keys, and they should be in your pocket.

Model-agnostic

Not a feature of a model. A layer above models. Switch frontier labs tomorrow and your context comes with you, untouched. Any memory tied to a single vendor is a memory you are renting.

Zero cost per recall

No LLM calls power the memory. Embeddings run locally. Context generation costs $0 per invocation. You are not metered on remembering yourself.
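As a sketch of what zero-cost recall means in practice: retrieval is pure local arithmetic over embedding vectors, with no inference call anywhere in the loop. The toy three-dimensional vectors and index entries below are illustrative stand-ins, not submarine's actual index or embedding model.

```python
import math

# Recall with no LLM in the loop: stored memories are compared to a query
# by cosine similarity over locally computed embedding vectors. The vectors
# here are toy stand-ins for a real local embedding model's output.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical memory index: text mapped to its locally computed vector.
index = {
    "prefers strict typing": [0.9, 0.1, 0.0],
    "ships on Fridays": [0.0, 0.8, 0.6],
}
query = [0.85, 0.15, 0.05]  # stand-in embedding of "what typing style do I like?"
best = max(index, key=lambda k: cosine(index[k], query))
```

Every recall is a handful of multiplications on your own CPU, which is why the marginal cost is zero.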

Depth over volume

Not trying to store more. Trying to understand more deeply. That is a different engineering problem, and it has a different answer. We solved it.

The silhouette

Three layers. One Crystal.

Soul

Who you are

Identity, principles, values. Persistent by design. The architectural foundation everything else rests on.

Core

How you work

Active decisions, open projects, evolving strategies. Managed lifecycle.

Cortex

What is happening

Operational facts of this week, this hour. Naturally cycles through relevance.

The Crystal

One generated document. Distilled, structured context about you — who you are, what you are focused on, what you have decided, the threads you are still pulling. Hand it to any model. Any model. Full depth.
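The three layers flowing into one Crystal can be sketched as a single data structure. The class, field names, and section headings below are illustrative assumptions, not submarine's actual API.

```python
from dataclasses import dataclass, field

# A minimal sketch of the three-layer model: Soul (persistent identity),
# Core (active work), Cortex (operational facts), distilled into one
# document that can be handed to any model.

@dataclass
class MemoryStore:
    soul: list[str] = field(default_factory=list)    # identity, principles
    core: list[str] = field(default_factory=list)    # decisions, open projects
    cortex: list[str] = field(default_factory=list)  # this week, this hour

    def crystal(self) -> str:
        """Distill all three layers into one document for any model's prompt."""
        sections = [
            ("Who you are", self.soul),
            ("How you work", self.core),
            ("What is happening", self.cortex),
        ]
        lines = []
        for title, entries in sections:
            lines.append(f"## {title}")
            lines.extend(f"- {e}" for e in entries)
        return "\n".join(lines)

store = MemoryStore(
    soul=["Values reproducibility over speed"],
    core=["Migrating the pipeline to Rust"],
    cortex=["Benchmark run scheduled for Friday"],
)
print(store.crystal())
```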

Causal ranking

Every decision is linked to where it came from. A memory that understands you, not just quotes you.
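A minimal sketch of what causal linking looks like, with hypothetical entries: each decision records the memories it came from, so recall can walk the chain back to the root observation instead of matching keywords.

```python
# Illustrative causal graph: each entry keeps links to the memories it
# came from. Structure and IDs are assumptions for illustration only.

facts = {
    "f1": {"text": "Latency spiked after the cache change", "causes": []},
    "d1": {"text": "Decision: revert the cache change", "causes": ["f1"]},
    "d2": {"text": "Decision: add latency alerts", "causes": ["d1"]},
}

def lineage(fact_id: str) -> list[str]:
    """Walk the causal chain from a decision back to its root observations."""
    chain = [fact_id]
    for cause in facts[fact_id]["causes"]:
        chain += lineage(cause)
    return chain
```

Ranking over this graph can then prefer memories that sit on the causal path of what you are deciding now, which is what "understands you, not just quotes you" amounts to.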

Contradiction detection

Notices when you change your mind. Treats it as signal, not collision.
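One way to sketch this behavior, with illustrative names: a conflicting update is recorded as a timestamped reversal rather than a silent overwrite, so the change of mind itself becomes part of the record.

```python
from datetime import datetime, timezone

# Contradiction handling sketch: a reversal is kept as a timestamped
# event, not a collision to resolve. Function and field names are
# assumptions, not submarine's API.

def record(memory: dict, key: str, value: str, history: list) -> None:
    old = memory.get(key)
    if old is not None and old != value:
        # A changed mind is signal: log both values with a timestamp.
        history.append((key, old, value, datetime.now(timezone.utc)))
    memory[key] = value

memory, history = {}, []
record(memory, "editor", "vim", history)
record(memory, "editor", "helix", history)
```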

Immune system

Protects load-bearing truths from careless overwrites and hallucinated updates.

Purposeful forgetting

The irrelevant fades. The important consolidates. Like a healthy mind.
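Purposeful forgetting can be sketched as exponential decay with reinforcement on recall. The half-life and scoring below are illustrative assumptions, not submarine's actual tuning.

```python
import math

# Relevance decays with age; each recall consolidates the memory.
# The 30-day half-life is an illustrative assumption.

def relevance(age_days: float, recalls: int, half_life_days: float = 30.0) -> float:
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return decay * (1 + recalls)  # frequently recalled facts consolidate

stale = relevance(age_days=120, recalls=0)      # fades toward zero
fresh = relevance(age_days=1, recalls=0)        # near full strength
important = relevance(age_days=120, recalls=10) # old but reinforced
```

Under this scheme an untouched four-month-old fact scores near zero while the same fact, recalled ten times, stays load-bearing: the irrelevant fades, the important consolidates.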

The proof

Not a prototype. Not a waitlist.

v1.0.5 Founding edition shipped
3 Memory layers active
$0 Cost per recall
AGPL-3.0 Open and auditable

Models are the engine — they will come and go. Your deep, adaptive memory stays with you.

Those who adopt this standard first turn their data into an asset. Every day without a sovereign layer is a day of depth someone else is accumulating. Not metaphorically. Literally.

The landscape

A different floor entirely.

The industry solves memory as search. submarine solves what search does not see.

Category | Typical limitation | submarine
Cloud-native memory | Vendor-hosted by default, model-locked | Sovereign, model-agnostic, runs on your machine
LLM-per-operation memory | Every read and write triggers a billed inference | Zero LLM calls in the memory loop
Vector-store wrappers | Search without structure, no causal layer | Causal ranking, contradiction detection, immune protection
Vendor-specific memory features | Tied to one model; context dies on switch | Layer above models; your depth travels with you

No widely available system combines causal ranking, contradiction detection, immune protection, purposeful forgetting, model-agnostic sovereignty, and $0 per recall.

Two exits. One clock.

The clock is not neutral.

Adopt the standard

Begin accumulating your sovereign layer today. In three months, you will feel it. In a year, you will not remember how you worked without it.

Build your own

Honorable instinct. But while you are architecting your layers — the people who started are getting deeper. By months. By the kind of distance that does not close.

Quick start

git clone https://github.com/UseSubmarine/submarine.git
cd submarine

Clone the repository, follow docs/QUICKSTART.md, and paste the generated Crystal into any model's system prompt. Watch what happens when an intelligence finally meets a version of you that has been allowed to accumulate.
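As an illustration of that last step, assuming the Crystal is written to a Markdown file (the exact output path comes from docs/QUICKSTART.md, not from this sketch): hand its contents to any chat model as the system message.

```python
import tempfile
from pathlib import Path

# Hedged sketch: the generated Crystal becomes the system prompt of any
# chat model. The file name and message shape are illustrative assumptions.

def build_messages(crystal_path: str, question: str) -> list[dict]:
    crystal = Path(crystal_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": crystal},  # full depth, any vendor
        {"role": "user", "content": question},
    ]

# Demo with a stand-in Crystal file.
with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
    f.write("## Who you are\n- Prefers concise answers")
    path = f.name

messages = build_messages(path, "Summarize my open projects.")
```

Because the Crystal is plain text, the same two-message shape works unchanged across vendors; switching labs is a matter of pointing the same messages at a different endpoint.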

Not merely ahead of the industry frontier. Revealing a new standard.

The information age is over.
The depth age has started.
The current is already moving.

In the beginning was the word — and your data deserves to be the beginning of something greater.

The origin

One laptop. No venture capital.

submarine was built in the Netherlands on a single machine — no research lab, no cloud budget, no pitch deck. Just an open-source codebase, a specification anyone can audit, and a conviction that the industry was solving the wrong problem at the wrong layer.

A small team grew around the work, not around funding. What came out of that process is now live, tested, and running. This is the founding edition.

D. Ashford — Founder, KordinapsLab