<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Phase 2: Memories Survive — Tesseras</title>
    <meta name="description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
    <!-- Open Graph -->
    <meta property="og:type" content="article">
    <meta property="og:title" content="Phase 2: Memories Survive">
    <meta property="og:description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
    <meta property="og:image" content="https://tesseras.net/images/social.jpg">
    <meta property="og:image:width" content="1200">
    <meta property="og:image:height" content="630">
    <meta property="og:site_name" content="Tesseras">
    <!-- Twitter Card -->
    <meta name="twitter:card" content="summary_large_image">
    <meta name="twitter:title" content="Phase 2: Memories Survive">
    <meta name="twitter:description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
    <meta name="twitter:image" content="https://tesseras.net/images/social.jpg">
    <link rel="stylesheet" href="https://tesseras.net/style.css?h=21f0f32121928ee5c690">
    
        
            <link rel="alternate" type="application/atom+xml" title="Tesseras" href="https://tesseras.net/atom.xml">
        
    
    <link rel="icon" type="image/png" sizes="32x32" href="https://tesseras.net/images/favicon.png?h=be4e123a23393b1a027d">
    
</head>
<body>
    <header>
        <h1>
            <a href="https:&#x2F;&#x2F;tesseras.net/">
                <img src="https://tesseras.net/images/logo-64.png?h=c1b8d0c4c5f93b49d40b" alt="Tesseras" width="40" height="40" class="logo">
                Tesseras
            </a>
        </h1>
        <nav>
            
                <a href="https://tesseras.net/about/">About</a>
                <a href="https://tesseras.net/news/">News</a>
                <a href="https://tesseras.net/releases/">Releases</a>
                <a href="https://tesseras.net/faq/">FAQ</a>
                <a href="https://tesseras.net/subscriptions/">Subscriptions</a>
                <a href="https://tesseras.net/contact/">Contact</a>
            
        </nav>
        <nav class="lang-switch">
            
                <strong>English</strong> | <a href="/pt-br&#x2F;news&#x2F;phase2-replication&#x2F;">Português</a>
            
        </nav>
    </header>

    <main>
        
<article>
    <h2>Phase 2: Memories Survive</h2>
    <p class="news-date">2026-02-14</p>
    <p>A tessera is no longer tied to a single machine. Phase 2 delivers the
replication layer: data is split into erasure-coded fragments, distributed
across multiple peers, and automatically repaired when nodes go offline. A
bilateral reciprocity ledger ensures fair storage exchange — no blockchain, no
tokens.</p>
<h2 id="what-was-built">What was built</h2>
<p><strong>tesseras-core</strong> (updated) — New replication domain types: <code>FragmentPlan</code>
(selects fragmentation tier based on tessera size), <code>FragmentId</code> (tessera hash +
index + shard count + checksum), <code>FragmentEnvelope</code> (fragment with its metadata
for wire transport), <code>FragmentationTier</code> (Small/Medium/Large), <code>Attestation</code>
(proof that a node holds a fragment at a given time), and <code>ReplicateAck</code>
(acknowledgement of fragment receipt). Three new port traits define the
hexagonal boundaries: <code>DhtPort</code> (find peers, replicate fragments, request
attestations, ping), <code>FragmentStore</code> (store/read/delete/list/verify fragments),
and <code>ReciprocityLedger</code> (record storage exchanges, query balances, find best
peers). Maximum tessera size is 1 GB.</p>
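<p>A minimal sketch of the tier-selection rule, using the size thresholds listed under tesseras-replication below (type and function names here are illustrative, not the actual tesseras-core API):</p>

```rust
// Illustrative sketch of FragmentPlan's tier selection; the real type in
// tesseras-core may differ in naming and structure.
#[derive(Debug, PartialEq)]
enum FragmentationTier {
    Small,  // < 4 MB: whole-file replication, no erasure coding
    Medium, // 4-256 MB: 16 data + 8 parity shards
    Large,  // >= 256 MB: 48 data + 24 parity shards
}

const MB: u64 = 1024 * 1024;
const MAX_TESSERA_SIZE: u64 = 1024 * MB; // the 1 GB cap

fn select_tier(size: u64) -> Result<FragmentationTier, String> {
    if size > MAX_TESSERA_SIZE {
        return Err(format!("tessera of {size} bytes exceeds the 1 GB maximum"));
    }
    Ok(match size {
        s if s < 4 * MB => FragmentationTier::Small,
        s if s < 256 * MB => FragmentationTier::Medium,
        _ => FragmentationTier::Large,
    })
}
```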
<p><strong>tesseras-crypto</strong> (updated) — The existing <code>ReedSolomonCoder</code> now powers
fragment encoding. Data is split into data shards, parity shards are computed
from them, and the original can be reconstructed from any combination of
surviving shards, data or parity, as long as the number of missing shards does
not exceed the parity count.</p>
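<p>Reed-Solomon needs finite-field arithmetic, but the recovery idea can be shown with the simplest erasure code: a single XOR parity shard that tolerates one loss. This is an illustration only, not what <code>ReedSolomonCoder</code> implements; Reed-Solomon generalizes the same principle to many parity shards:</p>

```rust
// Toy single-parity erasure code: the parity shard is the XOR of all data
// shards, so any ONE missing shard can be rebuilt by XOR-ing the rest.
// Reed-Solomon generalizes this to m parity shards tolerating m losses.
fn xor_parity(shards: &[Vec<u8>]) -> Vec<u8> {
    let mut parity = vec![0u8; shards[0].len()];
    for shard in shards {
        for (p, b) in parity.iter_mut().zip(shard) {
            *p ^= b;
        }
    }
    parity
}

// Rebuild the one lost data shard from the survivors plus the parity.
fn recover_missing(present: &[&[u8]], parity: &[u8]) -> Vec<u8> {
    let mut lost = parity.to_vec();
    for shard in present {
        for (l, b) in lost.iter_mut().zip(*shard) {
            *l ^= b;
        }
    }
    lost
}
```

<p>At the Medium (16+8) and Large (48+24) geometries the storage overhead is the same 1.5×, while loss tolerance grows from 8 to 24 shards.</p>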
<p><strong>tesseras-storage</strong> (updated) — Two new adapters:</p>
<ul>
<li><code>FsFragmentStore</code> — stores fragment data as files on disk
(<code>{root}/{tessera_hash}/{index:03}.shard</code>) with a SQLite metadata index
tracking tessera hash, shard index, shard count, checksum, and byte size.
Verification recomputes the BLAKE3 hash and compares it to the stored
checksum.</li>
<li><code>SqliteReciprocityLedger</code> — bilateral storage accounting in SQLite. Each peer
has a row tracking bytes stored for them and bytes they store for us. The
<code>balance</code> column is a generated column
(<code>bytes_they_store_for_us - bytes_stored_for_them</code>). UPSERT ensures atomic
increment of counters.</li>
</ul>
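<p>For illustration, the bilateral ledger's generated column and atomic UPSERT might look like this sketch (table and column names are assumptions; the real schema is defined by the project's migrations):</p>

```sql
-- Illustrative sketch of the bilateral ledger; the actual schema lives
-- in 002_replication.sql.
CREATE TABLE reciprocity_balances (
    peer_id                 TEXT PRIMARY KEY,
    bytes_stored_for_them   INTEGER NOT NULL DEFAULT 0,
    bytes_they_store_for_us INTEGER NOT NULL DEFAULT 0,
    -- Generated column: the balance is always derived, never written.
    balance INTEGER GENERATED ALWAYS AS
        (bytes_they_store_for_us - bytes_stored_for_them)
);

-- UPSERT keeps counter increments atomic: insert the row if the peer is
-- new, otherwise add to the existing counter in one statement.
INSERT INTO reciprocity_balances (peer_id, bytes_stored_for_them)
VALUES (?1, ?2)
ON CONFLICT(peer_id) DO UPDATE
SET bytes_stored_for_them =
    bytes_stored_for_them + excluded.bytes_stored_for_them;
```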
<p>New migration (<code>002_replication.sql</code>) adds tables for fragments, fragment plans,
holders, holder-fragment mappings, and reciprocity balances.</p>
<p><strong>tesseras-dht</strong> (updated) — Four new message variants: <code>Replicate</code> (send a
fragment envelope), <code>ReplicateAck</code> (confirm receipt), <code>AttestRequest</code> (ask a
node to prove it holds a tessera's fragments), and <code>AttestResponse</code> (return
attestation with checksums and timestamp). The engine handles these in its
message dispatch loop.</p>
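<p>A hypothetical sketch of those variants and the dispatch they trigger (the real wire types in tesseras-dht carry richer metadata):</p>

```rust
// Hypothetical Rust shapes for the four new message variants; field
// names here are assumptions, not the actual tesseras-dht types.
enum ReplicationMessage {
    Replicate { envelope_bytes: Vec<u8> },
    ReplicateAck { accepted: bool },
    AttestRequest { tessera_hash: [u8; 32] },
    AttestResponse { checksums: Vec<[u8; 32]>, timestamp_secs: u64 },
}

// The engine's dispatch loop matches on the variant, roughly like this:
fn dispatch(msg: &ReplicationMessage) -> &'static str {
    match msg {
        ReplicationMessage::Replicate { .. } => "store fragment, send ack",
        ReplicationMessage::ReplicateAck { .. } => "mark transfer complete",
        ReplicationMessage::AttestRequest { .. } => "compute local checksums",
        ReplicationMessage::AttestResponse { .. } => "update holder health",
    }
}
```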
<p><strong>tesseras-replication</strong> — The new crate, with five modules:</p>
<ul>
<li>
<p><em>Fragment encoding</em> (<code>fragment.rs</code>): <code>encode_tessera()</code> selects the
fragmentation tier based on size, then calls Reed-Solomon encoding for Medium
and Large tiers. Three tiers:</p>
<ul>
<li><strong>Small</strong> (&lt; 4 MB): whole-file replication to r=7 peers, no erasure coding</li>
<li><strong>Medium</strong> (4–256 MB): 16 data + 8 parity shards, distributed across r=7
peers</li>
<li><strong>Large</strong> (≥ 256 MB): 48 data + 24 parity shards, distributed across r=7
peers</li>
</ul>
</li>
<li>
<p><em>Distribution</em> (<code>distributor.rs</code>): subnet diversity filtering limits peers per
/24 IPv4 subnet (or /48 IPv6 prefix) to avoid correlated failures. If all your
fragments land on the same rack, a single power outage kills them all.</p>
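<p>The prefix-grouping rule can be sketched with the standard library alone (function names and the simple keep-first policy are assumptions about <code>distributor.rs</code>):</p>

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Key a peer by its /24 IPv4 subnet or /48 IPv6 prefix.
fn subnet_key(ip: &IpAddr) -> Vec<u8> {
    match ip {
        IpAddr::V4(v4) => v4.octets()[..3].to_vec(), // first 24 bits
        IpAddr::V6(v6) => v6.octets()[..6].to_vec(), // first 48 bits
    }
}

// Keep at most `limit` peers per subnet, preserving order of arrival.
fn diversity_filter(peers: Vec<IpAddr>, limit: usize) -> Vec<IpAddr> {
    let mut per_subnet: HashMap<Vec<u8>, usize> = HashMap::new();
    peers
        .into_iter()
        .filter(|ip| {
            let count = per_subnet.entry(subnet_key(ip)).or_insert(0);
            *count += 1;
            *count <= limit
        })
        .collect()
}
```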
</li>
<li>
<p><em>Service</em> (<code>service.rs</code>): <code>ReplicationService</code> is the orchestrator.
<code>replicate_tessera()</code> encodes the data, finds the closest peers via DHT,
applies subnet diversity, and distributes fragments round-robin.
<code>receive_fragment()</code> validates the BLAKE3 checksum, checks reciprocity balance
(rejects if the sender's deficit exceeds the configured threshold), stores the
fragment, and updates the ledger. <code>handle_attestation_request()</code> lists local
fragments and computes their checksums as proof of possession.</p>
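<p>The admission logic of <code>receive_fragment()</code> can be sketched as two gates, checksum first, then reciprocity (the names and the balance convention here are assumptions, not the actual service code):</p>

```rust
// Illustrative admission check: reject on checksum mismatch, then on
// excessive reciprocity deficit.
const DEFICIT_ALLOWANCE: i64 = 256 * 1024 * 1024; // 256 MB default

#[derive(Debug, PartialEq)]
enum Admission {
    Accepted,
    BadChecksum,
    DeficitExceeded,
}

// `balance` is bytes the sender stores for us minus bytes we store for
// them; a strongly negative balance means the sender is free-riding.
fn admit_fragment(checksum_ok: bool, balance: i64, fragment_len: i64) -> Admission {
    if !checksum_ok {
        return Admission::BadChecksum;
    }
    // Accepting this fragment grows what we store for the sender, pushing
    // the balance further down; reject past the configured allowance.
    if balance - fragment_len < -DEFICIT_ALLOWANCE {
        return Admission::DeficitExceeded;
    }
    Admission::Accepted
}
```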
</li>
<li>
<p><em>Repair</em> (<code>repair.rs</code>): <code>check_tessera_health()</code> requests attestations from
known holders, falls back to ping for unresponsive nodes, verifies local
fragment integrity, and returns one of three actions: <code>Healthy</code>,
<code>NeedsReplication { deficit }</code>, or <code>CorruptLocal { fragment_index }</code>. The
repair loop runs every 24 hours (with 2-hour jitter) via <code>tokio::select!</code> with
shutdown integration.</p>
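<p>The three-way decision can be sketched as follows (the inputs and their ordering are assumptions about <code>repair.rs</code>; the variant names come from the description above):</p>

```rust
// Sketch of the health-check decision: local corruption is reported
// first so the damaged shard can be rebuilt, then the holder count is
// compared against the replication target.
#[derive(Debug, PartialEq)]
enum HealthAction {
    Healthy,
    NeedsReplication { deficit: usize },
    CorruptLocal { fragment_index: usize },
}

fn check_health(
    attested_holders: usize,
    target_holders: usize,
    corrupt_local_index: Option<usize>,
) -> HealthAction {
    if let Some(fragment_index) = corrupt_local_index {
        return HealthAction::CorruptLocal { fragment_index };
    }
    if attested_holders < target_holders {
        return HealthAction::NeedsReplication {
            deficit: target_holders - attested_holders,
        };
    }
    HealthAction::Healthy
}
```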
</li>
<li>
<p><em>Configuration</em> (<code>config.rs</code>): <code>ReplicationConfig</code> with defaults for repair
interval (24h), jitter (2h), concurrent transfers (4), minimum free space (1
GB), deficit allowance (256 MB), and per-peer storage limit (1 GB).</p>
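<p>Those defaults suggest a config struct roughly like this sketch (field names are assumptions):</p>

```rust
use std::time::Duration;

// Hypothetical shape of ReplicationConfig with the defaults listed above.
struct ReplicationConfig {
    repair_interval: Duration,
    repair_jitter: Duration,
    concurrent_transfers: usize,
    min_free_space_bytes: u64,
    deficit_allowance_bytes: u64,
    per_peer_storage_limit_bytes: u64,
}

impl Default for ReplicationConfig {
    fn default() -> Self {
        const MB: u64 = 1024 * 1024;
        Self {
            repair_interval: Duration::from_secs(24 * 60 * 60), // 24 h
            repair_jitter: Duration::from_secs(2 * 60 * 60),    // 2 h
            concurrent_transfers: 4,
            min_free_space_bytes: 1024 * MB,         // 1 GB
            deficit_allowance_bytes: 256 * MB,       // 256 MB
            per_peer_storage_limit_bytes: 1024 * MB, // 1 GB
        }
    }
}
```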
</li>
</ul>
<p><strong>tesd</strong> (updated) — The daemon now opens a SQLite database (<code>db/tesseras.db</code>),
runs migrations, creates <code>FsFragmentStore</code>, <code>SqliteReciprocityLedger</code>, and
<code>FsBlobStore</code> instances, wraps the DHT engine in a <code>DhtPortAdapter</code>, builds a
<code>ReplicationService</code>, and spawns the repair loop as a background task with
graceful shutdown.</p>
<p><strong>Testing</strong> — 193 tests across the workspace:</p>
<ul>
<li>15 unit tests in tesseras-replication (fragment encoding tiers, checksum
validation, subnet diversity, repair health checks, service receive/replicate
flows)</li>
<li>3 integration tests with real storage (full encode→distribute→receive cycle
for medium tessera, small whole-file replication, tampered fragment rejection)</li>
<li>Tests use in-memory SQLite + tempdir fragments with mockall mocks for DHT and
BlobStore</li>
<li>Zero clippy warnings, clean formatting</li>
</ul>
<h2 id="architecture-decisions">Architecture decisions</h2>
<ul>
<li><strong>Three-tier fragmentation</strong>: small files don't need erasure coding — the
overhead isn't worth it. Medium and large files get progressively more parity
shards. This avoids wasting storage on small tesseras while providing strong
redundancy for large ones.</li>
<li><strong>Owner-push distribution</strong>: the tessera owner encodes fragments and pushes
them to peers, rather than peers pulling. This simplifies the protocol (no
negotiation phase) and ensures fragments are distributed immediately.</li>
<li><strong>Bilateral reciprocity without consensus</strong>: each node tracks its own balance
with each peer locally. No global ledger, no token, no blockchain. If peer A
stores 500 MB for peer B, peer B should store roughly 500 MB for peer A. Free
riders lose redundancy gradually — their fragments are deprioritized for
repair, but never deleted.</li>
<li><strong>Subnet diversity</strong>: fragments are spread across different network subnets to
survive correlated failures. A datacenter outage shouldn't take out all copies
of a tessera.</li>
<li><strong>Attestation-first health checks</strong>: the repair loop asks holders to prove
possession (attestation with checksums) before declaring a tessera degraded.
Only when attestation fails does it fall back to a simple ping. This catches
silent data corruption, not just node departure.</li>
</ul>
<h2 id="what-comes-next">What comes next</h2>
<ul>
<li><strong>Phase 3: API and Apps</strong> — Flutter mobile/desktop app via
flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node</li>
<li><strong>Phase 4: Resilience and Scale</strong> — ML-DSA post-quantum signatures, advanced
NAT traversal, Shamir's Secret Sharing for heirs, packaging for
Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut</li>
<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser, institutional
curation, genealogy integration, physical media export</li>
</ul>
<p>Nodes can find each other and keep each other's memories alive. Next, we give
people a way to hold their memories in their hands.</p>

</article>

    </main>

    <footer>
        <p>&copy; 2026 Tesseras Project. <a href="/atom.xml">News Feed</a> · <a href="https://git.sr.ht/~ijanc/tesseras">Source</a></p>
    </footer>
</body>
</html>