Diffstat (limited to 'news/phase2-replication/index.html')
-rw-r--r--  news/phase2-replication/index.html  201
1 file changed, 0 insertions, 201 deletions
diff --git a/news/phase2-replication/index.html b/news/phase2-replication/index.html
deleted file mode 100644
index 777bf42..0000000
--- a/news/phase2-replication/index.html
+++ /dev/null
@@ -1,201 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-<head>
- <meta charset="utf-8">
- <meta name="viewport" content="width=device-width, initial-scale=1">
- <title>Phase 2: Memories Survive — Tesseras</title>
- <meta name="description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
- <!-- Open Graph -->
- <meta property="og:type" content="article">
- <meta property="og:title" content="Phase 2: Memories Survive">
- <meta property="og:description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
- <meta property="og:image" content="https://tesseras.net/images/social.jpg">
- <meta property="og:image:width" content="1200">
- <meta property="og:image:height" content="630">
- <meta property="og:site_name" content="Tesseras">
- <!-- Twitter Card -->
- <meta name="twitter:card" content="summary_large_image">
- <meta name="twitter:title" content="Phase 2: Memories Survive">
- <meta name="twitter:description" content="Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.">
- <meta name="twitter:image" content="https://tesseras.net/images/social.jpg">
- <link rel="stylesheet" href="https://tesseras.net/style.css?h=21f0f32121928ee5c690">
-
-
- <link rel="alternate" type="application/atom+xml" title="Tesseras" href="https://tesseras.net/atom.xml">
-
-
- <link rel="icon" type="image/png" sizes="32x32" href="https://tesseras.net/images/favicon.png?h=be4e123a23393b1a027d">
-
-</head>
-<body>
- <header>
- <h1>
- <a href="https:&#x2F;&#x2F;tesseras.net/">
- <img src="https://tesseras.net/images/logo-64.png?h=c1b8d0c4c5f93b49d40b" alt="Tesseras" width="40" height="40" class="logo">
- Tesseras
- </a>
- </h1>
- <nav>
-
- <a href="https://tesseras.net/about/">About</a>
- <a href="https://tesseras.net/news/">News</a>
- <a href="https://tesseras.net/releases/">Releases</a>
- <a href="https://tesseras.net/faq/">FAQ</a>
- <a href="https://tesseras.net/subscriptions/">Subscriptions</a>
- <a href="https://tesseras.net/contact/">Contact</a>
-
- </nav>
- <nav class="lang-switch">
-
- <strong>English</strong> | <a href="/pt-br/news/phase2-replication/">Português</a>
-
- </nav>
- </header>
-
- <main>
-
-<article>
- <h2>Phase 2: Memories Survive</h2>
- <p class="news-date">2026-02-14</p>
- <p>A tessera is no longer tied to a single machine. Phase 2 delivers the
-replication layer: data is split into erasure-coded fragments, distributed
-across multiple peers, and automatically repaired when nodes go offline. A
-bilateral reciprocity ledger ensures fair storage exchange — no blockchain, no
-tokens.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-core</strong> (updated) — New replication domain types: <code>FragmentPlan</code>
-(selects fragmentation tier based on tessera size), <code>FragmentId</code> (tessera hash +
-index + shard count + checksum), <code>FragmentEnvelope</code> (fragment with its metadata
-for wire transport), <code>FragmentationTier</code> (Small/Medium/Large), <code>Attestation</code>
-(proof that a node holds a fragment at a given time), and <code>ReplicateAck</code>
-(acknowledgement of fragment receipt). Three new port traits define the
-hexagonal boundaries: <code>DhtPort</code> (find peers, replicate fragments, request
-attestations, ping), <code>FragmentStore</code> (store/read/delete/list/verify fragments),
-and <code>ReciprocityLedger</code> (record storage exchanges, query balances, find best
-peers). Maximum tessera size is 1 GB.</p>
-<p><strong>tesseras-crypto</strong> (updated) — The existing <code>ReedSolomonCoder</code> now powers
-fragment encoding. Data is split into data shards, parity shards are computed
-over them, and the original can be reconstructed from any subset of the
-shards, data or parity, as long as the number of missing shards does not
-exceed the parity count.</p>
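<p>The recovery invariant can be stated as a one-line predicate. This is a hedged sketch of the property, not the <code>ReedSolomonCoder</code> API:</p>

```rust
// With `data_shards` data shards and `parity_shards` parity shards, any
// `data_shards` survivors suffice, so recovery fails only when more than
// `parity_shards` shards are missing.
fn is_recoverable(data_shards: usize, parity_shards: usize, missing: usize) -> bool {
    let total = data_shards + parity_shards;
    missing <= total && total - missing >= data_shards
}
```

<p>Plugging in the tiers described later in the post: Medium (16+8) tolerates up to 8 missing shards, Large (48+24) up to 24.</p>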
-<p><strong>tesseras-storage</strong> (updated) — Two new adapters:</p>
-<ul>
-<li><code>FsFragmentStore</code> — stores fragment data as files on disk
-(<code>{root}/{tessera_hash}/{index:03}.shard</code>) with a SQLite metadata index
-tracking tessera hash, shard index, shard count, checksum, and byte size.
-Verification recomputes the BLAKE3 hash and compares it to the stored
-checksum.</li>
-<li><code>SqliteReciprocityLedger</code> — bilateral storage accounting in SQLite. Each peer
-has a row tracking bytes stored for them and bytes they store for us. The
-<code>balance</code> column is a generated column
-(<code>bytes_they_store_for_us - bytes_stored_for_them</code>). UPSERT ensures atomic
-increment of counters.</li>
-</ul>
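<p>The bilateral accounting can be mirrored with a small in-memory sketch. The real adapter is a SQLite table with a generated <code>balance</code> column and UPSERT increments; this hypothetical <code>Ledger</code> only shows the arithmetic:</p>

```rust
use std::collections::HashMap;

// In-memory analogue of SqliteReciprocityLedger (illustrative only).
#[derive(Default)]
struct PeerAccount {
    bytes_stored_for_them: u64,
    bytes_they_store_for_us: u64,
}

#[derive(Default)]
struct Ledger {
    accounts: HashMap<String, PeerAccount>,
}

impl Ledger {
    // Analogue of the SQL UPSERT: create the row if absent, else increment.
    fn record_stored_for_them(&mut self, peer: &str, bytes: u64) {
        self.accounts.entry(peer.to_string()).or_default().bytes_stored_for_them += bytes;
    }
    fn record_they_store_for_us(&mut self, peer: &str, bytes: u64) {
        self.accounts.entry(peer.to_string()).or_default().bytes_they_store_for_us += bytes;
    }
    // Mirrors the generated column: positive means the peer is net-storing
    // for us; negative means we are carrying a deficit on their behalf.
    fn balance(&self, peer: &str) -> i64 {
        self.accounts.get(peer).map_or(0, |a| {
            a.bytes_they_store_for_us as i64 - a.bytes_stored_for_them as i64
        })
    }
}
```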
-<p>New migration (<code>002_replication.sql</code>) adds tables for fragments, fragment plans,
-holders, holder-fragment mappings, and reciprocity balances.</p>
-<p><strong>tesseras-dht</strong> (updated) — Four new message variants: <code>Replicate</code> (send a
-fragment envelope), <code>ReplicateAck</code> (confirm receipt), <code>AttestRequest</code> (ask a
-node to prove it holds a tessera's fragments), and <code>AttestResponse</code> (return
-attestation with checksums and timestamp). The engine handles these in its
-message dispatch loop.</p>
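<p>The shape of the four new variants, as a sketch with guessed payload fields (the real wire types in <code>tesseras-dht</code> will differ):</p>

```rust
// Hypothetical message shapes; field names and types are assumptions.
enum Message {
    Replicate { envelope: Vec<u8> }, // serialized FragmentEnvelope
    ReplicateAck { tessera_hash: [u8; 32], index: u16 },
    AttestRequest { tessera_hash: [u8; 32] },
    AttestResponse { checksums: Vec<[u8; 32]>, unix_time: u64 },
}

// A dispatch loop would match on the variant, roughly like this:
fn describe(msg: &Message) -> &'static str {
    match msg {
        Message::Replicate { .. } => "replicate",
        Message::ReplicateAck { .. } => "replicate-ack",
        Message::AttestRequest { .. } => "attest-request",
        Message::AttestResponse { .. } => "attest-response",
    }
}
```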
-<p><strong>tesseras-replication</strong> — The new crate, with five modules:</p>
-<ul>
-<li>
-<p><em>Fragment encoding</em> (<code>fragment.rs</code>): <code>encode_tessera()</code> selects the
-fragmentation tier based on size, then calls Reed-Solomon encoding for Medium
-and Large tiers. Three tiers:</p>
-<ul>
-<li><strong>Small</strong> (&lt; 4 MB): whole-file replication to r=7 peers, no erasure coding</li>
-<li><strong>Medium</strong> (4–256 MB): 16 data + 8 parity shards, distributed across r=7
-peers</li>
-<li><strong>Large</strong> (≥ 256 MB): 48 data + 24 parity shards, distributed across r=7
-peers</li>
-</ul>
-</li>
-<li>
-<p><em>Distribution</em> (<code>distributor.rs</code>): subnet diversity filtering limits peers per
-/24 IPv4 subnet (or /48 IPv6 prefix) to avoid correlated failures. If all your
-fragments land on the same rack, a single power outage kills them all.</p>
-</li>
-<li>
-<p><em>Service</em> (<code>service.rs</code>): <code>ReplicationService</code> is the orchestrator.
-<code>replicate_tessera()</code> encodes the data, finds the closest peers via DHT,
-applies subnet diversity, and distributes fragments round-robin.
-<code>receive_fragment()</code> validates the BLAKE3 checksum, checks reciprocity balance
-(rejects if the sender's deficit exceeds the configured threshold), stores the
-fragment, and updates the ledger. <code>handle_attestation_request()</code> lists local
-fragments and computes their checksums as proof of possession.</p>
-</li>
-<li>
-<p><em>Repair</em> (<code>repair.rs</code>): <code>check_tessera_health()</code> requests attestations from
-known holders, falls back to ping for unresponsive nodes, verifies local
-fragment integrity, and returns one of three actions: <code>Healthy</code>,
-<code>NeedsReplication { deficit }</code>, or <code>CorruptLocal { fragment_index }</code>. The
-repair loop runs every 24 hours (with 2-hour jitter) via <code>tokio::select!</code> with
-shutdown integration.</p>
-</li>
-<li>
-<p><em>Configuration</em> (<code>config.rs</code>): <code>ReplicationConfig</code> with defaults for repair
-interval (24h), jitter (2h), concurrent transfers (4), minimum free space (1
-GB), deficit allowance (256 MB), and per-peer storage limit (1 GB).</p>
-</li>
-</ul>
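<p>The subnet-diversity filter from <code>distributor.rs</code> can be sketched for IPv4 with the standard library alone (the post says the real code also groups IPv6 peers by /48 prefix; this simplified version keeps at most <code>max_per_subnet</code> peers from any one /24):</p>

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

// Illustrative /24 diversity filter: keep the first `max_per_subnet`
// peers seen in each /24, dropping the rest to avoid correlated failures.
fn diversify(peers: &[Ipv4Addr], max_per_subnet: usize) -> Vec<Ipv4Addr> {
    let mut per_subnet: HashMap<[u8; 3], usize> = HashMap::new();
    peers
        .iter()
        .copied()
        .filter(|ip| {
            let o = ip.octets();
            let count = per_subnet.entry([o[0], o[1], o[2]]).or_insert(0);
            *count += 1;
            *count <= max_per_subnet
        })
        .collect()
}
```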
-<p><strong>tesd</strong> (updated) — The daemon now opens a SQLite database (<code>db/tesseras.db</code>),
-runs migrations, creates <code>FsFragmentStore</code>, <code>SqliteReciprocityLedger</code>, and
-<code>FsBlobStore</code> instances, wraps the DHT engine in a <code>DhtPortAdapter</code>, builds a
-<code>ReplicationService</code>, and spawns the repair loop as a background task with
-graceful shutdown.</p>
-<p><strong>Testing</strong> — 193 tests across the workspace:</p>
-<ul>
-<li>15 unit tests in tesseras-replication (fragment encoding tiers, checksum
-validation, subnet diversity, repair health checks, service receive/replicate
-flows)</li>
-<li>3 integration tests with real storage (full encode→distribute→receive cycle
-for medium tessera, small whole-file replication, tampered fragment rejection)</li>
-<li>Tests use in-memory SQLite + tempdir fragments with mockall mocks for DHT and
-BlobStore</li>
-<li>Zero clippy warnings, clean formatting</li>
-</ul>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Three-tier fragmentation</strong>: small files don't need erasure coding — the
-overhead isn't worth it. Medium and large files get progressively more parity
-shards. This avoids wasting storage on small tesseras while providing strong
-redundancy for large ones.</li>
-<li><strong>Owner-push distribution</strong>: the tessera owner encodes fragments and pushes
-them to peers, rather than peers pulling. This simplifies the protocol (no
-negotiation phase) and ensures fragments are distributed immediately.</li>
-<li><strong>Bilateral reciprocity without consensus</strong>: each node tracks its own balance
-with each peer locally. No global ledger, no token, no blockchain. If peer A
-stores 500 MB for peer B, peer B should store roughly 500 MB for peer A. Free
-riders lose redundancy gradually — their fragments are deprioritized for
-repair, but never deleted.</li>
-<li><strong>Subnet diversity</strong>: fragments are spread across different network subnets to
-survive correlated failures. A datacenter outage shouldn't take out all copies
-of a tessera.</li>
-<li><strong>Attestation-first health checks</strong>: the repair loop asks holders to prove
-possession (attestation with checksums) before declaring a tessera degraded.
-Only when attestation fails does it fall back to a simple ping. This catches
-silent data corruption, not just node departure.</li>
-</ul>
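<p>The attestation-first decision can be condensed into a small sketch. The variant names mirror the post's <code>repair.rs</code>, but the decision function here is a simplification: the real loop also pings unresponsive holders and verifies local fragments before deciding.</p>

```rust
// Hedged sketch of the health-check outcome; `check` and its parameters
// are hypothetical simplifications of the repair loop described above.
#[derive(Debug, PartialEq)]
enum RepairAction {
    Healthy,
    NeedsReplication { deficit: usize },
    CorruptLocal { fragment_index: u16 },
}

fn check(required: usize, attested: usize, corrupt_local: Option<u16>) -> RepairAction {
    if let Some(fragment_index) = corrupt_local {
        // Local corruption is repaired first: the fragment can be rebuilt
        // from the surviving shards elsewhere in the network.
        RepairAction::CorruptLocal { fragment_index }
    } else if attested >= required {
        RepairAction::Healthy
    } else {
        RepairAction::NeedsReplication { deficit: required - attested }
    }
}
```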
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 3: API and Apps</strong> — Flutter mobile/desktop app via
-flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node</li>
-<li><strong>Phase 4: Resilience and Scale</strong> — ML-DSA post-quantum signatures, advanced
-NAT traversal, Shamir's Secret Sharing for heirs, packaging for
-Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser, institutional
-curation, genealogy integration, physical media export</li>
-</ul>
-<p>Nodes can find each other and keep each other's memories alive. Next, we give
-people a way to hold their memories in their hands.</p>
-
-</article>
-
- </main>
-
- <footer>
- <p>&copy; 2026 Tesseras Project. <a href="/atom.xml">News Feed</a> · <a href="https://git.sr.ht/~ijanc/tesseras">Source</a></p>
- </footer>
-</body>
-</html>