From 01c17c68277ff88fab812920732d9bbe9e6bb571 Mon Sep 17 00:00:00 2001
From: murilo ijanc
Date: Tue, 24 Mar 2026 21:45:05 -0300
Subject: Simplify website to single-page

Remove old Zola-generated content, keep only the essential
landing page with about, contact, and license sections.
---
news/atom.xml | 1991 --------------------
news/atom.xml.gz | Bin 39593 -> 0 bytes
news/cli-daemon-rpc/index.html | 142 --
news/cli-daemon-rpc/index.html.gz | Bin 3219 -> 0 bytes
news/hello-world/index.html | 80 -
news/hello-world/index.html.gz | Bin 1316 -> 0 bytes
news/index.html | 198 --
news/index.html.gz | Bin 2530 -> 0 bytes
news/packaging-archlinux/index.html | 123 --
news/packaging-archlinux/index.html.gz | Bin 2143 -> 0 bytes
news/packaging-debian/index.html | 157 --
news/packaging-debian/index.html.gz | Bin 2548 -> 0 bytes
news/phase0-foundation/index.html | 125 --
news/phase0-foundation/index.html.gz | Bin 2680 -> 0 bytes
news/phase1-basic-network/index.html | 173 --
news/phase1-basic-network/index.html.gz | Bin 4056 -> 0 bytes
news/phase2-replication/index.html | 201 --
news/phase2-replication/index.html.gz | Bin 4319 -> 0 bytes
news/phase3-api-and-apps/index.html | 163 --
news/phase3-api-and-apps/index.html.gz | Bin 3930 -> 0 bytes
news/phase4-encryption-sealed/index.html | 178 --
news/phase4-encryption-sealed/index.html.gz | Bin 4246 -> 0 bytes
news/phase4-institutional-onboarding/index.html | 239 ---
news/phase4-institutional-onboarding/index.html.gz | Bin 5539 -> 0 bytes
news/phase4-nat-traversal/index.html | 228 ---
news/phase4-nat-traversal/index.html.gz | Bin 5328 -> 0 bytes
news/phase4-performance-tuning/index.html | 164 --
news/phase4-performance-tuning/index.html.gz | Bin 3854 -> 0 bytes
news/phase4-shamir-heir-recovery/index.html | 199 --
news/phase4-shamir-heir-recovery/index.html.gz | Bin 4586 -> 0 bytes
news/phase4-storage-deduplication/index.html | 217 ---
news/phase4-storage-deduplication/index.html.gz | Bin 4807 -> 0 bytes
news/phase4-wasm-browser-verification/index.html | 192 --
.../phase4-wasm-browser-verification/index.html.gz | Bin 4765 -> 0 bytes
news/reed-solomon/index.html | 200 --
news/reed-solomon/index.html.gz | Bin 4736 -> 0 bytes
36 files changed, 4970 deletions(-)
delete mode 100644 news/atom.xml
delete mode 100644 news/atom.xml.gz
delete mode 100644 news/cli-daemon-rpc/index.html
delete mode 100644 news/cli-daemon-rpc/index.html.gz
delete mode 100644 news/hello-world/index.html
delete mode 100644 news/hello-world/index.html.gz
delete mode 100644 news/index.html
delete mode 100644 news/index.html.gz
delete mode 100644 news/packaging-archlinux/index.html
delete mode 100644 news/packaging-archlinux/index.html.gz
delete mode 100644 news/packaging-debian/index.html
delete mode 100644 news/packaging-debian/index.html.gz
delete mode 100644 news/phase0-foundation/index.html
delete mode 100644 news/phase0-foundation/index.html.gz
delete mode 100644 news/phase1-basic-network/index.html
delete mode 100644 news/phase1-basic-network/index.html.gz
delete mode 100644 news/phase2-replication/index.html
delete mode 100644 news/phase2-replication/index.html.gz
delete mode 100644 news/phase3-api-and-apps/index.html
delete mode 100644 news/phase3-api-and-apps/index.html.gz
delete mode 100644 news/phase4-encryption-sealed/index.html
delete mode 100644 news/phase4-encryption-sealed/index.html.gz
delete mode 100644 news/phase4-institutional-onboarding/index.html
delete mode 100644 news/phase4-institutional-onboarding/index.html.gz
delete mode 100644 news/phase4-nat-traversal/index.html
delete mode 100644 news/phase4-nat-traversal/index.html.gz
delete mode 100644 news/phase4-performance-tuning/index.html
delete mode 100644 news/phase4-performance-tuning/index.html.gz
delete mode 100644 news/phase4-shamir-heir-recovery/index.html
delete mode 100644 news/phase4-shamir-heir-recovery/index.html.gz
delete mode 100644 news/phase4-storage-deduplication/index.html
delete mode 100644 news/phase4-storage-deduplication/index.html.gz
delete mode 100644 news/phase4-wasm-browser-verification/index.html
delete mode 100644 news/phase4-wasm-browser-verification/index.html.gz
delete mode 100644 news/reed-solomon/index.html
delete mode 100644 news/reed-solomon/index.html.gz
diff --git a/news/atom.xml b/news/atom.xml
deleted file mode 100644
index 660ecac..0000000
--- a/news/atom.xml
+++ /dev/null
@@ -1,1991 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
-	<title>Tesseras - News</title>
-	<subtitle>P2P network for preserving human memories across millennia</subtitle>
-	<link rel="self" type="application/atom+xml" href="https://tesseras.net/news/atom.xml"/>
-	<generator uri="https://www.getzola.org/">Zola</generator>
-	<updated>2026-02-16T10:00:00+00:00</updated>
-	<id>https://tesseras.net/news/atom.xml</id>
-	<entry xml:lang="en">
-		<title>Packaging Tesseras for Debian</title>
-		<published>2026-02-16T10:00:00+00:00</published>
-		<updated>2026-02-16T10:00:00+00:00</updated>
-		<author>
-			<name>Unknown</name>
-		</author>
-		<link rel="alternate" type="text/html" href="https://tesseras.net/news/packaging-debian/"/>
-		<id>https://tesseras.net/news/packaging-debian/</id>
-		<content type="html">
- <p>Tesseras now ships a <code>.deb</code> package for Debian and Ubuntu. This post walks
-through building and installing the package from source using <code>cargo-deb</code>.</p>
-<h2 id="prerequisites">Prerequisites</h2>
-<p>You need a working Rust toolchain and the required system libraries:</p>
-<pre><code data-lang="sh">sudo apt install build-essential pkg-config libsqlite3-dev
-rustup toolchain install stable
-cargo install cargo-deb
-</code></pre>
-<h2 id="building">Building</h2>
-<p>Clone the repository and run the <code>just deb</code> recipe:</p>
-<pre><code data-lang="sh">git clone https://git.sr.ht/~ijanc/tesseras
-cd tesseras
-just deb
-</code></pre>
-<p>This recipe does three things:</p>
-<ol>
-<li><strong>Compiles</strong> <code>tesd</code> (the daemon) and <code>tes</code> (the CLI) in release mode with
-<code>cargo build --release</code></li>
-<li><strong>Generates shell completions</strong> for bash, zsh, and fish from the <code>tes</code> binary</li>
-<li><strong>Packages</strong> everything into a <code>.deb</code> file with
-<code>cargo deb -p tesseras-daemon --no-build</code></li>
-</ol>
-<p>The result is a <code>.deb</code> file in <code>target/debian/</code>.</p>
-<h2 id="installing">Installing</h2>
-<pre><code data-lang="sh">sudo dpkg -i target/debian/tesseras-daemon_*.deb
-</code></pre>
-<p>If there are missing dependencies, fix them with:</p>
-<pre><code data-lang="sh">sudo apt install -f
-</code></pre>
-<h2 id="post-install-setup">Post-install setup</h2>
-<p>The <code>postinst</code> script automatically creates a <code>tesseras</code> system user and the
-data directory <code>/var/lib/tesseras</code>. To use the CLI without sudo, add yourself to
-the group:</p>
-<pre><code data-lang="sh">sudo usermod -aG tesseras $USER
-</code></pre>
-<p>Log out and back in, then start the daemon:</p>
-<pre><code data-lang="sh">sudo systemctl enable --now tesd
-</code></pre>
-<h2 id="what-the-package-includes">What the package includes</h2>
-<table><thead><tr><th>Path</th><th>Description</th></tr></thead><tbody>
-<tr><td><code>/usr/bin/tesd</code></td><td>Full node daemon</td></tr>
-<tr><td><code>/usr/bin/tes</code></td><td>CLI client</td></tr>
-<tr><td><code>/etc/tesseras/config.toml</code></td><td>Default configuration (marked as conffile)</td></tr>
-<tr><td><code>/lib/systemd/system/tesd.service</code></td><td>Systemd unit with security hardening</td></tr>
-<tr><td>Shell completions</td><td>bash, zsh, and fish</td></tr>
-</tbody></table>
-<h2 id="how-cargo-deb-works">How cargo-deb works</h2>
-<p>The packaging metadata lives in <code>crates/tesseras-daemon/Cargo.toml</code> under
-<code>[package.metadata.deb]</code>. This section defines:</p>
-<ul>
-<li><strong>depends</strong> — runtime dependencies: <code>libc6</code> and <code>libsqlite3-0</code></li>
-<li><strong>assets</strong> — files to include in the package (binaries, config, systemd unit,
-shell completions)</li>
-<li><strong>conf-files</strong> — files treated as configuration (preserved on upgrade)</li>
-<li><strong>maintainer-scripts</strong> — <code>postinst</code> and <code>postrm</code> scripts in
-<code>packaging/debian/scripts/</code></li>
-<li><strong>systemd-units</strong> — automatic systemd integration</li>
-</ul>
-<p>The <code>postinst</code> script creates the <code>tesseras</code> system user and data directory on
-install. The <code>postrm</code> script cleans up the user, group, and data directory only
-on <code>purge</code> (not on simple removal).</p>
-<h2 id="systemd-hardening">Systemd hardening</h2>
-<p>The <code>tesd.service</code> unit includes security hardening directives:</p>
-<pre><code data-lang="ini">NoNewPrivileges=true
-ProtectSystem=strict
-ProtectHome=true
-ReadWritePaths=/var/lib/tesseras
-PrivateTmp=true
-PrivateDevices=true
-ProtectKernelTunables=true
-ProtectControlGroups=true
-RestrictSUIDSGID=true
-MemoryDenyWriteExecute=true
-</code></pre>
-<p>The daemon runs as the unprivileged <code>tesseras</code> user and can only write to
-<code>/var/lib/tesseras</code>.</p>
-<h2 id="deploying-to-a-remote-server">Deploying to a remote server</h2>
-<p>The justfile includes a <code>deploy</code> recipe for pushing the <code>.deb</code> to a remote host:</p>
-<pre><code data-lang="sh">just deploy bootstrap1.tesseras.net
-</code></pre>
-<p>This builds the <code>.deb</code>, copies it via <code>scp</code>, installs it with <code>dpkg -i</code>, and
-restarts the <code>tesd</code> service.</p>
-<h2 id="updating">Updating</h2>
-<p>After pulling new changes, simply run <code>just deb</code> again and reinstall:</p>
-<pre><code data-lang="sh">git pull
-just deb
-sudo dpkg -i target/debian/tesseras-daemon_*.deb
-</code></pre>
-		</content>
-	</entry>
-	<entry xml:lang="en">
-		<title>Packaging Tesseras for Arch Linux</title>
-		<published>2026-02-16T09:00:00+00:00</published>
-		<updated>2026-02-16T09:00:00+00:00</updated>
-		<author>
-			<name>Unknown</name>
-		</author>
-		<link rel="alternate" type="text/html" href="https://tesseras.net/news/packaging-archlinux/"/>
-		<id>https://tesseras.net/news/packaging-archlinux/</id>
-		<content type="html">
- <p>Tesseras now ships a PKGBUILD for Arch Linux. This post walks through building
-and installing the package from source.</p>
-<h2 id="prerequisites">Prerequisites</h2>
-<p>You need a working Rust toolchain and the base-devel group:</p>
-<pre><code data-lang="sh">sudo pacman -S --needed base-devel sqlite
-rustup toolchain install stable
-</code></pre>
-<h2 id="building">Building</h2>
-<p>Clone the repository and run the <code>just arch</code> recipe:</p>
-<pre><code data-lang="sh">git clone https://git.sr.ht/~ijanc/tesseras
-cd tesseras
-just arch
-</code></pre>
-<p>This runs <code>makepkg -sf</code> inside <code>packaging/archlinux/</code>, which:</p>
-<ol>
-<li><strong>prepare</strong> — fetches Cargo dependencies with <code>cargo fetch --locked</code></li>
-<li><strong>build</strong> — compiles <code>tesd</code> and <code>tes</code> (the CLI) in release mode</li>
-<li><strong>package</strong> — installs binaries, systemd service, sysusers/tmpfiles configs,
-shell completions (bash, zsh, fish), and a default config file</li>
-</ol>
-<p>The result is a <code>.pkg.tar.zst</code> file in <code>packaging/archlinux/</code>.</p>
-<h2 id="installing">Installing</h2>
-<pre><code data-lang="sh">sudo pacman -U packaging/archlinux/tesseras-*.pkg.tar.zst
-</code></pre>
-<h2 id="post-install-setup">Post-install setup</h2>
-<p>The package creates a <code>tesseras</code> system user and group automatically via
-systemd-sysusers. To use the CLI without sudo, add yourself to the group:</p>
-<pre><code data-lang="sh">sudo usermod -aG tesseras $USER
-</code></pre>
-<p>Log out and back in, then start the daemon:</p>
-<pre><code data-lang="sh">sudo systemctl enable --now tesd
-</code></pre>
-<h2 id="what-the-package-includes">What the package includes</h2>
-<table><thead><tr><th>Path</th><th>Description</th></tr></thead><tbody>
-<tr><td><code>/usr/bin/tesd</code></td><td>Full node daemon</td></tr>
-<tr><td><code>/usr/bin/tes</code></td><td>CLI client</td></tr>
-<tr><td><code>/etc/tesseras/config.toml</code></td><td>Default configuration (marked as backup)</td></tr>
-<tr><td><code>/usr/lib/systemd/system/tesd.service</code></td><td>Systemd unit with security hardening</td></tr>
-<tr><td><code>/usr/lib/sysusers.d/tesseras.conf</code></td><td>System user definition</td></tr>
-<tr><td><code>/usr/lib/tmpfiles.d/tesseras.conf</code></td><td>Data directory <code>/var/lib/tesseras</code></td></tr>
-<tr><td>Shell completions</td><td>bash, zsh, and fish</td></tr>
-</tbody></table>
-<h2 id="pkgbuild-details">PKGBUILD details</h2>
-<p>The PKGBUILD builds directly from the local git checkout rather than downloading
-a source tarball. The <code>TESSERAS_ROOT</code> environment variable points makepkg to the
-workspace root. Cargo's target directory is set to <code>$srcdir/target</code> to keep
-build artifacts inside the makepkg sandbox.</p>
-<p>The package depends only on <code>sqlite</code> at runtime and <code>cargo</code> at build time.</p>
-<h2 id="updating">Updating</h2>
-<p>After pulling new changes, simply run <code>just arch</code> again and reinstall:</p>
-<pre><code data-lang="sh">git pull
-just arch
-sudo pacman -U packaging/archlinux/tesseras-*.pkg.tar.zst
-</code></pre>
-		</content>
-	</entry>
-	<entry xml:lang="en">
-		<title>Phase 4: Storage Deduplication</title>
-		<published>2026-02-15T23:00:00+00:00</published>
-		<updated>2026-02-15T23:00:00+00:00</updated>
-		<author>
-			<name>Unknown</name>
-		</author>
-		<link rel="alternate" type="text/html" href="https://tesseras.net/news/phase4-storage-deduplication/"/>
-		<id>https://tesseras.net/news/phase4-storage-deduplication/</id>
-		<content type="html">
- <p>When multiple tesseras share the same photo, the same audio clip, or the same
-fragment data, the old storage layer kept separate copies of each. On a node
-storing thousands of tesseras for the network, this duplication adds up fast.
-Phase 4 continues with storage deduplication: a content-addressable store (CAS)
-that ensures every unique piece of data is stored exactly once on disk,
-regardless of how many tesseras reference it.</p>
-<p>The design is simple and proven: hash the content with BLAKE3, use the hash as
-the filename, and maintain a reference count in SQLite. When two tesseras
-include the same 5 MB photo, one file exists on disk with a refcount of 2. When
-one tessera is deleted, the refcount drops to 1 and the file stays. When the
-last reference is released, a periodic sweep cleans up the orphan.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>CAS schema migration</strong> (<code>tesseras-storage/migrations/004_dedup.sql</code>) — Three
-new tables:</p>
-<ul>
-<li><code>cas_objects</code> — tracks every object in the store: BLAKE3 hash (primary key),
-byte size, reference count, and creation timestamp</li>
-<li><code>blob_refs</code> — maps logical blob identifiers (tessera hash + memory hash +
-filename) to CAS hashes, replacing the old filesystem path convention</li>
-<li><code>fragment_refs</code> — maps logical fragment identifiers (tessera hash + fragment
-index) to CAS hashes, replacing the old <code>fragments/</code> directory layout</li>
-</ul>
-<p>Indexes on the hash columns ensure O(1) lookups during reads and reference
-counting.</p>
-<p><strong>CasStore</strong> (<code>tesseras-storage/src/cas.rs</code>) — The core content-addressable
-storage engine. Files are stored under a two-level prefix directory:
-<code><root>/<2-char-hex-prefix>/<full-hash>.blob</code>. The store provides five
-operations:</p>
-<ul>
-<li><code>put(hash, data)</code> — writes data to disk if not already present, increments
-refcount. Returns whether a dedup hit occurred.</li>
-<li><code>get(hash)</code> — reads data from disk by hash</li>
-<li><code>release(hash)</code> — decrements refcount. If it reaches zero, the on-disk file is
-deleted immediately.</li>
-<li><code>contains(hash)</code> — checks existence without reading</li>
-<li><code>ref_count(hash)</code> — returns the current reference count</li>
-</ul>
-<p>All operations are atomic within a single SQLite transaction. The refcount is
-the source of truth — if the refcount says the object exists, the file must be
-on disk.</p>
-<p><strong>CAS-backed FsBlobStore</strong> (<code>tesseras-storage/src/blob.rs</code>) — Rewritten to
-delegate all storage to the CAS. When a blob is written, its BLAKE3 hash is
-computed and passed to <code>cas.put()</code>. A row in <code>blob_refs</code> maps the logical path
-(tessera + memory + filename) to the CAS hash. Reads look up the CAS hash via
-<code>blob_refs</code> and fetch from <code>cas.get()</code>. Deleting a tessera releases all its blob
-references in a single transaction.</p>
-<p><strong>CAS-backed FsFragmentStore</strong> (<code>tesseras-storage/src/fragment.rs</code>) — Same
-pattern for erasure-coded fragments. Each fragment's BLAKE3 checksum is already
-computed during Reed-Solomon encoding, so it's used directly as the CAS key.
-Fragment verification now checks the CAS hash instead of recomputing from
-scratch — if the CAS says the data is intact, it is.</p>
-<p><strong>Sweep garbage collector</strong> (<code>cas.rs:sweep()</code>) — A periodic GC pass that handles
-three edge cases the normal refcount path can't:</p>
-<ol>
-<li><strong>Orphan files</strong> — files on disk with no corresponding row in <code>cas_objects</code>.
-Can happen after a crash mid-write. Files younger than 1 hour are skipped
-(grace period for in-flight writes); older orphans are deleted.</li>
-<li><strong>Leaked refcounts</strong> — rows in <code>cas_objects</code> with refcount zero that weren't
-cleaned up (e.g., if the process died between decrementing and deleting).
-These rows are removed.</li>
-<li><strong>Idempotent</strong> — running sweep twice produces the same result.</li>
-</ol>
-<p>The sweep is wired into the existing repair loop in <code>tesseras-replication</code>, so
-it runs automatically every 24 hours alongside fragment health checks.</p>
-<p><strong>Migration from old layout</strong> (<code>tesseras-storage/src/migration.rs</code>) — A
-copy-first migration strategy that moves data from the old directory-based
-layout (<code>blobs/<tessera>/<memory>/<file></code> and
-<code>fragments/<tessera>/<index>.shard</code>) into the CAS. The migration:</p>
-<ol>
-<li>Checks the storage version in <code>storage_meta</code> (version 1 = old layout, version
-2 = CAS)</li>
-<li>Walks the old <code>blobs/</code> and <code>fragments/</code> directories</li>
-<li>Computes BLAKE3 hashes and inserts into CAS via <code>put()</code> — duplicates are
-automatically deduplicated</li>
-<li>Creates corresponding <code>blob_refs</code> / <code>fragment_refs</code> entries</li>
-<li>Removes old directories only after all data is safely in CAS</li>
-<li>Updates the storage version to 2</li>
-</ol>
-<p>The migration runs on daemon startup, is idempotent (safe to re-run), and
-reports statistics: files migrated, duplicates found, bytes saved.</p>
-<p><strong>Prometheus metrics</strong> (<code>tesseras-storage/src/metrics.rs</code>) — Ten new metrics for
-observability:</p>
-<table><thead><tr><th>Metric</th><th>Description</th></tr></thead><tbody>
-<tr><td><code>cas_objects_total</code></td><td>Total unique objects in the CAS</td></tr>
-<tr><td><code>cas_bytes_total</code></td><td>Total bytes stored</td></tr>
-<tr><td><code>cas_dedup_hits_total</code></td><td>Number of writes that found an existing object</td></tr>
-<tr><td><code>cas_bytes_saved_total</code></td><td>Bytes saved by deduplication</td></tr>
-<tr><td><code>cas_gc_refcount_deletions_total</code></td><td>Objects deleted when refcount reached zero</td></tr>
-<tr><td><code>cas_gc_sweep_orphans_cleaned_total</code></td><td>Orphan files removed by sweep</td></tr>
-<tr><td><code>cas_gc_sweep_leaked_refs_cleaned_total</code></td><td>Leaked refcount rows cleaned</td></tr>
-<tr><td><code>cas_gc_sweep_skipped_young_total</code></td><td>Young orphans skipped (grace period)</td></tr>
-<tr><td><code>cas_gc_sweep_duration_seconds</code></td><td>Time spent in sweep GC</td></tr>
-</tbody></table>
-<p><strong>Property-based tests</strong> — Two proptest tests verify CAS invariants under random
-inputs:</p>
-<ul>
-<li><code>refcount_matches_actual_refs</code> — after N random put/release operations, the
-refcount always matches the actual number of outstanding references</li>
-<li><code>cas_path_is_deterministic</code> — the same hash always produces the same
-filesystem path</li>
-</ul>
-<p><strong>Integration test updates</strong> — All integration tests across <code>tesseras-core</code>,
-<code>tesseras-replication</code>, <code>tesseras-embedded</code>, and <code>tesseras-cli</code> updated for the
-new CAS-backed constructors. Tamper-detection tests updated to work with the CAS
-directory layout.</p>
-<p>347 tests pass across the workspace. Clippy clean with <code>-D warnings</code>.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>BLAKE3 as CAS key</strong>: the content hash we already compute for integrity
-verification doubles as the deduplication key. No additional hashing step —
-the hash computed during <code>create</code> or <code>replicate</code> is reused as the CAS address.</li>
-<li><strong>SQLite refcount over filesystem reflinks</strong>: we considered using
-filesystem-level copy-on-write (reflinks on btrfs/XFS), but that would tie
-Tesseras to specific filesystems. SQLite refcounting works on any filesystem,
-including FAT32 on cheap USB drives and ext4 on Raspberry Pis.</li>
-<li><strong>Two-level hex prefix directories</strong>: storing all CAS objects in a flat
-directory would slow down filesystems with millions of entries. The
-<code><2-char prefix>/</code> split limits any single directory to ~65k entries before a
-second prefix level is needed. This matches the approach used by Git's object
-store.</li>
-<li><strong>Grace period for orphan files</strong>: the sweep GC skips files younger than 1
-hour to avoid deleting objects that are being written by a concurrent
-operation. This is a pragmatic choice — it trades a small window of potential
-orphans for crash safety without requiring fsync or two-phase commit.</li>
-<li><strong>Copy-first migration</strong>: the migration copies data to CAS before removing old
-directories. If the process is interrupted, the old data is still intact and
-migration can be re-run. This is slower than moving files but guarantees no
-data loss.</li>
-<li><strong>Sweep in repair loop</strong>: rather than adding a separate GC timer, the CAS
-sweep piggybacks on the existing 24-hour repair loop. This keeps the daemon
-simple — one background maintenance cycle handles both fragment health and
-storage cleanup.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued</strong> — security audits, OS packaging (Alpine, Arch, Debian,
-OpenBSD, FreeBSD)</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration
-(FamilySearch, Ancestry), physical media export (M-DISC, microfilm, acid-free
-paper with QR), AI-assisted context</li>
-</ul>
-<p>Storage deduplication completes the storage efficiency story for Tesseras. A
-node that stores fragments for thousands of users — common for institutional
-nodes and always-on full nodes — now pays the disk cost of unique data only.
-Combined with Reed-Solomon erasure coding (which already minimizes redundancy at
-the network level), the system achieves efficient storage at both the local and
-distributed layers.</p>
-		</content>
-	</entry>
-	<entry xml:lang="en">
-		<title>Phase 4: Institutional Node Onboarding</title>
-		<published>2026-02-15T22:00:00+00:00</published>
-		<updated>2026-02-15T22:00:00+00:00</updated>
-		<author>
-			<name>Unknown</name>
-		</author>
-		<link rel="alternate" type="text/html" href="https://tesseras.net/news/phase4-institutional-onboarding/"/>
-		<id>https://tesseras.net/news/phase4-institutional-onboarding/</id>
-		<content type="html">
- <p>A P2P network of individuals is fragile. Hard drives die, phones get lost,
-people lose interest. The long-term survival of humanity's memories depends on
-institutions — libraries, archives, museums, universities — that measure their
-lifetimes in centuries. Phase 4 continues with institutional node onboarding:
-verified organizations can now pledge storage, run searchable indexes, and
-participate in the network with a distinct identity.</p>
-<p>The design follows a principle of trust but verify: institutions identify
-themselves via DNS TXT records (the same mechanism used by SPF, DKIM, and DMARC
-for email), pledge a storage budget, and receive reciprocity exemptions so they
-can store fragments for others without expecting anything in return. In
-exchange, the network treats their fragments as higher-quality replicas and
-limits over-reliance on any single institution through diversity constraints.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>Capability bits</strong> (<code>tesseras-core/src/network.rs</code>) — Two new flags added to
-the <code>Capabilities</code> bitfield: <code>INSTITUTIONAL</code> (bit 7) and <code>SEARCH_INDEX</code> (bit 8).
-A new <code>institutional_default()</code> constructor returns the full Phase 2 capability
-set plus these two bits and <code>RELAY</code>. Normal nodes advertise <code>phase2_default()</code>
-which lacks institutional flags. Serialization roundtrip tests verify the new
-bits survive MessagePack encoding.</p>
-<p><strong>Search types</strong> (<code>tesseras-core/src/search.rs</code>) — Three new domain types for
-the search subsystem:</p>
-<ul>
-<li><code>SearchFilters</code> — query parameters: <code>memory_type</code>, <code>visibility</code>, <code>language</code>,
-<code>date_range</code>, <code>geo</code> (bounding box), <code>page</code>, <code>page_size</code></li>
-<li><code>SearchHit</code> — a single result: content hash plus a <code>MetadataExcerpt</code> (title,
-description, memory type, creation date, visibility, language, tags)</li>
-<li><code>GeoFilter</code> — bounding box with <code>min_lat</code>, <code>max_lat</code>, <code>min_lon</code>, <code>max_lon</code> for
-spatial queries</li>
-</ul>
-<p>All types derive <code>Serialize</code>/<code>Deserialize</code> for wire transport and
-<code>Clone</code>/<code>Debug</code> for diagnostics.</p>
-<p><strong>Institutional daemon config</strong> (<code>tesd/src/config.rs</code>) — A new <code>[institutional]</code>
-TOML section with <code>domain</code> (the DNS domain to verify), <code>pledge_bytes</code> (storage
-commitment in bytes), and <code>search_enabled</code> (toggle for the FTS5 index). The
-<code>to_dht_config()</code> method now sets <code>Capabilities::institutional_default()</code> when
-institutional config is present, so institutional nodes advertise the right
-capability bits in Pong responses.</p>
-<p><strong>DNS TXT verification</strong> (<code>tesd/src/institutional.rs</code>) — Async DNS resolution
-using <code>hickory-resolver</code> to verify institutional identity. The daemon looks up
-<code>_tesseras.<domain></code> TXT records and parses key-value fields: <code>v</code> (version),
-<code>node</code> (hex-encoded node ID), and <code>pledge</code> (storage pledge in bytes).
-Verification checks:</p>
-<ol>
-<li>A TXT record exists at <code>_tesseras.<domain></code></li>
-<li>The <code>node</code> field matches the daemon's own node ID</li>
-<li>The <code>pledge</code> field is present and valid</li>
-</ol>
-<p>On startup, the daemon attempts DNS verification. If it succeeds, the node runs
-with institutional capabilities. If it fails, the node logs a warning and
-downgrades to a normal full node — no crash, no manual intervention.</p>
-<p><strong>CLI setup command</strong> (<code>tesseras-cli/src/institutional.rs</code>) — A new
-<code>institutional setup</code> subcommand that guides operators through onboarding:</p>
-<ol>
-<li>Reads the node's identity from the data directory</li>
-<li>Prompts for domain name and pledge size</li>
-<li>Generates the exact DNS TXT record to add:
-<code>v=tesseras1 node=<hex> pledge=<bytes></code></li>
-<li>Writes the institutional section to the daemon's config file</li>
-<li>Prints next steps: add the TXT record, restart the daemon</li>
-</ol>
-<p><strong>SQLite search index</strong> (<code>tesseras-storage</code>) — A migration
-(<code>003_institutional.sql</code>) that creates three structures:</p>
-<ul>
-<li><code>search_content</code> — an FTS5 virtual table for full-text search over tessera
-metadata (title, description, creator, tags, language)</li>
-<li><code>geo_index</code> — an R-tree virtual table for spatial bounding-box queries over
-latitude/longitude</li>
-<li><code>geo_map</code> — a mapping table linking R-tree row IDs to content hashes</li>
-</ul>
-<p>The <code>SqliteSearchIndex</code> adapter implements the <code>SearchIndex</code> port trait with
-<code>index_tessera()</code> (insert/update) and <code>search()</code> (query with filters). FTS5
-queries support natural language search; geo queries use R-tree <code>INTERSECT</code> for
-bounding box lookups. Results are ranked by FTS5 relevance score.</p>
-<p>The migration also adds an <code>is_institutional</code> column to the <code>reciprocity</code> table,
-handled idempotently via <code>pragma_table_info</code> checks (SQLite's
-<code>ALTER TABLE ADD COLUMN</code> lacks <code>IF NOT EXISTS</code>).</p>
-<p><strong>Reciprocity bypass</strong> (<code>tesseras-replication/src/service.rs</code>) — Institutional
-nodes are exempt from reciprocity checks. When <code>receive_fragment()</code> is called,
-if the sender's node ID is marked as institutional in the reciprocity ledger,
-the balance check is skipped entirely. This means institutions can store
-fragments for the entire network without needing to "earn" credits first — their
-DNS-verified identity and storage pledge serve as their credential.</p>
-<p><strong>Node-type diversity constraint</strong> (<code>tesseras-replication/src/distributor.rs</code>) —
-A new <code>apply_institutional_diversity()</code> function limits how many replicas of a
-single tessera can land on institutional nodes. The cap is
-<code>ceil(replication_factor / 3.5)</code> — with the default <code>r=7</code>, at most 2 of 7
-replicas go to institutions. This prevents the network from becoming dependent
-on a small number of large institutions: if a university's servers go down, at
-least 5 replicas remain on independent nodes.</p>
-<p><strong>DHT message extensions</strong> (<code>tesseras-dht/src/message.rs</code>) — Two new message
-variants:</p>
-<table><thead><tr><th>Message</th><th>Purpose</th></tr></thead><tbody>
-<tr><td><code>Search</code></td><td>Client sends query string, filters, and page number</td></tr>
-<tr><td><code>SearchResult</code></td><td>Institutional node responds with hits and total count</td></tr>
-</tbody></table>
-<p>The <code>encode()</code> function was switched from positional to named MessagePack
-serialization (<code>rmp_serde::to_vec_named</code>) to handle <code>SearchFilters</code>' optional
-fields correctly — positional encoding breaks when <code>skip_serializing_if</code> omits
-fields.</p>
-<p><strong>Prometheus metrics</strong> (<code>tesd/src/metrics.rs</code>) — Eight institutional-specific
-metrics:</p>
-<ul>
-<li><code>tesseras_institutional_pledge_bytes</code> — configured storage pledge</li>
-<li><code>tesseras_institutional_stored_bytes</code> — actual bytes stored</li>
-<li><code>tesseras_institutional_pledge_utilization_ratio</code> — stored/pledged ratio</li>
-<li><code>tesseras_institutional_peers_served</code> — unique peers served fragments</li>
-<li><code>tesseras_institutional_search_index_total</code> — tesseras in the search index</li>
-<li><code>tesseras_institutional_search_queries_total</code> — search queries received</li>
-<li><code>tesseras_institutional_dns_verification_status</code> — 1 if DNS verified, 0
-otherwise</li>
-<li><code>tesseras_institutional_dns_verification_last</code> — Unix timestamp of last
-verification</li>
-</ul>
-<p><strong>Integration tests</strong> — Two tests in
-<code>tesseras-replication/tests/integration.rs</code>:</p>
-<ul>
-<li><code>institutional_peer_bypasses_reciprocity</code> — verifies that an institutional
-peer with a massive deficit (-999,999 balance) is still allowed to store
-fragments, while a non-institutional peer with the same deficit is rejected</li>
-<li><code>institutional_node_accepts_fragment_despite_deficit</code> — full async test using
-<code>ReplicationService</code> with mocked DHT, fragment store, reciprocity ledger, and
-blob store: sends a fragment from an institutional sender and verifies it's
-accepted</li>
-</ul>
-<p>322 tests pass across the workspace. Clippy clean with <code>-D warnings</code>.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>DNS TXT over PKI or blockchain</strong>: DNS is universally deployed, universally
-understood, and already used for domain verification (SPF, DKIM, Let's
-Encrypt). Institutions already manage DNS. No certificate authority, no token,
-no on-chain transaction — just a TXT record. If an institution loses control
-of their domain, the verification naturally fails on the next check.</li>
-<li><strong>Graceful degradation on DNS failure</strong>: if DNS verification fails at startup,
-the daemon downgrades to a normal full node instead of refusing to start. This
-prevents operational incidents — a DNS misconfiguration shouldn't take a node
-offline.</li>
-<li><strong>Diversity cap at <code>ceil(r / 3.5)</code></strong>: with <code>r=7</code>, at most 2 replicas go to
-institutions. This is conservative — it ensures the network never depends on
-institutions for majority quorum, while still benefiting from their storage
-capacity and uptime.</li>
-<li><strong>Named MessagePack encoding</strong>: switching from positional to named encoding
-adds ~15% overhead per message but eliminates a class of serialization bugs
-when optional fields are present. The DHT is not bandwidth-constrained at the
-message level, so the tradeoff is worth it.</li>
-<li><strong>Reciprocity exemption over credit grants</strong>: rather than giving institutions
-a large initial credit balance (which is arbitrary and needs tuning), we
-exempt them entirely. Their DNS-verified identity and public storage pledge
-replace the bilateral reciprocity mechanism.</li>
-<li><strong>FTS5 + R-tree in SQLite</strong>: full-text search and spatial indexing are built
-into SQLite as loadable extensions. No external search engine (Elasticsearch,
-Meilisearch) needed. This keeps the deployment a single binary with a single
-database file — critical for institutional operators who may not have a DevOps
-team.</li>
-</ul>
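The diversity cap above is simple enough to sketch directly. This is an illustrative, std-only function (the name `institutional_replica_cap` is an assumption, not the codebase's API), showing why `r=7` yields at most 2 institutional replicas:

```rust
// Hypothetical sketch of the diversity cap: with replication factor r,
// at most ceil(r / 3.5) replicas may land on institutional nodes, so
// individual nodes always hold the majority of fragments.
fn institutional_replica_cap(replication_factor: u32) -> u32 {
    (replication_factor as f64 / 3.5).ceil() as u32
}
```

With `r=7` this returns 2, leaving 5 of 7 replicas on individual nodes.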
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued</strong> — storage deduplication (content-addressable store with
-BLAKE3 keying), security audits, OS packaging (Alpine, Arch, Debian, OpenBSD,
-FreeBSD)</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration
-(FamilySearch, Ancestry), physical media export (M-DISC, microfilm, acid-free
-paper with QR), AI-assisted context</li>
-</ul>
-<p>Institutional onboarding closes a critical gap in Tesseras' preservation model.
-Individual nodes provide grassroots resilience — thousands of devices across the
-globe, each storing a few fragments. Institutional nodes provide anchoring —
-organizations with professional infrastructure, redundant storage, and
-multi-decade operational horizons. Together, they form a network where memories
-can outlast both individual devices and individual institutions.</p>
-
- Phase 4: Performance Tuning
- 2026-02-15T20:00:00+00:00
- https://tesseras.net/news/phase4-performance-tuning/
-
- <p>A P2P network that can traverse NATs but chokes on its own I/O is not much use.
-Phase 4 continues with performance tuning: centralizing database configuration,
-caching fragment blobs in memory, managing QUIC connection lifecycles, and
-eliminating unnecessary disk reads from the attestation hot path.</p>
-<p>The guiding principle was the same as the rest of Tesseras: do the simplest
-thing that actually works. No custom allocators, no lock-free data structures,
-no premature complexity. A centralized <code>StorageConfig</code>, an LRU cache, a
-connection reaper, and a targeted fix to avoid re-reading blobs that were
-already checksummed.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>Centralized SQLite configuration</strong> (<code>tesseras-storage/src/database.rs</code>) — A
-new <code>StorageConfig</code> struct and <code>open_database()</code> / <code>open_in_memory()</code> functions
-that apply all SQLite pragmas in one place: WAL journal mode, foreign keys,
-synchronous mode (NORMAL by default, FULL for unstable hardware like RPi + SD
-card), busy timeout, page cache size, and WAL autocheckpoint interval.
-Previously, each call site opened a connection and applied pragmas ad hoc. Now
-the daemon, CLI, and tests all go through the same path. 7 tests covering
-foreign keys, busy timeout, journal mode, migrations, synchronous modes, and
-on-disk WAL file creation.</p>
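The centralization idea can be sketched without pulling in rusqlite: build the pragma statements from one config struct so every call site applies the same set. Field names and defaults here are assumptions for illustration, not the actual `StorageConfig` API:

```rust
// Illustrative sketch: one struct, one list of PRAGMA statements.
// A real open_database() would execute each statement on the fresh
// rusqlite connection before handing it out.
struct StorageConfig {
    synchronous_full: bool,  // FULL for unstable hardware (RPi + SD card)
    busy_timeout_ms: u32,
    cache_size_pages: i64,   // negative values mean KiB in SQLite
    wal_autocheckpoint: u32,
}

impl Default for StorageConfig {
    fn default() -> Self {
        // Defaults are assumptions, not the daemon's real values.
        Self {
            synchronous_full: false,
            busy_timeout_ms: 5000,
            cache_size_pages: -2000,
            wal_autocheckpoint: 1000,
        }
    }
}

fn pragma_statements(cfg: &StorageConfig) -> Vec<String> {
    vec![
        "PRAGMA journal_mode = WAL".into(),
        "PRAGMA foreign_keys = ON".into(),
        format!(
            "PRAGMA synchronous = {}",
            if cfg.synchronous_full { "FULL" } else { "NORMAL" }
        ),
        format!("PRAGMA busy_timeout = {}", cfg.busy_timeout_ms),
        format!("PRAGMA cache_size = {}", cfg.cache_size_pages),
        format!("PRAGMA wal_autocheckpoint = {}", cfg.wal_autocheckpoint),
    ]
}
```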
-<p><strong>LRU fragment cache</strong> (<code>tesseras-storage/src/cache.rs</code>) — A
-<code>CachedFragmentStore</code> that wraps any <code>FragmentStore</code> with a byte-aware LRU
-cache. Fragment blobs are cached on read and invalidated on write or delete.
-When the cache exceeds its configured byte limit, the least recently used
-entries are evicted. The cache is transparent: it implements <code>FragmentStore</code>
-itself, so the rest of the stack doesn't know it's there. Optional Prometheus
-metrics track hits, misses, and current byte usage. 3 tests: cache hit avoids
-inner read, store invalidates cache, eviction when over max bytes.</p>
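The byte-aware eviction policy is the interesting part, so here is a minimal std-only sketch of it (string keys and a `VecDeque` recency list are simplifications; the real cache wraps `FragmentStore` and tracks Prometheus metrics):

```rust
use std::collections::{HashMap, VecDeque};

// Minimal byte-budgeted LRU sketch: entries are evicted by total bytes,
// not entry count, because fragment blobs vary wildly in size.
struct ByteLru {
    max_bytes: usize,
    used_bytes: usize,
    map: HashMap<String, Vec<u8>>,
    order: VecDeque<String>, // front = least recently used
}

impl ByteLru {
    fn new(max_bytes: usize) -> Self {
        Self { max_bytes, used_bytes: 0, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
        if self.map.contains_key(key) {
            // Refresh recency: move the key to the back of the queue.
            self.order.retain(|k| k != key);
            self.order.push_back(key.to_string());
        }
        self.map.get(key)
    }

    fn put(&mut self, key: String, blob: Vec<u8>) {
        self.invalidate(&key); // writes replace any stale cached copy
        self.used_bytes += blob.len();
        self.order.push_back(key.clone());
        self.map.insert(key, blob);
        // Evict least recently used entries until under the byte budget.
        while self.used_bytes > self.max_bytes {
            match self.order.pop_front() {
                Some(old) => {
                    if let Some(b) = self.map.remove(&old) {
                        self.used_bytes -= b.len();
                    }
                }
                None => break,
            }
        }
    }

    // Called on store or delete so readers never see stale blobs.
    fn invalidate(&mut self, key: &str) {
        if let Some(b) = self.map.remove(key) {
            self.used_bytes -= b.len();
            self.order.retain(|k| k != key);
        }
    }
}
```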
-<p><strong>Prometheus storage metrics</strong> (<code>tesseras-storage/src/metrics.rs</code>) — A
-<code>StorageMetrics</code> struct with three counters/gauges: <code>fragment_cache_hits</code>,
-<code>fragment_cache_misses</code>, and <code>fragment_cache_bytes</code>. Registered with the
-Prometheus registry and wired into the fragment cache via <code>with_metrics()</code>.</p>
-<p><strong>Attestation hot path fix</strong> (<code>tesseras-replication/src/service.rs</code>) — The
-attestation flow previously read every fragment blob from disk and recomputed
-its BLAKE3 checksum. Since <code>list_fragments()</code> already returns <code>FragmentId</code> with
-a stored checksum, the fix is trivial: use <code>frag.checksum</code> instead of
-<code>blake3::hash(&data)</code>. This eliminates one disk read per fragment during
-attestation — for a tessera with 100 fragments, that's 100 fewer reads. A test
-with <code>expect_read_fragment().never()</code> verifies no blob reads happen during
-attestation.</p>
-<p><strong>QUIC connection pool lifecycle</strong> (<code>tesseras-net/src/quinn_transport.rs</code>) — A
-<code>PoolConfig</code> struct controlling max connections, idle timeout, and reaper
-interval. <code>PooledConnection</code> wraps each <code>quinn::Connection</code> with a <code>last_used</code>
-timestamp. When the pool reaches capacity, the oldest idle connection is evicted
-before opening a new one. A background reaper task (Tokio spawn) periodically
-closes connections that have been idle beyond the timeout. 4 new pool metrics:
-<code>tesseras_conn_pool_size</code>, <code>pool_hits_total</code>, <code>pool_misses_total</code>,
-<code>pool_evictions_total</code>.</p>
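The lifecycle rules (evict oldest when at capacity, reap idle connections periodically) can be sketched with a plain map of last-used timestamps. This is an illustrative reduction, not the real `DashMap`-based pool:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of the pool's two lifecycle rules; names are illustrative.
struct Pool {
    max_connections: usize,
    idle_timeout: Duration,
    last_used: HashMap<String, Instant>, // addr -> last activity
}

impl Pool {
    // Record activity on a connection, evicting the longest-idle
    // connection first if the pool is at capacity.
    fn touch(&mut self, addr: &str) {
        if self.last_used.len() >= self.max_connections && !self.last_used.contains_key(addr) {
            if let Some(oldest) = self
                .last_used
                .iter()
                .min_by_key(|(_, t)| **t)
                .map(|(a, _)| a.clone())
            {
                self.last_used.remove(&oldest);
            }
        }
        self.last_used.insert(addr.to_string(), Instant::now());
    }

    // What the background reaper task would do on each tick:
    // drop connections idle beyond the timeout, returning how many.
    fn reap_idle(&mut self, now: Instant) -> usize {
        let before = self.last_used.len();
        let timeout = self.idle_timeout;
        self.last_used.retain(|_, t| now.duration_since(*t) < timeout);
        before - self.last_used.len()
    }
}
```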
-<p><strong>Daemon integration</strong> (<code>tesd/src/config.rs</code>, <code>main.rs</code>) — A new <code>[performance]</code>
-section in the TOML config with fields for SQLite cache size, synchronous mode,
-busy timeout, fragment cache size, max connections, idle timeout, and reaper
-interval. The daemon's <code>main()</code> now calls <code>open_database()</code> with the configured
-<code>StorageConfig</code>, wraps <code>FsFragmentStore</code> with <code>CachedFragmentStore</code>, and binds
-QUIC with the configured <code>PoolConfig</code>. The direct <code>rusqlite</code> dependency was
-removed from the daemon crate.</p>
-<p><strong>CLI migration</strong> (<code>tesseras-cli/src/commands/init.rs</code>, <code>create.rs</code>) — Both
-<code>init</code> and <code>create</code> commands now use <code>tesseras_storage::open_database()</code> with
-the default <code>StorageConfig</code> instead of opening raw <code>rusqlite</code> connections. The
-<code>rusqlite</code> dependency was removed from the CLI crate.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Decorator pattern for caching</strong>: <code>CachedFragmentStore</code> wraps
-<code>Box<dyn FragmentStore></code> and implements <code>FragmentStore</code> itself. This means
-caching is opt-in, composable, and invisible to consumers. The daemon enables
-it; tests can skip it.</li>
-<li><strong>Byte-aware eviction</strong>: the LRU cache tracks total bytes, not entry count.
-Fragment blobs vary wildly in size (a 4KB text fragment vs a 2MB photo shard),
-so counting entries would give a misleading picture of memory usage.</li>
-<li><strong>No connection pool crate</strong>: instead of pulling in a generic pool library,
-the connection pool is a thin wrapper around
-<code>DashMap<SocketAddr, PooledConnection></code> with a Tokio reaper. QUIC connections
-are multiplexed, so the "pool" is really about lifecycle management (idle
-cleanup, max connections) rather than borrowing/returning.</li>
-<li><strong>Stored checksums over re-reads</strong>: the attestation fix is intentionally
-minimal — one line changed, one disk read removed per fragment. The checksums
-were already stored in SQLite by <code>store_fragment()</code>, they just weren't being
-used.</li>
-<li><strong>Centralized pragma configuration</strong>: a single <code>StorageConfig</code> struct replaces
-scattered <code>PRAGMA</code> calls. The <code>sqlite_synchronous_full</code> flag exists
-specifically for Raspberry Pi deployments where the kernel can crash and lose
-un-checkpointed WAL transactions.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued</strong> — Shamir's Secret Sharing for heirs, sealed tesseras
-(time-lock encryption), security audits, institutional node onboarding,
-storage deduplication, OS packaging</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>With performance tuning in place, Tesseras handles the common case efficiently:
-fragment reads hit the LRU cache, attestation skips disk I/O, idle QUIC
-connections are reaped automatically, and SQLite is configured consistently
-across the entire stack. The next steps focus on cryptographic features (Shamir,
-time-lock) and hardening for production deployment.</p>
-
- Phase 4: Verify Without Installing Anything
- 2026-02-15T20:00:00+00:00
- https://tesseras.net/news/phase4-wasm-browser-verification/
-
- <p>Trust shouldn't require installing software. If someone sends you a tessera — a
-bundle of preserved memories — you should be able to verify it's genuine and
-unmodified without downloading an app, creating an account, or trusting a
-server. That's what <code>tesseras-wasm</code> delivers: drag a tessera archive into a web
-page, and cryptographic verification happens entirely in your browser.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-wasm</strong> — A Rust crate that compiles to WebAssembly via wasm-pack,
-exposing four stateless functions to JavaScript. The crate depends on
-<code>tesseras-core</code> for manifest parsing and calls cryptographic primitives directly
-(blake3, ed25519-dalek) rather than depending on <code>tesseras-crypto</code>, which pulls
-in C-based post-quantum libraries that don't compile to
-<code>wasm32-unknown-unknown</code>.</p>
-<p><code>parse_manifest</code> takes raw MANIFEST bytes (UTF-8 plain text, not MessagePack),
-delegates to <code>tesseras_core::manifest::Manifest::parse()</code>, and returns a JSON
-string with the creator's Ed25519 public key, signature file paths, and a list
-of files with their expected BLAKE3 hashes, sizes, and MIME types. Internal
-structs (<code>ManifestJson</code>, <code>CreatorPubkey</code>, <code>SignatureFiles</code>, <code>FileEntry</code>) are
-serialized with serde_json. The ML-DSA public key and signature file fields are
-present in the JSON contract but set to <code>null</code> — ready for when post-quantum
-signing is implemented on the native side.</p>
-<p><code>hash_blake3</code> computes a BLAKE3 hash of arbitrary bytes and returns a
-64-character hex string. It's called once per file in the tessera to verify
-integrity against the MANIFEST.</p>
-<p><code>verify_ed25519</code> takes a message, a 64-byte signature, and a 32-byte public key,
-constructs an <code>ed25519_dalek::VerifyingKey</code>, and returns whether the signature
-is valid. Length validation returns descriptive errors ("Ed25519 public key must
-be 32 bytes") rather than panicking.</p>
-<p><code>verify_ml_dsa</code> is a stub that returns an error explaining ML-DSA verification
-is not yet available. This is deliberate: the <code>ml-dsa</code> crate on crates.io is
-v0.1.0-rc.7 (pre-release), and <code>tesseras-crypto</code> uses <code>pqcrypto-dilithium</code>
-(C-based CRYSTALS-Dilithium) which is byte-incompatible with FIPS 204 ML-DSA.
-Both sides need to use the same pure Rust implementation before
-cross-verification works. Ed25519 verification is sufficient — every tessera is
-Ed25519-signed.</p>
-<p>All four functions use a two-layer pattern for testability: inner functions
-return <code>Result<T, String></code> and are tested natively, while thin <code>#[wasm_bindgen]</code>
-wrappers convert errors to <code>JsError</code>. This avoids <code>JsError::new()</code> panicking on
-non-WASM targets during testing.</p>
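A reduced sketch of that two-layer pattern, using the key-length validation mentioned above (function names are illustrative, not the crate's actual exports):

```rust
// Inner layer: plain Rust, Result<T, String>, testable on any target.
fn verify_key_inner(public_key: &[u8]) -> Result<[u8; 32], String> {
    public_key
        .try_into()
        .map_err(|_| format!("Ed25519 public key must be 32 bytes, got {}", public_key.len()))
}

// On the wasm32 target the outer wrapper would look roughly like:
//
//   #[wasm_bindgen]
//   pub fn verify_key(public_key: &[u8]) -> Result<Vec<u8>, JsError> {
//       verify_key_inner(public_key)
//           .map(|k| k.to_vec())
//           .map_err(|e| JsError::new(&e))
//   }
//
// Keeping JsError out of the inner layer means `cargo test` on the host
// never constructs a JS type, so nothing panics off-WASM.
```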
-<p>The compiled WASM binary is 109 KB raw and 44 KB gzipped — well under the 200 KB
-budget. wasm-opt applies <code>-Oz</code> optimization after wasm-pack builds with
-<code>opt-level = "z"</code>, LTO, and single codegen unit.</p>
-<p><strong>@tesseras/verify</strong> — A TypeScript npm package (<code>crates/tesseras-wasm/js/</code>)
-that orchestrates browser-side verification. The public API is a single
-function:</p>
-<pre><code data-lang="typescript">async function verifyTessera(
- archive: Uint8Array,
- onProgress?: (current: number, total: number, file: string) => void
-): Promise<VerificationResult>
-</code></pre>
-<p>The <code>VerificationResult</code> type provides everything a UI needs: overall validity,
-tessera hash, creator public keys, signature status (valid/invalid/missing for
-both Ed25519 and ML-DSA), per-file integrity results with expected and actual
-hashes, a list of unexpected files not in the MANIFEST, and an errors array.</p>
-<p>Archive unpacking (<code>unpack.ts</code>) handles three formats: gzip-compressed tar
-(detected by <code>\x1f\x8b</code> magic bytes, decompressed with fflate then parsed as
-tar), ZIP (<code>PK\x03\x04</code> magic, unpacked with fflate's <code>unzipSync</code>), and raw tar
-(<code>ustar</code> at offset 257). A <code>normalizePath</code> function strips the leading
-<code>tessera-<hash>/</code> prefix so internal paths match MANIFEST entries.</p>
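The detection and normalization rules above are simple byte checks. Here they are ported to Rust for illustration (the real code is TypeScript in `unpack.ts`; the example hash in the test is made up):

```rust
// Format sniffing by magic bytes, as described above.
fn detect_format(buf: &[u8]) -> &'static str {
    if buf.starts_with(&[0x1f, 0x8b]) {
        "tar.gz" // gzip magic
    } else if buf.starts_with(b"PK\x03\x04") {
        "zip"
    } else if buf.len() >= 262 && &buf[257..262] == b"ustar" {
        "tar" // ustar magic at offset 257 of the first header block
    } else {
        "unknown"
    }
}

// Strip the leading `tessera-<hash>/` directory so entry paths line up
// with MANIFEST entries.
fn normalize_path(path: &str) -> &str {
    match path.split_once('/') {
        Some((first, rest)) if first.starts_with("tessera-") => rest,
        _ => path,
    }
}
```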
-<p>Verification runs in a Web Worker (<code>worker.ts</code>) to keep the UI thread
-responsive. The worker initializes the WASM module, unpacks the archive, parses
-the MANIFEST, verifies the Ed25519 signature against the creator's public key,
-then hashes each file with BLAKE3 and compares against expected values. Progress
-messages stream back to the main thread after each file. If any signature is
-invalid, verification stops early without hashing files — failing fast on the
-most critical check.</p>
-<p>The archive is transferred to the worker with zero-copy
-(<code>worker.postMessage({ type: "verify", archive }, [archive.buffer])</code>) to avoid
-duplicating potentially large tessera files in memory.</p>
-<p><strong>Build pipeline</strong> — Three new justfile targets: <code>wasm-build</code> runs wasm-pack
-with <code>--target web --release</code> and optimizes with wasm-opt; <code>wasm-size</code> reports
-raw and gzipped binary size; <code>test-wasm</code> runs the native test suite.</p>
-<p><strong>Tests</strong> — 9 native unit tests cover BLAKE3 hashing (empty input, known value),
-Ed25519 verification (valid signature, invalid signature, wrong key, bad key
-length), and MANIFEST parsing (valid manifest, invalid UTF-8, garbage input). 3
-WASM integration tests run in headless Chrome via
-<code>wasm-pack test --headless --chrome</code>, verifying that <code>hash_blake3</code>,
-<code>verify_ed25519</code>, and <code>parse_manifest</code> work correctly when compiled to
-<code>wasm32-unknown-unknown</code>.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>No tesseras-crypto dependency</strong>: the WASM crate calls blake3 and
-ed25519-dalek directly. <code>tesseras-crypto</code> depends on <code>pqcrypto-kyber</code> (C-based
-ML-KEM via pqcrypto-traits) which requires a C compiler toolchain and doesn't
-target wasm32. By depending only on pure Rust crates, the WASM build has zero
-C dependencies and compiles cleanly to WebAssembly.</li>
-<li><strong>ML-DSA deferred, not faked</strong>: rather than silently skipping post-quantum
-verification, the stub returns an explicit error. This ensures that if a
-tessera contains an ML-DSA signature, the verification result will report
-<code>ml_dsa: "missing"</code> rather than pretending it was checked. The JS orchestrator
-handles this gracefully — a tessera is valid if Ed25519 passes and ML-DSA is
-missing (not yet implemented on either side).</li>
-<li><strong>Inner function pattern</strong>: <code>JsError</code> cannot be constructed on non-WASM
-targets (it panics). Splitting each function into
-<code>foo_inner() -> Result<T, String></code> and <code>foo() -> Result<T, JsError></code> lets the
-native test suite exercise all logic without touching JavaScript types. The
-WASM integration tests in headless Chrome test the full <code>#[wasm_bindgen]</code>
-surface.</li>
-<li><strong>Web Worker isolation</strong>: cryptographic operations (especially BLAKE3 over
-large media files) can take hundreds of milliseconds. Running in a Worker
-prevents UI jank. The streaming progress protocol
-(<code>{ type: "progress", current, total, file }</code>) lets the UI show a progress bar
-during verification of tesseras with many files.</li>
-<li><strong>Zero-copy transfer</strong>: <code>archive.buffer</code> is transferred to the Worker, not
-copied. For a 50 MB tessera archive, this avoids doubling memory usage during
-verification.</li>
-<li><strong>Plain text MANIFEST, not MessagePack</strong>: the WASM crate parses the same
-plain-text MANIFEST format as the CLI. This is by design — the MANIFEST is the
-tessera's Rosetta Stone, readable by anyone with a text editor. The
-<code>rmp-serde</code> dependency in the Cargo.toml is not used and will be removed.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4: Resilience and Scale</strong> — OS packaging (Alpine, Arch, Debian,
-FreeBSD, OpenBSD), CI on SourceHut and GitHub Actions, security audits,
-browser-based tessera explorer at tesseras.net using @tesseras/verify</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — Public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>Verification no longer requires trust in software. A tessera archive dropped
-into a browser is verified with the same cryptographic rigor as the CLI — same
-BLAKE3 hashes, same Ed25519 signatures, same MANIFEST parser. The difference is
-that now anyone can do it.</p>
-
- Phase 4: Punching Through NATs
- 2026-02-15T18:00:00+00:00
- https://tesseras.net/news/phase4-nat-traversal/
-
- <p>Most people's devices sit behind a NAT — a network address translator that lets
-them reach the internet but prevents incoming connections. For a P2P network,
-this is an existential problem: if two nodes behind NATs can't talk to each
-other, the network fragments. Phase 4 continues with a full NAT traversal stack:
-STUN-based discovery, coordinated hole punching, and relay fallback.</p>
-<p>The approach follows the same pattern as most battle-tested P2P systems (WebRTC,
-BitTorrent, IPFS): try the cheapest option first, escalate only when necessary.
-Direct connectivity costs nothing. Hole punching costs a few coordinated
-packets. Relaying costs sustained bandwidth from a third party. Tesseras tries
-them in that order.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>NatType classification</strong> (<code>tesseras-core/src/network.rs</code>) — A new <code>NatType</code>
-enum (Public, Cone, Symmetric, Unknown) added to the core domain layer. This
-type is shared across the entire stack: the STUN client writes it, the DHT
-advertises it in Pong messages, and the punch coordinator reads it to decide
-whether hole punching is even worth attempting (Cone-to-Cone works ~80% of the
-time; Symmetric-to-Symmetric almost never works).</p>
-<p><strong>STUN client</strong> (<code>tesseras-net/src/stun.rs</code>) — A minimal STUN implementation
-(RFC 5389 Binding Request/Response) that discovers a node's external address.
-The codec encodes 20-byte binding requests with a random transaction ID and
-decodes XOR-MAPPED-ADDRESS responses. The <code>discover_nat()</code> function queries
-multiple STUN servers in parallel (Google, Cloudflare by default), compares the
-mapped addresses, and classifies the NAT type:</p>
-<ul>
-<li>Same mapped address from all servers, equal to the local address → <strong>Public</strong> (no NAT)</li>
-<li>Same mapped address from all servers, different from the local address → <strong>Cone</strong> (hole punching works)</li>
-<li>Different mapped addresses → <strong>Symmetric</strong> (hole punching unreliable)</li>
-<li>No responses → <strong>Unknown</strong></li>
-</ul>
-<p>Retries with exponential backoff and configurable timeouts. 12 tests covering
-codec roundtrips, all classification paths, and async loopback queries.</p>
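The classification step reduces to comparing the mapped addresses against each other and against the local bind address. A std-only sketch of that decision (assuming, as is standard for STUN, that "Public" means the servers all report our local address unchanged):

```rust
use std::net::SocketAddr;

#[derive(Debug, PartialEq)]
enum NatType { Public, Cone, Symmetric, Unknown }

// Illustrative reduction of discover_nat()'s final decision: the real
// client also handles retries, timeouts, and parallel queries.
fn classify(local: SocketAddr, mapped: &[SocketAddr]) -> NatType {
    match mapped.split_first() {
        None => NatType::Unknown, // no STUN responses at all
        Some((first, rest)) => {
            if rest.iter().any(|m| m != first) {
                NatType::Symmetric // mapping differs per destination
            } else if *first == local {
                NatType::Public // no translation happening
            } else {
                NatType::Cone // stable mapping: hole punching viable
            }
        }
    }
}
```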
-<p><strong>Signed punch coordination</strong> (<code>tesseras-net/src/punch.rs</code>) — Ed25519 signing
-and verification for <code>PunchIntro</code>, <code>RelayRequest</code>, and <code>RelayMigrate</code> messages.
-Every introduction is signed by the initiator with a 30-second timestamp window,
-preventing replay attacks (where an attacker resends an old introduction to
-redirect traffic). The payload format is <code>target || external_addr || timestamp</code>
-— changing any field invalidates the signature. 6 unit tests plus 3
-property-based tests with proptest (arbitrary node IDs, ports, and session
-tokens).</p>
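A sketch of the signed payload and the freshness check (field widths and encoding are assumptions; the actual Ed25519 sign/verify calls via ed25519-dalek are omitted):

```rust
// Bytes covered by the initiator's signature: target || external_addr
// || timestamp. Changing any field invalidates the signature.
fn punch_payload(target: &[u8; 32], external_addr: &str, timestamp: u64) -> Vec<u8> {
    let mut payload = Vec::new();
    payload.extend_from_slice(target);                   // node we want to reach
    payload.extend_from_slice(external_addr.as_bytes()); // our STUN-discovered address
    payload.extend_from_slice(&timestamp.to_be_bytes()); // freshness
    payload
}

// The verifier's 30-second window: introductions outside it are
// rejected even if the signature checks out, bounding replay.
fn timestamp_fresh(message_ts: u64, now: u64) -> bool {
    now.abs_diff(message_ts) <= 30
}
```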
-<p><strong>Relay session manager</strong> (<code>tesseras-net/src/relay.rs</code>) — Manages transparent
-UDP relay sessions between NATed peers. Each session has a random 16-byte token;
-peers prefix their packets with the token, the relay strips it and forwards.
-Features:</p>
-<ul>
-<li>Bidirectional forwarding (A→R→B and B→R→A)</li>
-<li>Rate limiting: 256 KB/s for reciprocal peers, 64 KB/s for non-reciprocal</li>
-<li>10-minute maximum duration for bootstrap (non-reciprocal) sessions</li>
-<li>Address migration: when a peer's IP changes (Wi-Fi to cellular), a signed
-<code>RelayMigrate</code> updates the session without tearing it down</li>
-<li>Idle cleanup with configurable timeout</li>
-<li>8 unit tests plus 2 property-based tests</li>
-</ul>
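The forwarding rule at the heart of the session manager is small: validate the 16-byte token prefix, strip it, forward the rest, and drop anything too short. A std-only sketch (names and the tier helper are illustrative):

```rust
const TOKEN_LEN: usize = 16;

// Token-prefixed forwarding: returns the payload to forward if the
// packet carries the session's token, None if it should be dropped.
fn strip_token<'a>(packet: &'a [u8], session_token: &[u8; TOKEN_LEN]) -> Option<&'a [u8]> {
    if packet.len() < TOKEN_LEN {
        return None; // too short to even carry a token
    }
    let (token, payload) = packet.split_at(TOKEN_LEN);
    (token == session_token.as_slice()).then_some(payload)
}

// Bandwidth tier from the reciprocity relationship described above:
// 256 KB/s for reciprocal peers, 64 KB/s for bootstrap sessions.
fn rate_limit_bytes_per_sec(reciprocal: bool) -> u64 {
    if reciprocal { 256 * 1024 } else { 64 * 1024 }
}
```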
-<p><strong>DHT message extensions</strong> (<code>tesseras-dht/src/message.rs</code>) — Seven new message
-variants added to the DHT protocol:</p>
-<table><thead><tr><th>Message</th><th>Purpose</th></tr></thead><tbody>
-<tr><td><code>PunchIntro</code></td><td>"I want to connect to node X, here's my signed external address"</td></tr>
-<tr><td><code>PunchRequest</code></td><td>Introducer forwards the request to the target</td></tr>
-<tr><td><code>PunchReady</code></td><td>Target confirms readiness, sends its external address</td></tr>
-<tr><td><code>RelayRequest</code></td><td>"Create a relay session to node X"</td></tr>
-<tr><td><code>RelayOffer</code></td><td>Relay responds with its address and session token</td></tr>
-<tr><td><code>RelayClose</code></td><td>Tear down a relay session</td></tr>
-<tr><td><code>RelayMigrate</code></td><td>Update session after network change</td></tr>
-</tbody></table>
-<p>The <code>Pong</code> message was extended with NAT metadata: <code>nat_type</code>,
-<code>relay_slots_available</code>, and <code>relay_bandwidth_used_kbps</code>. All new fields use
-<code>#[serde(default)]</code> for backward compatibility — old nodes ignore what they
-don't recognize, new nodes fall back to defaults. 9 new serialization roundtrip
-tests.</p>
-<p><strong>NatHandler trait and dispatch</strong> (<code>tesseras-dht/src/engine.rs</code>) — A new
-<code>NatHandler</code> async trait (5 methods) injected into the DHT engine, following the
-same dependency injection pattern as the existing <code>ReplicationHandler</code>. The
-engine's message dispatch loop now routes all punch/relay messages to the
-handler. This keeps the DHT engine protocol-agnostic while allowing the NAT
-traversal logic to live in <code>tesseras-net</code>.</p>
-<p><strong>Mobile reconnection types</strong> (<code>tesseras-embedded/src/reconnect.rs</code>) — A
-three-phase reconnection state machine for mobile devices:</p>
-<ol>
-<li><strong>QuicMigration</strong> (0-2s) — try QUIC connection migration for all active peers</li>
-<li><strong>ReStun</strong> (2-5s) — re-discover external address via STUN</li>
-<li><strong>ReEstablish</strong> (5-10s) — reconnect peers that migration couldn't save</li>
-</ol>
-<p>Peers are reconnected in priority order: bootstrap nodes first, then nodes
-holding our fragments, then nodes whose fragments we hold, then general DHT
-neighbors. A new <code>NetworkChanged</code> event variant was added to the FFI event
-stream so the Flutter app can show reconnection progress.</p>
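The phase schedule above maps cleanly to elapsed time since the network change. A sketch of that mapping (what happens after the 10-second budget, here `GaveUp`, is an assumption):

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum ReconnectPhase { QuicMigration, ReStun, ReEstablish, GaveUp }

// Which phase the state machine should be in, given time since the
// network change was detected.
fn phase_at(elapsed: Duration) -> ReconnectPhase {
    match elapsed.as_secs() {
        0..=1 => ReconnectPhase::QuicMigration, // 0-2s: QUIC connection migration
        2..=4 => ReconnectPhase::ReStun,        // 2-5s: re-discover external address
        5..=9 => ReconnectPhase::ReEstablish,   // 5-10s: reconnect remaining peers
        _ => ReconnectPhase::GaveUp,
    }
}
```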
-<p><strong>Daemon NAT configuration</strong> (<code>tesd/src/config.rs</code>) — A new <code>[nat]</code> section in
-the TOML config with STUN server list, relay toggle, max relay sessions,
-bandwidth limits (reciprocal vs bootstrap), and idle timeout. All fields have
-sensible defaults; relay is disabled by default.</p>
-<p><strong>Prometheus metrics</strong> (<code>tesseras-net/src/metrics.rs</code>) — 16 metrics across four
-subsystems:</p>
-<ul>
-<li><strong>STUN</strong>: requests, failures, latency histogram</li>
-<li><strong>Punch</strong>: attempts/successes/failures (by NAT type pair), latency histogram</li>
-<li><strong>Relay</strong>: active sessions, total sessions, bytes forwarded, idle timeouts,
-rate limit hits</li>
-<li><strong>Reconnect</strong>: network changes, attempts/successes by phase, duration
-histogram</li>
-</ul>
-<p>6 tests verifying registration, increment, label cardinality, and
-double-registration detection.</p>
-<p><strong>Integration tests</strong> — Two end-to-end tests using <code>MemTransport</code> (in-memory
-simulated network):</p>
-<ul>
-<li><code>punch_integration.rs</code> — Full 3-node hole-punch flow: A sends signed
-<code>PunchIntro</code> to introducer I, I verifies and forwards <code>PunchRequest</code> to B, B
-verifies the original signature and sends <code>PunchReady</code> back, A and B exchange
-messages directly. Also tests that a bad signature is correctly rejected.</li>
-<li><code>relay_integration.rs</code> — Full 3-node relay flow: A requests relay from R, R
-creates session and sends <code>RelayOffer</code> to both peers, A and B exchange
-token-prefixed packets through R, A migrates to a new address mid-session, A
-closes the session, and the test verifies the session is torn down and further
-forwarding fails.</li>
-</ul>
-<p><strong>Property tests</strong> — 7 proptest-based tests covering: signature round-trips for
-all three signed message types (arbitrary node IDs, ports, tokens), NAT
-classification determinism (same inputs always produce same output), STUN
-binding request validity, session token uniqueness, and relay rejection of
-too-short packets.</p>
-<p><strong>Justfile targets</strong> — <code>just test-nat</code> runs all NAT traversal tests across
-<code>tesseras-net</code> and <code>tesseras-dht</code>. <code>just test-chaos</code> is a placeholder for future
-Docker Compose chaos tests with <code>tc netem</code>.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>STUN over TURN</strong>: we implement STUN (discovery) and custom relay rather than
-full TURN. TURN requires authenticated allocation and is designed for media
-relay; our relay is simpler — token-prefixed UDP forwarding with rate limits.
-This keeps the protocol minimal and avoids depending on external TURN servers.</li>
-<li><strong>Signatures on introductions</strong>: every <code>PunchIntro</code> is signed by the
-initiator. Without this, an attacker could send forged introductions to
-redirect a node's hole-punch attempts to an attacker-controlled address (a
-reflection attack). The 30-second timestamp window limits replay.</li>
-<li><strong>Reciprocal bandwidth tiers</strong>: relay nodes give 4x more bandwidth (256 vs 64
-KB/s) to peers with good reciprocity scores. This incentivizes nodes to store
-fragments for others — if you contribute, you get better relay service when
-you need it.</li>
-<li><strong>Backward-compatible Pong extension</strong>: new NAT fields in <code>Pong</code> use
-<code>#[serde(default)]</code> and <code>Option<T></code>. Old nodes that don't understand these
-fields simply skip them during deserialization. No protocol version bump
-needed.</li>
-<li><strong>NatHandler as async trait</strong>: the NAT traversal logic is injected into the
-DHT engine via a trait, just like <code>ReplicationHandler</code>. This keeps the DHT
-engine focused on routing and peer management, and allows the NAT
-implementation to be swapped or disabled without touching core DHT code.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued</strong> — performance tuning (connection pooling, fragment
-caching, SQLite WAL), security audits, institutional node onboarding, OS
-packaging</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>With NAT traversal, Tesseras can connect nodes regardless of their network
-topology. Public nodes talk directly. Cone-NATed nodes punch through with an
-introducer's help. Symmetric-NATed or firewalled nodes relay through willing
-peers. The network adapts to the real world, where most devices are behind a NAT
-and network conditions change constantly.</p>
-
- CLI Meets Network: Publish, Fetch, and Status Commands
- 2026-02-15T00:00:00+00:00
- https://tesseras.net/news/cli-daemon-rpc/
-
- <p>Until now the CLI operated in isolation: create a tessera, verify it, export it,
-list what you have. Everything stayed on your machine. With this release, <code>tes</code>
-gains three commands that bridge the gap between local storage and the P2P
-network — <code>publish</code>, <code>fetch</code>, and <code>status</code> — by talking to a running <code>tesd</code> over
-a Unix socket.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong><code>tesseras-rpc</code> crate</strong> — A new shared crate that both the CLI and daemon
-depend on. It defines the RPC protocol using MessagePack serialization with
-length-prefixed framing (4-byte big-endian size header, 64 MiB max). Three
-request types (<code>Publish</code>, <code>Fetch</code>, <code>Status</code>) and their corresponding responses.
-A sync <code>DaemonClient</code> handles the Unix socket connection with configurable
-timeouts. The protocol is deliberately simple — one request, one response,
-connection closed — to keep the implementation auditable.</p>
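The length-prefixed framing described above can be sketched in a few lines. This is an illustrative stand-in, not the `tesseras-rpc` source: `write_frame` and `read_frame` are hypothetical names, and the payload here is treated as opaque bytes rather than MessagePack.

```rust
use std::io::{self, Read, Write};

/// Maximum frame size from the protocol description: 64 MiB.
const MAX_FRAME: u32 = 64 * 1024 * 1024;

/// Write one frame: a 4-byte big-endian size header, then the payload.
pub fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
    if payload.len() > MAX_FRAME as usize {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "frame too large"));
    }
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(payload)
}

/// Read one frame back, rejecting oversized headers before allocating.
pub fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut header = [0u8; 4];
    r.read_exact(&mut header)?;
    let len = u32::from_be_bytes(header);
    if len > MAX_FRAME {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "frame too large"));
    }
    let mut buf = vec![0u8; len as usize];
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```

Because both functions only require `Read`/`Write`, the same code serves the sync CLI client and, via an async-to-sync bridge, the daemon side.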
-<p><strong><code>tes publish <hash></code></strong> — Publishes a tessera to the network. Accepts full
-hashes or short prefixes (e.g., <code>tes publish a1b2</code>), which are resolved against
-the local database. The daemon reads all tessera files from storage, packs them
-into a single MessagePack buffer, and hands them to the replication engine.
-Small tesseras (< 4 MB) are replicated as a single fragment; larger ones go
-through Reed-Solomon erasure coding. Output shows the short hash and fragment
-count:</p>
-<pre><code>Published tessera 9f2c4a1b (24 fragments created)
-Distribution in progress — use `tes status 9f2c4a1b` to track.
-</code></pre>
-<p><strong><code>tes fetch <hash></code></strong> — Retrieves a tessera from the network using its full
-content hash. The daemon collects locally available fragments, reconstructs the
-original data via erasure decoding if needed, unpacks the files, and stores them
-in the content-addressable store. Returns the number of memories and total size
-fetched.</p>
-<p><strong><code>tes status <hash></code></strong> — Displays the replication health of a tessera. The
-output maps directly to the replication engine's internal health model:</p>
-<table><thead><tr><th>State</th><th>Meaning</th></tr></thead><tbody>
-<tr><td>Local</td><td>Not yet published — exists only on your machine</td></tr>
-<tr><td>Publishing</td><td>Fragments being distributed, critical redundancy</td></tr>
-<tr><td>Replicated</td><td>Distributed but below target redundancy</td></tr>
-<tr><td>Healthy</td><td>Full redundancy achieved</td></tr>
-</tbody></table>
-<p><strong>Daemon RPC listener</strong> — The daemon now binds a Unix socket (default:
-<code>$XDG_RUNTIME_DIR/tesseras/daemon.sock</code>) with proper directory permissions
-(0700), stale socket cleanup, and graceful shutdown. Each connection is handled
-in a Tokio task — the listener converts the async stream to sync I/O for the
-framing layer, dispatches to the RPC handler, and writes the response back.</p>
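A minimal synchronous sketch of the bind-time steps (0700 directory permissions, stale socket cleanup), assuming a `daemon.sock` filename inside the runtime directory; the real daemon is async (Tokio), so this is only the shape of the setup, not the listener loop.

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::net::UnixListener;
use std::path::Path;

/// Bind the daemon socket: create the runtime dir with mode 0700,
/// remove any stale socket left by a previous run, then listen.
pub fn bind_daemon_socket(dir: &Path) -> std::io::Result<UnixListener> {
    fs::create_dir_all(dir)?;
    fs::set_permissions(dir, fs::Permissions::from_mode(0o700))?;
    let sock = dir.join("daemon.sock");
    if sock.exists() {
        // Unix sockets are not removed automatically on shutdown.
        fs::remove_file(&sock)?;
    }
    UnixListener::bind(&sock)
}
```

The 0700 directory is what replaces TLS here: only the owning user can reach the socket at all.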
-<p><strong>Pack/unpack in <code>tesseras-core</code></strong> — A small module that serializes a list of
-file entries (path + data) into a single MessagePack buffer and back. This is
-the bridge between the tessera's directory structure and the replication
-engine's opaque byte blobs.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Unix socket over TCP</strong>: RPC between CLI and daemon happens on the same
-machine. Unix sockets are faster, don't need port allocation, and filesystem
-permissions provide access control without TLS.</li>
-<li><strong>MessagePack over JSON</strong>: the same wire format used everywhere else in
-Tesseras. Compact, schema-less, and already a workspace dependency. A typical
-publish request/response round-trip is under 200 bytes.</li>
-<li><strong>Sync client, async daemon</strong>: the <code>DaemonClient</code> uses blocking I/O because
-the CLI doesn't need concurrency — it sends one request and waits. The daemon
-listener is async (Tokio) to handle multiple connections. The framing layer
-works with any <code>Read</code>/<code>Write</code> impl, bridging both worlds.</li>
-<li><strong>Hash prefix resolution on the client side</strong>: <code>publish</code> and <code>status</code> resolve
-short prefixes locally before sending the full hash to the daemon. This keeps
-the daemon stateless — it doesn't need access to the CLI's database.</li>
-<li><strong>Default data directory alignment</strong>: the CLI default changed from
-<code>~/.tesseras</code> to <code>~/.local/share/tesseras</code> (via <code>dirs::data_dir()</code>) to match
-the daemon. A migration hint is printed when legacy data is detected.</li>
-</ul>
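The client-side prefix resolution can be illustrated with a small helper. `resolve_prefix` and its error type are hypothetical names, assuming the CLI's database yields a list of known full hex hashes:

```rust
#[derive(Debug, PartialEq)]
pub enum ResolveError {
    /// No local hash starts with the given prefix.
    NotFound,
    /// More than one hash matches; carries the match count.
    Ambiguous(usize),
}

/// Resolve a short prefix (e.g. "a1b2") to exactly one known full hash.
pub fn resolve_prefix<'a>(known: &'a [String], prefix: &str) -> Result<&'a str, ResolveError> {
    let mut matches = known.iter().filter(|h| h.starts_with(prefix));
    match (matches.next(), matches.next()) {
        (Some(h), None) => Ok(h.as_str()),
        (None, _) => Err(ResolveError::NotFound),
        (Some(_), Some(_)) => Err(ResolveError::Ambiguous(2 + matches.count())),
    }
}
```

Resolving before the RPC call is what lets the daemon accept only full hashes and stay independent of the CLI's database.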
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>DHT peer count</strong>: the <code>status</code> command currently reports 0 peers — wiring
-the actual peer count from the DHT is the next step</li>
-<li><strong><code>tes show</code></strong>: display the contents of a tessera (memories, metadata) without
-exporting</li>
-<li><strong>Streaming fetch</strong>: for large tesseras, stream fragments as they arrive
-rather than waiting for all of them</li>
-</ul>
-
-
-
-
- Phase 4: Heir Key Recovery with Shamir's Secret Sharing
- 2026-02-15T00:00:00+00:00
- 2026-02-15T00:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase4-shamir-heir-recovery/
-
- <p>What happens to your memories when you die? Until now, Tesseras could preserve
-content across millennia — but the private and sealed keys died with their
-owner. Phase 4 continues with a solution: Shamir's Secret Sharing, a
-cryptographic scheme that lets you split your identity into shares and
-distribute them to the people you trust most.</p>
-<p>The math is elegant: you choose a threshold T and a total N. Any T shares
-reconstruct the full secret; T-1 shares reveal absolutely nothing. This is not
-"almost nothing" — it is information-theoretically secure. An attacker with one
-fewer share than the threshold has exactly zero bits of information about the
-secret, no matter how much computing power they have.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>GF(256) finite field arithmetic</strong> (<code>tesseras-crypto/src/shamir/gf256.rs</code>) —
-Shamir's Secret Sharing requires arithmetic in a finite field. We implement
-GF(256) using the same irreducible polynomial as AES (x^8 + x^4 + x^3 + x + 1),
-with compile-time lookup tables for logarithm and exponentiation. All operations
-are constant-time via table lookups — no branches on secret data. The module
-includes Horner's method for polynomial evaluation and Lagrange interpolation at
-x=0 for secret recovery. 233 lines, exhaustively tested: all 256 elements for
-identity/inverse properties, commutativity, and associativity.</p>
-<p><strong>ShamirSplitter</strong> (<code>tesseras-crypto/src/shamir/mod.rs</code>) — The core
-split/reconstruct API. <code>split()</code> takes a secret byte slice, a configuration
-(threshold T, total N), and the owner's Ed25519 public key. For each byte of the
-secret, it constructs a random polynomial of degree T-1 over GF(256) with the
-secret byte as the constant term, then evaluates it at N distinct points.
-<code>reconstruct()</code> takes T or more shares and recovers the secret via Lagrange
-interpolation. Both operations include extensive validation: threshold bounds,
-session consistency, owner fingerprint matching, and BLAKE3 checksum
-verification.</p>
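The split/reconstruct cycle can be demonstrated for a single secret byte. This sketch is not the `tesseras-crypto` code: it uses bitwise "Russian peasant" multiplication instead of compile-time log/exp tables (so it is not constant-time), and the caller supplies the "random" coefficients so the example stays deterministic.

```rust
/// Multiply in GF(256) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11b).
fn gf_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 { p ^= a; }
        let carry = a & 0x80;
        a <<= 1;
        if carry != 0 { a ^= 0x1b; }
        b >>= 1;
    }
    p
}

/// Multiplicative inverse: the group has order 255, so a^254 = a^-1.
fn gf_inv(a: u8) -> u8 {
    let mut r = 1u8;
    for _ in 0..254 { r = gf_mul(r, a); }
    r
}

/// Evaluate a polynomial (constant term first) at x via Horner's method.
fn eval(coeffs: &[u8], x: u8) -> u8 {
    coeffs.iter().rev().fold(0, |acc, &c| gf_mul(acc, x) ^ c)
}

/// Split one secret byte into n shares: the secret is the constant term of a
/// degree-(t-1) polynomial whose other t-1 coefficients the caller provides
/// (random in a real implementation), evaluated at x = 1..=n.
pub fn split_byte(secret: u8, rand_coeffs: &[u8], n: u8) -> Vec<(u8, u8)> {
    let mut coeffs = vec![secret];
    coeffs.extend_from_slice(rand_coeffs);
    (1..=n).map(|x| (x, eval(&coeffs, x))).collect()
}

/// Recover the secret from t shares via Lagrange interpolation at x = 0.
pub fn reconstruct_byte(shares: &[(u8, u8)]) -> u8 {
    let mut secret = 0u8;
    for (i, &(xi, yi)) in shares.iter().enumerate() {
        let mut li = 1u8; // Lagrange basis value at x = 0
        for (j, &(xj, _)) in shares.iter().enumerate() {
            if i != j {
                li = gf_mul(li, gf_mul(xj, gf_inv(xi ^ xj)));
            }
        }
        secret ^= gf_mul(yi, li);
    }
    secret
}
```

A full secret is just this procedure applied byte by byte with a fresh polynomial per byte, which is why each share has the same length as the secret.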
-<p><strong>HeirShare format</strong> — Each share is a self-contained, serializable artifact
-with:</p>
-<ul>
-<li>Format version (v1) for forward compatibility</li>
-<li>Share index (1..N) and threshold/total metadata</li>
-<li>Session ID (random 8 bytes) — prevents mixing shares from different split
-sessions</li>
-<li>Owner fingerprint (first 8 bytes of BLAKE3 hash of the Ed25519 public key)</li>
-<li>Share data (the Shamir y-values, same length as the secret)</li>
-<li>BLAKE3 checksum over all preceding fields</li>
-</ul>
-<p>Shares are serialized in two formats: <strong>MessagePack</strong> (compact binary, for
-programmatic use) and <strong>base64 text</strong> (human-readable, for printing and physical
-storage). The text format includes a header with metadata and delimiters:</p>
-<pre><code>--- TESSERAS HEIR SHARE ---
-Format: v1
-Owner: a1b2c3d4e5f6a7b8 (fingerprint)
-Share: 1 of 3 (threshold: 2)
-Session: 9f8e7d6c5b4a3210
-Created: 2026-02-15
-
-<base64-encoded MessagePack data>
---- END HEIR SHARE ---
-</code></pre>
-<p>This format is designed to be printed on paper, stored in a safe deposit box, or
-engraved on metal. The header is informational — only the base64 payload is
-parsed during reconstruction.</p>
-<p><strong>CLI integration</strong> (<code>tesseras-cli/src/commands/heir.rs</code>) — Three new
-subcommands:</p>
-<ul>
-<li><code>tes heir create</code> — splits your Ed25519 identity into heir shares. Prompts for
-confirmation (your full identity is at stake), generates both <code>.bin</code> and
-<code>.txt</code> files for each share, and writes <code>heir_meta.json</code> to your identity
-directory.</li>
-<li><code>tes heir reconstruct</code> — loads share files (auto-detects binary vs text
-format), validates consistency, reconstructs the secret, derives the Ed25519
-keypair, and optionally installs it to <code>~/.tesseras/identity/</code> (with automatic
-backup of the existing identity).</li>
-<li><code>tes heir info</code> — displays share metadata and verifies the checksum without
-exposing any secret material.</li>
-</ul>
-<p><strong>Secret blob format</strong> — Identity keys are serialized into a versioned blob
-before splitting: a version byte (0x01), a flags byte (0x00 for Ed25519-only),
-followed by the 32-byte Ed25519 secret key. This leaves room for future
-expansion when X25519 and ML-KEM-768 private keys are integrated into the heir
-share system.</p>
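The blob layout is small enough to sketch directly; `encode_blob` and `decode_blob` are illustrative names, not the project's API.

```rust
/// Versioned secret blob as described above: version byte 0x01, flags byte
/// (0x00 = Ed25519-only), then the 32-byte Ed25519 secret key.
pub fn encode_blob(ed25519_secret: &[u8; 32]) -> Vec<u8> {
    let mut blob = Vec::with_capacity(34);
    blob.push(0x01); // format version
    blob.push(0x00); // flags: Ed25519-only
    blob.extend_from_slice(ed25519_secret);
    blob
}

pub fn decode_blob(blob: &[u8]) -> Result<[u8; 32], &'static str> {
    if blob.len() != 34 {
        return Err("truncated or oversized blob");
    }
    if blob[0] != 0x01 {
        return Err("unsupported version");
    }
    if blob[1] != 0x00 {
        return Err("unsupported flags");
    }
    let mut key = [0u8; 32];
    key.copy_from_slice(&blob[2..]);
    Ok(key)
}
```

A future version adding X25519 or ML-KEM-768 keys would bump the flags byte and accept a longer blob, without invalidating shares split from v1 blobs.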
-<p><strong>Testing</strong> — 20 unit tests for ShamirSplitter (roundtrip, all share
-combinations, insufficient shares, wrong owner, wrong session, threshold-1
-boundary, large secrets up to ML-KEM-768 key size). 7 unit tests for GF(256)
-arithmetic (exhaustive field properties). 3 property-based tests with proptest
-(arbitrary secrets up to 5000 bytes, arbitrary T-of-N configurations,
-information-theoretic security verification). Serialization roundtrip tests for
-both MessagePack and base64 text formats. 2 integration tests covering the
-complete heir lifecycle: generate identity, split into shares, serialize,
-deserialize, reconstruct, verify keypair, and sign/verify with reconstructed
-keys.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>GF(256) over GF(prime)</strong>: we use GF(256) rather than a prime field because
-it maps naturally to bytes — each element is a single byte, each share is the
-same length as the secret. No big-integer arithmetic, no modular reduction, no
-padding. This is the same approach used by most real-world Shamir
-implementations including SSSS and HashiCorp Vault.</li>
-<li><strong>Compile-time lookup tables</strong>: the LOG and EXP tables for GF(256) are
-computed at compile time using <code>const fn</code>. This means zero runtime
-initialization cost and constant-time operations via table lookups rather than
-loops.</li>
-<li><strong>Session ID prevents cross-session mixing</strong>: each call to <code>split()</code> generates
-a fresh random session ID. If an heir accidentally uses shares from two
-different split sessions (e.g., before and after a key rotation),
-reconstruction fails cleanly with a validation error rather than producing
-garbage output.</li>
-<li><strong>BLAKE3 checksums detect corruption</strong>: each share includes a BLAKE3 checksum
-over its contents. This catches bit rot, transmission errors, and accidental
-truncation before any reconstruction attempt. A share printed on paper and
-scanned back via OCR will fail the checksum if a single character is wrong.</li>
-<li><strong>Owner fingerprint for identification</strong>: shares include the first 8 bytes of
-BLAKE3(Ed25519 public key) as a fingerprint. This lets heirs verify which
-identity a share belongs to without revealing the full public key. During
-reconstruction, the fingerprint is cross-checked against the recovered key.</li>
-<li><strong>Dual format for resilience</strong>: both binary (MessagePack) and text (base64)
-formats are generated because physical media has different failure modes than
-digital storage. A USB drive might fail; paper survives. A QR code might be
-unreadable; base64 text can be manually typed.</li>
-<li><strong>Blob versioning</strong>: the secret is wrapped in a versioned blob (version +
-flags + key material) so future versions can include additional keys (X25519,
-ML-KEM-768) without breaking backward compatibility with existing shares.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued: Resilience and Scale</strong> — advanced NAT traversal
-(STUN/TURN), performance tuning (connection pooling, fragment caching, SQLite
-WAL), security audits, institutional node onboarding, OS packaging</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>With Shamir's Secret Sharing, Tesseras closes the last critical gap in long-term
-preservation. Your memories survive infrastructure failures through erasure
-coding. Your privacy survives quantum computers through hybrid encryption. And
-now, your identity survives you — passed on to the people you chose, requiring
-their cooperation to unlock what you left behind.</p>
-
-
-
-
- Phase 4: Encryption and Sealed Tesseras
- 2026-02-14T16:00:00+00:00
- 2026-02-14T16:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase4-encryption-sealed/
-
- <p>Some memories are not meant for everyone. A private journal, a letter to be
-opened in 2050, a family secret sealed until the grandchildren are old enough.
-Until now, every tessera on the network was open. Phase 4 changes that: Tesseras
-now encrypts private and sealed content with a hybrid cryptographic scheme
-designed to resist both classical and quantum attacks.</p>
-<p>The principle remains the same — encrypt as little as possible. Public memories
-need availability, not secrecy. But when someone creates a private or sealed
-tessera, the content is now locked behind AES-256-GCM encryption with keys
-protected by a hybrid key encapsulation mechanism combining X25519 and
-ML-KEM-768. Both algorithms must be broken to access the content.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>AES-256-GCM encryptor</strong> (<code>tesseras-crypto/src/encryption.rs</code>) — Symmetric
-content encryption with random 12-byte nonces and authenticated associated data
-(AAD). The AAD binds ciphertext to its context: for private tesseras, the
-content hash is included; for sealed tesseras, both the content hash and the
-<code>open_after</code> timestamp are bound into the AAD. This means moving ciphertext
-between tesseras with different open dates causes decryption failure — you
-cannot trick the system into opening a sealed memory early by swapping its
-ciphertext into a tessera with an earlier seal date.</p>
-<p><strong>Hybrid Key Encapsulation Mechanism</strong> (<code>tesseras-crypto/src/kem.rs</code>) — Key
-exchange using X25519 (classical elliptic curve Diffie-Hellman) combined with
-ML-KEM-768 (the NIST-standardized post-quantum lattice-based KEM, formerly
-Kyber). Both shared secrets are combined via <code>blake3::derive_key</code> with a fixed
-context string ("tesseras hybrid kem v1") to produce a single 256-bit content
-encryption key. This follows the same "dual from day one" philosophy as the
-project's dual signing (Ed25519 + ML-DSA): if either algorithm is broken in the
-future, the other still protects the content.</p>
-<p><strong>Sealed Key Envelope</strong> (<code>tesseras-crypto/src/sealed.rs</code>) — Wraps a content
-encryption key using the hybrid KEM, so only the tessera owner can recover it.
-The KEM produces a transport key, which is XORed with the content key to produce
-a wrapped key stored alongside the KEM ciphertext. On unsealing, the owner
-decapsulates the KEM ciphertext to recover the transport key, then XORs again to
-recover the content key.</p>
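The wrap/unwrap step reduces to a fixed-width XOR, which is its own inverse, so one function serves both directions. A minimal sketch (the envelope struct and KEM plumbing are omitted):

```rust
/// XOR a 256-bit content key with a one-time transport key from the KEM.
/// Wrapping and unwrapping are the same operation.
pub fn xor_wrap(content_key: &[u8; 32], transport_key: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = content_key[i] ^ transport_key[i];
    }
    out
}
```

This is only safe because the transport key is fresh and used exactly once, making the XOR a one-time pad over the content key.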
-<p><strong>Key Publication</strong> (<code>tesseras-crypto/src/sealed.rs</code>) — A standalone signed
-artifact for publishing a sealed tessera's content key after its <code>open_after</code>
-date has passed. The owner signs the content key, tessera hash, and publication
-timestamp with their dual keys (Ed25519, with ML-DSA placeholder). The manifest
-stays immutable — the key publication is a separate document. Other nodes verify
-the signature against the owner's public key before using the published key to
-decrypt the content.</p>
-<p><strong>EncryptionContext</strong> (<code>tesseras-core/src/enums.rs</code>) — A domain type that
-represents the AAD context for encryption. It lives in tesseras-core rather than
-tesseras-crypto because it's a domain concept (not a crypto implementation
-detail). The <code>to_aad_bytes()</code> method produces deterministic serialization: a tag
-byte (0x00 for Private, 0x01 for Sealed), followed by the content hash, and for
-Sealed, the <code>open_after</code> timestamp as little-endian i64.</p>
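A stand-alone mirror of that serialization, assuming 32-byte BLAKE3 content hashes; field and variant names here are stand-ins for the real `tesseras-core` type.

```rust
/// AAD context: tag byte, content hash, and (for Sealed) the open_after time.
pub enum EncryptionContext {
    Private { content_hash: [u8; 32] },
    Sealed { content_hash: [u8; 32], open_after: i64 },
}

impl EncryptionContext {
    /// Deterministic serialization for use as AES-GCM associated data.
    pub fn to_aad_bytes(&self) -> Vec<u8> {
        match self {
            EncryptionContext::Private { content_hash } => {
                let mut aad = vec![0x00]; // tag: Private
                aad.extend_from_slice(content_hash);
                aad
            }
            EncryptionContext::Sealed { content_hash, open_after } => {
                let mut aad = vec![0x01]; // tag: Sealed
                aad.extend_from_slice(content_hash);
                aad.extend_from_slice(&open_after.to_le_bytes()); // i64, little-endian
                aad
            }
        }
    }
}
```

Determinism matters here: both encryptor and decryptor must derive byte-identical AAD or GCM authentication fails.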
-<p><strong>Domain validation</strong> (<code>tesseras-core/src/service.rs</code>) —
-<code>TesseraService::create()</code> now rejects Sealed and Private tesseras that don't
-provide encryption keys. This is a domain-level validation: the service layer
-enforces that you cannot create a sealed memory without the cryptographic
-machinery to protect it. The error message is clear: "missing encryption keys
-for visibility sealed until 2050-01-01."</p>
-<p><strong>Core type updates</strong> — <code>TesseraIdentity</code> now includes an optional
-<code>encryption_public: Option<HybridEncryptionPublic></code> field containing both the
-X25519 and ML-KEM-768 public keys. <code>KeyAlgorithm</code> gained <code>X25519</code> and <code>MlKem768</code>
-variants. The identity filesystem layout now supports <code>node.x25519.key</code>/<code>.pub</code>
-and <code>node.mlkem768.key</code>/<code>.pub</code>.</p>
-<p><strong>Testing</strong> — 8 unit tests for AES-256-GCM (roundtrip, wrong key, tampered
-ciphertext, wrong AAD, cross-context decryption failure, unique nonces, plus 2
-property-based tests for arbitrary payloads and nonce uniqueness). 5 unit tests
-for HybridKem (roundtrip, wrong keypair, tampered X25519, KDF determinism, plus
-1 property-based test). 4 unit tests for SealedKeyEnvelope and KeyPublication. 2
-integration tests covering the complete sealed and private tessera lifecycle:
-generate keys, create content key, encrypt, seal, unseal, decrypt, publish key,
-and verify — the full cycle.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Hybrid KEM from day one</strong>: X25519 + ML-KEM-768 follows the same philosophy
-as dual signing. We don't know which cryptographic assumptions will hold over
-millennia, so we combine classical and post-quantum algorithms. The cost is
-~1.2 KB of additional key material per identity — trivial compared to the
-photos and videos in a tessera.</li>
-<li><strong>BLAKE3 for KDF</strong>: rather than adding <code>hkdf</code> + <code>sha2</code> as new dependencies, we
-use <code>blake3::derive_key</code> with a fixed context string. BLAKE3's key derivation
-mode is specifically designed for this use case, and the project already
-depends on BLAKE3 for content hashing.</li>
-<li><strong>Immutable manifests</strong>: when a sealed tessera's <code>open_after</code> date passes, the
-content key is published as a separate signed artifact (<code>KeyPublication</code>), not
-by modifying the manifest. This preserves the append-only, content-addressed
-nature of tesseras. The manifest was signed at creation time and never
-changes.</li>
-<li><strong>AAD binding prevents ciphertext swapping</strong>: the <code>EncryptionContext</code> binds
-both the content hash and (for sealed tesseras) the <code>open_after</code> timestamp
-into the AES-GCM authenticated data. An attacker who copies encrypted content
-from a "sealed until 2050" tessera into a "sealed until 2025" tessera will
-find that decryption fails — the AAD no longer matches.</li>
-<li><strong>XOR key wrapping</strong>: the sealed key envelope uses a simple XOR of the content
-key with the KEM-derived transport key, rather than an additional layer of
-AES-GCM. Since the transport key is a fresh random value from the KEM and is
-used exactly once, XOR is information-theoretically secure for this specific
-use case and avoids unnecessary complexity.</li>
-<li><strong>Domain validation, not storage validation</strong>: the "missing encryption keys"
-check lives in <code>TesseraService::create()</code>, not in the storage layer. This
-follows the hexagonal architecture pattern: domain rules are enforced at the
-service boundary, not scattered across adapters.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4 continued: Resilience and Scale</strong> — Shamir's Secret Sharing for heir
-key distribution, advanced NAT traversal (STUN/TURN), performance tuning,
-security audits, OS packaging</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — Public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>Sealed tesseras make Tesseras a true time capsule. A father can now record a
-message for his unborn grandchild, seal it until 2060, and know that the
-cryptographic envelope will hold — even if the quantum computers of the future
-try to break it open early.</p>
-
-
-
-
- Phase 3: Memories in Your Hands
- 2026-02-14T14:00:00+00:00
- 2026-02-14T14:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase3-api-and-apps/
-
- <p>People can now hold their memories in their hands. Phase 3 delivers what the
-previous phases built toward: a mobile app where someone downloads Tesseras,
-creates an identity, takes a photo, and that memory enters the preservation
-network. No cloud accounts, no subscriptions, no company between you and your
-memories.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-embedded</strong> — A full P2P node that runs inside a mobile app. The
-<code>EmbeddedNode</code> struct owns a Tokio runtime, SQLite database, QUIC transport,
-Kademlia DHT engine, replication service, and tessera service — the same stack
-as the desktop daemon, compiled into a shared library. A global singleton
-pattern (<code>Mutex<Option<EmbeddedNode>></code>) ensures one node per app lifecycle. On
-start, it opens the database, runs migrations, loads or generates an Ed25519
-identity with proof-of-work node ID, binds QUIC on an ephemeral port, wires up
-DHT and replication, and spawns the repair loop. On stop, it sends a shutdown
-signal and drains gracefully.</p>
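The singleton lifecycle pattern can be sketched with stdlib primitives. `EmbeddedNode` here is a stand-in struct (the real one owns the runtime, database, and transport), and the function names follow the FFI surface listed below.

```rust
use std::sync::Mutex;

/// Stand-in for the embedded node state.
pub struct EmbeddedNode {
    pub data_dir: String,
}

/// One node per app lifecycle, guarded by a global mutex.
static NODE: Mutex<Option<EmbeddedNode>> = Mutex::new(None);

pub fn node_start(data_dir: &str) -> Result<(), &'static str> {
    let mut guard = NODE.lock().unwrap();
    if guard.is_some() {
        return Err("node already running");
    }
    *guard = Some(EmbeddedNode { data_dir: data_dir.to_string() });
    Ok(())
}

pub fn node_stop() -> Result<(), &'static str> {
    // take() drops the node, which is where graceful shutdown would happen.
    NODE.lock().unwrap().take().map(|_| ()).ok_or("node not running")
}

pub fn node_is_running() -> bool {
    NODE.lock().unwrap().is_some()
}
```

Double starts and double stops become explicit errors rather than races, which is exactly the predictability the mobile lifecycle needs.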
-<p>Eleven FFI functions are exposed to Dart via flutter_rust_bridge: lifecycle
-(<code>node_start</code>, <code>node_stop</code>, <code>node_is_running</code>), identity (<code>create_identity</code>,
-<code>get_identity</code>), memories (<code>create_memory</code>, <code>get_timeline</code>, <code>get_memory</code>), and
-network status (<code>get_network_stats</code>, <code>get_replication_status</code>). All types
-crossing the FFI boundary are flat structs with only <code>String</code>, <code>Option<String></code>,
-<code>Vec<String></code>, and primitives — no trait objects, no generics, no lifetimes.</p>
-<p>Four adapter modules bridge core ports to concrete implementations:
-<code>Blake3HasherAdapter</code>, <code>Ed25519SignerAdapter</code>/<code>Ed25519VerifierAdapter</code> for
-cryptography, <code>DhtPortAdapter</code> for DHT operations, and
-<code>ReplicationHandlerAdapter</code> for incoming fragment and attestation RPCs.</p>
-<p>The <code>bundled-sqlite</code> feature flag compiles SQLite from source, required for
-Android and iOS where the system library may not be available. Cargokit
-configuration passes this flag automatically in both debug and release builds.</p>
-<p><strong>Flutter app</strong> — A Material Design 3 application with Riverpod state
-management, targeting Android, iOS, Linux, macOS, and Windows from a single
-codebase.</p>
-<p>The <em>onboarding flow</em> is three screens: a welcome screen explaining the project
-in one sentence ("Preserve your memories across millennia. No cloud. No
-company."), an identity creation screen that triggers Ed25519 keypair generation
-in Rust, and a confirmation screen showing the user's name and cryptographic
-identity.</p>
-<p>The <em>timeline screen</em> displays memories in reverse chronological order with
-image previews, context text, and chips for memory type and visibility.
-Pull-to-refresh reloads from the Rust node. A floating action button opens the
-<em>memory creation screen</em>, which supports photo selection from gallery or camera
-via <code>image_picker</code>, optional context text, memory type and visibility dropdowns,
-and comma-separated tags. Creating a memory calls the Rust FFI synchronously,
-then returns to the timeline.</p>
-<p>The <em>network screen</em> shows two cards: node status (peer count, DHT size,
-bootstrap state, uptime) and replication health (total fragments, healthy
-fragments, repairing fragments, replication factor). The <em>settings screen</em>
-displays the user's identity — name, truncated node ID, truncated public key,
-and creation date.</p>
-<p>Three Riverpod providers manage state: <code>nodeProvider</code> starts the embedded node
-on app launch using the app documents directory and stops it on dispose;
-<code>identityProvider</code> loads the existing profile or creates a new one;
-<code>timelineProvider</code> fetches the memory list with pagination.</p>
-<p><strong>Testing</strong> — 9 Rust unit tests in tesseras-embedded covering node lifecycle
-(start/stop without panic), identity persistence across restarts, restart cycles
-without SQLite corruption, network event streaming, stats retrieval, memory
-creation and timeline retrieval, and single memory lookup by hash. 2 Flutter
-tests: an integration test verifying Rust initialization and app startup, and a
-widget smoke test.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Embedded node, not client-server</strong>: the phone runs the full P2P stack, not a
-thin client talking to a remote daemon. This means memories are preserved even
-without internet. Users with a Raspberry Pi or VPS can optionally connect the
-app to their daemon via GraphQL for higher availability, but it's not
-required.</li>
-<li><strong>Synchronous FFI</strong>: all flutter_rust_bridge functions are marked
-<code>#[frb(sync)]</code> and block on the internal Tokio runtime. This simplifies the
-Dart side (no async bridge complexity) while the Rust side handles concurrency
-internally. Flutter's UI thread stays responsive because Riverpod wraps calls
-in async providers.</li>
-<li><strong>Global singleton</strong>: a <code>Mutex<Option<EmbeddedNode>></code> global ensures the node
-lifecycle is predictable — one start, one stop, no races. Mobile platforms
-kill processes aggressively, so simplicity in lifecycle management is a
-feature.</li>
-<li><strong>Flat FFI types</strong>: no Rust abstractions leak across the FFI boundary. Every
-type is a plain struct with strings and numbers. This makes the auto-generated
-Dart bindings reliable and easy to debug.</li>
-<li><strong>Three-screen onboarding</strong>: identity creation is the only required step. No
-email, no password, no server registration. The app generates a cryptographic
-identity locally and is ready to use.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 4: Resilience and Scale</strong> — Advanced NAT traversal (STUN/TURN),
-Shamir's Secret Sharing for heirs, sealed tesseras with time-lock encryption,
-performance tuning, security audits, OS packaging for
-Alpine/Arch/Debian/FreeBSD/OpenBSD</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — Public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
-</ul>
-<p>The infrastructure is complete. The network exists, replication works, and now
-anyone with a phone can participate. What remains is hardening what we have and
-opening it to the world.</p>
-
-
-
-
- Reed-Solomon: How Tesseras Survives Data Loss
- 2026-02-14T14:00:00+00:00
- 2026-02-14T14:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/reed-solomon/
-
- <p>Your hard drive will die. Your cloud provider will pivot. The RAID array in your
-closet will outlive its controller but not its owner. If a memory is stored in
-exactly one place, it has exactly one way to be lost forever.</p>
-<p>Tesseras is a network that keeps human memories alive through mutual aid. The
-core survival mechanism is <strong>Reed-Solomon erasure coding</strong> — a technique
-borrowed from deep-space communication that lets us reconstruct data even when
-pieces go missing.</p>
-<h2 id="what-is-reed-solomon">What is Reed-Solomon?</h2>
-<p>Reed-Solomon is a family of error-correcting codes invented by Irving Reed and
-Gustave Solomon in 1960. The original use case was correcting errors in data
-transmitted over noisy channels — think Voyager sending photos from Jupiter, or
-a CD playing despite scratches.</p>
-<p>The key insight: if you add carefully computed redundancy to your data <em>before</em>
-something goes wrong, you can recover the original even after losing some
-pieces.</p>
-<p>Here's the intuition. Suppose you have a polynomial of degree 2 — a parabola.
-You need 3 points to define it uniquely. But if you evaluate it at 5 points, you
-can lose any 2 of those 5 and still reconstruct the polynomial from the
-remaining 3. Reed-Solomon generalizes this idea to work over finite fields
-(Galois fields), where the "polynomial" is your data and the "evaluation points"
-are your fragments.</p>
-<p>In concrete terms:</p>
-<ol>
-<li><strong>Split</strong> your data into <em>k</em> data shards</li>
-<li><strong>Compute</strong> <em>m</em> parity shards from the data shards</li>
-<li><strong>Distribute</strong> all <em>k + m</em> shards across different locations</li>
-<li><strong>Reconstruct</strong> the original data from any <em>k</em> of the <em>k + m</em> shards</li>
-</ol>
-<p>You can lose up to <em>m</em> shards — any <em>m</em>, data or parity, in any combination —
-and still recover everything.</p>
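The parabola intuition can be checked numerically with floating-point Lagrange interpolation; Reed-Solomon performs the same interpolation over GF(2⁸) instead of the reals.

```rust
/// Evaluate the unique polynomial through the given points at x,
/// using the Lagrange basis form.
fn lagrange_eval(points: &[(f64, f64)], x: f64) -> f64 {
    points.iter().enumerate().map(|(i, &(xi, yi))| {
        let li: f64 = points.iter().enumerate()
            .filter(|&(j, _)| j != i)
            .map(|(_, &(xj, _))| (x - xj) / (xi - xj))
            .product();
        yi * li
    }).sum()
}
```

Evaluate a degree-2 polynomial at 5 points, discard any 2, and `lagrange_eval` on the remaining 3 reproduces the polynomial everywhere: that is the "lose m, keep k" guarantee in miniature.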
-<h2 id="why-not-just-make-copies">Why not just make copies?</h2>
-<p>The naive approach to redundancy is replication: make 3 copies, store them in 3
-places. This gives you tolerance for 2 failures at the cost of 3x your storage.</p>
-<p>Reed-Solomon is dramatically more efficient:</p>
-<table><thead><tr><th>Strategy</th><th style="text-align: right">Storage overhead</th><th style="text-align: right">Failures tolerated</th></tr></thead><tbody>
-<tr><td>3x replication</td><td style="text-align: right">200%</td><td style="text-align: right">2 out of 3</td></tr>
-<tr><td>Reed-Solomon (16,8)</td><td style="text-align: right">50%</td><td style="text-align: right">8 out of 24</td></tr>
-<tr><td>Reed-Solomon (48,24)</td><td style="text-align: right">50%</td><td style="text-align: right">24 out of 72</td></tr>
-</tbody></table>
-<p>With 16 data shards and 8 parity shards, you use 50% extra storage but can
-survive losing a third of all fragments. To tolerate 8 failures with plain
-replication, you would need 9 full copies: 9x the storage.</p>
-<p>For a network that aims to preserve memories across decades and centuries, this
-efficiency isn't a nice-to-have — it's the difference between a viable system
-and one that drowns in its own overhead.</p>
-<h2 id="how-tesseras-uses-reed-solomon">How Tesseras uses Reed-Solomon</h2>
-<p>Not all data deserves the same treatment. A 500-byte text memory and a 100 MB
-video have very different redundancy needs. Tesseras uses a three-tier
-fragmentation strategy:</p>
-<p><strong>Small (< 4 MB)</strong> — Whole-file replication to 7 peers. For small tesseras, the
-overhead of erasure coding (encoding time, fragment management, reconstruction
-logic) outweighs its benefits. Plain copies are faster and easier to manage.</p>
-<p><strong>Medium (4–256 MB)</strong> — 16 data shards + 8 parity shards = 24 total fragments.
-Each fragment is roughly 1/16th of the original size. Any 16 of the 24 fragments
-reconstruct the original. Distributed across 7 peers.</p>
-<p><strong>Large (≥ 256 MB)</strong> — 48 data shards + 24 parity shards = 72 total fragments.
-Higher shard count means smaller individual fragments (easier to transfer and
-store) and higher absolute fault tolerance. Also distributed across 7 peers.</p>
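The three thresholds above can be sketched in a few lines. The tier names come from the `FragmentationTier` type in tesseras-core; the selection function itself is an assumption, not the actual `FragmentPlan` implementation:

```rust
// Tier thresholds as described above; the selection logic is a sketch.
#[derive(Debug, PartialEq)]
enum FragmentationTier {
    Small,  // whole-file replication to 7 peers
    Medium, // 16 data + 8 parity shards
    Large,  // 48 data + 24 parity shards
}

const MB: u64 = 1024 * 1024;

fn select_tier(size_bytes: u64) -> FragmentationTier {
    if size_bytes < 4 * MB {
        FragmentationTier::Small
    } else if size_bytes < 256 * MB {
        FragmentationTier::Medium
    } else {
        FragmentationTier::Large
    }
}

fn main() {
    assert_eq!(select_tier(500), FragmentationTier::Small);       // text memory
    assert_eq!(select_tier(120 * MB), FragmentationTier::Medium); // photo album
    assert_eq!(select_tier(300 * MB), FragmentationTier::Large);  // long video
}
```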
-<p>The implementation uses the <code>reed-solomon-erasure</code> crate operating over GF(2⁸) —
-the same Galois field used in QR codes and CDs. Each fragment carries a BLAKE3
-checksum so corruption is detected immediately, not silently propagated.</p>
-<pre><code>Tessera (120 MB photo album)
- ↓ encode
-16 data shards (7.5 MB each) + 8 parity shards (7.5 MB each)
- ↓ distribute
-24 fragments across 7 peers (subnet-diverse)
- ↓ any 16 fragments
-Original tessera recovered
-</code></pre>
-<h2 id="the-challenges">The challenges</h2>
-<p>Reed-Solomon solves the mathematical problem of redundancy. The engineering
-challenges are everything around it.</p>
-<h3 id="fragment-tracking">Fragment tracking</h3>
-<p>Every fragment needs to be findable. Tesseras uses a Kademlia DHT for peer
-discovery and fragment-to-peer mapping. When a node goes offline, its fragments
-need to be re-created and distributed to new peers. This means tracking which
-fragments exist, where they are, and whether they're still intact — across a
-network with no central authority.</p>
-<h3 id="silent-corruption">Silent corruption</h3>
-<p>A fragment that returns wrong data is worse than one that's missing — at least a
-missing fragment is honestly absent. Tesseras addresses this with
-attestation-based health checks: the repair loop periodically asks fragment
-holders to prove possession by returning BLAKE3 checksums. If a checksum doesn't
-match, the fragment is treated as lost.</p>
-<h3 id="correlated-failures">Correlated failures</h3>
-<p>If all 24 fragments of a tessera land on machines in the same datacenter, a
-single power outage kills them all. Reed-Solomon's math assumes independent
-failures. Tesseras enforces <strong>subnet diversity</strong> during distribution: no more
-than 2 fragments per /24 IPv4 subnet (or /48 IPv6 prefix). This spreads
-fragments across different physical infrastructure.</p>
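The diversity rule can be sketched as a filter over candidate peers. This is illustrative std-only code, not the actual distributor: the real implementation also handles /48 IPv6 prefixes, which this sketch elides.

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};

// Keep at most `max_per_subnet` peers per /24 IPv4 subnet, preserving order.
fn subnet_key(addr: IpAddr) -> Option<[u8; 3]> {
    match addr {
        IpAddr::V4(v4) => {
            let o = v4.octets();
            Some([o[0], o[1], o[2]]) // the /24 prefix
        }
        IpAddr::V6(_) => None, // IPv6 /48 handling elided in this sketch
    }
}

fn filter_diverse(peers: &[IpAddr], max_per_subnet: usize) -> Vec<IpAddr> {
    let mut per_subnet: HashMap<[u8; 3], usize> = HashMap::new();
    peers
        .iter()
        .copied()
        .filter(|&p| match subnet_key(p) {
            Some(key) => {
                let count = per_subnet.entry(key).or_insert(0);
                *count += 1;
                *count <= max_per_subnet
            }
            None => true,
        })
        .collect()
}

fn main() {
    let peers: Vec<IpAddr> = [[10u8, 0, 0, 1], [10, 0, 0, 2], [10, 0, 0, 3], [192, 168, 1, 1]]
        .iter()
        .map(|o| IpAddr::V4(Ipv4Addr::new(o[0], o[1], o[2], o[3])))
        .collect();
    // The third 10.0.0.x peer is rejected; the 192.168.1.x peer passes.
    assert_eq!(filter_diverse(&peers, 2).len(), 3);
}
```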
-<h3 id="repair-speed-vs-network-load">Repair speed vs. network load</h3>
-<p>When a peer goes offline, the clock starts ticking. Lost fragments need to be
-re-created before more failures accumulate. But aggressive repair floods the
-network. Tesseras balances this with a configurable repair loop (default: every
-24 hours with 2-hour jitter) and concurrent transfer limits (default: 4
-simultaneous transfers). The jitter prevents repair storms where every node
-checks its fragments at the same moment.</p>
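Jittered scheduling is simple to express. A sketch under stated assumptions: `next_repair_delay` is an illustrative name, and `jitter_unit` (a value in 0.0..=1.0) would come from a per-node random source in a real implementation:

```rust
use std::time::Duration;

// Each node adds a random fraction of the jitter window to the base
// interval, so repair checks don't all fire at the same moment.
fn next_repair_delay(base: Duration, max_jitter: Duration, jitter_unit: f64) -> Duration {
    base + max_jitter.mul_f64(jitter_unit.clamp(0.0, 1.0))
}

fn main() {
    let base = Duration::from_secs(24 * 60 * 60);   // default: every 24 hours
    let jitter = Duration::from_secs(2 * 60 * 60);  // up to 2 hours extra
    let delay = next_repair_delay(base, jitter, 0.5);
    assert_eq!(delay, Duration::from_secs(90_000)); // 25 hours
}
```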
-<h3 id="long-term-key-management">Long-term key management</h3>
-<p>Reed-Solomon protects against data loss, not against losing access. If a tessera
-is encrypted (private or sealed visibility), you need the decryption key to make
-the recovered data useful. Tesseras separates these concerns: erasure coding
-handles availability, while Shamir's Secret Sharing (a future phase) will handle
-key distribution among heirs. The project's design philosophy — encrypt as
-little as possible — keeps the key management problem small.</p>
-<h3 id="galois-field-limitations">Galois field limitations</h3>
-<p>The GF(2⁸) field limits the total number of shards to 255 (data + parity
-combined). For Tesseras, this is not a practical constraint — even the Large
-tier uses only 72 shards. But it does mean that extremely large files with
-thousands of fragments would require either a different field or a layered
-encoding scheme.</p>
-<h3 id="evolving-codec-compatibility">Evolving codec compatibility</h3>
-<p>A tessera encoded today must be decodable in 50 years. Reed-Solomon over GF(2⁸)
-is one of the most widely implemented algorithms in computing — it's in every CD
-player, every QR code scanner, every deep-space probe. This ubiquity is itself a
-survival strategy. The algorithm won't be forgotten because half the world's
-infrastructure depends on it.</p>
-<h2 id="the-bigger-picture">The bigger picture</h2>
-<p>Reed-Solomon is a piece of a larger puzzle. It works in concert with:</p>
-<ul>
-<li><strong>Kademlia DHT</strong> for finding peers and routing fragments</li>
-<li><strong>BLAKE3 checksums</strong> for integrity verification</li>
-<li><strong>Bilateral reciprocity</strong> for fair storage exchange (no blockchain needed)</li>
-<li><strong>Subnet diversity</strong> for failure independence</li>
-<li><strong>Automatic repair</strong> for maintaining redundancy over time</li>
-</ul>
-<p>No single technique makes memories survive. Reed-Solomon ensures that data <em>can</em>
-be recovered. The DHT ensures fragments <em>can be found</em>. Reciprocity ensures
-peers <em>want to help</em>. Repair ensures none of this degrades over time.</p>
-<p>A tessera is a bet that the sum of these mechanisms, running across many
-independent machines operated by many independent people, is more durable than
-any single institution. Reed-Solomon is the mathematical foundation of that bet.</p>
-
-
-
-
- Phase 2: Memories Survive
- 2026-02-14T12:00:00+00:00
- 2026-02-14T12:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase2-replication/
-
- <p>A tessera is no longer tied to a single machine. Phase 2 delivers the
-replication layer: data is split into erasure-coded fragments, distributed
-across multiple peers, and automatically repaired when nodes go offline. A
-bilateral reciprocity ledger ensures fair storage exchange — no blockchain, no
-tokens.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-core</strong> (updated) — New replication domain types: <code>FragmentPlan</code>
-(selects fragmentation tier based on tessera size), <code>FragmentId</code> (tessera hash +
-index + shard count + checksum), <code>FragmentEnvelope</code> (fragment with its metadata
-for wire transport), <code>FragmentationTier</code> (Small/Medium/Large), <code>Attestation</code>
-(proof that a node holds a fragment at a given time), and <code>ReplicateAck</code>
-(acknowledgement of fragment receipt). Three new port traits define the
-hexagonal boundaries: <code>DhtPort</code> (find peers, replicate fragments, request
-attestations, ping), <code>FragmentStore</code> (store/read/delete/list/verify fragments),
-and <code>ReciprocityLedger</code> (record storage exchanges, query balances, find best
-peers). Maximum tessera size is 1 GB.</p>
-<p><strong>tesseras-crypto</strong> (updated) — The existing <code>ReedSolomonCoder</code> now powers
-fragment encoding. Data is split into shards, parity shards are computed, and
-any combination of shards, data or parity, can reconstruct the original — as
-long as the number of missing shards does not exceed the parity count.</p>
-<p><strong>tesseras-storage</strong> (updated) — Two new adapters:</p>
-<ul>
-<li><code>FsFragmentStore</code> — stores fragment data as files on disk
-(<code>{root}/{tessera_hash}/{index:03}.shard</code>) with a SQLite metadata index
-tracking tessera hash, shard index, shard count, checksum, and byte size.
-Verification recomputes the BLAKE3 hash and compares it to the stored
-checksum.</li>
-<li><code>SqliteReciprocityLedger</code> — bilateral storage accounting in SQLite. Each peer
-has a row tracking bytes stored for them and bytes they store for us. The
-<code>balance</code> column is a generated column
-(<code>bytes_they_store_for_us - bytes_stored_for_them</code>). UPSERT ensures atomic
-increment of counters.</li>
-</ul>
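The ledger's accounting model can be sketched in memory. This is an assumed shape with illustrative names: the real adapter persists the counters in SQLite, where `balance` is a generated column (`bytes_they_store_for_us - bytes_stored_for_them`) and increments use UPSERT.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct ReciprocityLedger {
    // peer id -> (bytes_stored_for_them, bytes_they_store_for_us)
    entries: HashMap<String, (i64, i64)>,
}

impl ReciprocityLedger {
    fn record_stored_for_them(&mut self, peer: &str, bytes: i64) {
        self.entries.entry(peer.to_owned()).or_default().0 += bytes;
    }
    fn record_they_store_for_us(&mut self, peer: &str, bytes: i64) {
        self.entries.entry(peer.to_owned()).or_default().1 += bytes;
    }
    /// Positive: the peer stores more for us than we store for them.
    fn balance(&self, peer: &str) -> i64 {
        self.entries
            .get(peer)
            .map_or(0, |&(for_them, for_us)| for_us - for_them)
    }
}

fn main() {
    let mut ledger = ReciprocityLedger::default();
    ledger.record_stored_for_them("peer-a", 500);
    ledger.record_they_store_for_us("peer-a", 300);
    // Negative: peer-a stores less for us than we store for them.
    assert_eq!(ledger.balance("peer-a"), -200);
}
```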
-<p>New migration (<code>002_replication.sql</code>) adds tables for fragments, fragment plans,
-holders, holder-fragment mappings, and reciprocity balances.</p>
-<p><strong>tesseras-dht</strong> (updated) — Four new message variants: <code>Replicate</code> (send a
-fragment envelope), <code>ReplicateAck</code> (confirm receipt), <code>AttestRequest</code> (ask a
-node to prove it holds a tessera's fragments), and <code>AttestResponse</code> (return
-attestation with checksums and timestamp). The engine handles these in its
-message dispatch loop.</p>
-<p><strong>tesseras-replication</strong> — The new crate, with five modules:</p>
-<ul>
-<li>
-<p><em>Fragment encoding</em> (<code>fragment.rs</code>): <code>encode_tessera()</code> selects the
-fragmentation tier based on size, then calls Reed-Solomon encoding for Medium
-and Large tiers. Three tiers:</p>
-<ul>
-<li><strong>Small</strong> (< 4 MB): whole-file replication to r=7 peers, no erasure coding</li>
-<li><strong>Medium</strong> (4–256 MB): 16 data + 8 parity shards, distributed across r=7
-peers</li>
-<li><strong>Large</strong> (≥ 256 MB): 48 data + 24 parity shards, distributed across r=7
-peers</li>
-</ul>
-</li>
-<li>
-<p><em>Distribution</em> (<code>distributor.rs</code>): subnet diversity filtering limits peers per
-/24 IPv4 subnet (or /48 IPv6 prefix) to avoid correlated failures. If all your
-fragments land on the same rack, a single power outage kills them all.</p>
-</li>
-<li>
-<p><em>Service</em> (<code>service.rs</code>): <code>ReplicationService</code> is the orchestrator.
-<code>replicate_tessera()</code> encodes the data, finds the closest peers via DHT,
-applies subnet diversity, and distributes fragments round-robin.
-<code>receive_fragment()</code> validates the BLAKE3 checksum, checks reciprocity balance
-(rejects if the sender's deficit exceeds the configured threshold), stores the
-fragment, and updates the ledger. <code>handle_attestation_request()</code> lists local
-fragments and computes their checksums as proof of possession.</p>
-</li>
-<li>
-<p><em>Repair</em> (<code>repair.rs</code>): <code>check_tessera_health()</code> requests attestations from
-known holders, falls back to ping for unresponsive nodes, verifies local
-fragment integrity, and returns one of three actions: <code>Healthy</code>,
-<code>NeedsReplication { deficit }</code>, or <code>CorruptLocal { fragment_index }</code>. The
-repair loop runs every 24 hours (with 2-hour jitter) via <code>tokio::select!</code> with
-shutdown integration.</p>
-</li>
-<li>
-<p><em>Configuration</em> (<code>config.rs</code>): <code>ReplicationConfig</code> with defaults for repair
-interval (24h), jitter (2h), concurrent transfers (4), minimum free space (1
-GB), deficit allowance (256 MB), and per-peer storage limit (1 GB).</p>
-</li>
-</ul>
-<p><strong>tesd</strong> (updated) — The daemon now opens a SQLite database (<code>db/tesseras.db</code>),
-runs migrations, creates <code>FsFragmentStore</code>, <code>SqliteReciprocityLedger</code>, and
-<code>FsBlobStore</code> instances, wraps the DHT engine in a <code>DhtPortAdapter</code>, builds a
-<code>ReplicationService</code>, and spawns the repair loop as a background task with
-graceful shutdown.</p>
-<p><strong>Testing</strong> — 193 tests across the workspace:</p>
-<ul>
-<li>15 unit tests in tesseras-replication (fragment encoding tiers, checksum
-validation, subnet diversity, repair health checks, service receive/replicate
-flows)</li>
-<li>3 integration tests with real storage (full encode→distribute→receive cycle
-for medium tessera, small whole-file replication, tampered fragment rejection)</li>
-<li>Tests use in-memory SQLite + tempdir fragments with mockall mocks for DHT and
-BlobStore</li>
-<li>Zero clippy warnings, clean formatting</li>
-</ul>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Three-tier fragmentation</strong>: small files don't need erasure coding — the
-overhead isn't worth it. Medium and large files get progressively more parity
-shards. This avoids wasting storage on small tesseras while providing strong
-redundancy for large ones.</li>
-<li><strong>Owner-push distribution</strong>: the tessera owner encodes fragments and pushes
-them to peers, rather than peers pulling. This simplifies the protocol (no
-negotiation phase) and ensures fragments are distributed immediately.</li>
-<li><strong>Bilateral reciprocity without consensus</strong>: each node tracks its own balance
-with each peer locally. No global ledger, no token, no blockchain. If peer A
-stores 500 MB for peer B, peer B should store roughly 500 MB for peer A. Free
-riders lose redundancy gradually — their fragments are deprioritized for
-repair, but never deleted.</li>
-<li><strong>Subnet diversity</strong>: fragments are spread across different network subnets to
-survive correlated failures. A datacenter outage shouldn't take out all copies
-of a tessera.</li>
-<li><strong>Attestation-first health checks</strong>: the repair loop asks holders to prove
-possession (attestation with checksums) before declaring a tessera degraded.
-Only when attestation fails does it fall back to a simple ping. This catches
-silent data corruption, not just node departure.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 3: API and Apps</strong> — Flutter mobile/desktop app via
-flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node</li>
-<li><strong>Phase 4: Resilience and Scale</strong> — ML-DSA post-quantum signatures, advanced
-NAT traversal, Shamir's Secret Sharing for heirs, packaging for
-Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser, institutional
-curation, genealogy integration, physical media export</li>
-</ul>
-<p>Nodes can find each other and keep each other's memories alive. Next, we give
-people a way to hold their memories in their hands.</p>
-
-
-
-
- Phase 1: Nodes Find Each Other
- 2026-02-14T11:00:00+00:00
- 2026-02-14T11:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase1-basic-network/
-
- <p>Tesseras is no longer a local-only tool. Phase 1 delivers the networking layer:
-nodes discover each other through a Kademlia DHT, communicate over QUIC, and
-publish tessera pointers that any peer on the network can find. A tessera
-created on node A is now findable from node C.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-core</strong> (updated) — New network domain types: <code>TesseraPointer</code>
-(lightweight reference to a tessera's holders and fragment locations),
-<code>NodeIdentity</code> (node ID + public key + proof-of-work nonce), <code>NodeInfo</code>
-(identity + address + capabilities), and <code>Capabilities</code> (bitflags for what a
-node supports: DHT, storage, relay, replication).</p>
-<p><strong>tesseras-net</strong> — The transport layer, built on QUIC via quinn. The <code>Transport</code>
-trait defines the port: <code>send</code>, <code>recv</code>, <code>disconnect</code>, <code>local_addr</code>. Two adapters
-implement it:</p>
-<ul>
-<li><code>QuinnTransport</code> — real QUIC with self-signed TLS, ALPN negotiation
-(<code>tesseras/1</code>), connection pooling via DashMap, and a background accept loop
-that handles incoming streams.</li>
-<li><code>MemTransport</code> + <code>SimNetwork</code> — in-memory channels for deterministic testing
-without network I/O. Every integration test in the DHT crate runs against
-this.</li>
-</ul>
-<p>The wire protocol uses length-prefixed MessagePack: a 4-byte big-endian length
-header followed by an rmp-serde payload. <code>WireMessage</code> carries a version byte,
-request ID, and a body that can be a request, response, or protocol-level error.
-Maximum message size is 64 KiB.</p>
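The framing layer is small enough to sketch in full. The payload here is opaque bytes (rmp-serde MessagePack in the real protocol); the 64 KiB cap matches the documented maximum, and the function names are illustrative:

```rust
// 4-byte big-endian length header followed by the payload.
const MAX_MESSAGE: usize = 64 * 1024;

fn frame(payload: &[u8]) -> Option<Vec<u8>> {
    if payload.len() > MAX_MESSAGE {
        return None; // oversized messages are rejected before sending
    }
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    out.extend_from_slice(payload);
    Some(out)
}

fn unframe(buf: &[u8]) -> Option<&[u8]> {
    let header: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(header) as usize;
    if len > MAX_MESSAGE {
        return None; // refuse to allocate for oversized frames
    }
    buf.get(4..4 + len)
}

fn main() {
    let wire = frame(b"hello").unwrap();
    assert_eq!(&wire[..4], &[0, 0, 0, 5]); // length header
    assert_eq!(unframe(&wire), Some(&b"hello"[..]));
}
```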
-<p><strong>tesseras-dht</strong> — A complete Kademlia implementation:</p>
-<ul>
-<li><em>Routing table</em>: 160 k-buckets with k=20. Least-recently-seen eviction,
-move-to-back on update, ping-check before replacing a full bucket's oldest
-entry.</li>
-<li><em>XOR distance</em>: 160-bit XOR metric with bucket indexing by highest differing
-bit.</li>
-<li><em>Proof-of-work</em>: nodes grind a nonce until <code>BLAKE3(pubkey || nonce)[..20]</code> has
-8 leading zero bits (~256 hash attempts on average). Cheap enough for any
-device, expensive enough to make Sybil attacks impractical at scale.</li>
-<li><em>Protocol messages</em>: Ping/Pong, FindNode/FindNodeResponse,
-FindValue/FindValueResult, Store — all serialized with MessagePack via serde.</li>
-<li><em>Pointer store</em>: bounded in-memory store with configurable TTL (24 hours
-default) and max entries (10,000 default). When full, evicts pointers furthest
-from the local node ID, following Kademlia's distance-based responsibility
-model.</li>
-<li><em>DhtEngine</em>: the main orchestrator. Handles incoming RPCs, runs iterative
-lookups (alpha=3 parallelism), bootstrap, publish, and find. The <code>run()</code>
-method drives a <code>tokio::select!</code> loop with maintenance timers: routing table
-refresh every 60 seconds, pointer expiry every 5 minutes.</li>
-</ul>
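The XOR metric and bucket indexing described above fit in a few lines. A std-only sketch; `bucket_index` is an illustrative name, not the crate's API:

```rust
// XOR distance over 160-bit node IDs: the bucket index is the position of
// the highest differing bit, so bucket 159 holds IDs differing in the top
// bit and bucket 0 holds IDs differing only in the lowest bit.
fn bucket_index(a: &[u8; 20], b: &[u8; 20]) -> Option<usize> {
    for (byte_pos, (x, y)) in a.iter().zip(b.iter()).enumerate() {
        let diff = x ^ y;
        if diff != 0 {
            let bit_from_msb = byte_pos * 8 + diff.leading_zeros() as usize;
            return Some(159 - bit_from_msb);
        }
    }
    None // identical IDs have distance zero: no bucket
}

fn main() {
    let origin = [0u8; 20];
    let mut far = [0u8; 20];
    far[0] = 0x80; // differs in the most significant bit
    let mut near = [0u8; 20];
    near[19] = 0x01; // differs only in the least significant bit
    assert_eq!(bucket_index(&origin, &far), Some(159));
    assert_eq!(bucket_index(&origin, &near), Some(0));
    assert_eq!(bucket_index(&origin, &origin), None);
}
```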
-<p><strong>tesd</strong> — A full-node binary. Parses CLI args (bind address, bootstrap peers,
-data directory), generates a PoW-valid node identity, binds a QUIC endpoint,
-bootstraps into the network, and runs the DHT engine. Graceful shutdown on
-Ctrl+C via tokio signal handling.</p>
-<p><strong>Infrastructure</strong> — OpenTofu configuration for two Hetzner Cloud bootstrap
-nodes (cx22 instances in Falkenstein, Germany and Helsinki, Finland). Cloud-init
-provisioning script creates a dedicated <code>tesseras</code> user, writes a config file,
-and sets up a systemd service. Firewall rules open UDP 4433 (QUIC) and restrict
-metrics to internal access.</p>
-<p><strong>Testing</strong> — 139 tests across the workspace:</p>
-<ul>
-<li>47 unit tests in tesseras-dht (routing table, distance, PoW, pointer store,
-message serialization, engine RPCs)</li>
-<li>5 multi-node integration tests (3-node bootstrap, 10-node lookup convergence,
-publish-and-find, node departure detection, PoW rejection)</li>
-<li>14 tests in tesseras-net (codec roundtrips, transport send/recv, backpressure,
-disconnect)</li>
-<li>Docker Compose smoke tests with 3 containerized nodes communicating over real
-QUIC</li>
-<li>Zero clippy warnings, clean formatting</li>
-</ul>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Transport as a port</strong>: the <code>Transport</code> trait is the only interface between
-the DHT engine and the network. Swapping QUIC for any other protocol means
-implementing four methods. All DHT tests use the in-memory adapter, making
-them fast and deterministic.</li>
-<li><strong>One stream per RPC</strong>: each DHT request-response pair uses a fresh
-bidirectional QUIC stream. No multiplexing complexity, no head-of-line
-blocking between independent operations. QUIC handles the multiplexing at the
-connection level.</li>
-<li><strong>MessagePack over Protobuf</strong>: compact binary encoding without code generation
-or schema files. Serde integration means adding a field to a message is a
-one-line change. Trade-off: no built-in schema evolution guarantees, but at
-this stage velocity matters more.</li>
-<li><strong>PoW instead of stake or reputation</strong>: a node identity costs ~256 BLAKE3
-hashes. This runs in under a second on any hardware, including a Raspberry Pi,
-but generating thousands of identities for a Sybil attack becomes expensive.
-No tokens, no blockchain, no external dependencies.</li>
-<li><strong>Iterative lookup with routing table updates</strong>: discovered nodes are added to
-the routing table as they're encountered during iterative lookups, following
-standard Kademlia behavior. This ensures the routing table improves
-organically as nodes interact.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<ul>
-<li><strong>Phase 2: Replication</strong> — Reed-Solomon erasure coding over the network,
-fragment distribution, automatic repair loops, bilateral reciprocity ledger
-(no blockchain, no tokens)</li>
-<li><strong>Phase 3: API and Apps</strong> — Flutter mobile/desktop app via
-flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node</li>
-<li><strong>Phase 4: Resilience and Scale</strong> — ML-DSA post-quantum signatures, advanced
-NAT traversal, Shamir's Secret Sharing for heirs, packaging for
-Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut</li>
-<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser, institutional
-curation, genealogy integration, physical media export</li>
-</ul>
-<p>Nodes can find each other. Next, they learn to keep each other's memories alive.</p>
-
-
-
-
- Phase 0: Foundation Laid
- 2026-02-14T10:00:00+00:00
- 2026-02-14T10:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/phase0-foundation/
-
- <p>The first milestone of the Tesseras project is complete. Phase 0 establishes the
-foundation that every future component will build on: domain types,
-cryptography, storage, and a usable command-line interface.</p>
-<h2 id="what-was-built">What was built</h2>
-<p><strong>tesseras-core</strong> — The domain layer defines the tessera format: <code>ContentHash</code>
-(BLAKE3, 32 bytes), <code>NodeId</code> (Kademlia, 20 bytes), memory types (Moment,
-Reflection, Daily, Relation, Object), visibility modes (Private, Circle, Public,
-PublicAfterDeath, Sealed), and a plain-text manifest format that can be parsed
-by any programming language for the next thousand years. The application service
-layer (<code>TesseraService</code>) handles create, verify, export, and list operations
-through port traits, following hexagonal architecture.</p>
-<p><strong>tesseras-crypto</strong> — Ed25519 key generation, signing, and verification. A
-dual-signature framework (Ed25519 + ML-DSA placeholder) ready for post-quantum
-migration. BLAKE3 content hashing. Reed-Solomon erasure coding behind a feature
-flag for future replication.</p>
-<p><strong>tesseras-storage</strong> — SQLite index via rusqlite with plain-SQL migrations.
-Filesystem blob store with content-addressable layout
-(<code>blobs/<tessera_hash>/<memory_hash>/<filename></code>). Identity key persistence on
-disk.</p>
-<p><strong>tesseras-cli</strong> — A working <code>tesseras</code> binary with five commands:</p>
-<ul>
-<li><code>init</code> — generates Ed25519 identity, creates SQLite database</li>
-<li><code>create <dir></code> — scans a directory for media files, creates a signed tessera</li>
-<li><code>verify <hash></code> — checks signature and file integrity</li>
-<li><code>export <hash> <dest></code> — writes a self-contained tessera directory</li>
-<li><code>list</code> — shows a table of stored tesseras</li>
-</ul>
-<p><strong>Testing</strong> — 67+ tests across the workspace: unit tests in every module,
-property-based tests (proptest) for hex roundtrips and manifest serialization,
-integration tests covering the full create-verify-export cycle including
-tampered file and invalid signature detection. Zero clippy warnings.</p>
-<h2 id="architecture-decisions">Architecture decisions</h2>
-<ul>
-<li><strong>Hexagonal architecture</strong>: crypto operations are injected via trait objects
-(<code>Box<dyn Hasher></code>, <code>Box<dyn ManifestSigner></code>, <code>Box<dyn ManifestVerifier></code>),
-keeping the core crate free of concrete crypto dependencies.</li>
-<li><strong>Feature flags</strong>: the <code>service</code> feature on tesseras-core gates the async
-application layer. The <code>classical</code> and <code>erasure</code> features on tesseras-crypto
-control which algorithms are compiled in.</li>
-<li><strong>Plain-text manifest</strong>: parseable without any binary format library, with
-explicit <code>blake3:</code> hash prefixes and human-readable layout.</li>
-</ul>
-<h2 id="what-comes-next">What comes next</h2>
-<p>Phase 0 is the local-only foundation. The road ahead:</p>
-<ul>
-<li><strong>Phase 1: Networking</strong> — QUIC transport (quinn), Kademlia DHT for peer
-discovery, NAT traversal</li>
-<li><strong>Phase 2: Replication</strong> — Reed-Solomon erasure coding over the network,
-repair loops, bilateral reciprocity (no blockchain, no tokens)</li>
-<li><strong>Phase 3: Clients</strong> — Flutter mobile/desktop app via flutter_rust_bridge,
-GraphQL API, WASM browser node</li>
-<li><strong>Phase 4: Hardening</strong> — ML-DSA post-quantum signatures, packaging for
-Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut</li>
-</ul>
-<p>The tessera format is stable. Everything built from here connects to and extends
-what exists today.</p>
-
-
-
-
- Hello, World
- 2026-02-13T00:00:00+00:00
- 2026-02-13T00:00:00+00:00
-
-
-
-
- Unknown
-
-
-
-
-
- https://tesseras.net/news/hello-world/
-
- <p>Today we're announcing the Tesseras project: a peer-to-peer network for
-preserving human memories across millennia.</p>
-<p>Tesseras is built on a simple idea — your photos, recordings, and writings
-deserve to outlast any company, platform, or file format. Each person creates a
-tessera, a self-contained time capsule that the network keeps alive through
-mutual aid and redundancy.</p>
-<p>The project is in its earliest stage. We're building the foundation: tools to
-create, verify, and export tesseras offline. The network layer, replication, and
-apps will follow.</p>
-<p>If this mission resonates with you, <a href="/subscriptions/">join the mailing list</a> or
-browse the <a rel="external" href="https://git.sr.ht/~ijanc/tesseras">source code</a>.</p>
-
-
-
-
diff --git a/news/atom.xml.gz b/news/atom.xml.gz
deleted file mode 100644
index e04e2c9..0000000
Binary files a/news/atom.xml.gz and /dev/null differ
diff --git a/news/cli-daemon-rpc/index.html b/news/cli-daemon-rpc/index.html
deleted file mode 100644
index a1ca0c1..0000000
--- a/news/cli-daemon-rpc/index.html
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
-
CLI Meets Network: Publish, Fetch, and Status Commands
-
2026-02-15
-
Until now the CLI operated in isolation: create a tessera, verify it, export it,
-list what you have. Everything stayed on your machine. With this release, tes
-gains three commands that bridge the gap between local storage and the P2P
-network — publish, fetch, and status — by talking to a running tesd over
-a Unix socket.
-
What was built
-
tesseras-rpc crate — A new shared crate that both the CLI and daemon
-depend on. It defines the RPC protocol using MessagePack serialization with
-length-prefixed framing (4-byte big-endian size header, 64 MiB max). Three
-request types (Publish, Fetch, Status) and their corresponding responses.
-A sync DaemonClient handles the Unix socket connection with configurable
-timeouts. The protocol is deliberately simple — one request, one response,
-connection closed — to keep the implementation auditable.
-
tes publish <hash> — Publishes a tessera to the network. Accepts full
-hashes or short prefixes (e.g., tes publish a1b2), which are resolved against
-the local database. The daemon reads all tessera files from storage, packs them
-into a single MessagePack buffer, and hands them to the replication engine.
-Small tesseras (< 4 MB) are replicated as a single fragment; larger ones go
-through Reed-Solomon erasure coding. Output shows the short hash and fragment
-count:
-
Published tessera 9f2c4a1b (24 fragments created)
-Distribution in progress — use `tes status 9f2c4a1b` to track.
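Short-prefix resolution boils down to requiring a unique match among locally known hashes. An illustrative sketch (the real CLI resolves against its SQLite database before contacting the daemon; `resolve_prefix` is an assumed name):

```rust
// A prefix must match exactly one locally known hash.
fn resolve_prefix<'a>(prefix: &str, known_hashes: &'a [String]) -> Result<&'a str, String> {
    let matches: Vec<&'a String> = known_hashes
        .iter()
        .filter(|h| h.starts_with(prefix))
        .collect();
    match matches.as_slice() {
        [unique] => Ok(unique.as_str()),
        [] => Err(format!("no tessera matches '{prefix}'")),
        _ => Err(format!("prefix '{prefix}' is ambiguous")),
    }
}

fn main() {
    let known = vec![
        "9f2c4a1b77".to_string(),
        "a1b2c3d4e5".to_string(),
        "a1ff000011".to_string(),
    ];
    assert_eq!(resolve_prefix("9f2c", &known), Ok("9f2c4a1b77"));
    assert!(resolve_prefix("a1", &known).is_err()); // ambiguous
    assert!(resolve_prefix("ffff", &known).is_err()); // no match
}
```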
-
-
tes fetch <hash> — Retrieves a tessera from the network using its full
-content hash. The daemon collects locally available fragments, reconstructs the
-original data via erasure decoding if needed, unpacks the files, and stores them
-in the content-addressable store. Returns the number of memories and total size
-fetched.
-
tes status <hash> — Displays the replication health of a tessera. The
-output maps directly to the replication engine's internal health model:
-
State        Meaning
Local        Not yet published — exists only on your machine
Publishing   Fragments being distributed, critical redundancy
Replicated   Distributed but below target redundancy
Healthy      Full redundancy achieved
-
-
Daemon RPC listener — The daemon now binds a Unix socket (default:
-$XDG_RUNTIME_DIR/tesseras/daemon.sock) with proper directory permissions
-(0700), stale socket cleanup, and graceful shutdown. Each connection is handled
-in a Tokio task — the listener converts the async stream to sync I/O for the
-framing layer, dispatches to the RPC handler, and writes the response back.
-
Pack/unpack in tesseras-core — A small module that serializes a list of
-file entries (path + data) into a single MessagePack buffer and back. This is
-the bridge between the tessera's directory structure and the replication
-engine's opaque byte blobs.
-
Architecture decisions
-
-
Unix socket over TCP: RPC between CLI and daemon happens on the same
-machine. Unix sockets are faster, don't need port allocation, and filesystem
-permissions provide access control without TLS.
-
MessagePack over JSON: the same wire format used everywhere else in
-Tesseras. Compact, schema-less, and already a workspace dependency. A typical
-publish request/response round-trip is under 200 bytes.
-
Sync client, async daemon: the DaemonClient uses blocking I/O because
-the CLI doesn't need concurrency — it sends one request and waits. The daemon
-listener is async (Tokio) to handle multiple connections. The framing layer
-works with any Read/Write impl, bridging both worlds.
-
Hash prefix resolution on the client side: publish and status resolve
-short prefixes locally before sending the full hash to the daemon. This keeps
-the daemon stateless — it doesn't need access to the CLI's database.
-
Default data directory alignment: the CLI default changed from
-~/.tesseras to ~/.local/share/tesseras (via dirs::data_dir()) to match
-the daemon. A migration hint is printed when legacy data is detected.
-
-
What comes next
-
-
DHT peer count: the status command currently reports 0 peers — wiring
-the actual peer count from the DHT is the next step
-
tes show: display the contents of a tessera (memories, metadata) without
-exporting
-
Streaming fetch: for large tesseras, stream fragments as they arrive
-rather than waiting for all of them
Libraries, archives, and museums can now join the Tesseras network as verified institutional nodes with DNS-based identity, full-text search indexes, and configurable storage pledges.
SQLite WAL mode with centralized pragma configuration, LRU fragment caching, QUIC connection pool lifecycle management, and attestation hot path optimization.
Tesseras nodes can now discover their NAT type via STUN, coordinate UDP hole punching through introducers, and fall back to transparent relay forwarding when direct connectivity fails.
The tesseras CLI can now publish tesseras to the network, fetch them from peers, and monitor replication status — all through a new Unix socket RPC bridge to the daemon.
Tesseras now lets you split your cryptographic identity into shares distributed to trusted heirs — any threshold of them can reconstruct your keys, but fewer reveal nothing.
Tesseras now supports private and sealed memories with hybrid post-quantum encryption — AES-256-GCM, X25519 + ML-KEM-768, and time-lock key publication.
Tesseras now fragments, distributes, and automatically repairs data across the network using Reed-Solomon erasure coding and a bilateral reciprocity ledger.
The package creates a tesseras system user and group automatically via systemd-sysusers. To use the CLI without sudo, add yourself to the group:

sudo usermod -aG tesseras $USER

Log out and back in, then start the daemon:

sudo systemctl enable --now tesd

What the package includes

Path                                    Description
/usr/bin/tesd                           Full node daemon
/usr/bin/tes                            CLI client
/etc/tesseras/config.toml               Default configuration (marked as backup)
/usr/lib/systemd/system/tesd.service    Systemd unit with security hardening
/usr/lib/sysusers.d/tesseras.conf       System user definition
/usr/lib/tmpfiles.d/tesseras.conf       Data directory /var/lib/tesseras
Shell completions                       bash, zsh, and fish

PKGBUILD details

The PKGBUILD builds directly from the local git checkout rather than downloading a source tarball. The TESSERAS_ROOT environment variable points makepkg to the workspace root. Cargo's target directory is set to $srcdir/target to keep build artifacts inside the makepkg sandbox.

The package depends only on sqlite at runtime and cargo at build time.

Updating

After pulling new changes, run just arch again and reinstall the package.
The postinst script automatically creates a tesseras system user and the data directory /var/lib/tesseras. To use the CLI without sudo, add yourself to the group:

sudo usermod -aG tesseras $USER

Log out and back in, then start the daemon:

sudo systemctl enable --now tesd

What the package includes

Path                                Description
/usr/bin/tesd                       Full node daemon
/usr/bin/tes                        CLI client
/etc/tesseras/config.toml           Default configuration (marked as conffile)
/lib/systemd/system/tesd.service    Systemd unit with security hardening
Shell completions                   bash, zsh, and fish

How cargo-deb works

The packaging metadata lives in crates/tesseras-daemon/Cargo.toml under [package.metadata.deb]. This section defines:

depends — runtime dependencies: libc6 and libsqlite3-0

assets — files to include in the package (binaries, config, systemd unit, shell completions)

conf-files — files treated as configuration (preserved on upgrade)

maintainer-scripts — postinst and postrm scripts in packaging/debian/scripts/

systemd-units — automatic systemd integration

The postinst script creates the tesseras system user and data directory on install. The postrm script cleans up the user, group, and data directory only on purge (not on simple removal).

Systemd hardening

The tesd.service unit ships with security hardening directives.
The first milestone of the Tesseras project is complete. Phase 0 establishes the foundation that every future component will build on: domain types, cryptography, storage, and a usable command-line interface.

What was built

tesseras-core — The domain layer defines the tessera format: ContentHash (BLAKE3, 32 bytes), NodeId (Kademlia, 20 bytes), memory types (Moment, Reflection, Daily, Relation, Object), visibility modes (Private, Circle, Public, PublicAfterDeath, Sealed), and a plain-text manifest format that can be parsed by any programming language for the next thousand years. The application service layer (TesseraService) handles create, verify, export, and list operations through port traits, following hexagonal architecture.
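A minimal sketch of what parsing one hash field of such a plain-text manifest could look like. The blake3: prefix and hex encoding are from the post; the function name and exact line layout are assumptions:

```rust
/// Parse a manifest hash field of the form "blake3:<64 hex chars>" into
/// 32 raw bytes. Hypothetical helper; the real manifest layout may differ.
fn parse_content_hash(field: &str) -> Option<[u8; 32]> {
    let hex = field.strip_prefix("blake3:")?;
    if hex.len() != 64 {
        return None;
    }
    let mut out = [0u8; 32];
    for (i, byte) in out.iter_mut().enumerate() {
        *byte = u8::from_str_radix(&hex[2 * i..2 * i + 2], 16).ok()?;
    }
    Some(out)
}

fn main() {
    let hash = parse_content_hash(&format!("blake3:{}", "ab".repeat(32))).unwrap();
    assert_eq!(hash, [0xab; 32]);
    // An unknown prefix is rejected rather than guessed at.
    assert!(parse_content_hash("sha256:deadbeef").is_none());
    println!("parsed ok");
}
```

The explicit algorithm prefix is what keeps the format self-describing: a future parser can reject hashes it does not recognize instead of misinterpreting them.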
tesseras-crypto — Ed25519 key generation, signing, and verification. A dual-signature framework (Ed25519 + ML-DSA placeholder) ready for post-quantum migration. BLAKE3 content hashing. Reed-Solomon erasure coding behind a feature flag for future replication.

tesseras-storage — SQLite index via rusqlite with plain-SQL migrations. Filesystem blob store with content-addressable layout (blobs/<tessera_hash>/<memory_hash>/<filename>). Identity key persistence on disk.

tesseras-cli — A working tesseras binary with the following commands:

create <dir> — scans a directory for media files, creates a signed tessera

verify <hash> — checks signature and file integrity

export <hash> <dest> — writes a self-contained tessera directory

list — shows a table of stored tesseras

Testing — 67+ tests across the workspace: unit tests in every module, property-based tests (proptest) for hex roundtrips and manifest serialization, integration tests covering the full create-verify-export cycle, including tampered-file and invalid-signature detection. Zero clippy warnings.

Architecture decisions

Hexagonal architecture: crypto operations are injected via trait objects (Box<dyn Hasher>, Box<dyn ManifestSigner>, Box<dyn ManifestVerifier>), keeping the core crate free of concrete crypto dependencies.

Feature flags: the service feature on tesseras-core gates the async application layer. The classical and erasure features on tesseras-crypto control which algorithms are compiled in.

Plain-text manifest: parseable without any binary format library, with explicit blake3: hash prefixes and a human-readable layout.

What comes next

Phase 0 is the local-only foundation. The road ahead:

Phase 1: Networking — QUIC transport (quinn), Kademlia DHT for peer discovery, NAT traversal

Phase 2: Replication — Reed-Solomon erasure coding over the network, repair loops, bilateral reciprocity (no blockchain, no tokens)
Tesseras is no longer a local-only tool. Phase 1 delivers the networking layer: nodes discover each other through a Kademlia DHT, communicate over QUIC, and publish tessera pointers that any peer on the network can find. A tessera created on node A is now findable from node C.

What was built

tesseras-core (updated) — New network domain types: TesseraPointer (lightweight reference to a tessera's holders and fragment locations), NodeIdentity (node ID + public key + proof-of-work nonce), NodeInfo (identity + address + capabilities), and Capabilities (bitflags for what a node supports: DHT, storage, relay, replication).

tesseras-net — The transport layer, built on QUIC via quinn. The Transport trait defines the port: send, recv, disconnect, local_addr. Two adapters implement it:

QuinnTransport — real QUIC with self-signed TLS, ALPN negotiation (tesseras/1), connection pooling via DashMap, and a background accept loop that handles incoming streams.

MemTransport + SimNetwork — in-memory channels for deterministic testing without network I/O. Every integration test in the DHT crate runs against this.

The wire protocol uses length-prefixed MessagePack: a 4-byte big-endian length header followed by an rmp-serde payload. WireMessage carries a version byte, request ID, and a body that can be a request, response, or protocol-level error. Maximum message size is 64 KiB.
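The framing described above can be sketched with nothing but the standard library. Here the payload is an opaque byte slice standing in for the rmp-serde-encoded WireMessage; the function names are illustrative, not the crate's actual API:

```rust
const MAX_MESSAGE_SIZE: usize = 64 * 1024; // 64 KiB, per the wire protocol

/// Prepend a 4-byte big-endian length header to a payload.
fn encode_frame(payload: &[u8]) -> Option<Vec<u8>> {
    if payload.len() > MAX_MESSAGE_SIZE {
        return None; // oversized messages are rejected before they hit the wire
    }
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    frame.extend_from_slice(payload);
    Some(frame)
}

/// Split one frame off the front of a buffer, if a complete frame is present.
/// Returns (message body, remaining bytes).
fn decode_frame(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let header: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(header) as usize;
    if len > MAX_MESSAGE_SIZE {
        return None;
    }
    let body = buf.get(4..4 + len)?;
    Some((body, &buf[4 + len..]))
}

fn main() {
    let frame = encode_frame(b"hello").unwrap();
    let (body, rest) = decode_frame(&frame).unwrap();
    assert_eq!(body, &b"hello"[..]);
    assert!(rest.is_empty());
    println!("roundtrip ok");
}
```

The size cap matters on the decode side too: a peer that lies in the length header cannot force an arbitrary allocation.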
tesseras-dht — A complete Kademlia implementation:

Routing table: 160 k-buckets with k=20. Least-recently-seen eviction, move-to-back on update, ping-check before replacing a full bucket's oldest entry.

XOR distance: 160-bit XOR metric with bucket indexing by highest differing bit.

Proof-of-work: nodes grind a nonce until BLAKE3(pubkey || nonce)[..20] has 8 leading zero bits (~256 hash attempts on average). Cheap enough for any device, expensive enough to make Sybil attacks impractical at scale.

Protocol messages: Ping/Pong, FindNode/FindNodeResponse, FindValue/FindValueResult, Store — all serialized with MessagePack via serde.

Pointer store: bounded in-memory store with configurable TTL (24 hours default) and max entries (10,000 default). When full, evicts pointers furthest from the local node ID, following Kademlia's distance-based responsibility model.

DhtEngine: the main orchestrator. Handles incoming RPCs, runs iterative lookups (alpha=3 parallelism), bootstrap, publish, and find. The run() method drives a tokio::select! loop with maintenance timers: routing table refresh every 60 seconds, pointer expiry every 5 minutes.
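Two of the primitives above, bucket indexing by highest differing bit and the leading-zero-bits PoW check, fit in a few lines of std-only Rust. This is a sketch: the 20-byte ID and 8-bit difficulty are from the post, but the indexing convention and function names are assumptions:

```rust
/// 160-bit node IDs, as in the post's Kademlia layer.
type NodeId = [u8; 20];

/// Bucket index from the position of the highest differing bit between two
/// IDs, with bucket 159 being the most distant. None for identical IDs.
fn bucket_index(a: &NodeId, b: &NodeId) -> Option<usize> {
    for (i, (x, y)) in a.iter().zip(b.iter()).enumerate() {
        let diff = x ^ y;
        if diff != 0 {
            let bit_from_msb = i * 8 + diff.leading_zeros() as usize;
            return Some(159 - bit_from_msb);
        }
    }
    None
}

/// PoW validity: the candidate ID must start with at least `difficulty`
/// zero bits (8 in the post). The BLAKE3 grinding itself is elided here.
fn pow_valid(id: &NodeId, difficulty: u32) -> bool {
    let mut zeros = 0u32;
    for byte in id {
        if *byte == 0 {
            zeros += 8;
        } else {
            zeros += byte.leading_zeros();
            break;
        }
    }
    zeros >= difficulty
}

fn main() {
    let a = [0u8; 20];
    let mut b = [0u8; 20];
    b[0] = 0b1000_0000; // differs in the very first bit: most distant bucket
    assert_eq!(bucket_index(&a, &b), Some(159));
    assert_eq!(bucket_index(&a, &a), None);
    assert!(pow_valid(&[0u8; 20], 8));
    assert!(!pow_valid(&b, 8));
    println!("ok");
}
```

With 8 required zero bits, a uniformly random 20-byte digest passes with probability 1/256, which matches the post's ~256 expected hash attempts.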
tesd — A full-node binary. Parses CLI args (bind address, bootstrap peers, data directory), generates a PoW-valid node identity, binds a QUIC endpoint, bootstraps into the network, and runs the DHT engine. Graceful shutdown on Ctrl+C via tokio signal handling.

Infrastructure — OpenTofu configuration for two Hetzner Cloud bootstrap nodes (cx22 instances in Falkenstein, Germany and Helsinki, Finland). A cloud-init provisioning script creates a dedicated tesseras user, writes a config file, and sets up a systemd service. Firewall rules open UDP 4433 (QUIC) and restrict metrics to internal access.

Testing — 139 tests across the workspace:

47 unit tests in tesseras-dht (routing table, distance, PoW, pointer store, message serialization, engine RPCs)

14 tests in tesseras-net (codec roundtrips, transport send/recv, backpressure, disconnect)

Docker Compose smoke tests with 3 containerized nodes communicating over real QUIC

Zero clippy warnings, clean formatting

Architecture decisions

Transport as a port: the Transport trait is the only interface between the DHT engine and the network. Swapping QUIC for any other protocol means implementing four methods. All DHT tests use the in-memory adapter, making them fast and deterministic.

One stream per RPC: each DHT request-response pair uses a fresh bidirectional QUIC stream. No multiplexing complexity, no head-of-line blocking between independent operations. QUIC handles the multiplexing at the connection level.

MessagePack over Protobuf: compact binary encoding without code generation or schema files. Serde integration means adding a field to a message is a one-line change. Trade-off: no built-in schema evolution guarantees, but at this stage velocity matters more.

PoW instead of stake or reputation: a node identity costs ~256 BLAKE3 hashes. This runs in under a second on any hardware, including a Raspberry Pi, but generating thousands of identities for a Sybil attack becomes expensive. No tokens, no blockchain, no external dependencies.

Iterative lookup with routing table updates: discovered nodes are added to the routing table as they're encountered during iterative lookups, following standard Kademlia behavior. This ensures the routing table improves organically as nodes interact.

What comes next

Phase 2: Replication — Reed-Solomon erasure coding over the network, fragment distribution, automatic repair loops, bilateral reciprocity ledger (no blockchain, no tokens)

Phase 3: API and Apps — Flutter mobile/desktop app via flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node

Phase 4: Resilience and Scale — ML-DSA post-quantum signatures, advanced NAT traversal, Shamir's Secret Sharing for heirs, packaging for Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut

Phase 5: Exploration and Culture — public tessera browser, institutional curation, genealogy integration, physical media export

Nodes can find each other. Next, they learn to keep each other's memories alive.
A tessera is no longer tied to a single machine. Phase 2 delivers the replication layer: data is split into erasure-coded fragments, distributed across multiple peers, and automatically repaired when nodes go offline. A bilateral reciprocity ledger ensures fair storage exchange — no blockchain, no tokens.

What was built

tesseras-core (updated) — New replication domain types: FragmentPlan (selects fragmentation tier based on tessera size), FragmentId (tessera hash + index + shard count + checksum), FragmentEnvelope (fragment with its metadata for wire transport), FragmentationTier (Small/Medium/Large), Attestation (proof that a node holds a fragment at a given time), and ReplicateAck (acknowledgement of fragment receipt). Three new port traits define the hexagonal boundaries: DhtPort (find peers, replicate fragments, request attestations, ping), FragmentStore (store/read/delete/list/verify fragments), and ReciprocityLedger (record storage exchanges, query balances, find best peers). Maximum tessera size is 1 GB.

tesseras-crypto (updated) — The existing ReedSolomonCoder now powers fragment encoding. Data is split into data shards, parity shards are computed, and any combination of shards — data or parity — can reconstruct the original, as long as the number of missing shards does not exceed the parity count.
tesseras-storage (updated) — Two new adapters:

FsFragmentStore — stores fragment data as files on disk ({root}/{tessera_hash}/{index:03}.shard) with a SQLite metadata index tracking tessera hash, shard index, shard count, checksum, and byte size. Verification recomputes the BLAKE3 hash and compares it to the stored checksum.

SqliteReciprocityLedger — bilateral storage accounting in SQLite. Each peer has a row tracking bytes stored for them and bytes they store for us. The balance column is a generated column (bytes_they_store_for_us - bytes_stored_for_them). UPSERT ensures atomic increment of counters.
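The bilateral accounting above reduces to two counters per peer. A minimal in-memory sketch — the real adapter keeps this in SQLite with a generated balance column; the names here are illustrative, not the crate's API:

```rust
use std::collections::HashMap;

/// Per-peer bilateral storage counters, mirroring the SQLite row described
/// in the post.
#[derive(Default)]
struct PeerBalance {
    bytes_stored_for_them: i64,
    bytes_they_store_for_us: i64,
}

impl PeerBalance {
    /// Equivalent of the generated balance column.
    fn balance(&self) -> i64 {
        self.bytes_they_store_for_us - self.bytes_stored_for_them
    }
}

#[derive(Default)]
struct Ledger {
    peers: HashMap<String, PeerBalance>,
}

impl Ledger {
    fn record_stored_for_them(&mut self, peer: &str, bytes: i64) {
        self.peers.entry(peer.to_string()).or_default().bytes_stored_for_them += bytes;
    }
    fn record_they_store_for_us(&mut self, peer: &str, bytes: i64) {
        self.peers.entry(peer.to_string()).or_default().bytes_they_store_for_us += bytes;
    }
    fn balance(&self, peer: &str) -> i64 {
        self.peers.get(peer).map_or(0, PeerBalance::balance)
    }
}

fn main() {
    let mut ledger = Ledger::default();
    ledger.record_stored_for_them("peer-a", 500);
    ledger.record_they_store_for_us("peer-a", 200);
    // Negative balance: peer-a stores less for us than we store for them.
    assert_eq!(ledger.balance("peer-a"), -300);
    println!("balance ok");
}
```

Because each node only tracks its own pairwise view, no consensus or global state is needed; a disagreement with one peer affects only that relationship.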
A new migration (002_replication.sql) adds tables for fragments, fragment plans, holders, holder-fragment mappings, and reciprocity balances.

tesseras-dht (updated) — Four new message variants: Replicate (send a fragment envelope), ReplicateAck (confirm receipt), AttestRequest (ask a node to prove it holds a tessera's fragments), and AttestResponse (return attestation with checksums and timestamp). The engine handles these in its message dispatch loop.

tesseras-replication — The new crate, with five modules:

Fragment encoding (fragment.rs): encode_tessera() selects the fragmentation tier based on size, then calls Reed-Solomon encoding for the Medium and Large tiers. Three tiers:

Small (< 4 MB): whole-file replication to r=7 peers, no erasure coding

Medium (4–256 MB): 16 data + 8 parity shards, distributed across r=7 peers

Large (≥ 256 MB): 48 data + 24 parity shards, distributed across r=7 peers
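The tier selection above is a pure function of size. A sketch using the thresholds and shard counts from the post; the enum and function names are assumptions:

```rust
/// The three fragmentation tiers from the post, selected purely by size.
#[derive(Debug, PartialEq)]
enum FragmentationTier {
    Small,  // whole-file replication, no erasure coding
    Medium, // 16 data + 8 parity shards
    Large,  // 48 data + 24 parity shards
}

const MB: u64 = 1024 * 1024;

fn select_tier(size_bytes: u64) -> FragmentationTier {
    match size_bytes {
        s if s < 4 * MB => FragmentationTier::Small,
        s if s < 256 * MB => FragmentationTier::Medium,
        _ => FragmentationTier::Large,
    }
}

/// (data shards, parity shards) for a tier; Small replicates whole files.
fn shard_counts(tier: &FragmentationTier) -> Option<(usize, usize)> {
    match tier {
        FragmentationTier::Small => None,
        FragmentationTier::Medium => Some((16, 8)),
        FragmentationTier::Large => Some((48, 24)),
    }
}

fn main() {
    assert_eq!(select_tier(MB), FragmentationTier::Small);
    assert_eq!(select_tier(100 * MB), FragmentationTier::Medium);
    assert_eq!(select_tier(512 * MB), FragmentationTier::Large);
    // A Medium tessera survives the loss of up to 8 of its 24 shards.
    assert_eq!(shard_counts(&FragmentationTier::Medium), Some((16, 8)));
    println!("tiers ok");
}
```

Both erasure tiers keep a 2:1 data-to-parity ratio (33% of shards can be lost), so larger tesseras get more, smaller shards rather than more redundancy.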
Distribution (distributor.rs): subnet diversity filtering limits peers per /24 IPv4 subnet (or /48 IPv6 prefix) to avoid correlated failures. If all your fragments land on the same rack, a single power outage kills them all.

Service (service.rs): ReplicationService is the orchestrator. replicate_tessera() encodes the data, finds the closest peers via DHT, applies subnet diversity, and distributes fragments round-robin. receive_fragment() validates the BLAKE3 checksum, checks the reciprocity balance (rejects if the sender's deficit exceeds the configured threshold), stores the fragment, and updates the ledger. handle_attestation_request() lists local fragments and computes their checksums as proof of possession.

Repair (repair.rs): check_tessera_health() requests attestations from known holders, falls back to ping for unresponsive nodes, verifies local fragment integrity, and returns one of three actions: Healthy, NeedsReplication { deficit }, or CorruptLocal { fragment_index }. The repair loop runs every 24 hours (with 2-hour jitter) via tokio::select! with shutdown integration.

Configuration (config.rs): ReplicationConfig with defaults for repair interval (24h), jitter (2h), concurrent transfers (4), minimum free space (1 GB), deficit allowance (256 MB), and per-peer storage limit (1 GB).
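The subnet diversity filter described above can be sketched with std::net alone. The /24 grouping is from the post; the function shape is an assumption, and IPv6 /48 handling is elided to keep the sketch short:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};

/// Group key for subnet diversity: the /24 prefix for IPv4.
fn subnet_key(addr: &IpAddr) -> Option<[u8; 3]> {
    match addr {
        IpAddr::V4(v4) => {
            let o = v4.octets();
            Some([o[0], o[1], o[2]])
        }
        IpAddr::V6(_) => None, // the post groups IPv6 by /48; elided here
    }
}

/// Keep at most `max_per_subnet` peers from any /24, preserving order.
fn apply_subnet_diversity(peers: &[IpAddr], max_per_subnet: usize) -> Vec<IpAddr> {
    let mut counts: HashMap<[u8; 3], usize> = HashMap::new();
    peers
        .iter()
        .filter(|addr| match subnet_key(addr) {
            Some(key) => {
                let n = counts.entry(key).or_insert(0);
                *n += 1;
                *n <= max_per_subnet
            }
            None => true,
        })
        .copied()
        .collect()
}

fn main() {
    let peers = vec![
        IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
        IpAddr::V4(Ipv4Addr::new(10, 0, 0, 2)), // same /24 as the first
        IpAddr::V4(Ipv4Addr::new(10, 0, 1, 1)), // different /24
    ];
    let diverse = apply_subnet_diversity(&peers, 1);
    assert_eq!(diverse.len(), 2);
    println!("diversity ok");
}
```

Preserving the input order matters: the DHT already returns peers sorted by distance, and the filter should thin that list, not reshuffle it.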
tesd (updated) — The daemon now opens a SQLite database (db/tesseras.db), runs migrations, creates FsFragmentStore, SqliteReciprocityLedger, and FsBlobStore instances, wraps the DHT engine in a DhtPortAdapter, builds a ReplicationService, and spawns the repair loop as a background task with graceful shutdown.

Testing — 193 tests across the workspace:

15 unit tests in tesseras-replication (fragment encoding tiers, checksum validation, subnet diversity, repair health checks, service receive/replicate flows)

3 integration tests with real storage (full encode→distribute→receive cycle for a medium tessera, small whole-file replication, tampered fragment rejection)

Tests use in-memory SQLite + tempdir fragments with mockall mocks for DHT and BlobStore

Zero clippy warnings, clean formatting

Architecture decisions

Three-tier fragmentation: small files don't need erasure coding — the overhead isn't worth it. Medium and large files get progressively more parity shards. This avoids wasting storage on small tesseras while providing strong redundancy for large ones.

Owner-push distribution: the tessera owner encodes fragments and pushes them to peers, rather than peers pulling. This simplifies the protocol (no negotiation phase) and ensures fragments are distributed immediately.

Bilateral reciprocity without consensus: each node tracks its own balance with each peer locally. No global ledger, no token, no blockchain. If peer A stores 500 MB for peer B, peer B should store roughly 500 MB for peer A. Free riders lose redundancy gradually — their fragments are deprioritized for repair, but never deleted.

Subnet diversity: fragments are spread across different network subnets to survive correlated failures. A datacenter outage shouldn't take out all copies of a tessera.

Attestation-first health checks: the repair loop asks holders to prove possession (attestation with checksums) before declaring a tessera degraded. Only when attestation fails does it fall back to a simple ping. This catches silent data corruption, not just node departure.

What comes next

Phase 3: API and Apps — Flutter mobile/desktop app via flutter_rust_bridge, GraphQL API (async-graphql), WASM browser node

Phase 4: Resilience and Scale — ML-DSA post-quantum signatures, advanced NAT traversal, Shamir's Secret Sharing for heirs, packaging for Alpine/Arch/Debian/FreeBSD/OpenBSD, CI on SourceHut

Phase 5: Exploration and Culture — public tessera browser, institutional curation, genealogy integration, physical media export

Nodes can find each other and keep each other's memories alive. Next, we give people a way to hold their memories in their hands.
People can now hold their memories in their hands. Phase 3 delivers what the previous phases built toward: a mobile app where someone downloads Tesseras, creates an identity, takes a photo, and that memory enters the preservation network. No cloud accounts, no subscriptions, no company between you and your memories.

What was built

tesseras-embedded — A full P2P node that runs inside a mobile app. The EmbeddedNode struct owns a Tokio runtime, SQLite database, QUIC transport, Kademlia DHT engine, replication service, and tessera service — the same stack as the desktop daemon, compiled into a shared library. A global singleton pattern (Mutex<Option<EmbeddedNode>>) ensures one node per app lifecycle. On start, it opens the database, runs migrations, loads or generates an Ed25519 identity with a proof-of-work node ID, binds QUIC on an ephemeral port, wires up DHT and replication, and spawns the repair loop. On stop, it sends a shutdown signal and drains gracefully.

Eleven FFI functions are exposed to Dart via flutter_rust_bridge: lifecycle (node_start, node_stop, node_is_running), identity (create_identity, get_identity), memories (create_memory, get_timeline, get_memory), and network status (get_network_stats, get_replication_status). All types crossing the FFI boundary are flat structs with only String, Option<String>, Vec<String>, and primitives — no trait objects, no generics, no lifetimes.
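A sketch of what such a flat FFI type might look like. The field names are assumptions, not the crate's actual structs; the point is that only owned strings, options, vectors, and primitives cross the boundary:

```rust
/// Hypothetical FFI-facing memory summary. Every field is an owned String,
/// Option<String>, Vec<String>, or primitive, so the generated Dart
/// bindings stay trivial: no lifetimes, generics, or trait objects.
pub struct FfiMemory {
    pub content_hash: String,    // hex-encoded BLAKE3 hash
    pub memory_type: String,     // e.g. "moment", "reflection"
    pub visibility: String,      // e.g. "private", "public"
    pub context: Option<String>, // optional free-text context
    pub tags: Vec<String>,
    pub created_at_unix: i64,
}

fn main() {
    let m = FfiMemory {
        content_hash: "ab".repeat(32),
        memory_type: "moment".to_string(),
        visibility: "private".to_string(),
        context: None,
        tags: vec!["family".to_string()],
        created_at_unix: 1_700_000_000,
    };
    assert_eq!(m.content_hash.len(), 64);
    println!("flat struct ok");
}
```

Keeping hashes and enums as strings at the boundary trades a little type safety for bindings that never surprise the code generator.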
Four adapter modules bridge core ports to concrete implementations: Blake3HasherAdapter, Ed25519SignerAdapter/Ed25519VerifierAdapter for cryptography, DhtPortAdapter for DHT operations, and ReplicationHandlerAdapter for incoming fragment and attestation RPCs.

The bundled-sqlite feature flag compiles SQLite from source, required for Android and iOS where the system library may not be available. Cargokit configuration passes this flag automatically in both debug and release builds.

Flutter app — A Material Design 3 application with Riverpod state management, targeting Android, iOS, Linux, macOS, and Windows from a single codebase.

The onboarding flow is three screens: a welcome screen explaining the project in one sentence ("Preserve your memories across millennia. No cloud. No company."), an identity creation screen that triggers Ed25519 keypair generation in Rust, and a confirmation screen showing the user's name and cryptographic identity.

The timeline screen displays memories in reverse chronological order with image previews, context text, and chips for memory type and visibility. Pull-to-refresh reloads from the Rust node. A floating action button opens the memory creation screen, which supports photo selection from gallery or camera via image_picker, optional context text, memory type and visibility dropdowns, and comma-separated tags. Creating a memory calls the Rust FFI synchronously, then returns to the timeline.

The network screen shows two cards: node status (peer count, DHT size, bootstrap state, uptime) and replication health (total fragments, healthy fragments, repairing fragments, replication factor). The settings screen displays the user's identity — name, truncated node ID, truncated public key, and creation date.

Three Riverpod providers manage state: nodeProvider starts the embedded node on app launch using the app documents directory and stops it on dispose; identityProvider loads the existing profile or creates a new one; timelineProvider fetches the memory list with pagination.

Testing — 9 Rust unit tests in tesseras-embedded covering node lifecycle (start/stop without panic), identity persistence across restarts, restart cycles without SQLite corruption, network event streaming, stats retrieval, memory creation and timeline retrieval, and single memory lookup by hash. 2 Flutter tests: an integration test verifying Rust initialization and app startup, and a widget smoke test.

Architecture decisions

Embedded node, not client-server: the phone runs the full P2P stack, not a thin client talking to a remote daemon. This means memories are preserved even without internet. Users with a Raspberry Pi or VPS can optionally connect the app to their daemon via GraphQL for higher availability, but it's not required.

Synchronous FFI: all flutter_rust_bridge functions are marked #[frb(sync)] and block on the internal Tokio runtime. This simplifies the Dart side (no async bridge complexity) while the Rust side handles concurrency internally. Flutter's UI thread stays responsive because Riverpod wraps calls in async providers.

Global singleton: a Mutex<Option<EmbeddedNode>> global ensures the node lifecycle is predictable — one start, one stop, no races. Mobile platforms kill processes aggressively, so simplicity in lifecycle management is a feature.

Flat FFI types: no Rust abstractions leak across the FFI boundary. Every type is a plain struct with strings and numbers. This makes the auto-generated Dart bindings reliable and easy to debug.

Three-screen onboarding: identity creation is the only required step. No email, no password, no server registration. The app generates a cryptographic identity locally and is ready to use.

What comes next

Phase 4: Resilience and Scale — advanced NAT traversal (STUN/TURN), Shamir's Secret Sharing for heirs, sealed tesseras with time-lock encryption, performance tuning, security audits, OS packaging for Alpine/Arch/Debian/FreeBSD/OpenBSD

Phase 5: Exploration and Culture — public tessera browser by era/location/theme/language, institutional curation, genealogy integration, physical media export (M-DISC, microfilm, acid-free paper with QR)

The infrastructure is complete. The network exists, replication works, and now anyone with a phone can participate. What remains is hardening what we have and opening it to the world.
Some memories are not meant for everyone. A private journal, a letter to be opened in 2050, a family secret sealed until the grandchildren are old enough. Until now, every tessera on the network was open. Phase 4 changes that: Tesseras now encrypts private and sealed content with a hybrid cryptographic scheme designed to resist both classical and quantum attacks.

The principle remains the same — encrypt as little as possible. Public memories need availability, not secrecy. But when someone creates a private or sealed tessera, the content is now locked behind AES-256-GCM encryption with keys protected by a hybrid key encapsulation mechanism combining X25519 and ML-KEM-768. Both algorithms must be broken to access the content.

What was built

AES-256-GCM encryptor (tesseras-crypto/src/encryption.rs) — Symmetric content encryption with random 12-byte nonces and authenticated associated data (AAD). The AAD binds ciphertext to its context: for private tesseras, the content hash is included; for sealed tesseras, both the content hash and the open_after timestamp are bound into the AAD. This means moving ciphertext between tesseras with different open dates causes decryption failure — you cannot trick the system into opening a sealed memory early by swapping its ciphertext into a tessera with an earlier seal date.

Hybrid Key Encapsulation Mechanism (tesseras-crypto/src/kem.rs) — Key exchange using X25519 (classical elliptic-curve Diffie-Hellman) combined with ML-KEM-768 (the NIST-standardized post-quantum lattice-based KEM, formerly Kyber). Both shared secrets are combined via blake3::derive_key with a fixed context string ("tesseras hybrid kem v1") to produce a single 256-bit content encryption key. This follows the same "dual from day one" philosophy as the project's dual signing (Ed25519 + ML-DSA): if either algorithm is broken in the future, the other still protects the content.

Sealed Key Envelope (tesseras-crypto/src/sealed.rs) — Wraps a content encryption key using the hybrid KEM, so only the tessera owner can recover it. The KEM produces a transport key, which is XORed with the content key to produce a wrapped key stored alongside the KEM ciphertext. On unsealing, the owner decapsulates the KEM ciphertext to recover the transport key, then XORs again to recover the content key.
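The XOR wrapping in the envelope is its own inverse, which is why sealing and unsealing are the same operation. A sketch with stand-in keys; in the real flow the transport key comes from the X25519 + ML-KEM-768 hybrid KEM, and the names here are not the crate's API:

```rust
/// One-time-pad key wrapping as described for the sealed key envelope.
/// Wrap and unwrap are the same XOR, safe only because the KEM-derived
/// transport key is fresh and used exactly once.
fn xor_wrap(content_key: &[u8; 32], transport_key: &[u8; 32]) -> [u8; 32] {
    let mut wrapped = [0u8; 32];
    for i in 0..32 {
        wrapped[i] = content_key[i] ^ transport_key[i];
    }
    wrapped
}

fn main() {
    // Stand-in values; real keys are random.
    let content_key = [0x42u8; 32];
    let transport_key = [0x9fu8; 32];

    let wrapped = xor_wrap(&content_key, &transport_key);
    assert_ne!(wrapped, content_key);

    // Unwrapping is the same operation with the same transport key.
    let recovered = xor_wrap(&wrapped, &transport_key);
    assert_eq!(recovered, content_key);
    println!("wrap roundtrip ok");
}
```

The one-time-use property is essential: reusing a transport key across two envelopes would let anyone XOR the two wrapped keys together and cancel it out.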
Key Publication (tesseras-crypto/src/sealed.rs) — A standalone signed artifact for publishing a sealed tessera's content key after its open_after date has passed. The owner signs the content key, tessera hash, and publication timestamp with their dual keys (Ed25519, with ML-DSA placeholder). The manifest stays immutable — the key publication is a separate document. Other nodes verify the signature against the owner's public key before using the published key to decrypt the content.

EncryptionContext (tesseras-core/src/enums.rs) — A domain type that represents the AAD context for encryption. It lives in tesseras-core rather than tesseras-crypto because it's a domain concept, not a crypto implementation detail. The to_aad_bytes() method produces a deterministic serialization: a tag byte (0x00 for Private, 0x01 for Sealed), followed by the content hash, and for Sealed, the open_after timestamp as little-endian i64.
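The deterministic AAD serialization just described can be sketched directly. The tag bytes, field order, and little-endian i64 are from the post; the enum shape itself is an assumption:

```rust
/// Sketch of the AAD context: tag byte, then the 32-byte content hash,
/// then (for Sealed only) the open_after timestamp as little-endian i64.
enum EncryptionContext {
    Private { content_hash: [u8; 32] },
    Sealed { content_hash: [u8; 32], open_after: i64 },
}

impl EncryptionContext {
    fn to_aad_bytes(&self) -> Vec<u8> {
        match self {
            EncryptionContext::Private { content_hash } => {
                let mut aad = vec![0x00];
                aad.extend_from_slice(content_hash);
                aad
            }
            EncryptionContext::Sealed { content_hash, open_after } => {
                let mut aad = vec![0x01];
                aad.extend_from_slice(content_hash);
                aad.extend_from_slice(&open_after.to_le_bytes());
                aad
            }
        }
    }
}

fn main() {
    let hash = [0x11u8; 32];
    let private = EncryptionContext::Private { content_hash: hash };
    let sealed = EncryptionContext::Sealed { content_hash: hash, open_after: 2_524_608_000 };

    assert_eq!(private.to_aad_bytes().len(), 1 + 32);
    assert_eq!(sealed.to_aad_bytes().len(), 1 + 32 + 8);
    // Different tags mean different AAD, so ciphertext cannot be swapped
    // between a private and a sealed tessera.
    assert_ne!(private.to_aad_bytes()[0], sealed.to_aad_bytes()[0]);
    println!("aad ok");
}
```

Feeding these bytes to AES-GCM as AAD is what makes the seal date tamper-evident: change any byte of the context and the authentication tag no longer verifies.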
Domain validation (tesseras-core/src/service.rs) — TesseraService::create() now rejects Sealed and Private tesseras that don't provide encryption keys. This is a domain-level validation: the service layer enforces that you cannot create a sealed memory without the cryptographic machinery to protect it. The error message is clear: "missing encryption keys for visibility sealed until 2050-01-01."

Core type updates — TesseraIdentity now includes an optional encryption_public: Option<HybridEncryptionPublic> field containing both the X25519 and ML-KEM-768 public keys. KeyAlgorithm gained X25519 and MlKem768 variants. The identity filesystem layout now supports node.x25519.key/.pub and node.mlkem768.key/.pub.

Testing — 8 unit tests for AES-256-GCM (roundtrip, wrong key, tampered ciphertext, wrong AAD, cross-context decryption failure, unique nonces, plus 2 property-based tests for arbitrary payloads and nonce uniqueness). 5 unit tests for HybridKem (roundtrip, wrong keypair, tampered X25519, KDF determinism, plus 1 property-based test). 4 unit tests for SealedKeyEnvelope and KeyPublication. 2 integration tests covering the complete sealed and private tessera lifecycle: generate keys, create content key, encrypt, seal, unseal, decrypt, publish key, and verify — the full cycle.

Architecture decisions

Hybrid KEM from day one: X25519 + ML-KEM-768 follows the same philosophy as dual signing. We don't know which cryptographic assumptions will hold over millennia, so we combine classical and post-quantum algorithms. The cost is ~1.2 KB of additional key material per identity — trivial compared to the photos and videos in a tessera.

BLAKE3 for KDF: rather than adding hkdf + sha2 as new dependencies, we use blake3::derive_key with a fixed context string. BLAKE3's key derivation mode is specifically designed for this use case, and the project already depends on BLAKE3 for content hashing.

Immutable manifests: when a sealed tessera's open_after date passes, the content key is published as a separate signed artifact (KeyPublication), not by modifying the manifest. This preserves the append-only, content-addressed nature of tesseras. The manifest was signed at creation time and never changes.

AAD binding prevents ciphertext swapping: the EncryptionContext binds both the content hash and (for sealed tesseras) the open_after timestamp into the AES-GCM authenticated data. An attacker who copies encrypted content from a "sealed until 2050" tessera into a "sealed until 2025" tessera will find that decryption fails — the AAD no longer matches.

XOR key wrapping: the sealed key envelope uses a simple XOR of the content key with the KEM-derived transport key, rather than an additional layer of AES-GCM. Since the transport key is a fresh random value from the KEM and is used exactly once, XOR is information-theoretically secure for this specific use case and avoids unnecessary complexity.

Domain validation, not storage validation: the "missing encryption keys" check lives in TesseraService::create(), not in the storage layer. This follows the hexagonal architecture pattern: domain rules are enforced at the service boundary, not scattered across adapters.

What comes next

Phase 4 continued: Resilience and Scale — Shamir's Secret Sharing for heir key distribution, advanced NAT traversal (STUN/TURN), performance tuning, security audits, OS packaging

Phase 5: Exploration and Culture — public tessera browser by era/location/theme/language, institutional curation, genealogy integration, physical media export (M-DISC, microfilm, acid-free paper with QR)

Sealed tesseras make Tesseras a true time capsule. A father can now record a message for his unborn grandchild, seal it until 2060, and know that the cryptographic envelope will hold — even if the quantum computers of the future try to break it open early.
A P2P network of individuals is fragile. Hard drives die, phones get lost,
-people lose interest. The long-term survival of humanity's memories depends on
-institutions — libraries, archives, museums, universities — that measure their
-lifetimes in centuries. Phase 4 continues with institutional node onboarding:
-verified organizations can now pledge storage, run searchable indexes, and
-participate in the network with a distinct identity.
-
The design follows a principle of trust but verify: institutions identify
-themselves via DNS TXT records (the same mechanism used by SPF, DKIM, and DMARC
-for email), pledge a storage budget, and receive reciprocity exemptions so they
-can store fragments for others without expecting anything in return. In
-exchange, the network treats their fragments as higher-quality replicas and
-limits over-reliance on any single institution through diversity constraints.
-
What was built
-
Capability bits (tesseras-core/src/network.rs) — Two new flags added to
-the Capabilities bitfield: INSTITUTIONAL (bit 7) and SEARCH_INDEX (bit 8).
-A new institutional_default() constructor returns the full Phase 2 capability
-set plus these two bits and RELAY. Normal nodes advertise phase2_default()
-which lacks institutional flags. Serialization roundtrip tests verify the new
-bits survive MessagePack encoding.
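A minimal sketch of the capability bitfield described above. The bit positions for INSTITUTIONAL (7) and SEARCH_INDEX (8) come from the post; the RELAY bit position and the contents of phase2_default() are illustrative assumptions, not the real values.

```rust
// Sketch only: RELAY's position and the phase2_default() bits are assumed.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Capabilities(u16);

impl Capabilities {
    const RELAY: u16 = 1 << 4;          // assumed position
    const INSTITUTIONAL: u16 = 1 << 7;  // bit 7 (from the post)
    const SEARCH_INDEX: u16 = 1 << 8;   // bit 8 (from the post)

    fn phase2_default() -> Self {
        // Placeholder for the full Phase 2 capability set.
        Capabilities(0b0000_1111)
    }

    fn institutional_default() -> Self {
        // Phase 2 set plus RELAY and the two institutional bits.
        let base = Self::phase2_default().0;
        Capabilities(base | Self::RELAY | Self::INSTITUTIONAL | Self::SEARCH_INDEX)
    }

    fn has(self, bit: u16) -> bool {
        self.0 & bit != 0
    }
}

fn main() {
    let inst = Capabilities::institutional_default();
    assert!(inst.has(Capabilities::INSTITUTIONAL));
    assert!(inst.has(Capabilities::SEARCH_INDEX));
    assert!(!Capabilities::phase2_default().has(Capabilities::INSTITUTIONAL));
    println!("institutional bits: {:#011b}", inst.0);
}
```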
-
Search types (tesseras-core/src/search.rs) — Three new domain types for
-the search subsystem:
SearchHit — a single result: content hash plus a MetadataExcerpt (title,
-description, memory type, creation date, visibility, language, tags)
-
GeoFilter — bounding box with min_lat, max_lat, min_lon, max_lon for
-spatial queries
-
SearchFilters — the optional filters attached to a query; every field is
-optional
-
-
All types derive Serialize/Deserialize for wire transport and
-Clone/Debug for diagnostics.
-
Institutional daemon config (tesd/src/config.rs) — A new [institutional]
-TOML section with domain (the DNS domain to verify), pledge_bytes (storage
-commitment in bytes), and search_enabled (toggle for the FTS5 index). The
-to_dht_config() method now sets Capabilities::institutional_default() when
-institutional config is present, so institutional nodes advertise the right
-capability bits in Pong responses.
-
DNS TXT verification (tesd/src/institutional.rs) — Async DNS resolution
-using hickory-resolver to verify institutional identity. The daemon looks up
-_tesseras.<domain> TXT records and parses key-value fields: v (version),
-node (hex-encoded node ID), and pledge (storage pledge in bytes).
-Verification checks:
-
-
A TXT record exists at _tesseras.<domain>
-
The node field matches the daemon's own node ID
-
The pledge field is present and valid
-
-
On startup, the daemon attempts DNS verification. If it succeeds, the node runs
-with institutional capabilities. If it fails, the node logs a warning and
-downgrades to a normal full node — no crash, no manual intervention.
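The TXT record format above can be parsed with a few lines of string handling. This sketch only parses an already-fetched record string (the real daemon resolves it with hickory-resolver); the field names v, node, and pledge are from the post, and the error handling is deliberately simplified.

```rust
use std::collections::HashMap;

// Parse a `_tesseras.<domain>` TXT record of the form:
//   v=tesseras1 node=<hex> pledge=<bytes>
// Returns (node hex, pledge in bytes) or None if the record is invalid.
fn parse_txt_record(txt: &str) -> Option<(String, u64)> {
    let fields: HashMap<&str, &str> = txt
        .split_whitespace()
        .filter_map(|kv| kv.split_once('='))
        .collect();

    if fields.get("v") != Some(&"tesseras1") {
        return None; // unknown or missing version
    }
    let node = fields.get("node")?.to_string();
    let pledge: u64 = fields.get("pledge")?.parse().ok()?;
    Some((node, pledge))
}

fn main() {
    let record = "v=tesseras1 node=ab12cd34 pledge=1099511627776";
    let (node, pledge) = parse_txt_record(record).expect("valid record");
    assert_eq!(node, "ab12cd34");
    assert_eq!(pledge, 1_099_511_627_776);
    // A record with the wrong version is rejected outright.
    assert!(parse_txt_record("v=other node=ff pledge=1").is_none());
}
```

The daemon would then compare `node` against its own node ID before enabling institutional capabilities.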
-
CLI setup command (tesseras-cli/src/institutional.rs) — A new
-institutional setup subcommand that guides operators through onboarding:
-
-
Reads the node's identity from the data directory
-
Prompts for domain name and pledge size
-
Generates the exact DNS TXT record to add:
-v=tesseras1 node=<hex> pledge=<bytes>
-
Writes the institutional section to the daemon's config file
-
Prints next steps: add the TXT record, restart the daemon
-
-
SQLite search index (tesseras-storage) — A migration
-(003_institutional.sql) that creates three structures:
-
-
search_content — an FTS5 virtual table for full-text search over tessera
-metadata (title, description, creator, tags, language)
-
geo_index — an R-tree virtual table for spatial bounding-box queries over
-latitude/longitude
-
geo_map — a mapping table linking R-tree row IDs to content hashes
-
-
The SqliteSearchIndex adapter implements the SearchIndex port trait with
-index_tessera() (insert/update) and search() (query with filters). FTS5
-queries support natural language search; geo queries use R-tree INTERSECT for
-bounding box lookups. Results are ranked by FTS5 relevance score.
-
The migration also adds an is_institutional column to the reciprocity table,
-handled idempotently via pragma_table_info checks (SQLite's
-ALTER TABLE ADD COLUMN lacks IF NOT EXISTS).
-
Reciprocity bypass (tesseras-replication/src/service.rs) — Institutional
-nodes are exempt from reciprocity checks. When receive_fragment() is called,
-if the sender's node ID is marked as institutional in the reciprocity ledger,
-the balance check is skipped entirely. This means institutions can store
-fragments for the entire network without needing to "earn" credits first — their
-DNS-verified identity and storage pledge serve as their credential.
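The bypass logic is tiny. This is a standalone sketch, not the real ReplicationService code: the LedgerEntry fields and may_store_fragment name are illustrative, but the decision mirrors the behavior described above (and the -999,999 deficit from the integration tests).

```rust
// Illustrative reciprocity check: institutional senders skip the balance
// test entirely; everyone else must meet the minimum balance.
struct LedgerEntry {
    balance: i64,
    is_institutional: bool,
}

fn may_store_fragment(entry: &LedgerEntry, min_balance: i64) -> bool {
    if entry.is_institutional {
        // DNS-verified identity + storage pledge replace the credit check.
        return true;
    }
    entry.balance >= min_balance
}

fn main() {
    let institution = LedgerEntry { balance: -999_999, is_institutional: true };
    let freeloader = LedgerEntry { balance: -999_999, is_institutional: false };
    assert!(may_store_fragment(&institution, 0));
    assert!(!may_store_fragment(&freeloader, 0));
}
```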
-
Node-type diversity constraint (tesseras-replication/src/distributor.rs) —
-A new apply_institutional_diversity() function limits how many replicas of a
-single tessera can land on institutional nodes. The cap is
-ceil(replication_factor / 3.5) — with the default r=7, at most 2 of 7
-replicas go to institutions. This prevents the network from becoming dependent
-on a small number of large institutions: if a university's servers go down, at
-least 5 replicas remain on independent nodes.
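The cap formula is worth seeing in code, since integer intuition can mislead here (7 / 3.5 is exactly 2, so the ceiling adds nothing at the default r):

```rust
// Diversity cap from the post: at most ceil(replication_factor / 3.5)
// replicas of one tessera may land on institutional nodes.
fn institutional_cap(replication_factor: u32) -> u32 {
    (replication_factor as f64 / 3.5).ceil() as u32
}

fn main() {
    assert_eq!(institutional_cap(7), 2);  // default r=7 → at most 2 of 7
    assert_eq!(institutional_cap(3), 1);  // small factors still allow one
    assert_eq!(institutional_cap(14), 4);
    println!("cap for r=7: {}", institutional_cap(7));
}
```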
-
DHT message extensions (tesseras-dht/src/message.rs) — Two new message
-variants:
-
Message: Purpose

Search: Client sends query string, filters, and page number

SearchResult: Institutional node responds with hits and total count
-
-
The encode() function was switched from positional to named MessagePack
-serialization (rmp_serde::to_vec_named) to handle SearchFilters' optional
-fields correctly — positional encoding breaks when skip_serializing_if omits
-fields.
Seven new Prometheus metrics for institutional nodes:


tesseras_institutional_stored_bytes — actual bytes stored
-
tesseras_institutional_pledge_utilization_ratio — stored/pledged ratio
-
tesseras_institutional_peers_served — unique peers served fragments
-
tesseras_institutional_search_index_total — tesseras in the search index
-
tesseras_institutional_search_queries_total — search queries received
-
tesseras_institutional_dns_verification_status — 1 if DNS verified, 0
-otherwise
-
tesseras_institutional_dns_verification_last — Unix timestamp of last
-verification
-
-
Integration tests — Two tests in
-tesseras-replication/tests/integration.rs:
-
-
institutional_peer_bypasses_reciprocity — verifies that an institutional
-peer with a massive deficit (-999,999 balance) is still allowed to store
-fragments, while a non-institutional peer with the same deficit is rejected
-
institutional_node_accepts_fragment_despite_deficit — full async test using
-ReplicationService with mocked DHT, fragment store, reciprocity ledger, and
-blob store: sends a fragment from an institutional sender and verifies it's
-accepted
-
-
322 tests pass across the workspace. Clippy clean with -D warnings.
-
Architecture decisions
-
-
DNS TXT over PKI or blockchain: DNS is universally deployed, universally
-understood, and already used for domain verification (SPF, DKIM, Let's
-Encrypt). Institutions already manage DNS. No certificate authority, no token,
-no on-chain transaction — just a TXT record. If an institution loses control
-of its domain, the verification naturally fails on the next check.
-
Graceful degradation on DNS failure: if DNS verification fails at startup,
-the daemon downgrades to a normal full node instead of refusing to start. This
-prevents operational incidents — a DNS misconfiguration shouldn't take a node
-offline.
-
Diversity cap at ceil(r / 3.5): with r=7, at most 2 replicas go to
-institutions. This is conservative — it ensures the network never depends on
-institutions for majority quorum, while still benefiting from their storage
-capacity and uptime.
-
Named MessagePack encoding: switching from positional to named encoding
-adds ~15% overhead per message but eliminates a class of serialization bugs
-when optional fields are present. The DHT is not bandwidth-constrained at the
-message level, so the tradeoff is worth it.
-
Reciprocity exemption over credit grants: rather than giving institutions
-a large initial credit balance (which is arbitrary and needs tuning), we
-exempt them entirely. Their DNS-verified identity and public storage pledge
-replace the bilateral reciprocity mechanism.
-
FTS5 + R-tree in SQLite: full-text search and spatial indexing are built
-into SQLite as loadable extensions. No external search engine (Elasticsearch,
-Meilisearch) needed. This keeps the deployment a single binary with a single
-database file — critical for institutional operators who may not have a DevOps
-team.
-
-
What comes next
-
-
Phase 4 continued — storage deduplication (content-addressable store with
-BLAKE3 keying), security audits, OS packaging (Alpine, Arch, Debian, OpenBSD,
-FreeBSD)
-
Phase 5: Exploration and Culture — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration
-(FamilySearch, Ancestry), physical media export (M-DISC, microfilm, acid-free
-paper with QR), AI-assisted context
-
-
Institutional onboarding closes a critical gap in Tesseras' preservation model.
-Individual nodes provide grassroots resilience — thousands of devices across the
-globe, each storing a few fragments. Institutional nodes provide anchoring —
-organizations with professional infrastructure, redundant storage, and
-multi-decade operational horizons. Together, they form a network where memories
-can outlast both individual devices and individual institutions.
Most people's devices sit behind a NAT — a network address translator that lets
-them reach the internet but prevents incoming connections. For a P2P network,
-this is an existential problem: if two nodes behind NATs can't talk to each
-other, the network fragments. Phase 4 continues with a full NAT traversal stack:
-STUN-based discovery, coordinated hole punching, and relay fallback.
-
The approach follows the same pattern as most battle-tested P2P systems (WebRTC,
-BitTorrent, IPFS): try the cheapest option first, escalate only when necessary.
-Direct connectivity costs nothing. Hole punching costs a few coordinated
-packets. Relaying costs sustained bandwidth from a third party. Tesseras tries
-them in that order.
-
What was built
-
NatType classification (tesseras-core/src/network.rs) — A new NatType
-enum (Public, Cone, Symmetric, Unknown) added to the core domain layer. This
-type is shared across the entire stack: the STUN client writes it, the DHT
-advertises it in Pong messages, and the punch coordinator reads it to decide
-whether hole punching is even worth attempting (Cone-to-Cone works ~80% of the
-time; Symmetric-to-Symmetric almost never works).
-
STUN client (tesseras-net/src/stun.rs) — A minimal STUN implementation
-(RFC 5389 Binding Request/Response) that discovers a node's external address.
-The codec encodes 20-byte binding requests with a random transaction ID and
-decodes XOR-MAPPED-ADDRESS responses. The discover_nat() function queries
-multiple STUN servers in parallel (Google, Cloudflare by default), compares the
-mapped addresses, and classifies the NAT type:
-
-
-Mapped address matches the local socket address → Public (no NAT)
-
-Same mapped address from all servers, but not the local one → Cone (hole
-punching works)
-
Different mapped addresses → Symmetric (hole punching unreliable)
-
No responses → Unknown
-
-
Retries with exponential backoff and configurable timeouts. 12 tests covering
-codec roundtrips, all classification paths, and async loopback queries.
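The classification rules reduce to comparing the mapped addresses against each other and against the local socket. A minimal std-only sketch (the real discover_nat() queries servers asynchronously; here the responses are already collected):

```rust
use std::net::SocketAddr;

#[derive(Debug, PartialEq)]
enum NatType { Public, Cone, Symmetric, Unknown }

// Classify from the external addresses that several STUN servers reported.
fn classify(local: SocketAddr, mapped: &[SocketAddr]) -> NatType {
    match mapped {
        [] => NatType::Unknown,
        [first, rest @ ..] => {
            if rest.iter().any(|a| a != first) {
                NatType::Symmetric // mapping differs per destination
            } else if *first == local {
                NatType::Public    // no translation at all
            } else {
                NatType::Cone      // stable mapping, hole punching viable
            }
        }
    }
}

fn main() {
    let local: SocketAddr = "192.168.1.10:4000".parse().unwrap();
    let cone: Vec<SocketAddr> = vec![
        "203.0.113.7:61000".parse().unwrap(),
        "203.0.113.7:61000".parse().unwrap(),
    ];
    let symmetric: Vec<SocketAddr> = vec![
        "203.0.113.7:61000".parse().unwrap(),
        "203.0.113.7:62000".parse().unwrap(),
    ];
    assert_eq!(classify(local, &[local, local]), NatType::Public);
    assert_eq!(classify(local, &cone), NatType::Cone);
    assert_eq!(classify(local, &symmetric), NatType::Symmetric);
    assert_eq!(classify(local, &[]), NatType::Unknown);
}
```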
-
Signed punch coordination (tesseras-net/src/punch.rs) — Ed25519 signing
-and verification for PunchIntro, RelayRequest, and RelayMigrate messages.
-Every introduction is signed by the initiator with a 30-second timestamp window,
-preventing reflection attacks (where an attacker replays an old introduction to
-redirect traffic). The payload format is target || external_addr || timestamp
-— changing any field invalidates the signature. 6 unit tests plus 3
-property-based tests with proptest (arbitrary node IDs, ports, and session
-tokens).
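The payload layout and the 30-second window can be sketched without the actual Ed25519 code. In this illustration the signing step is left out entirely; the point is that the signed bytes are target || external_addr || timestamp, so changing any field changes what was signed, and stale timestamps are rejected.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

const MAX_SKEW_SECS: u64 = 30; // from the post: 30-second timestamp window

// Bytes that would be fed to the Ed25519 signer (sketch, not the real codec).
fn payload(target: &[u8; 32], external_addr: &str, timestamp: u64) -> Vec<u8> {
    let mut p = Vec::new();
    p.extend_from_slice(target);
    p.extend_from_slice(external_addr.as_bytes());
    p.extend_from_slice(&timestamp.to_be_bytes());
    p
}

// Replay protection: reject introductions outside the window.
fn is_fresh(timestamp: u64, now: u64) -> bool {
    now.abs_diff(timestamp) <= MAX_SKEW_SECS
}

fn main() {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let target = [7u8; 32];
    let p1 = payload(&target, "203.0.113.7:61000", now);
    let p2 = payload(&target, "203.0.113.7:61001", now); // different addr
    assert_ne!(p1, p2);                // tampering changes the signed bytes
    assert!(is_fresh(now, now));
    assert!(!is_fresh(now - 31, now)); // replayed intro outside the window
}
```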
-
Relay session manager (tesseras-net/src/relay.rs) — Manages transparent
-UDP relay sessions between NATed peers. Each session has a random 16-byte token;
-peers prefix their packets with the token, the relay strips it and forwards.
-Features:
-
-
Bidirectional forwarding (A→R→B and B→R→A)
-
Rate limiting: 256 KB/s for reciprocal peers, 64 KB/s for non-reciprocal
-
10-minute maximum duration for bootstrap (non-reciprocal) sessions
-
Address migration: when a peer's IP changes (Wi-Fi to cellular), a signed
-RelayMigrate updates the session without tearing it down
-
Idle cleanup with configurable timeout
-
8 unit tests plus 2 property-based tests
-
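The token-prefixed framing described above is simple enough to sketch in full: the relay takes each incoming datagram, splits off the 16-byte session token, and drops anything too short to carry one (this mirrors the "too-short packets rejected" property test, though the helper name here is illustrative).

```rust
const TOKEN_LEN: usize = 16; // random session token, from the post

// Split a relay packet into (token, payload), or None if malformed.
fn strip_token(packet: &[u8]) -> Option<([u8; TOKEN_LEN], &[u8])> {
    if packet.len() < TOKEN_LEN {
        return None; // too short to carry a token: drop it
    }
    let mut token = [0u8; TOKEN_LEN];
    token.copy_from_slice(&packet[..TOKEN_LEN]);
    Some((token, &packet[TOKEN_LEN..]))
}

fn main() {
    let token = [0xAB; TOKEN_LEN];
    let mut packet = token.to_vec();
    packet.extend_from_slice(b"fragment bytes");
    let (t, body) = strip_token(&packet).expect("well-formed packet");
    assert_eq!(t, token);
    assert_eq!(body, b"fragment bytes");
    assert!(strip_token(&[1, 2, 3]).is_none()); // too short → rejected
}
```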
-
DHT message extensions (tesseras-dht/src/message.rs) — Seven new message
-variants added to the DHT protocol:
-
Message: Purpose

PunchIntro: "I want to connect to node X, here's my signed external address"

PunchRequest: Introducer forwards the request to the target

PunchReady: Target confirms readiness, sends its external address

RelayRequest: "Create a relay session to node X"

RelayOffer: Relay responds with its address and session token

RelayClose: Tear down a relay session

RelayMigrate: Update session after network change
-
-
The Pong message was extended with NAT metadata: nat_type,
-relay_slots_available, and relay_bandwidth_used_kbps. All new fields use
-#[serde(default)] for backward compatibility — old nodes ignore what they
-don't recognize, new nodes fall back to defaults. 9 new serialization roundtrip
-tests.
-
NatHandler trait and dispatch (tesseras-dht/src/engine.rs) — A new
-NatHandler async trait (5 methods) injected into the DHT engine, following the
-same dependency injection pattern as the existing ReplicationHandler. The
-engine's message dispatch loop now routes all punch/relay messages to the
-handler. This keeps the DHT engine protocol-agnostic while allowing the NAT
-traversal logic to live in tesseras-net.
-
Mobile reconnection types (tesseras-embedded/src/reconnect.rs) — A
-three-phase reconnection state machine for mobile devices:
-
-
QuicMigration (0-2s) — try QUIC connection migration for all active peers
-
ReStun (2-5s) — re-discover external address via STUN
-
ReEstablish (5-10s) — reconnect peers that migration couldn't save
-
-
Peers are reconnected in priority order: bootstrap nodes first, then nodes
-holding our fragments, then nodes whose fragments we hold, then general DHT
-neighbors. A new NetworkChanged event variant was added to the FFI event
-stream so the Flutter app can show reconnection progress.
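The three-phase schedule above is driven purely by time since the network change. A sketch of the phase selection (the GiveUp variant for time beyond the 10-second budget is an illustrative addition, not from the post):

```rust
#[derive(Debug, PartialEq)]
enum ReconnectPhase { QuicMigration, ReStun, ReEstablish, GiveUp }

// Pick the reconnection phase from seconds elapsed since the network change.
fn phase_at(elapsed_secs: u64) -> ReconnectPhase {
    match elapsed_secs {
        0..=1 => ReconnectPhase::QuicMigration, // 0-2s: try QUIC migration
        2..=4 => ReconnectPhase::ReStun,        // 2-5s: re-discover external addr
        5..=9 => ReconnectPhase::ReEstablish,   // 5-10s: reconnect remaining peers
        _ => ReconnectPhase::GiveUp,            // assumed fallback past the budget
    }
}

fn main() {
    assert_eq!(phase_at(0), ReconnectPhase::QuicMigration);
    assert_eq!(phase_at(3), ReconnectPhase::ReStun);
    assert_eq!(phase_at(7), ReconnectPhase::ReEstablish);
    assert_eq!(phase_at(12), ReconnectPhase::GiveUp);
}
```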
-
Daemon NAT configuration (tesd/src/config.rs) — A new [nat] section in
-the TOML config with STUN server list, relay toggle, max relay sessions,
-bandwidth limits (reciprocal vs bootstrap), and idle timeout. All fields have
-sensible defaults; relay is disabled by default.
-
Prometheus metrics (tesseras-net/src/metrics.rs) — 16 metrics across four
-subsystems:
-
-
STUN: requests, failures, latency histogram
-
Punch: attempts/successes/failures (by NAT type pair), latency histogram
-
Relay: active sessions, total sessions, bytes forwarded, idle timeouts,
-rate limit hits
-
Reconnect: network changes, attempts/successes by phase, duration
-histogram
-
-
6 tests verifying registration, increment, label cardinality, and
-double-registration detection.
-
Integration tests — Two end-to-end tests using MemTransport (in-memory
-simulated network):
-
-
punch_integration.rs — Full 3-node hole-punch flow: A sends signed
-PunchIntro to introducer I, I verifies and forwards PunchRequest to B, B
-verifies the original signature and sends PunchReady back, A and B exchange
-messages directly. Also tests that a bad signature is correctly rejected.
-
relay_integration.rs — Full 3-node relay flow: A requests relay from R, R
-creates session and sends RelayOffer to both peers, A and B exchange
-token-prefixed packets through R, A migrates to a new address mid-session, A
-closes the session, and the test verifies the session is torn down and further
-forwarding fails.
-
-
Property tests — 7 proptest-based tests covering: signature round-trips for
-all three signed message types (arbitrary node IDs, ports, tokens), NAT
-classification determinism (same inputs always produce same output), STUN
-binding request validity, session token uniqueness, and relay rejection of
-too-short packets.
-
Justfile targets — just test-nat runs all NAT traversal tests across
-tesseras-net and tesseras-dht. just test-chaos is a placeholder for future
-Docker Compose chaos tests with tc netem.
-
Architecture decisions
-
-
STUN over TURN: we implement STUN (discovery) and custom relay rather than
-full TURN. TURN requires authenticated allocation and is designed for media
-relay; our relay is simpler — token-prefixed UDP forwarding with rate limits.
-This keeps the protocol minimal and avoids depending on external TURN servers.
-
Signatures on introductions: every PunchIntro is signed by the
-initiator. Without this, an attacker could send forged introductions to
-redirect a node's hole-punch attempts to an attacker-controlled address (a
-reflection attack). The 30-second timestamp window limits replay.
-
-Reciprocal bandwidth tiers: relay nodes give four times the bandwidth
-(256 vs 64 KB/s) to peers with good reciprocity scores. This incentivizes nodes to store
-fragments for others — if you contribute, you get better relay service when
-you need it.
-
Backward-compatible Pong extension: new NAT fields in Pong use
-#[serde(default)] and Option<T>. Old nodes that don't understand these
-fields simply skip them during deserialization. No protocol version bump
-needed.
-
NatHandler as async trait: the NAT traversal logic is injected into the
-DHT engine via a trait, just like ReplicationHandler. This keeps the DHT
-engine focused on routing and peer management, and allows the NAT
-implementation to be swapped or disabled without touching core DHT code.
-
-
What comes next
-
-
Phase 4 continued — performance tuning (connection pooling, fragment
-caching, SQLite WAL), security audits, institutional node onboarding, OS
-packaging
-
Phase 5: Exploration and Culture — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)
-
-
With NAT traversal, Tesseras can connect nodes regardless of their network
-topology. Public nodes talk directly. Cone-NATed nodes punch through with an
-introducer's help. Symmetric-NATed or firewalled nodes relay through willing
-peers. The network adapts to the real world, where most devices are behind a NAT
-and network conditions change constantly.
A P2P network that can traverse NATs but chokes on its own I/O is not much use.
-Phase 4 continues with performance tuning: centralizing database configuration,
-caching fragment blobs in memory, managing QUIC connection lifecycles, and
-eliminating unnecessary disk reads from the attestation hot path.
-
The guiding principle was the same as the rest of Tesseras: do the simplest
-thing that actually works. No custom allocators, no lock-free data structures,
-no premature complexity. A centralized StorageConfig, an LRU cache, a
-connection reaper, and a targeted fix to avoid re-reading blobs that were
-already checksummed.
-
What was built
-
Centralized SQLite configuration (tesseras-storage/src/database.rs) — A
-new StorageConfig struct and open_database() / open_in_memory() functions
-that apply all SQLite pragmas in one place: WAL journal mode, foreign keys,
-synchronous mode (NORMAL by default, FULL for unstable hardware like RPi + SD
-card), busy timeout, page cache size, and WAL autocheckpoint interval.
-Previously, each call site opened a connection and applied pragmas ad hoc. Now
-the daemon, CLI, and tests all go through the same path. 7 tests covering
-foreign keys, busy timeout, journal mode, migrations, synchronous modes, and
-on-disk WAL file creation.
-
LRU fragment cache (tesseras-storage/src/cache.rs) — A
-CachedFragmentStore that wraps any FragmentStore with a byte-aware LRU
-cache. Fragment blobs are cached on read and invalidated on write or delete.
-When the cache exceeds its configured byte limit, the least recently used
-entries are evicted. The cache is transparent: it implements FragmentStore
-itself, so the rest of the stack doesn't know it's there. Optional Prometheus
-metrics track hits, misses, and current byte usage. 3 tests: cache hit avoids
-inner read, store invalidates cache, eviction when over max bytes.
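Byte-aware eviction is the interesting part. This standalone sketch caches Vec<u8> values under string keys instead of wrapping the FragmentStore trait, but the eviction rule is the one described above: track total bytes, and evict from the least-recently-used end until the budget fits.

```rust
use std::collections::{HashMap, VecDeque};

// Byte-aware LRU sketch: eviction is driven by total cached bytes,
// not entry count, because fragment blobs vary wildly in size.
struct ByteLru {
    max_bytes: usize,
    used: usize,
    map: HashMap<String, Vec<u8>>,
    order: VecDeque<String>, // front = least recently used
}

impl ByteLru {
    fn new(max_bytes: usize) -> Self {
        ByteLru { max_bytes, used: 0, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
        if self.map.contains_key(key) {
            // Move to the back (most recently used).
            self.order.retain(|k| k.as_str() != key);
            self.order.push_back(key.to_string());
        }
        self.map.get(key)
    }

    fn put(&mut self, key: &str, value: Vec<u8>) {
        if let Some(old) = self.map.remove(key) {
            self.used -= old.len();
            self.order.retain(|k| k.as_str() != key);
        }
        self.used += value.len();
        self.map.insert(key.to_string(), value);
        self.order.push_back(key.to_string());
        // Evict LRU entries until we fit the byte budget again.
        while self.used > self.max_bytes {
            let victim = self.order.pop_front().expect("non-empty while over budget");
            let evicted = self.map.remove(&victim).expect("tracked entry");
            self.used -= evicted.len();
        }
    }
}

fn main() {
    let mut cache = ByteLru::new(10);
    cache.put("a", vec![0; 4]);
    cache.put("b", vec![0; 4]);
    cache.get("a");             // touch "a" so "b" becomes LRU
    cache.put("c", vec![0; 4]); // 12 bytes > 10 → evict "b"
    assert!(cache.get("b").is_none());
    assert!(cache.get("a").is_some());
    assert!(cache.get("c").is_some());
}
```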
-
Prometheus storage metrics (tesseras-storage/src/metrics.rs) — A
-StorageMetrics struct with three counters/gauges: fragment_cache_hits,
-fragment_cache_misses, and fragment_cache_bytes. Registered with the
-Prometheus registry and wired into the fragment cache via with_metrics().
-
Attestation hot path fix (tesseras-replication/src/service.rs) — The
-attestation flow previously read every fragment blob from disk and recomputed
-its BLAKE3 checksum. Since list_fragments() already returns FragmentId with
-a stored checksum, the fix is trivial: use frag.checksum instead of
-blake3::hash(&data). This eliminates one disk read per fragment during
-attestation — for a tessera with 100 fragments, that's 100 fewer reads. A test
-with expect_read_fragment().never() verifies no blob reads happen during
-attestation.
-
QUIC connection pool lifecycle (tesseras-net/src/quinn_transport.rs) — A
-PoolConfig struct controlling max connections, idle timeout, and reaper
-interval. PooledConnection wraps each quinn::Connection with a last_used
-timestamp. When the pool reaches capacity, the oldest idle connection is evicted
-before opening a new one. A background reaper task (Tokio spawn) periodically
-closes connections that have been idle beyond the timeout. 4 new pool metrics:
-tesseras_conn_pool_size, pool_hits_total, pool_misses_total,
-pool_evictions_total.
-
Daemon integration (tesd/src/config.rs, main.rs) — A new [performance]
-section in the TOML config with fields for SQLite cache size, synchronous mode,
-busy timeout, fragment cache size, max connections, idle timeout, and reaper
-interval. The daemon's main() now calls open_database() with the configured
-StorageConfig, wraps FsFragmentStore with CachedFragmentStore, and binds
-QUIC with the configured PoolConfig. The direct rusqlite dependency was
-removed from the daemon crate.
-
CLI migration (tesseras-cli/src/commands/init.rs, create.rs) — Both
-init and create commands now use tesseras_storage::open_database() with
-the default StorageConfig instead of opening raw rusqlite connections. The
-rusqlite dependency was removed from the CLI crate.
-
Architecture decisions
-
-
Decorator pattern for caching: CachedFragmentStore wraps
-Box<dyn FragmentStore> and implements FragmentStore itself. This means
-caching is opt-in, composable, and invisible to consumers. The daemon enables
-it; tests can skip it.
-
Byte-aware eviction: the LRU cache tracks total bytes, not entry count.
-Fragment blobs vary wildly in size (a 4KB text fragment vs a 2MB photo shard),
-so counting entries would give a misleading picture of memory usage.
-
No connection pool crate: instead of pulling in a generic pool library,
-the connection pool is a thin wrapper around
-DashMap<SocketAddr, PooledConnection> with a Tokio reaper. QUIC connections
-are multiplexed, so the "pool" is really about lifecycle management (idle
-cleanup, max connections) rather than borrowing/returning.
-
Stored checksums over re-reads: the attestation fix is intentionally
-minimal — one line changed, one disk read removed per fragment. The checksums
-were already stored in SQLite by store_fragment(), they just weren't being
-used.
-
Centralized pragma configuration: a single StorageConfig struct replaces
-scattered PRAGMA calls. The sqlite_synchronous_full flag exists
-specifically for Raspberry Pi deployments where the kernel can crash and lose
-un-checkpointed WAL transactions.
-
-
What comes next
-
-
Phase 4 continued — Shamir's Secret Sharing for heirs, sealed tesseras
-(time-lock encryption), security audits, institutional node onboarding,
-storage deduplication, OS packaging
-
Phase 5: Exploration and Culture — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)
-
-
With performance tuning in place, Tesseras handles the common case efficiently:
-fragment reads hit the LRU cache, attestation skips disk I/O, idle QUIC
-connections are reaped automatically, and SQLite is configured consistently
-across the entire stack. The next steps focus on cryptographic features (Shamir,
-time-lock) and hardening for production deployment.
Phase 4: Heir Key Recovery with Shamir's Secret Sharing
-
2026-02-15
-
What happens to your memories when you die? Until now, Tesseras could preserve
-content across millennia — but the private and sealed keys died with their
-owner. Phase 4 continues with a solution: Shamir's Secret Sharing, a
-cryptographic scheme that lets you split your identity into shares and
-distribute them to the people you trust most.
-
The math is elegant: you choose a threshold T and a total N. Any T shares
-reconstruct the full secret; T-1 shares reveal absolutely nothing. This is not
-"almost nothing" — it is information-theoretically secure. An attacker with one
-fewer share than the threshold has exactly zero bits of information about the
-secret, no matter how much computing power they have.
-
What was built
-
GF(256) finite field arithmetic (tesseras-crypto/src/shamir/gf256.rs) —
-Shamir's Secret Sharing requires arithmetic in a finite field. We implement
-GF(256) using the same irreducible polynomial as AES (x^8 + x^4 + x^3 + x + 1),
-with compile-time lookup tables for logarithm and exponentiation. All operations
-are constant-time via table lookups — no branches on secret data. The module
-includes Horner's method for polynomial evaluation and Lagrange interpolation at
-x=0 for secret recovery. 233 lines, exhaustively tested: all 256 elements for
-identity/inverse properties, commutativity, and associativity.
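The field arithmetic itself fits in a dozen lines. This sketch multiplies with shift-and-xor ("Russian peasant") reduction by the AES polynomial 0x11B instead of the log/exp tables the real module uses; it trades speed for brevity and, unlike the table version, is NOT constant-time.

```rust
// GF(256) multiplication modulo x^8 + x^4 + x^3 + x + 1 (the AES polynomial).
fn gf_mul(mut a: u8, mut b: u8) -> u8 {
    let mut product = 0u8;
    while b != 0 {
        if b & 1 != 0 {
            product ^= a; // "add" (XOR) a shifted copy of a
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1B; // reduce modulo the AES polynomial
        }
        b >>= 1;
    }
    product
}

fn main() {
    // {53} and {CA} are multiplicative inverses in the AES field (FIPS-197).
    assert_eq!(gf_mul(0x53, 0xCA), 0x01);
    // 1 is the multiplicative identity; 0 annihilates.
    for x in 0..=255u8 {
        assert_eq!(gf_mul(x, 1), x);
        assert_eq!(gf_mul(x, 0), 0);
    }
    // Commutativity spot-check.
    assert_eq!(gf_mul(0x57, 0x83), gf_mul(0x83, 0x57));
}
```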
-
ShamirSplitter (tesseras-crypto/src/shamir/mod.rs) — The core
-split/reconstruct API. split() takes a secret byte slice, a configuration
-(threshold T, total N), and the owner's Ed25519 public key. For each byte of the
-secret, it constructs a random polynomial of degree T-1 over GF(256) with the
-secret byte as the constant term, then evaluates it at N distinct points.
-reconstruct() takes T or more shares and recovers the secret via Lagrange
-interpolation. Both operations include extensive validation: threshold bounds,
-session consistency, owner fingerprint matching, and BLAKE3 checksum
-verification.
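The split/reconstruct core can be sketched end to end: one random polynomial of degree T-1 per secret byte, evaluated at x = 1..N, with Lagrange interpolation at x = 0 for recovery. The toy xorshift PRNG below exists only to keep the example dependency-free; real code must draw coefficients from a CSPRNG, and the real API also carries the validation metadata described next.

```rust
fn gf_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0u8;
    while b != 0 {
        if b & 1 != 0 { p ^= a; }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry { a ^= 0x1B; } // AES polynomial
        b >>= 1;
    }
    p
}

fn gf_inv(a: u8) -> u8 {
    // a^254 = a^-1 in GF(256); fine for a sketch, not constant-time.
    let mut r = 1u8;
    for _ in 0..254 { r = gf_mul(r, a); }
    r
}

// Split `secret` into n shares, any t of which reconstruct it.
fn split(secret: &[u8], t: usize, n: usize, seed: &mut u64) -> Vec<(u8, Vec<u8>)> {
    let mut rand = || {
        // Toy xorshift64 — NOT cryptographically secure.
        *seed ^= *seed << 13; *seed ^= *seed >> 7; *seed ^= *seed << 17;
        (*seed & 0xFF) as u8
    };
    let mut shares: Vec<(u8, Vec<u8>)> = (1..=n as u8).map(|x| (x, Vec::new())).collect();
    for &byte in secret {
        // Random polynomial with the secret byte as constant term.
        let mut coeffs = vec![byte];
        coeffs.extend((1..t).map(|_| rand()));
        for (x, ys) in shares.iter_mut() {
            // Horner evaluation at x.
            let y = coeffs.iter().rev().fold(0u8, |acc, &c| gf_mul(acc, *x) ^ c);
            ys.push(y);
        }
    }
    shares
}

// Lagrange interpolation at x = 0 (addition is XOR in GF(256)).
fn reconstruct(shares: &[(u8, Vec<u8>)]) -> Vec<u8> {
    let len = shares[0].1.len();
    (0..len).map(|i| {
        shares.iter().fold(0u8, |acc, (xj, ys)| {
            let lj = shares.iter().filter(|(xm, _)| xm != xj).fold(1u8, |l, (xm, _)| {
                gf_mul(l, gf_mul(*xm, gf_inv(xm ^ xj)))
            });
            acc ^ gf_mul(ys[i], lj)
        })
    }).collect()
}

fn main() {
    let secret = b"identity key bytes".to_vec();
    let mut seed = 0x1234_5678_9abc_def0u64;
    let shares = split(&secret, 3, 5, &mut seed);
    // Any 3 of the 5 shares recover the secret.
    assert_eq!(reconstruct(&shares[0..3]), secret);
    assert_eq!(
        reconstruct(&[shares[4].clone(), shares[1].clone(), shares[3].clone()]),
        secret
    );
}
```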
-
HeirShare format — Each share is a self-contained, serializable artifact
-with:
-
-
Format version (v1) for forward compatibility
-
Share index (1..N) and threshold/total metadata
-
Session ID (random 8 bytes) — prevents mixing shares from different split
-sessions
-
Owner fingerprint (first 8 bytes of BLAKE3 hash of the Ed25519 public key)
-
Share data (the Shamir y-values, same length as the secret)
-
BLAKE3 checksum over all preceding fields
-
-
Shares are serialized in two formats: MessagePack (compact binary, for
-programmatic use) and base64 text (human-readable, for printing and physical
-storage). The text format includes a header with metadata and delimiters.
This format is designed to be printed on paper, stored in a safe deposit box, or
-engraved on metal. The header is informational — only the base64 payload is
-parsed during reconstruction.
-
CLI integration (tesseras-cli/src/commands/heir.rs) — Three new
-subcommands:
-
-
tes heir create — splits your Ed25519 identity into heir shares. Prompts for
-confirmation (your full identity is at stake), generates both .bin and
-.txt files for each share, and writes heir_meta.json to your identity
-directory.
-
tes heir reconstruct — loads share files (auto-detects binary vs text
-format), validates consistency, reconstructs the secret, derives the Ed25519
-keypair, and optionally installs it to ~/.tesseras/identity/ (with automatic
-backup of the existing identity).
-
tes heir info — displays share metadata and verifies the checksum without
-exposing any secret material.
-
-
Secret blob format — Identity keys are serialized into a versioned blob
-before splitting: a version byte (0x01), a flags byte (0x00 for Ed25519-only),
-followed by the 32-byte Ed25519 secret key. This leaves room for future
-expansion when X25519 and ML-KEM-768 private keys are integrated into the heir
-share system.
-
Testing — 20 unit tests for ShamirSplitter (roundtrip, all share
-combinations, insufficient shares, wrong owner, wrong session, threshold-1
-boundary, large secrets up to ML-KEM-768 key size). 7 unit tests for GF(256)
-arithmetic (exhaustive field properties). 3 property-based tests with proptest
-(arbitrary secrets up to 5000 bytes, arbitrary T-of-N configurations,
-information-theoretic security verification). Serialization roundtrip tests for
-both MessagePack and base64 text formats. 2 integration tests covering the
-complete heir lifecycle: generate identity, split into shares, serialize,
-deserialize, reconstruct, verify keypair, and sign/verify with reconstructed
-keys.
-
Architecture decisions
-
-
GF(256) over GF(prime): we use GF(256) rather than a prime field because
-it maps naturally to bytes — each element is a single byte, each share is the
-same length as the secret. No big-integer arithmetic, no modular reduction, no
-padding. This is the same approach used by most real-world Shamir
-implementations including SSSS and HashiCorp Vault.
-
Compile-time lookup tables: the LOG and EXP tables for GF(256) are
-computed at compile time using const fn. This means zero runtime
-initialization cost and constant-time operations via table lookups rather than
-loops.
-
Session ID prevents cross-session mixing: each call to split() generates
-a fresh random session ID. If an heir accidentally uses shares from two
-different split sessions (e.g., before and after a key rotation),
-reconstruction fails cleanly with a validation error rather than producing
-garbage output.
-
BLAKE3 checksums detect corruption: each share includes a BLAKE3 checksum
-over its contents. This catches bit rot, transmission errors, and accidental
-truncation before any reconstruction attempt. A share printed on paper and
-scanned back via OCR will fail the checksum if a single character is wrong.
-
Owner fingerprint for identification: shares include the first 8 bytes of
-BLAKE3(Ed25519 public key) as a fingerprint. This lets heirs verify which
-identity a share belongs to without revealing the full public key. During
-reconstruction, the fingerprint is cross-checked against the recovered key.
-
Dual format for resilience: both binary (MessagePack) and text (base64)
-formats are generated because physical media has different failure modes than
-digital storage. A USB drive might fail; paper survives. A QR code might be
-unreadable; base64 text can be manually typed.
-
Blob versioning: the secret is wrapped in a versioned blob (version +
-flags + key material) so future versions can include additional keys (X25519,
-ML-KEM-768) without breaking backward compatibility with existing shares.
-
-
What comes next
-
-
Phase 4 continued: Resilience and Scale — advanced NAT traversal
-(STUN/TURN), performance tuning (connection pooling, fragment caching, SQLite
-WAL), security audits, institutional node onboarding, OS packaging
-
Phase 5: Exploration and Culture — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)
-
-
With Shamir's Secret Sharing, Tesseras closes the last critical gap in long-term
-preservation. Your memories survive infrastructure failures through erasure
-coding. Your privacy survives quantum computers through hybrid encryption. And
-now, your identity survives you — passed on to the people you chose, requiring
-their cooperation to unlock what you left behind.
When multiple tesseras share the same photo, the same audio clip, or the same
-fragment data, the old storage layer kept separate copies of each. On a node
-storing thousands of tesseras for the network, this duplication adds up fast.
-Phase 4 continues with storage deduplication: a content-addressable store (CAS)
-that ensures every unique piece of data is stored exactly once on disk,
-regardless of how many tesseras reference it.
-
The design is simple and proven: hash the content with BLAKE3, use the hash as
-the filename, and maintain a reference count in SQLite. When two tesseras
-include the same 5 MB photo, one file exists on disk with a refcount of 2. When
-one tessera is deleted, the refcount drops to 1 and the file stays. When the
-last reference is released, a periodic sweep cleans up the orphan.
-
What was built
-
CAS schema migration (tesseras-storage/migrations/004_dedup.sql) — Three
-new tables:
-
-
cas_objects — tracks every object in the store: BLAKE3 hash (primary key),
-byte size, reference count, and creation timestamp
-
blob_refs — maps logical blob identifiers (tessera hash + memory hash +
-filename) to CAS hashes, replacing the old filesystem path convention
-
fragment_refs — maps logical fragment identifiers (tessera hash + fragment
-index) to CAS hashes, replacing the old fragments/ directory layout
-
-
Indexes on the hash columns keep lookups fast during reads and reference
counting.
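A schema along these lines can be reconstructed from the prose. The DDL below is a guess at what 004_dedup.sql might contain (column names and types inferred, not copied from the migration), exercised through Python's stdlib sqlite3:

```python
import sqlite3

# Reconstructed from the description above; the real migration may differ.
DDL = """
CREATE TABLE cas_objects (
    hash        TEXT PRIMARY KEY,            -- BLAKE3 hex digest
    size_bytes  INTEGER NOT NULL,
    refcount    INTEGER NOT NULL DEFAULT 0,
    created_at  INTEGER NOT NULL             -- unix timestamp
);
CREATE TABLE blob_refs (
    tessera_hash TEXT NOT NULL,
    memory_hash  TEXT NOT NULL,
    filename     TEXT NOT NULL,
    cas_hash     TEXT NOT NULL REFERENCES cas_objects(hash),
    PRIMARY KEY (tessera_hash, memory_hash, filename)
);
CREATE TABLE fragment_refs (
    tessera_hash   TEXT NOT NULL,
    fragment_index INTEGER NOT NULL,
    cas_hash       TEXT NOT NULL REFERENCES cas_objects(hash),
    PRIMARY KEY (tessera_hash, fragment_index)
);
CREATE INDEX idx_blob_refs_cas     ON blob_refs(cas_hash);
CREATE INDEX idx_fragment_refs_cas ON fragment_refs(cas_hash);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
```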
-
CasStore (tesseras-storage/src/cas.rs) — The core content-addressable
-storage engine. Files are stored under a two-level prefix directory:
-<root>/<2-char-hex-prefix>/<full-hash>.blob. The store provides five
-operations:
-
-
put(hash, data) — writes data to disk if not already present, increments
-refcount. Returns whether a dedup hit occurred.
-
get(hash) — reads data from disk by hash
-
release(hash) — decrements refcount. If it reaches zero, the on-disk file is
-deleted immediately.
-
contains(hash) — checks existence without reading
-
ref_count(hash) — returns the current reference count
-
-
All operations are atomic within a single SQLite transaction. The refcount is
-the source of truth — if the refcount says the object exists, the file must be
-on disk.
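The two-level layout is small enough to sketch directly. `cas_path` here is an illustrative helper, not the crate's API, but it follows the `<root>/<2-char-hex-prefix>/<full-hash>.blob` convention described above:

```python
from pathlib import Path

def cas_path(root: Path, hash_hex: str) -> Path:
    """Two-level layout: <root>/<first 2 hex chars>/<full hash>.blob."""
    return root / hash_hex[:2] / f"{hash_hex}.blob"

h = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
p = cas_path(Path("/var/lib/tesseras/cas"), h)
```

Because the path is a pure function of the hash, any node (or any future reader of the on-disk layout) can locate an object without consulting the database.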
-
CAS-backed FsBlobStore (tesseras-storage/src/blob.rs) — Rewritten to
-delegate all storage to the CAS. When a blob is written, its BLAKE3 hash is
-computed and passed to cas.put(). A row in blob_refs maps the logical path
-(tessera + memory + filename) to the CAS hash. Reads look up the CAS hash via
-blob_refs and fetch from cas.get(). Deleting a tessera releases all its blob
-references in a single transaction.
-
CAS-backed FsFragmentStore (tesseras-storage/src/fragment.rs) — Same
-pattern for erasure-coded fragments. Each fragment's BLAKE3 checksum is already
-computed during Reed-Solomon encoding, so it's used directly as the CAS key.
-Fragment verification now checks the CAS hash instead of recomputing from
-scratch — if the CAS says the data is intact, it is.
-
Sweep garbage collector (cas.rs:sweep()) — A periodic GC pass that handles
-three edge cases the normal refcount path can't:
-
-
Orphan files — files on disk with no corresponding row in cas_objects.
-Can happen after a crash mid-write. Files younger than 1 hour are skipped
-(grace period for in-flight writes); older orphans are deleted.
-
Leaked refcounts — rows in cas_objects with refcount zero that weren't
-cleaned up (e.g., if the process died between decrementing and deleting).
-These rows are removed.
-
Idempotent — running sweep twice produces the same result.
-
-
The sweep is wired into the existing repair loop in tesseras-replication, so
-it runs automatically every 24 hours alongside fragment health checks.
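A toy version of the sweep captures the orphan grace period, the leaked-refcount cleanup, and idempotence. This is a tempdir-based sketch with the SQLite table replaced by a dict; all names are illustrative:

```python
import os, tempfile, time
from pathlib import Path

GRACE_SECONDS = 3600  # orphans younger than 1 hour are left alone

def sweep(root: Path, cas_objects: dict[str, int], now: float) -> dict:
    """One GC pass over a toy CAS; cas_objects maps hash -> refcount."""
    stats = {"orphans_deleted": 0, "young_skipped": 0, "leaked_refs": 0}
    # 1. Orphan files: on disk, but unknown to the refcount table.
    for blob in list(root.rglob("*.blob")):
        if blob.stem not in cas_objects:
            age = now - blob.stat().st_mtime
            if age < GRACE_SECONDS:
                stats["young_skipped"] += 1   # maybe an in-flight write
            else:
                blob.unlink()
                stats["orphans_deleted"] += 1
    # 2. Leaked refcounts: rows at zero that were never cleaned up.
    for h in [h for h, rc in cas_objects.items() if rc == 0]:
        del cas_objects[h]
        stats["leaked_refs"] += 1
    return stats

root = Path(tempfile.mkdtemp())
(root / "ab").mkdir()
old_orphan = root / "ab" / "abcd.blob"
old_orphan.write_bytes(b"stale")
os.utime(old_orphan, (time.time() - 7200,) * 2)  # make it 2 hours old
young = root / "ab" / "abff.blob"
young.write_bytes(b"in flight")
refs = {"leaked": 0}                             # a leaked zero-refcount row
stats = sweep(root, refs, now=time.time())
```

Running `sweep` a second time deletes nothing further: the old orphan is gone, the young file is still inside its grace period, and the leaked row has been removed.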
-
Migration from old layout (tesseras-storage/src/migration.rs) — A
-copy-first migration strategy that moves data from the old directory-based
-layout (blobs/<tessera>/<memory>/<file> and
-fragments/<tessera>/<index>.shard) into the CAS. The migration:
-
-
Checks the storage version in storage_meta (version 1 = old layout, version
-2 = CAS)
-
Walks the old blobs/ and fragments/ directories
-
Computes BLAKE3 hashes and inserts into CAS via put() — duplicates are
-automatically deduplicated
Removes old directories only after all data is safely in CAS
-
Updates the storage version to 2
-
-
The migration runs on daemon startup, is idempotent (safe to re-run), and
-reports statistics: files migrated, duplicates found, bytes saved.
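The copy-first ordering is the whole safety argument, so here is a runnable miniature: SHA-256 in place of BLAKE3, a bare directory in place of the real store, no fragments/ handling, and every name invented for illustration.

```python
import hashlib, shutil, tempfile
from pathlib import Path

def migrate(store: Path) -> dict:
    """Copy-first: everything lands in cas/ before blobs/ is removed."""
    cas = store / "cas"
    stats = {"migrated": 0, "duplicates": 0}
    old = store / "blobs"
    if not old.exists():               # already migrated: safe to re-run
        return stats
    for f in sorted(p for p in old.rglob("*") if p.is_file()):
        h = hashlib.sha256(f.read_bytes()).hexdigest()  # BLAKE3 in the real code
        dest = cas / h[:2] / f"{h}.blob"
        if dest.exists():
            stats["duplicates"] += 1   # dedup happens automatically
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)      # copy, don't move: old layout stays intact
        stats["migrated"] += 1
    shutil.rmtree(old)                 # only after every file is safely in the CAS
    return stats

store = Path(tempfile.mkdtemp())
(store / "blobs" / "t1").mkdir(parents=True)
(store / "blobs" / "t2").mkdir(parents=True)
(store / "blobs" / "t1" / "a.jpg").write_bytes(b"same photo")
(store / "blobs" / "t2" / "b.jpg").write_bytes(b"same photo")  # a duplicate
first = migrate(store)
second = migrate(store)  # idempotent: nothing left to do
```

If the process dies anywhere before the final `rmtree`, the old layout is untouched and the migration simply runs again.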
-
Prometheus metrics (tesseras-storage/src/metrics.rs) — Ten new metrics for
-observability:
-
| Metric | Description |
| --- | --- |
| cas_objects_total | Total unique objects in the CAS |
| cas_bytes_total | Total bytes stored |
| cas_dedup_hits_total | Number of writes that found an existing object |
| cas_bytes_saved_total | Bytes saved by deduplication |
| cas_gc_refcount_deletions_total | Objects deleted when refcount reached zero |
| cas_gc_sweep_orphans_cleaned_total | Orphan files removed by sweep |
| cas_gc_sweep_leaked_refs_cleaned_total | Leaked refcount rows cleaned |
| cas_gc_sweep_skipped_young_total | Young orphans skipped (grace period) |
| cas_gc_sweep_duration_seconds | Time spent in sweep GC |
-
-
Property-based tests — Two proptest tests verify CAS invariants under random
-inputs:
-
-
refcount_matches_actual_refs — after N random put/release operations, the
-refcount always matches the actual number of outstanding references
-
cas_path_is_deterministic — the same hash always produces the same
-filesystem path
-
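The first invariant is easy to restate outside proptest. This plain-random Python analogue (not the crate's actual test) drives the same property: after any sequence of put/release operations, the stored refcount matches an independently tracked ground truth.

```python
import random

random.seed(42)
refcounts: dict[str, int] = {}    # what the store believes
outstanding: dict[str, int] = {}  # ground truth, tracked independently

def put(h: str) -> None:
    refcounts[h] = refcounts.get(h, 0) + 1

def release(h: str) -> None:
    refcounts[h] -= 1
    if refcounts[h] == 0:
        del refcounts[h]

for _ in range(10_000):
    h = random.choice(["a", "b", "c"])
    if outstanding.get(h, 0) > 0 and random.random() < 0.5:
        release(h)
        outstanding[h] -= 1
    else:
        put(h)
        outstanding[h] = outstanding.get(h, 0) + 1
    # The property under test: stored refcount == actual outstanding refs.
    assert all(refcounts.get(k, 0) == n for k, n in outstanding.items())
```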
-
Integration test updates — All integration tests across tesseras-core,
-tesseras-replication, tesseras-embedded, and tesseras-cli updated for the
-new CAS-backed constructors. Tamper-detection tests updated to work with the CAS
-directory layout.
-
347 tests pass across the workspace. Clippy clean with -D warnings.
-
Architecture decisions
-
-
BLAKE3 as CAS key: the content hash we already compute for integrity
-verification doubles as the deduplication key. No additional hashing step —
-the hash computed during create or replicate is reused as the CAS address.
-
SQLite refcount over filesystem reflinks: we considered using
-filesystem-level copy-on-write (reflinks on btrfs/XFS), but that would tie
-Tesseras to specific filesystems. SQLite refcounting works on any filesystem,
-including FAT32 on cheap USB drives and ext4 on Raspberry Pis.
-
Two-level hex prefix directories: storing all CAS objects in a flat
-directory would slow down filesystems with millions of entries. The
-<2-char prefix>/ split limits any single directory to ~65k entries before a
-second prefix level is needed. This matches the approach used by Git's object
-store.
-
Grace period for orphan files: the sweep GC skips files younger than 1
-hour to avoid deleting objects that are being written by a concurrent
-operation. This is a pragmatic choice — it trades a small window of potential
-orphans for crash safety without requiring fsync or two-phase commit.
-
Copy-first migration: the migration copies data to CAS before removing old
-directories. If the process is interrupted, the old data is still intact and
-migration can be re-run. This is slower than moving files but guarantees no
-data loss.
-
Sweep in repair loop: rather than adding a separate GC timer, the CAS
-sweep piggybacks on the existing 24-hour repair loop. This keeps the daemon
-simple — one background maintenance cycle handles both fragment health and
-storage cleanup.
-
-
What comes next
-
-
Phase 4 continued — security audits, OS packaging (Alpine, Arch, Debian,
-OpenBSD, FreeBSD)
-
Phase 5: Exploration and Culture — public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration
-(FamilySearch, Ancestry), physical media export (M-DISC, microfilm, acid-free
-paper with QR), AI-assisted context
-
-
Storage deduplication completes the storage efficiency story for Tesseras. A
-node that stores fragments for thousands of users — common for institutional
-nodes and always-on full nodes — now pays the disk cost of unique data only.
-Combined with Reed-Solomon erasure coding (which already minimizes redundancy at
-the network level), the system achieves efficient storage at both the local and
-distributed layers.
Trust shouldn't require installing software. If someone sends you a tessera — a
-bundle of preserved memories — you should be able to verify it's genuine and
-unmodified without downloading an app, creating an account, or trusting a
-server. That's what tesseras-wasm delivers: drag a tessera archive into a web
-page, and cryptographic verification happens entirely in your browser.
-
What was built
-
tesseras-wasm — A Rust crate that compiles to WebAssembly via wasm-pack,
-exposing four stateless functions to JavaScript. The crate depends on
-tesseras-core for manifest parsing and calls cryptographic primitives directly
-(blake3, ed25519-dalek) rather than depending on tesseras-crypto, which pulls
-in C-based post-quantum libraries that don't compile to
-wasm32-unknown-unknown.
-
parse_manifest takes raw MANIFEST bytes (UTF-8 plain text, not MessagePack),
-delegates to tesseras_core::manifest::Manifest::parse(), and returns a JSON
-string with the creator's Ed25519 public key, signature file paths, and a list
-of files with their expected BLAKE3 hashes, sizes, and MIME types. Internal
-structs (ManifestJson, CreatorPubkey, SignatureFiles, FileEntry) are
-serialized with serde_json. The ML-DSA public key and signature file fields are
-present in the JSON contract but set to null — ready for when post-quantum
-signing is implemented on the native side.
-
hash_blake3 computes a BLAKE3 hash of arbitrary bytes and returns a
-64-character hex string. It's called once per file in the tessera to verify
-integrity against the MANIFEST.
-
verify_ed25519 takes a message, a 64-byte signature, and a 32-byte public key,
-constructs an ed25519_dalek::VerifyingKey, and returns whether the signature
-is valid. Length validation returns descriptive errors ("Ed25519 public key must
-be 32 bytes") rather than panicking.
-
verify_ml_dsa is a stub that returns an error explaining ML-DSA verification
-is not yet available. This is deliberate: the ml-dsa crate on crates.io is
-v0.1.0-rc.7 (pre-release), and tesseras-crypto uses pqcrypto-dilithium
-(C-based CRYSTALS-Dilithium) which is byte-incompatible with FIPS 204 ML-DSA.
-Both sides need to use the same pure Rust implementation before
-cross-verification works. Ed25519 verification is sufficient — every tessera is
-Ed25519-signed.
-
All four functions use a two-layer pattern for testability: inner functions
-return Result<T, String> and are tested natively, while thin #[wasm_bindgen]
-wrappers convert errors to JsError. This avoids JsError::new() panicking on
-non-WASM targets during testing.
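The two-layer split can be mimicked in any language. This Python analogue (names invented here, SHA-256 standing in for the real hashing) shows why the inner layer stays testable without the host's error type ever being constructed:

```python
import hashlib

def hash_hex_inner(data) -> tuple:
    """Inner layer: returns (value, error), like Result<T, String>.
    Testable anywhere, with no WASM/JS machinery involved."""
    if not isinstance(data, (bytes, bytearray)):
        return None, "input must be bytes"
    return hashlib.sha256(bytes(data)).hexdigest(), None

class JsErrorStandIn(Exception):
    """Stands in for JsError, which only exists on the WASM side."""

def hash_hex(data) -> str:
    """Thin wrapper: converts the error channel to the host's error type,
    the way #[wasm_bindgen] wrappers convert Result<T, String> to JsError."""
    value, err = hash_hex_inner(data)
    if err is not None:
        raise JsErrorStandIn(err)
    return value
```

Native tests exercise `hash_hex_inner` exhaustively; only a handful of integration tests need the wrapper and its host error type.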
-
The compiled WASM binary is 109 KB raw and 44 KB gzipped — well under the 200 KB
-budget. wasm-opt applies -Oz optimization after wasm-pack builds with
-opt-level = "z", LTO, and single codegen unit.
-
@tesseras/verify — A TypeScript npm package (crates/tesseras-wasm/js/)
that orchestrates browser-side verification. The public API is a single
verification function.
The VerificationResult type provides everything a UI needs: overall validity,
-tessera hash, creator public keys, signature status (valid/invalid/missing for
-both Ed25519 and ML-DSA), per-file integrity results with expected and actual
-hashes, a list of unexpected files not in the MANIFEST, and an errors array.
-
Archive unpacking (unpack.ts) handles three formats: gzip-compressed tar
-(detected by \x1f\x8b magic bytes, decompressed with fflate then parsed as
-tar), ZIP (PK\x03\x04 magic, unpacked with fflate's unzipSync), and raw tar
-(ustar at offset 257). A normalizePath function strips the leading
-tessera-<hash>/ prefix so internal paths match MANIFEST entries.
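The sniffing rules are small enough to sketch. The function below mirrors the detection logic described for unpack.ts as a Python stand-in (not the actual TypeScript), using stdlib modules to build tiny real archives to sniff:

```python
import gzip, io, tarfile, zipfile

def detect_format(data: bytes) -> str:
    """Magic-byte sniffing: gzip, ZIP, or ustar tar (magic at offset 257)."""
    if data[:2] == b"\x1f\x8b":
        return "tar.gz"
    if data[:4] == b"PK\x03\x04":
        return "zip"
    if len(data) > 262 and data[257:262] == b"ustar":
        return "tar"
    return "unknown"

def normalize_path(path: str) -> str:
    """Strip the leading tessera-<hash>/ prefix so paths match the MANIFEST."""
    head, _, tail = path.partition("/")
    return tail if head.startswith("tessera-") and tail else path

# Build miniature archives of each format.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("tessera-abc123/MANIFEST")
    info.size = 5
    tar.addfile(info, io.BytesIO(b"hello"))
raw_tar = buf.getvalue()
gz = gzip.compress(raw_tar)
zbuf = io.BytesIO()
with zipfile.ZipFile(zbuf, "w") as z:
    z.writestr("MANIFEST", "hello")
```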
-
Verification runs in a Web Worker (worker.ts) to keep the UI thread
-responsive. The worker initializes the WASM module, unpacks the archive, parses
-the MANIFEST, verifies the Ed25519 signature against the creator's public key,
-then hashes each file with BLAKE3 and compares against expected values. Progress
-messages stream back to the main thread after each file. If any signature is
-invalid, verification stops early without hashing files — failing fast on the
-most critical check.
-
The archive is transferred to the worker with zero-copy
-(worker.postMessage({ type: "verify", archive }, [archive.buffer])) to avoid
-duplicating potentially large tessera files in memory.
-
Build pipeline — Three new justfile targets: wasm-build runs wasm-pack
-with --target web --release and optimizes with wasm-opt; wasm-size reports
-raw and gzipped binary size; test-wasm runs the native test suite.
-
Tests — 9 native unit tests cover BLAKE3 hashing (empty input, known value),
-Ed25519 verification (valid signature, invalid signature, wrong key, bad key
-length), and MANIFEST parsing (valid manifest, invalid UTF-8, garbage input). 3
-WASM integration tests run in headless Chrome via
-wasm-pack test --headless --chrome, verifying that hash_blake3,
-verify_ed25519, and parse_manifest work correctly when compiled to
-wasm32-unknown-unknown.
-
Architecture decisions
-
-
No tesseras-crypto dependency: the WASM crate calls blake3 and
-ed25519-dalek directly. tesseras-crypto depends on pqcrypto-kyber (C-based
-ML-KEM via pqcrypto-traits) which requires a C compiler toolchain and doesn't
-target wasm32. By depending only on pure Rust crates, the WASM build has zero
-C dependencies and compiles cleanly to WebAssembly.
-
ML-DSA deferred, not faked: rather than silently skipping post-quantum
-verification, the stub returns an explicit error. This ensures that if a
-tessera contains an ML-DSA signature, the verification result will report
-ml_dsa: "missing" rather than pretending it was checked. The JS orchestrator
-handles this gracefully — a tessera is valid if Ed25519 passes and ML-DSA is
-missing (not yet implemented on either side).
-
Inner function pattern: JsError cannot be constructed on non-WASM
-targets (it panics). Splitting each function into
-foo_inner() -> Result<T, String> and foo() -> Result<T, JsError> lets the
-native test suite exercise all logic without touching JavaScript types. The
-WASM integration tests in headless Chrome test the full #[wasm_bindgen]
-surface.
-
Web Worker isolation: cryptographic operations (especially BLAKE3 over
-large media files) can take hundreds of milliseconds. Running in a Worker
-prevents UI jank. The streaming progress protocol
-({ type: "progress", current, total, file }) lets the UI show a progress bar
-during verification of tesseras with many files.
-
Zero-copy transfer: archive.buffer is transferred to the Worker, not
-copied. For a 50 MB tessera archive, this avoids doubling memory usage during
-verification.
-
Plain text MANIFEST, not MessagePack: the WASM crate parses the same
-plain-text MANIFEST format as the CLI. This is by design — the MANIFEST is the
-tessera's Rosetta Stone, readable by anyone with a text editor. The
-rmp-serde dependency in the Cargo.toml is not used and will be removed.
-
-
What comes next
-
-
Phase 4: Resilience and Scale — OS packaging (Alpine, Arch, Debian,
-FreeBSD, OpenBSD), CI on SourceHut and GitHub Actions, security audits,
-browser-based tessera explorer at tesseras.net using @tesseras/verify
-
Phase 5: Exploration and Culture — Public tessera browser by
-era/location/theme/language, institutional curation, genealogy integration,
-physical media export (M-DISC, microfilm, acid-free paper with QR)
-
-
Verification no longer requires trust in software. A tessera archive dropped
-into a browser is verified with the same cryptographic rigor as the CLI — same
-BLAKE3 hashes, same Ed25519 signatures, same MANIFEST parser. The difference is
-that now anyone can do it.
Your hard drive will die. Your cloud provider will pivot. The RAID array in your
-closet will outlive its controller but not its owner. If a memory is stored in
-exactly one place, it has exactly one way to be lost forever.
-
Tesseras is a network that keeps human memories alive through mutual aid. The
-core survival mechanism is Reed-Solomon erasure coding — a technique
-borrowed from deep-space communication that lets us reconstruct data even when
-pieces go missing.
-
What is Reed-Solomon?
-
Reed-Solomon is a family of error-correcting codes invented by Irving Reed and
-Gustave Solomon in 1960. The original use case was correcting errors in data
-transmitted over noisy channels — think Voyager sending photos from Jupiter, or
-a CD playing despite scratches.
-
The key insight: if you add carefully computed redundancy to your data before
-something goes wrong, you can recover the original even after losing some
-pieces.
-
Here's the intuition. Suppose you have a polynomial of degree 2 — a parabola.
-You need 3 points to define it uniquely. But if you evaluate it at 5 points, you
-can lose any 2 of those 5 and still reconstruct the polynomial from the
-remaining 3. Reed-Solomon generalizes this idea to work over finite fields
-(Galois fields), where the "polynomial" is your data and the "evaluation points"
-are your fragments.
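The parabola intuition can be checked directly with exact Lagrange interpolation. This is illustrative code, not the Reed-Solomon implementation: real erasure coding works over a finite field, but the lose-any-2-of-5 behavior is identical.

```python
from fractions import Fraction

def lagrange_at(points, x):
    """Evaluate the unique polynomial through `points` at `x`, exactly."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# "Data" is the degree-2 polynomial p(t) = 3t^2 + 2t + 1.
p = lambda t: 3 * t * t + 2 * t + 1
shards = [(t, p(t)) for t in range(5)]        # evaluate at 5 points
survivors = [shards[0], shards[2], shards[4]]  # lose any 2, keep any 3
recovered = [lagrange_at(survivors, t) for t in range(5)]
```

Any 3 of the 5 points reconstruct all 5 values, including the two that were "lost".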
-
In concrete terms:
-
-
Split your data into k data shards
-
Compute m parity shards from the data shards
-
Distribute all k + m shards across different locations
-
Reconstruct the original data from any k of the k + m shards
-
-
You can lose up to m shards — any m, data or parity, in any combination —
-and still recover everything.
-
Why not just make copies?
-
The naive approach to redundancy is replication: make 3 copies, store them in 3
-places. This gives you tolerance for 2 failures at the cost of 3x your storage.
-
Reed-Solomon is dramatically more efficient:
-
| Strategy | Storage overhead | Failures tolerated |
| --- | --- | --- |
| 3x replication | 200% | 2 out of 3 |
| Reed-Solomon (16,8) | 50% | 8 out of 24 |
| Reed-Solomon (48,24) | 50% | 24 out of 72 |
-
-
With 16 data shards and 8 parity shards, you use 50% extra storage but can
-survive losing a third of all fragments. To achieve the same fault tolerance
-with replication alone, you'd need 3x the storage.
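The table's numbers fall out of two ratios: overhead is m/k and the tolerable loss is m/(k+m). Modeling 3x replication as one data shard plus two full copies is a simplification made here to fit the same formula:

```python
def overhead_and_tolerance(k: int, m: int) -> tuple:
    """(extra storage / original data, fraction of shards that may be lost)."""
    return m / k, m / (k + m)

schemes = {
    "3x replication": (1, 2),   # 1 "data shard" + 2 copies, for comparison
    "RS(16,8)": (16, 8),
    "RS(48,24)": (48, 24),
}
```

Both Reed-Solomon configurations pay 50% overhead for the same one-third loss tolerance, where replication pays 200% for two-thirds.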
-
For a network that aims to preserve memories across decades and centuries, this
-efficiency isn't a nice-to-have — it's the difference between a viable system
-and one that drowns in its own overhead.
-
How Tesseras uses Reed-Solomon
-
Not all data deserves the same treatment. A 500-byte text memory and a 100 MB
-video have very different redundancy needs. Tesseras uses a three-tier
-fragmentation strategy:
-
Small (< 4 MB) — Whole-file replication to 7 peers. For small tesseras, the
-overhead of erasure coding (encoding time, fragment management, reconstruction
-logic) outweighs its benefits. Simple copies are faster and simpler.
-
Medium (4–256 MB) — 16 data shards + 8 parity shards = 24 total fragments.
-Each fragment is roughly 1/16th of the original size. Any 16 of the 24 fragments
-reconstruct the original. Distributed across 7 peers.
-
Large (≥ 256 MB) — 48 data shards + 24 parity shards = 72 total fragments.
-Higher shard count means smaller individual fragments (easier to transfer and
-store) and higher absolute fault tolerance. Also distributed across 7 peers.
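The tier selection reduces to two size thresholds. This sketch returns (data, parity) shard counts; representing the small tier's whole-file replication as (1, 6), seven total copies across 7 peers, is a modeling assumption of this example, not the daemon's internal representation:

```python
MB = 1024 * 1024

def fragmentation_plan(size_bytes: int) -> tuple:
    """Return (data_shards, parity_shards) for a tessera of the given size."""
    if size_bytes < 4 * MB:
        return 1, 6        # small: whole-file copies to 7 peers (modeled)
    if size_bytes < 256 * MB:
        return 16, 8       # medium: 24 fragments, any 16 reconstruct
    return 48, 24          # large: 72 fragments, any 48 reconstruct
```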
-
The implementation uses the reed-solomon-erasure crate operating over GF(2⁸) —
-the same Galois field used in QR codes and CDs. Each fragment carries a BLAKE3
-checksum so corruption is detected immediately, not silently propagated.
Reed-Solomon solves the mathematical problem of redundancy. The engineering
-challenges are everything around it.
-
Fragment tracking
-
Every fragment needs to be findable. Tesseras uses a Kademlia DHT for peer
-discovery and fragment-to-peer mapping. When a node goes offline, its fragments
-need to be re-created and distributed to new peers. This means tracking which
-fragments exist, where they are, and whether they're still intact — across a
-network with no central authority.
-
Silent corruption
-
A fragment that returns wrong data is worse than one that's missing — at least a
-missing fragment is honestly absent. Tesseras addresses this with
-attestation-based health checks: the repair loop periodically asks fragment
-holders to prove possession by returning BLAKE3 checksums. If a checksum doesn't
-match, the fragment is treated as lost.
-
Correlated failures
-
If all 24 fragments of a tessera land on machines in the same datacenter, a
-single power outage kills them all. Reed-Solomon's math assumes independent
-failures. Tesseras enforces subnet diversity during distribution: no more
-than 2 fragments per /24 IPv4 subnet (or /48 IPv6 prefix). This spreads
-fragments across different physical infrastructure.
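The diversity rule maps cleanly onto the stdlib ipaddress module. These helper names are illustrative, but the grouping (/24 for IPv4, /48 for IPv6, at most 2 fragments per group) follows the rule above:

```python
from collections import Counter
from ipaddress import ip_address, ip_network

def subnet_key(addr: str) -> str:
    """Group peers by /24 (IPv4) or /48 (IPv6), the diversity unit."""
    prefix = 24 if ip_address(addr).version == 4 else 48
    return str(ip_network(f"{addr}/{prefix}", strict=False))

def respects_diversity(peers: list, max_per_subnet: int = 2) -> bool:
    """True if no subnet holds more than max_per_subnet fragments."""
    counts = Counter(subnet_key(p) for p in peers)
    return all(n <= max_per_subnet for n in counts.values())
```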
-
Repair speed vs. network load
-
When a peer goes offline, the clock starts ticking. Lost fragments need to be
-re-created before more failures accumulate. But aggressive repair floods the
-network. Tesseras balances this with a configurable repair loop (default: every
-24 hours with 2-hour jitter) and concurrent transfer limits (default: 4
-simultaneous transfers). The jitter prevents repair storms where every node
-checks its fragments at the same moment.
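A sketch of the jitter idea follows; whether the real daemon applies the jitter symmetrically around the 24-hour base is an assumption made here for illustration:

```python
import random

def next_repair_delay(base_hours=24.0, jitter_hours=2.0, rng=None):
    """Base interval plus uniform jitter, so nodes don't all repair at once.
    Symmetric +/- jitter is an assumption of this sketch."""
    rng = rng or random.Random()
    return base_hours + rng.uniform(-jitter_hours, jitter_hours)

rng = random.Random(7)
delays = [next_repair_delay(rng=rng) for _ in range(1000)]
```

A thousand simulated nodes spread their repair passes across a four-hour window instead of waking up in lockstep.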
-
Long-term key management
-
Reed-Solomon protects against data loss, not against losing access. If a tessera
-is encrypted (private or sealed visibility), you need the decryption key to make
-the recovered data useful. Tesseras separates these concerns: erasure coding
-handles availability, while Shamir's Secret Sharing (a future phase) will handle
-key distribution among heirs. The project's design philosophy — encrypt as
-little as possible — keeps the key management problem small.
-
Galois field limitations
-
The GF(2⁸) field limits the total number of shards to 255 (data + parity
-combined). For Tesseras, this is not a practical constraint — even the Large
-tier uses only 72 shards. But it does mean that extremely large files with
-thousands of fragments would require either a different field or a layered
-encoding scheme.
-
Evolving codec compatibility
-
A tessera encoded today must be decodable in 50 years. Reed-Solomon over GF(2⁸)
-is one of the most widely implemented algorithms in computing — it's in every CD
-player, every QR code scanner, every deep-space probe. This ubiquity is itself a
-survival strategy. The algorithm won't be forgotten because half the world's
-infrastructure depends on it.
-
The bigger picture
-
Reed-Solomon is a piece of a larger puzzle. It works in concert with:
-
-
Kademlia DHT for finding peers and routing fragments
-
BLAKE3 checksums for integrity verification
-
Bilateral reciprocity for fair storage exchange (no blockchain needed)
-
Subnet diversity for failure independence
-
Automatic repair for maintaining redundancy over time
-
-
No single technique makes memories survive. Reed-Solomon ensures that data can
-be recovered. The DHT ensures fragments can be found. Reciprocity ensures
-peers want to help. Repair ensures none of this degrades over time.
-
A tessera is a bet that the sum of these mechanisms, running across many
-independent machines operated by many independent people, is more durable than
-any single institution. Reed-Solomon is the mathematical foundation of that bet.