<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Phase 4: Performance Tuning — Tesseras</title>
    <meta name="description" content="SQLite WAL mode with centralized pragma configuration, LRU fragment caching, QUIC connection pool lifecycle management, and attestation hot path optimization.">
    <!-- Open Graph -->
    <meta property="og:type" content="article">
    <meta property="og:title" content="Phase 4: Performance Tuning">
    <meta property="og:description" content="SQLite WAL mode with centralized pragma configuration, LRU fragment caching, QUIC connection pool lifecycle management, and attestation hot path optimization.">
    <meta property="og:image" content="https://tesseras.net/images/social.jpg">
    <meta property="og:image:width" content="1200">
    <meta property="og:image:height" content="630">
    <meta property="og:site_name" content="Tesseras">
    <!-- Twitter Card -->
    <meta name="twitter:card" content="summary_large_image">
    <meta name="twitter:title" content="Phase 4: Performance Tuning">
    <meta name="twitter:description" content="SQLite WAL mode with centralized pragma configuration, LRU fragment caching, QUIC connection pool lifecycle management, and attestation hot path optimization.">
    <meta name="twitter:image" content="https://tesseras.net/images/social.jpg">
    <link rel="stylesheet" href="https://tesseras.net/style.css?h=21f0f32121928ee5c690">
    
        
            <link rel="alternate" type="application/atom+xml" title="Tesseras" href="https://tesseras.net/atom.xml">
        
    
    <link rel="icon" type="image/png" sizes="32x32" href="https://tesseras.net/images/favicon.png?h=be4e123a23393b1a027d">
    
</head>
<body>
    <header>
        <h1>
            <a href="https:&#x2F;&#x2F;tesseras.net/">
                <img src="https://tesseras.net/images/logo-64.png?h=c1b8d0c4c5f93b49d40b" alt="Tesseras" width="40" height="40" class="logo">
                Tesseras
            </a>
        </h1>
        <nav>
            
                <a href="https://tesseras.net/about/">About</a>
                <a href="https://tesseras.net/news/">News</a>
                <a href="https://tesseras.net/releases/">Releases</a>
                <a href="https://tesseras.net/faq/">FAQ</a>
                <a href="https://tesseras.net/subscriptions/">Subscriptions</a>
                <a href="https://tesseras.net/contact/">Contact</a>
            
        </nav>
        <nav class="lang-switch">
            
                <strong>English</strong> | <a href="/pt-br&#x2F;news&#x2F;phase4-performance-tuning&#x2F;">Português</a>
            
        </nav>
    </header>

    <main>
        
<article>
    <h2>Phase 4: Performance Tuning</h2>
    <p class="news-date">2026-02-15</p>
    <p>A P2P network that can traverse NATs but chokes on its own I/O is not much use.
Phase 4 continues with performance tuning: centralizing database configuration,
caching fragment blobs in memory, managing QUIC connection lifecycles, and
eliminating unnecessary disk reads from the attestation hot path.</p>
<p>The guiding principle was the same as the rest of Tesseras: do the simplest
thing that actually works. No custom allocators, no lock-free data structures,
no premature complexity. A centralized <code>StorageConfig</code>, an LRU cache, a
connection reaper, and a targeted fix to avoid re-reading blobs that were
already checksummed.</p>
<h2 id="what-was-built">What was built</h2>
<p><strong>Centralized SQLite configuration</strong> (<code>tesseras-storage/src/database.rs</code>) — A
new <code>StorageConfig</code> struct and <code>open_database()</code> / <code>open_in_memory()</code> functions
that apply all SQLite pragmas in one place: WAL journal mode, foreign keys,
synchronous mode (NORMAL by default, FULL for unstable hardware like RPi + SD
card), busy timeout, page cache size, and WAL autocheckpoint interval.
Previously, each call site opened a connection and applied pragmas ad hoc. Now
the daemon, CLI, and tests all go through the same path. Seven tests cover
foreign keys, busy timeout, journal mode, migrations, synchronous modes, and
on-disk WAL file creation.</p>
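<p>A std-only sketch of the idea, assuming illustrative field names (the real
<code>StorageConfig</code> lives in <code>tesseras-storage</code> and applies the
pragmas through <code>rusqlite</code>): every connection gets the same pragma
batch, so call sites cannot drift apart.</p>

```rust
// Hypothetical sketch: field names and defaults are illustrative,
// not the actual tesseras-storage API.
struct StorageConfig {
    synchronous_full: bool, // FULL for unstable hardware, NORMAL otherwise
    busy_timeout_ms: u32,
    cache_size: i32, // negative values mean KiB in SQLite
    wal_autocheckpoint: u32,
}

impl Default for StorageConfig {
    fn default() -> Self {
        Self {
            synchronous_full: false,
            busy_timeout_ms: 5_000,
            cache_size: -8_000, // ~8 MiB page cache
            wal_autocheckpoint: 1_000,
        }
    }
}

impl StorageConfig {
    /// One place produces the whole pragma batch; daemon, CLI, and
    /// tests all run the same statements against a new connection.
    fn pragma_statements(&self) -> Vec<String> {
        vec![
            "PRAGMA journal_mode = WAL".into(),
            "PRAGMA foreign_keys = ON".into(),
            format!(
                "PRAGMA synchronous = {}",
                if self.synchronous_full { "FULL" } else { "NORMAL" }
            ),
            format!("PRAGMA busy_timeout = {}", self.busy_timeout_ms),
            format!("PRAGMA cache_size = {}", self.cache_size),
            format!("PRAGMA wal_autocheckpoint = {}", self.wal_autocheckpoint),
        ]
    }
}

fn main() {
    for stmt in StorageConfig::default().pragma_statements() {
        println!("{stmt}");
    }
}
```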
<p><strong>LRU fragment cache</strong> (<code>tesseras-storage/src/cache.rs</code>) — A
<code>CachedFragmentStore</code> that wraps any <code>FragmentStore</code> with a byte-aware LRU
cache. Fragment blobs are cached on read and invalidated on write or delete.
When the cache exceeds its configured byte limit, the least recently used
entries are evicted. The cache is transparent: it implements <code>FragmentStore</code>
itself, so the rest of the stack doesn't know it's there. Optional Prometheus
metrics track hits, misses, and current byte usage. Three tests cover the core
behaviors: a cache hit avoids the inner read, a store invalidates the cached
entry, and eviction kicks in when the cache exceeds its byte limit.</p>
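<p>The eviction mechanics can be sketched with std collections alone. This is a
minimal byte-aware LRU, assuming string keys for brevity; the real
<code>CachedFragmentStore</code> layers <code>FragmentStore</code> delegation and
metrics on top of logic like this.</p>

```rust
use std::collections::{HashMap, VecDeque};

// Minimal byte-aware LRU sketch: recency order in a VecDeque,
// blobs in a HashMap, total bytes tracked explicitly.
struct ByteLru {
    max_bytes: usize,
    bytes: usize,
    entries: HashMap<String, Vec<u8>>,
    order: VecDeque<String>, // front = least recently used
}

impl ByteLru {
    fn new(max_bytes: usize) -> Self {
        Self { max_bytes, bytes: 0, entries: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
        if self.entries.contains_key(key) {
            // Promote to most recently used.
            self.order.retain(|k| k.as_str() != key);
            self.order.push_back(key.to_string());
            self.entries.get(key)
        } else {
            None
        }
    }

    fn put(&mut self, key: String, blob: Vec<u8>) {
        if let Some(old) = self.entries.remove(&key) {
            self.bytes -= old.len();
            self.order.retain(|k| k != &key);
        }
        self.bytes += blob.len();
        self.entries.insert(key.clone(), blob);
        self.order.push_back(key);
        // Evict least recently used entries until under the byte cap.
        while self.bytes > self.max_bytes {
            let lru = self.order.pop_front().expect("order tracks entries");
            let evicted = self.entries.remove(&lru).expect("entry exists");
            self.bytes -= evicted.len();
        }
    }

    fn invalidate(&mut self, key: &str) {
        if let Some(old) = self.entries.remove(key) {
            self.bytes -= old.len();
            self.order.retain(|k| k.as_str() != key);
        }
    }
}

fn main() {
    let mut cache = ByteLru::new(8);
    cache.put("a".into(), vec![0; 4]);
    cache.put("b".into(), vec![0; 4]);
    cache.get("a"); // "a" is now most recently used
    cache.put("c".into(), vec![0; 4]); // over cap: evicts "b", not "a"
    assert!(cache.get("b").is_none());
    assert!(cache.get("a").is_some());
    cache.invalidate("a");
    assert!(cache.get("a").is_none());
}
```

<p>Tracking bytes rather than entry count is what makes a 2 MB blob cost as much
cache room as five hundred 4 KB blobs.</p>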
<p><strong>Prometheus storage metrics</strong> (<code>tesseras-storage/src/metrics.rs</code>) — A
<code>StorageMetrics</code> struct with three counters/gauges: <code>fragment_cache_hits</code>,
<code>fragment_cache_misses</code>, and <code>fragment_cache_bytes</code>. Registered with the
Prometheus registry and wired into the fragment cache via <code>with_metrics()</code>.</p>
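<p>In shape, the metrics are just three atomics behind shared references. The
sketch below uses std atomics only, as a stand-in for the
<code>prometheus</code> crate counters the real <code>StorageMetrics</code>
registers.</p>

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Std-only stand-in for the Prometheus counters/gauge; the real
// StorageMetrics registers IntCounter/IntGauge values instead.
#[derive(Default)]
struct StorageMetrics {
    fragment_cache_hits: AtomicU64,
    fragment_cache_misses: AtomicU64,
    fragment_cache_bytes: AtomicU64, // gauge: current cached bytes
}

impl StorageMetrics {
    fn record_hit(&self) {
        self.fragment_cache_hits.fetch_add(1, Ordering::Relaxed);
    }
    fn record_miss(&self) {
        self.fragment_cache_misses.fetch_add(1, Ordering::Relaxed);
    }
    fn set_bytes(&self, bytes: u64) {
        self.fragment_cache_bytes.store(bytes, Ordering::Relaxed);
    }
}

fn main() {
    let m = StorageMetrics::default();
    m.record_miss(); // first read: miss, blob loaded from the inner store
    m.record_hit();  // second read: served from the cache
    m.set_bytes(4096);
    assert_eq!(m.fragment_cache_hits.load(Ordering::Relaxed), 1);
    assert_eq!(m.fragment_cache_misses.load(Ordering::Relaxed), 1);
}
```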
<p><strong>Attestation hot path fix</strong> (<code>tesseras-replication/src/service.rs</code>) — The
attestation flow previously read every fragment blob from disk and recomputed
its BLAKE3 checksum. Since <code>list_fragments()</code> already returns <code>FragmentId</code> with
a stored checksum, the fix is trivial: use <code>frag.checksum</code> instead of
<code>blake3::hash(&amp;data)</code>. This eliminates one disk read per fragment during
attestation — for a tessera with 100 fragments, that's 100 fewer reads. A test
with <code>expect_read_fragment().never()</code> verifies no blob reads happen during
attestation.</p>
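<p>The shape of the fix, with a mock store standing in for the real
one (<code>FragmentId</code> and <code>checksum</code> follow the post; the rest
is illustrative): attestation builds its digest list from the listing alone, so
a read counter stays at zero.</p>

```rust
use std::cell::Cell;

// Illustrative mock: FragmentId carries the checksum stored at write
// time, so attestation never needs the blob itself.
struct FragmentId {
    checksum: [u8; 32], // BLAKE3 digest recorded by store_fragment()
}

struct MockStore {
    fragments: Vec<FragmentId>,
    blob_reads: Cell<u32>, // counts would-be disk reads
}

impl MockStore {
    fn list_fragments(&self) -> &[FragmentId] {
        &self.fragments
    }
    #[allow(dead_code)]
    fn read_fragment(&self, _idx: usize) -> Vec<u8> {
        self.blob_reads.set(self.blob_reads.get() + 1);
        vec![] // would hit disk in the real store
    }
}

/// Collects attestation digests from stored checksums only.
fn attest(store: &MockStore) -> Vec<[u8; 32]> {
    store
        .list_fragments()
        .iter()
        .map(|frag| frag.checksum) // was: blake3::hash(&read_fragment(..))
        .collect()
}

fn main() {
    let store = MockStore {
        fragments: (0..100).map(|_| FragmentId { checksum: [0; 32] }).collect(),
        blob_reads: Cell::new(0),
    };
    let digests = attest(&store);
    assert_eq!(digests.len(), 100);
    assert_eq!(store.blob_reads.get(), 0); // no disk reads during attestation
}
```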
<p><strong>QUIC connection pool lifecycle</strong> (<code>tesseras-net/src/quinn_transport.rs</code>) — A
<code>PoolConfig</code> struct controlling max connections, idle timeout, and reaper
interval. <code>PooledConnection</code> wraps each <code>quinn::Connection</code> with a <code>last_used</code>
timestamp. When the pool reaches capacity, the oldest idle connection is evicted
before opening a new one. A background reaper task (Tokio spawn) periodically
closes connections that have been idle beyond the timeout. Four new pool metrics:
<code>tesseras_conn_pool_size</code>, <code>pool_hits_total</code>, <code>pool_misses_total</code>,
<code>pool_evictions_total</code>.</p>
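<p>The reaper logic itself is small. A std-only sketch, assuming a plain
<code>HashMap</code> and a unit type where <code>quinn::Connection</code> would
sit; the real pool uses a <code>DashMap</code> and runs this on a Tokio
interval.</p>

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

// Sketch: `()` stands in for the quinn::Connection handle.
struct PooledConnection {
    conn: (),
    last_used: Instant,
}

/// Drops every connection idle longer than `idle_timeout` and
/// returns how many were reaped.
fn reap_idle(
    pool: &mut HashMap<SocketAddr, PooledConnection>,
    idle_timeout: Duration,
    now: Instant,
) -> usize {
    let before = pool.len();
    pool.retain(|_, pc| now.duration_since(pc.last_used) <= idle_timeout);
    before - pool.len()
}

fn main() {
    let base = Instant::now();
    let now = base + Duration::from_secs(120); // pretend two minutes passed
    let mut pool: HashMap<SocketAddr, PooledConnection> = HashMap::new();
    pool.insert(
        "127.0.0.1:4433".parse().unwrap(),
        PooledConnection { conn: (), last_used: base }, // idle for 120 s
    );
    pool.insert(
        "127.0.0.1:4434".parse().unwrap(),
        PooledConnection { conn: (), last_used: now }, // just used
    );
    let reaped = reap_idle(&mut pool, Duration::from_secs(60), now);
    assert_eq!(reaped, 1); // only the stale connection is closed
    assert_eq!(pool.len(), 1);
}
```

<p>Passing <code>now</code> in explicitly keeps the function trivially testable
without sleeping in tests.</p>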
<p><strong>Daemon integration</strong> (<code>tesd/src/config.rs</code>, <code>main.rs</code>) — A new <code>[performance]</code>
section in the TOML config with fields for SQLite cache size, synchronous mode,
busy timeout, fragment cache size, max connections, idle timeout, and reaper
interval. The daemon's <code>main()</code> now calls <code>open_database()</code> with the configured
<code>StorageConfig</code>, wraps <code>FsFragmentStore</code> with <code>CachedFragmentStore</code>, and binds
QUIC with the configured <code>PoolConfig</code>. The direct <code>rusqlite</code> dependency was
removed from the daemon crate.</p>
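<p>As a rough sketch of the shape of that section (field names and default
values here are assumptions, not the daemon's actual schema; the real config
derives <code>serde::Deserialize</code> to read the TOML):</p>

```rust
// Hypothetical shape of the [performance] section; every field name
// and default below is illustrative, not the shipped tesd schema.
struct PerformanceConfig {
    sqlite_cache_kib: u32,
    sqlite_synchronous_full: bool,
    sqlite_busy_timeout_ms: u32,
    fragment_cache_bytes: u64,
    pool_max_connections: usize,
    pool_idle_timeout_secs: u64,
    pool_reaper_interval_secs: u64,
}

impl Default for PerformanceConfig {
    fn default() -> Self {
        Self {
            sqlite_cache_kib: 8_192,
            sqlite_synchronous_full: false, // NORMAL unless overridden
            sqlite_busy_timeout_ms: 5_000,
            fragment_cache_bytes: 64 * 1024 * 1024,
            pool_max_connections: 64,
            pool_idle_timeout_secs: 60,
            pool_reaper_interval_secs: 30,
        }
    }
}

fn main() {
    let cfg = PerformanceConfig::default();
    assert!(!cfg.sqlite_synchronous_full);
    assert_eq!(cfg.fragment_cache_bytes, 64 * 1024 * 1024);
    assert_eq!(cfg.pool_max_connections, 64);
}
```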
<p><strong>CLI migration</strong> (<code>tesseras-cli/src/commands/init.rs</code>, <code>create.rs</code>) — Both
<code>init</code> and <code>create</code> commands now use <code>tesseras_storage::open_database()</code> with
the default <code>StorageConfig</code> instead of opening raw <code>rusqlite</code> connections. The
<code>rusqlite</code> dependency was removed from the CLI crate.</p>
<h2 id="architecture-decisions">Architecture decisions</h2>
<ul>
<li><strong>Decorator pattern for caching</strong>: <code>CachedFragmentStore</code> wraps
<code>Box&lt;dyn FragmentStore&gt;</code> and implements <code>FragmentStore</code> itself. This means
caching is opt-in, composable, and invisible to consumers. The daemon enables
it; tests can skip it.</li>
<li><strong>Byte-aware eviction</strong>: the LRU cache tracks total bytes, not entry count.
Fragment blobs vary wildly in size (a 4KB text fragment vs a 2MB photo shard),
so counting entries would give a misleading picture of memory usage.</li>
<li><strong>No connection pool crate</strong>: instead of pulling in a generic pool library,
the connection pool is a thin wrapper around
<code>DashMap&lt;SocketAddr, PooledConnection&gt;</code> with a Tokio reaper. QUIC connections
are multiplexed, so the "pool" is really about lifecycle management (idle
cleanup, max connections) rather than borrowing/returning.</li>
<li><strong>Stored checksums over re-reads</strong>: the attestation fix is intentionally
minimal — one line changed, one disk read removed per fragment. The checksums
were already stored in SQLite by <code>store_fragment()</code>; they just
weren't being used.</li>
<li><strong>Centralized pragma configuration</strong>: a single <code>StorageConfig</code> struct replaces
scattered <code>PRAGMA</code> calls. The <code>sqlite_synchronous_full</code> flag exists
specifically for Raspberry Pi deployments where the kernel can crash and lose
un-checkpointed WAL transactions.</li>
</ul>
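<p>The decorator decision above can be sketched in a few lines. Method names
are simplified relative to the real <code>FragmentStore</code> trait, and the
counting inner store is a test double: because the cache implements the same
trait it wraps, callers cannot tell it is there.</p>

```rust
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::rc::Rc;

// Simplified trait: the real FragmentStore has more methods.
trait FragmentStore {
    fn read(&self, id: u64) -> Vec<u8>;
}

// Test double that counts how often a blob is actually fetched.
struct CountingStore {
    reads: Rc<Cell<u32>>,
}

impl FragmentStore for CountingStore {
    fn read(&self, _id: u64) -> Vec<u8> {
        self.reads.set(self.reads.get() + 1);
        vec![1, 2, 3] // stands in for a blob read from disk
    }
}

// The decorator: wraps any FragmentStore and implements the trait
// itself, so it is opt-in and invisible to consumers.
struct CachedStore {
    inner: Box<dyn FragmentStore>,
    cache: RefCell<HashMap<u64, Vec<u8>>>,
}

impl FragmentStore for CachedStore {
    fn read(&self, id: u64) -> Vec<u8> {
        if let Some(blob) = self.cache.borrow().get(&id) {
            return blob.clone(); // hit: inner store never touched
        }
        let blob = self.inner.read(id);
        self.cache.borrow_mut().insert(id, blob.clone());
        blob
    }
}

fn main() {
    let reads = Rc::new(Cell::new(0));
    let cached = CachedStore {
        inner: Box::new(CountingStore { reads: Rc::clone(&reads) }),
        cache: RefCell::new(HashMap::new()),
    };
    let a = cached.read(7); // miss: delegates to the inner store
    let b = cached.read(7); // hit: served from the cache
    assert_eq!(a, b);
    assert_eq!(reads.get(), 1); // inner store read exactly once
}
```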
<h2 id="what-comes-next">What comes next</h2>
<ul>
<li><strong>Phase 4 continued</strong> — Shamir's Secret Sharing for heirs, sealed tesseras
(time-lock encryption), security audits, institutional node onboarding,
storage deduplication, OS packaging</li>
<li><strong>Phase 5: Exploration and Culture</strong> — public tessera browser by
era/location/theme/language, institutional curation, genealogy integration,
physical media export (M-DISC, microfilm, acid-free paper with QR)</li>
</ul>
<p>With performance tuning in place, Tesseras handles the common case efficiently:
fragment reads hit the LRU cache, attestation skips disk I/O, idle QUIC
connections are reaped automatically, and SQLite is configured consistently
across the entire stack. The next steps focus on cryptographic features (Shamir,
time-lock) and hardening for production deployment.</p>

</article>

    </main>

    <footer>
        <p>&copy; 2026 Tesseras Project. <a href="/atom.xml">News Feed</a> · <a href="https://git.sr.ht/~ijanc/tesseras">Source</a></p>
    </footer>
</body>
</html>