<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>kernveld.com</title>
  <subtitle>Projects and writing on simulations, experiments, and building small, fast software.</subtitle>
  <link href="https://kernveld.com/feed.xml" rel="self"/>
  <link href="https://kernveld.com/"/>
  <author>
    <name>Jonathan Vander Hout</name>
  </author>
  <updated>2026-04-17T00:00:00Z</updated>
  <id>https://kernveld.com/</id>
  <entry>
    <title>SQLite is Enough</title>
    <link href="https://kernveld.com/blog/sqlite-benchmark/"/>
    <updated>2026-04-17T00:00:00Z</updated>
    <id>https://kernveld.com/blog/sqlite-benchmark/</id>
    <category term="post"/>
    <summary>Benchmarking SQLite behind FastAPI and Axum APIs to see how much load a single server can handle for hobby-scale web apps.</summary>
<content type="html"><![CDATA[<p>For simplicity I stick with SQLite where possible for the projects I post here. It is very straightforward compared to PostgreSQL (which I use a lot at my job) because database setup basically becomes part of coding the API. The result is one less thing to worry about, which keeps these projects enjoyable. Backups are as simple as running SQLite's <code>.backup</code> command and streaming the resulting file to another server over ssh.</p>
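<p>The backup step can also be done from inside a Python process with the stdlib's online backup API. A minimal sketch (file and table names here are placeholders, not from my actual setup):</p>

```python
import sqlite3

# Minimal sketch of an online SQLite backup using the stdlib
# backup API; "app.db" and "app-backup.db" are placeholder names.
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
src.execute("INSERT INTO notes (body) VALUES ('hello')")
src.commit()

dst = sqlite3.connect("app-backup.db")
with dst:
    src.backup(dst)  # copies every page; safe to run while readers are active
src.close()
dst.close()
```

<p>The nice part is that the backup is consistent even while the database is in use, so it can run on a schedule without pausing the app.</p>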
<p>I wanted to get a sense of how far using SQLite as a database on a reasonably powered server could take me in terms of scale, so I set up some simple benchmarking.</p>
<h2>The Setup</h2>
<p>I built two identical APIs, one in Python using FastAPI and one in Rust using Axum. Each responds to requests that trigger either a read from or a write to an SQLite database; every read looks up a row by the table's primary key. Both APIs use the same concurrency pattern: all writes are funneled through a single dedicated writer thread, while reads run in parallel. WAL mode is enabled, so readers never block the writer and vice versa.</p>
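<p>The single-writer pattern looks roughly like this in Python (a sketch with illustrative names, not the actual benchmark code): every write goes onto a queue drained by one dedicated thread, while readers open their own connections.</p>

```python
import queue
import sqlite3
import threading

# Sketch of the single-writer pattern: one thread owns the writing
# connection, handlers only enqueue. Table/file names are illustrative.
DB = "writer-demo.db"
sqlite3.connect(DB).execute(
    "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
).connection.commit()

write_q: queue.Queue = queue.Queue()

def writer_loop() -> None:
    conn = sqlite3.connect(DB)
    conn.execute("PRAGMA journal_mode=WAL")  # readers stay unblocked
    while True:
        item = write_q.get()
        if item is None:  # sentinel: shut down cleanly
            break
        sql, params = item
        conn.execute(sql, params)
        conn.commit()
    conn.close()

t = threading.Thread(target=writer_loop)
t.start()

# A request handler enqueues its write instead of opening a connection:
write_q.put(("INSERT INTO notes (body) VALUES (?)", ("queued write",)))
write_q.put(None)
t.join()
```

<p>Serializing writes this way sidesteps <code>SQLITE_BUSY</code> errors entirely, since SQLite only ever sees one writer.</p>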
<p>A &quot;load generator&quot; with a configurable number of concurrent async workers, written in Rust using reqwest + tokio, starts by warming up the db with 1,000 entries, then proceeds to hammer the API with GET requests to retrieve data and POST requests to write new entries into the db. <b>The mix is 90% reads / 10% writes, a rough but I think fair estimate for many web apps.</b> All payloads are 2 KB. While running, the load generator gathers statistics on how long requests take to complete.</p>
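<p>The core of the generator, the 90/10 mix plus percentile stats, is easy to sketch. The real tool is Rust with reqwest + tokio; this Python version stubs out the HTTP calls (<code>send_read</code>/<code>send_write</code> are stand-ins) just to show the shape:</p>

```python
import random
import time

# Stand-ins for the actual HTTP calls made by the Rust load generator.
def send_read() -> None: time.sleep(0.0001)
def send_write() -> None: time.sleep(0.0002)

def run(n_requests: int) -> dict:
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        if random.random() < 0.9:   # 90% reads
            send_read()
        else:                       # 10% writes
            send_write()
        latencies.append((time.perf_counter() - start) * 1000)  # ms
    latencies.sort()
    # nearest-rank percentile over the sorted latencies
    pct = lambda p: latencies[min(len(latencies) - 1, int(p / 100 * len(latencies)))]
    return {"p50": pct(50), "p95": pct(95), "p99": pct(99)}

stats = run(200)
```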
<p>To gather useful data I ran the load generator with (8, 32, 64, 128, 256) workers, sending 50,000 requests at each level. This makes it possible to find the point where the API can no longer handle the load. The server process was pinned to 4 CPU cores and the load generator to the remaining 16 so the two didn't fight for compute.</p>
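<p>On Linux, that kind of pinning can be done with <code>taskset</code> on the command line or, from inside a process, with <code>os.sched_setaffinity</code> (the post doesn't say which tool was used; this is one way to do it). A minimal sketch that pins the current process to a single core and then restores the original mask:</p>

```python
import os

# Restrict the current process (pid 0 = self) to core 0, then restore.
# A server would get cores 0-3 and the load generator the rest.
all_cores = os.sched_getaffinity(0)
os.sched_setaffinity(0, {0})        # pin to core 0 only
pinned = os.sched_getaffinity(0)
os.sched_setaffinity(0, all_cores)  # restore the original affinity mask
```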
<h2>Methodology</h2>
<p>This setup is specific to the use case I want to test. Measuring the raw performance of SQLite in Rust vs Python would involve no HTTP API at all; this setup instead reflects my needs by measuring the performance of the APIs plus SQLite together. I ran these tests on my laptop (ThinkPad P15v in power saving mode) to represent a capacity that would be easy to reach by scaling up the specs of the cloud server where the applications run.</p>
<h2>Results</h2>
<p><strong>Python (FastAPI + sqlite3):</strong></p>
<table>
<thead>
<tr>
<th>workers</th>
<th>rps</th>
<th>p50 (ms)</th>
<th>p95 (ms)</th>
<th>p99 (ms)</th>
<th>errors</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>977</td>
<td>7.99</td>
<td>11.25</td>
<td>13.32</td>
<td>0</td>
</tr>
<tr>
<td>32</td>
<td>1,030</td>
<td>30.84</td>
<td>42.80</td>
<td>50.94</td>
<td>0</td>
</tr>
<tr>
<td>64</td>
<td>1,096</td>
<td>58.03</td>
<td>78.68</td>
<td>88.49</td>
<td>0</td>
</tr>
<tr>
<td>128</td>
<td>1,107</td>
<td>115.85</td>
<td>155.13</td>
<td>171.52</td>
<td>0</td>
</tr>
<tr>
<td>256</td>
<td>978</td>
<td>258.11</td>
<td>320.22</td>
<td>348.16</td>
<td>0</td>
</tr>
</tbody>
</table>
<p><strong>Rust (Axum + rusqlite):</strong></p>
<table>
<thead>
<tr>
<th>workers</th>
<th>rps</th>
<th>p50 (ms)</th>
<th>p95 (ms)</th>
<th>p99 (ms)</th>
<th>errors</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>6,393</td>
<td>1.17</td>
<td>1.81</td>
<td>2.18</td>
<td>0</td>
</tr>
<tr>
<td>32</td>
<td>6,680</td>
<td>4.33</td>
<td>8.26</td>
<td>21.70</td>
<td>0</td>
</tr>
<tr>
<td>64</td>
<td>6,528</td>
<td>7.75</td>
<td>18.79</td>
<td>66.82</td>
<td>0</td>
</tr>
<tr>
<td>128</td>
<td>6,197</td>
<td>9.85</td>
<td>101.72</td>
<td>170.12</td>
<td>0</td>
</tr>
<tr>
<td>256</td>
<td>5,939</td>
<td>10.54</td>
<td>304.29</td>
<td>394.12</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>This means that at 8 workers the median request completed in 7.99 ms on the Python server and 1.17 ms on the Rust server. Overall the Rust implementation was over six times faster, even though the amount of processing required on the request body was minimal.</p>
<p>The similar p99 results between the Python and Rust implementations at higher worker counts suggest that this is where writes to the database become the limiting factor in the performance of the API.</p>
<p>More to the point, both the Python and Rust implementations of the system were far more than sufficient for my needs.</p>
<p>If the average user sends a request every 10 seconds while using a web application, then a single (admittedly fairly powerful) server could handle ~11,000 active users with the Python implementation or ~66,000 active users with the Rust implementation.</p>
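<p>As a back-of-envelope check of those figures: one request per user every 10 seconds means capacity is simply sustained requests per second times 10.</p>

```python
# Capacity = sustained rps * seconds between requests per user.
request_interval_s = 10
python_rps = 1_100  # roughly FastAPI's plateau from the table above
rust_rps = 6_600    # roughly Axum's plateau from the table above

python_users = python_rps * request_interval_s  # ~11,000 active users
rust_users = rust_rps * request_interval_s      # ~66,000 active users
```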
<p>This doesn't account for spikes in traffic but it is still such a large threshold that I am confident to keep doing what I am doing.</p>
<h2>Caveats / Conclusion</h2>
<p>Admittedly, this setup is an ideal scenario. I did not test anything external to the backend examples, nor did I simulate the more complex processing that real data often needs before being inserted or sent. I'd also like to build more complete tests in the future: more complex database schemas, serving large amounts of static content, and websocket-based workloads.</p>
<p>In the future I might cover how to scale a SQLite setup like this to multiple servers with read replicas of the database.</p>
<p>But for now this is enough to make me confident to keep launching projects this way, and it is much more intrinsically satisfying than paying a middleman or maintaining a Postgres database myself for hobby projects.</p>
<h3>note:</h3>
<p>To get data for the cover image, I ran the same system with [8, 10, 13, 16, 20, 25, 32, 40, 51, 64, 81, 102, 128, 161, 203, 256] workers.</p>
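<p>Those 16 worker counts are (to rounding) geometrically spaced: each level is 2<sup>1/3</sup> times the previous, so the count doubles every three steps. A one-liner consistent with the list above:</p>

```python
# Each worker level is 2^(1/3) times the previous, starting from 8,
# so the count doubles every three steps: 8 -> 16 -> 32 -> ... -> 256.
levels = [round(8 * 2 ** (i / 3)) for i in range(16)]
```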
<h3>another note:</h3>
<p>code is <a href="https://github.com/jonathanvanderhout/experiments/tree/main/sqlite-benchmark" target="_blank" rel="noopener noreferrer">here</a>.</p>
]]></content>
  </entry>
  <entry>
    <title>kernveld.com</title>
    <link href="https://kernveld.com/projects/kernveld/"/>
    <updated>2026-04-13T00:00:00Z</updated>
    <id>https://kernveld.com/projects/kernveld/</id>
    <category term="project"/>
    <summary>A handmade Eleventy site with a live in-browser customizer for color, typography, and layout presets.</summary>
    <content type="html"><![CDATA[<p>I made this site to be a place where I share some projects. It is a static site generated with Eleventy. I could have left it at that, but while working on the CSS I decided to make most of the styling customizable.</p>
<p>The first area of the customizer tool lets you select color, typography, and layout presets. They are the fastest way to see the site styling shift. For more fine-grained changes, individual color, typography, and layout settings can be adjusted in the section below. You can even save your customizations as new presets.</p>
<p>The header image of this post is an svg composed of shapes colored directly through the css variables, so that when the color settings for the site change, the svg changes too.</p>
<p>I'm launching this in production on this site, as it is a fun novelty here. However, especially for simple static sites, adding something similar to this could really speed up refinement of style changes or allow someone other than the developer to propose exactly what they want before a site goes live.</p>
<p><button class="btn btn-primary" onclick="document.getElementById('customizer-trigger').click()">Try the customizer</button></p>
]]></content>
  </entry>
  <entry>
    <title>Marble Lab 2D</title>
    <link href="https://kernveld.com/projects/marble-lab-2d/"/>
    <updated>2026-04-01T00:00:00Z</updated>
    <id>https://kernveld.com/projects/marble-lab-2d/</id>
    <category term="project"/>
    <summary>Design and share marble tracks with physics-based gameplay. Build levels, play community creations, and vote on your favourites.</summary>
<content type="html"><![CDATA[<p>I recently discovered that a lot of people like watching videos of marble races; even purely simulated marble races are quite popular. Marble Lab 2D is a game that lets users build custom tracks and then watch the simulation of their creation. It has sharing features, so if someone builds a track they are happy with, they can share it with others. Users can view tracks made by others and vote for the ones they like the most.</p>
<p>Like my other experimental projects, front end dependencies were kept minimal. The only library imported for this project is a WebAssembly build of the Box2D physics engine. All artwork is drawn directly to an HTML canvas using the browser's canvas API.</p>
<p>The part that was expectedly challenging was the creation of the free form level designer. The functionality for adding, reshaping, and moving the various shapes, and configuring their physics properties required some careful planning.</p>
<p>The part that was unexpectedly challenging was getting the &quot;marbles&quot; to roll smoothly down a curved line. I started with the <a href="https://rapier.rs/" target="_blank" rel="noopener noreferrer">rapier.js physics engine</a>, but consistently ran into <a href="https://box2d.org/posts/2020/06/ghost-collisions/" target="_blank" rel="noopener noreferrer">&quot;ghost collisions&quot;</a> when a marble rolled down a curved line. Smooth rolling over curved surfaces was a hard requirement, though, because I wanted people to be able to make neat loops and curved ramps. I really enjoyed working with Rapier and will use it for other projects in the future (it is very good), but I was forced to move to Box2D, which provides the option to chain shapes together (as long as they are static). Once I adopted this approach the marbles rolled smoothly over curves, and high speed loops could be created without the marbles randomly jittering or springing off tangent to the line.</p>
]]></content>
  </entry>
  <entry>
    <title>Hexamotive</title>
    <link href="https://kernveld.com/projects/hexamotive/"/>
    <updated>2026-03-15T00:00:00Z</updated>
    <id>https://kernveld.com/projects/hexamotive/</id>
    <category term="project"/>
<summary>Build resource networks and production chains, and incrementally scale a satisfying logistics simulation.</summary>
<content type="html"><![CDATA[<p>This simulation was an experiment in building a simulation-style game for the browser with zero dependencies: no game engine, no JavaScript libraries, no framework. It took a bit longer to build completely from scratch, but as a result it is extremely small for browsers to load, and I find it very satisfying that it starts near instantly. The map is procedurally generated to allow building endlessly in any direction.</p>
<p>All of the building and terrain art is drawn directly in JavaScript using the canvas API. Rather than redrawing every hex and building from scratch each time, which would be very expensive, the drawing system pre-renders each visual component once and stores it on an offscreen canvas. These can then be &quot;stamped&quot; onto the main canvas in a loop, which is much faster. To allow the buildings to animate, their stamps are regenerated each frame.</p>
<p>The system that drives the simulation is very simple. Buildings produce resources corresponding to their type. Trains move along the tracks, randomly branching at intersections or following the track with the highest priority (player configurable). The pickup and delivery system checks whether a train has crossed the midpoint of a hex edge adjacent to a building that produces or consumes resources. If the building consumes a resource the train is carrying and has room for it, the cargo is delivered; if the building has produced resources and the train has cargo space, the train picks them up. Buildings produce and consume resources as the simulation updates.</p>
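<p>The exchange rule can be sketched in a few lines. The game itself is JavaScript; this Python sketch uses invented names and a fixed capacity just to illustrate the two directions of transfer:</p>

```python
# Illustrative sketch of the pickup/delivery rule: when a train crosses
# the midpoint of a hex edge next to a building, cargo moves in whichever
# direction both sides can accept. Names and capacity are invented.
def exchange(train: dict, building: dict, capacity: int = 10) -> None:
    resource = building["resource"]
    if building["kind"] == "consumer":
        # Building takes matching cargo from the train, up to capacity.
        moved = min(train["cargo"].get(resource, 0),
                    capacity - building["stock"])
        train["cargo"][resource] = train["cargo"].get(resource, 0) - moved
        building["stock"] += moved
    else:
        # Producer: the train takes stock, up to its free cargo space.
        free = capacity - sum(train["cargo"].values())
        moved = min(building["stock"], free)
        building["stock"] -= moved
        train["cargo"][resource] = train["cargo"].get(resource, 0) + moved

train = {"cargo": {"coal": 4}}
mine = {"kind": "producer", "resource": "coal", "stock": 8}
exchange(train, mine)  # train has room for 6 more units, so it takes 6
```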
]]></content>
  </entry>
</feed>
