SQLite vs Postgres for the Solo Founder in 2026
Postgres is the default for a reason. It is also the wrong default for a solo-founder SaaS in 2026. This site runs on SQLite; here is why that is the correct call.
SQLite is not the backup plan for the solo founder's SaaS in 2026. It is the first-choice database for a single-operator site with read-heavy traffic, admin-only writes, and zero appetite for paying a managed-DB bill that scales with nothing you care about. This post is the case for defaulting to SQLite on a production personal-brand or solo-SaaS site, written from inside one: every page you are reading is served by Next.js 16 reading from a single data/music.db file bind-mounted into a Docker container on a single VPS, with Cloudflare in front. Six tables, WAL mode, better-sqlite3, Drizzle ORM. No managed database. No connection pool tier.
The operational shape of a solo-founder SaaS
The case breaks down the moment you pretend the solo-founder SaaS is a scaled-down enterprise app. It is not. The shape is specific and the shape is what decides the database.
- Writes are admin-only. I am the only person who writes to this DB. Adding a blog post, editing a track, flipping a publish flag. One writer, rare events, never concurrent.
- Reads are public and request-time. Every public page reads from the DB on every request because the catalog and the blog are the source of truth. Cloudflare caches at the edge; origin still serves fresh HTML on a cache miss.
- One app server. One VPS. No load balancer. No second read replica. Deploy is `docker compose up -d --build`.
- User-generated content is zero. No comments, no likes, no user sessions other than my admin one.
That is the shape. Postgres is optimised for a different shape: many concurrent writers, client pools negotiating connections, horizontal reads, replication topologies, network-accessible administration. Every one of those features is a cost when nothing you run needs them, and the cost is paid in ops complexity, per-connection pricing, cold starts, and a monthly bill for features you are not using.
What the stack actually looks like on disk
The whole database layer of this site is one file of initialisation code plus one schema file. That is it.
```ts
// lib/db/index.ts — the entire DB setup
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import * as schema from "./schema";

const sqlite = new Database(process.env.DATABASE_URL || "./data/music.db");
sqlite.pragma("journal_mode = WAL");
sqlite.pragma("foreign_keys = ON");
sqlite.pragma("busy_timeout = 5000");
sqlite.pragma("synchronous = NORMAL");
sqlite.pragma("cache_size = -64000");
sqlite.pragma("temp_store = MEMORY");
sqlite.pragma("mmap_size = 268435456");

export const db = drizzle(sqlite, { schema });
```
Seven pragmas. WAL mode so readers never block the writer and the writer never blocks readers. A five-second busy_timeout so contention waits instead of throwing. synchronous = NORMAL because WAL makes FULL-sync pointlessly slow on a single node. A 64 MB page cache and a 256 MB memory-mapped I/O window together keep the working set in RAM. These are not exotic settings — they are in line with the SQLite documentation's own recommendations for a production read-heavy workload.
The schema is six tables: four owned by BetterAuth (user, session, account, verification), two owned by the app (media, blog). Drizzle's $inferSelect and $inferInsert generate every row type at compile time — the schema file is the single source of truth for TypeScript too, which is the single biggest reason I picked Drizzle over Prisma. No generated client, no codegen step, just type inference off the column definitions.
Running versions at the time of writing: next@16.2.3, better-sqlite3@12.x, drizzle-orm@0.45.x, drizzle-kit@0.31.x, Node.js 24.
Why Postgres is the default, and why it is the wrong default here
Steel-man first. Postgres is the default for a reason, and the reason is real. It has richer types (jsonb with GIN indexes, proper arrays, tsvector for full-text search), concurrent writers, logical replication, a mature extension ecosystem, and every hosted provider under the sun. If you are building a multi-tenant SaaS with real concurrency, a team that will grow, analytical queries that want window functions over millions of rows, or compliance requirements that mandate point-in-time recovery, Postgres is correct and SQLite is wrong.
Now dismantle it at solo-founder scale.
- Concurrent writers. You do not have any. You are the only writer. The lock-the-whole-DB-on-write behaviour people cite as SQLite's disqualifying flaw only hurts when two writers collide, and two writers do not collide on a one-admin site. WAL mode additionally lets unlimited readers proceed during a write.
- Replication. You are running one app server. There is nothing to replicate to.
- Connection pooling. There is no network between your app and your database. The connection is a C function call. There is no pool to manage, no pool to run out of, no pool pricing tier. The fastest possible connection is no connection at all.
- JSONB and richer types. SQLite's JSON1 extension is on by default. Drizzle handles `text("tags", { mode: "json" })` cleanly — I use it for the `tags` and `categories` columns on `media` and the `tags` column on `blog`. The indexing story on JSON is weaker than Postgres's `jsonb`, but my queries never need it: I filter by tag via a small JOIN-friendly pattern and the row count never makes that slow.
- Extensions. I have not needed one yet. FTS5 ships in-box for full-text search the moment I want it, which I will for the blog's in-site search when there are enough posts for that to matter.
- Managed hosting. The site lives on a VPS. `cp data/music.db data/backups/music.db.bak.$(date +%Y%m%d)` is a perfectly adequate backup, and the DB is already sitting in a bind-mounted volume so `docker compose up --build` never touches it.
The only legitimate case for Postgres here is an operational model that looks nothing like the one above. If you run two app servers, or you add comments, or you hire a second developer who also writes, the argument inverts. Until then, SQLite is correct and running Postgres is LARPing as an enterprise.
The real bottleneck, named honestly
SQLite has one hard limit on this site's shape and it is worth stating plainly: every write serialises. If I ever ship a feature where the site itself writes to the DB on a public-facing code path (say, a play-count counter incremented on every audio fetch, or a per-visitor analytics row), that write contends with my admin writes under WAL. At this site's traffic it is not a problem. At a hundred writes per second it becomes a problem. At a thousand writes per second SQLite is the wrong pick on this deployment shape.
The answer, when I get there, is not "switch to Postgres". It is "write to Redis first, batch to SQLite on a timer". Redis is already in this stack for honeypot rate limiting. A counter there costs one INCR per request, and a cron-driven flush moves the batch into SQLite once a minute. I have not built this because the beacon endpoint that logs plays and downloads is sufficient at current volume — it writes to a logfile, not the DB. But the migration path exists and it does not involve a database swap.
Backups and disaster recovery, without paying for someone else's S3 bill
The least glamorous but most operationally important part of the case. A SQLite DB is a single file. cp it. rsync it. Commit it to a private repo if it is small enough. Any filesystem-level tool you already know how to operate is a valid backup pipeline.
What I run today is a scheduled checkpoint-and-copy: sqlite3 data/music.db "PRAGMA wal_checkpoint(TRUNCATE);" to flush the WAL into the main file, then a copy of music.db to a timestamped filename under data/backups/. Nothing exotic. The WAL checkpoint before copy is the one non-obvious step: without it, a raw file copy misses in-flight writes sitting in the -wal sidecar.
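As a runnable sketch, here is the whole checkpoint-and-copy cycle end to end in a scratch directory. It assumes only the `sqlite3` CLI; in production the paths would be `data/music.db` and `data/backups/` as above, and the whole thing sits in a cron entry.

```shell
#!/usr/bin/env sh
# Checkpoint-and-copy backup for a WAL-mode SQLite DB, demonstrated against a
# scratch database so the script is self-contained.
set -eu

WORK="$(mktemp -d)"
DB="$WORK/music.db"
BACKUP_DIR="$WORK/backups"
mkdir -p "$BACKUP_DIR"

# Stand-in for the live DB: WAL mode, one table, one committed write.
sqlite3 "$DB" "PRAGMA journal_mode=WAL; CREATE TABLE t(x); INSERT INTO t VALUES (1);"

# The one non-obvious step: flush the -wal sidecar into the main file,
# otherwise a raw cp misses recent writes.
sqlite3 "$DB" "PRAGMA wal_checkpoint(TRUNCATE);"

cp "$DB" "$BACKUP_DIR/music.db.bak.$(date +%Y%m%d)"

# Sanity check: the backup is a valid DB containing the write.
sqlite3 "$BACKUP_DIR"/music.db.bak.* "SELECT count(*) FROM t;"
```

An alternative worth knowing: `sqlite3 music.db ".backup 'out.db'"` takes a consistent snapshot without the explicit checkpoint, at the cost of shelling out twice.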
The upgrade path from here is Litestream, which streams the WAL to an object store continuously. Zero data loss on replay, point-in-time recovery, and the bill for storing multiple years of a small-site WAL on Cloudflare R2 is effectively noise. I have not wired it up yet because the current site's write frequency is so low that a daily backup is enough; the day I add a feature that writes on the public path, Litestream ships the same day.
The managed-DB alternatives, in order of when they would make sense
For the solo founder shopping around in 2026, the real comparison is not SQLite vs Postgres, it is SQLite vs Supabase vs Neon vs Turso vs PlanetScale. These all solve related problems and they all impose different costs:
- Supabase. Excellent if you want a realtime layer, row-level security against an auth table, and the ability to call the DB from a client app without a backend. None of that applies here. The cost of running it is a connection pool and a multi-tenant Postgres you share; the benefit is a product you did not build.
- Neon. Serverless Postgres with branching. Branching is genuinely useful if you run a team and want per-PR preview DBs. For one operator it is a feature you will never touch. The pricing model is kinder than it looks at low scale; the cold-start behaviour on the free tier can bite a low-traffic site in ways that do not show up in their demos.
- Turso. Positioned as "SQLite at the edge" via libSQL, which is genuinely interesting as an edge-first pattern. The question is whether your site needs edge reads. If Cloudflare is already in front of origin and the cache hit rate is high, the marginal value of a second database layer at the edge is small. For this site, CF caching of SSR HTML does the same job for free.
- PlanetScale. MySQL-family. Excellent branching and zero-downtime schema changes at scale. Overkill for a solo founder and the pricing only makes sense above a certain team size.
The honest summary: every managed-DB provider is optimising for a shape that involves either multiple app servers, a team, or a user-generated-content workload. Solo-founder SaaS with admin-only writes is none of those shapes, and the correct database for that shape is the one that runs in-process and costs nothing to operate.
Drizzle on SQLite, the practical gotchas
A few real ones from shipping this codebase that are worth flagging for anyone starting:
- Timestamps as integers. Drizzle's `integer(..., { mode: "timestamp" })` stores Unix epoch seconds, which means you write `new Date()` and read back a `Date`. Clean. The failure mode: if you ever `sqlite3` into the DB and write a raw ISO string to that column by hand, it stays a string and breaks the read path. Always round-trip writes through the Drizzle client.
- Booleans as integers. `integer(..., { mode: "boolean" })` stores 0/1 and Drizzle reads it back as a JS boolean. Again clean. The gotcha: `NULL` is a valid value on nullable boolean columns and you get `null` back, not `false` — branch on that explicitly when it matters (as it does for `user.banned`).
- JSON columns. `text(..., { mode: "json" }).$type<string[]>()` is the pattern I use for `tags` and `categories`. The `$type` annotation is a pure compile-time hint; at runtime you can still write garbage into the column if you cast wrongly. Validate at the boundary with Zod, not at read.
- Self-referential foreign keys. The `media` table has `parentId` pointing back at `media.id` (for "reimagined" and remix variants). Drizzle needs the `AnySQLiteColumn` type annotation on the self-reference — if you drop it, TypeScript recurses and you get a type error that is not obvious. The right import is from `drizzle-orm/sqlite-core`.
- Composite indexes. The `media_type_published_idx` and `media_published_createdAt_idx` indexes carry the common query shapes (filter by type, filter by published, order by createdAt). On a table with tens of thousands of rows the index order matters; on a table with fifty rows it is free speed you take anyway. Plan indexes for the table you will have in two years, not the one you have today.
When I would migrate
My rule has two conditions and one of them has to actually fire, not merely look like it might. First, if I add a second app server behind a load balancer, the single-file model breaks because each box would hold a different DB state, and that is a hard migration, not a choice. Second, if a public-facing code path starts writing to the DB on every request at a rate the serialised writer cannot absorb — roughly a hundred sustained writes per second, which no personal-brand site hits by accident — SQLite is the wrong tool and I will not try to pretend otherwise.
When either fires, the first stop is Postgres on the same VPS over a Unix socket, not a managed Postgres on someone else's cloud. That migration buys me concurrent writers and richer types without paying the network-hop tax. Only the second migration, from local Postgres to a hosted provider, buys replication. Only the third, horizontal sharding, buys anything past that. Most solo founders who "just default to Postgres" on day one are paying the bill for migrations two and three before they have even shipped the product that would justify migration one.
The rule is the whole case. If it has not fired, SQLite is correct. If it has fired, Postgres is the next stop. The answer is not "always SQLite" or "always Postgres"; it is "match the database to the operational shape of the product, and stop assuming the default is your shape".
FAQ
Is SQLite actually production-ready for a real SaaS in 2026?
Yes, for a specific shape of SaaS: single app server, admin-only or low-concurrency writes, read-heavy traffic. Fly.io, Tailscale, Expensify, and a growing list of others run production workloads on SQLite. The "SQLite is for dev only" line is a 2010s meme that has not caught up with WAL mode, better-sqlite3's synchronous speed, or the fact that a single modern VPS can serve more traffic than most indie SaaS products will ever see.
What about when I need to scale to multiple app servers?
You migrate, and the migration is straightforward because Drizzle's SQLite schema is a near-trivial port to Drizzle's Postgres schema for most columns. The JSON-mode columns map to jsonb, the integer-mode timestamps map to timestamptz, and the integer-mode booleans map to boolean. The migration I plan if the rule ever fires is Postgres on the same VPS first (Unix-socket connection, no network), then horizontal if the traffic actually demands it.
How do I back up a SQLite database safely?
Checkpoint the WAL with PRAGMA wal_checkpoint(TRUNCATE) before copying. Then cp or rsync the file. Automate it on a cron. For zero-RPO, wire up Litestream to stream the WAL to S3 or R2 continuously. Do not rely on a raw cp of a hot database without the checkpoint — the -wal sidecar holds recent writes that a naive copy misses.
Is there any downside I should know about before picking SQLite?
Two real ones. First, every write serialises, so a public write path at scale needs a buffer (Redis or a queue) in front of the DB. Second, the absence of network access to the DB means you cannot run a separate analytics worker that reads the DB from another box without either copying the file over or exposing it via HTTP — neither of which is hard, but both are work you do not do with hosted Postgres. Beyond those two, the tradeoffs at solo-founder scale all fall in SQLite's favour.

