The app works fine on the demo. Works fine with 100 users. At 1,000 users something starts feeling slow. At 10,000 the dashboard takes 8 seconds to load. This is almost always the database. Here is where to look first.
PostgreSQL is excellent and handles enormous scale when configured correctly. Most performance problems in small-to-medium apps are not hardware problems. They are query problems and configuration problems that compound as data grows.
Step 0: Turn on query logging first
Before you try to fix anything, find out what is actually slow. Enable pg_stat_statements to surface the queries consuming the most time:
-- Enable the extension (run once as superuser)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Find the slowest queries in your database
SELECT
query,
calls,
total_exec_time / calls AS avg_ms,
rows / calls AS avg_rows
FROM pg_stat_statements
ORDER BY avg_ms DESC
LIMIT 20;
This tells you exactly which queries to focus on. Fix the slow ones first. Do not guess.
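One caveat on self-managed servers: CREATE EXTENSION alone is not enough, because the module has to be preloaded at server start. That means a config change plus a restart (most managed platforms do this for you):

```ini
# postgresql.conf: load the stats module at server start (needs a restart)
shared_preload_libraries = 'pg_stat_statements'
```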
Problem 1: The N+1 query
This is the most common performance killer in ORM-heavy codebases. You fetch a list of users, then inside a loop, fetch data for each user individually. For 1,000 users, that is 1,001 database round trips for what could be 2.
// The N+1 problem (Prisma example)
// This runs 1 query for users + 1 query per user for their orders
const users = await prisma.user.findMany();
for (const user of users) {
const orders = await prisma.order.findMany({
where: { userId: user.id }
});
// process orders...
}
// The fix: use include so Prisma fetches users and all their orders
// in a constant number of queries, regardless of how many users there are
const users = await prisma.user.findMany({
include: {
orders: true
}
});
The same pattern appears with plain SQL. The symptom is a page load that triggers hundreds of tiny fast queries rather than a few larger ones. Check your query count in your ORM's debug mode or in pg_stat_statements.
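In raw SQL the equivalent fix is to batch the per-user lookups. A sketch, reusing the users and orders tables from the example above:

```sql
-- One round trip for all users' orders instead of one per user:
SELECT * FROM orders WHERE user_id = ANY(ARRAY[1, 2, 3]);

-- Or pull users and their orders together in a single JOIN:
SELECT u.id, o.*
FROM users u
LEFT JOIN orders o ON o.user_id = u.id;
```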
Problem 2: Missing indexes on columns you query by
Without an index, PostgreSQL reads every row in the table to find matches. On a table with 100 rows this is imperceptible. On a table with 500,000 rows, a query that filters on an unindexed column is a full table scan.
The easiest way to find missing indexes: look for sequential scans on large tables in your query plans.
-- EXPLAIN ANALYZE shows the actual execution plan
EXPLAIN ANALYZE
SELECT * FROM orders WHERE user_id = 12345 AND status = 'pending';
-- If you see "Seq Scan" on a large table, you need an index:
CREATE INDEX CONCURRENTLY idx_orders_user_id_status
ON orders (user_id, status);
-- CONCURRENTLY builds the index without blocking writes (note: it cannot run inside a transaction)
Always add indexes to foreign key columns. Your ORM creates the foreign key constraint but does not always create the index. A join from orders to users on orders.user_id hits a full scan on the orders table if there is no index on user_id.
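You can also let PostgreSQL's statistics views point you at scan-heavy tables without running EXPLAIN on every query. Treat this as a heuristic, not proof that an index is missing:

```sql
-- Tables doing the most sequential-scan work since stats were last reset
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;
```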
Problem 3: No connection pooling
PostgreSQL creates a process for each connection. Opening a new connection takes 50-100ms and consumes roughly 10MB of RAM. A serverless app or an app that opens a new connection per request will hit hundreds of connections under load, consuming gigabytes of RAM just for connection overhead.
PgBouncer sits between your app and PostgreSQL and pools connections. Your app connects to PgBouncer, which manages a smaller set of real PostgreSQL connections. 100 app connections might map to 10 real database connections.
# PgBouncer basic configuration (pgbouncer.ini)
[databases]
yourdb = host=127.0.0.1 port=5432 dbname=yourdb
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction # transaction pooling is best for most web apps
max_client_conn = 200 # max connections from your app
default_pool_size = 20 # actual connections to PostgreSQL
If you use Supabase, connection pooling through their Supavisor is built in and just requires using the pooler connection string (port 6543) instead of the direct connection (port 5432).
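On the application side, pointing at the pooler is usually just a connection-string change. With Prisma specifically, transaction pooling breaks prepared statements unless you say so in the URL; the host and credentials below are placeholders:

```ini
# .env: app traffic goes through PgBouncer on 6432 (hypothetical credentials)
DATABASE_URL="postgresql://app:secret@127.0.0.1:6432/yourdb?pgbouncer=true"
# Migrations should bypass the pooler and connect to PostgreSQL directly
DIRECT_URL="postgresql://app:secret@127.0.0.1:5432/yourdb"
```

DIRECT_URL pairs with the directUrl field in Prisma's datasource block, so prisma migrate skips the pooler.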
Problem 4: Default shared_buffers configuration
PostgreSQL's default shared_buffers is 128MB, which is appropriate for a shared hosting environment where PostgreSQL gets a tiny fraction of the RAM. On a dedicated server, the recommended value is 25% of total RAM.
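You can edit postgresql.conf directly, as below, or set the same values from a psql session with ALTER SYSTEM, which writes them to postgresql.auto.conf (that file overrides postgresql.conf). These mirror the 4GB example:

```sql
ALTER SYSTEM SET shared_buffers = '1GB';
ALTER SYSTEM SET effective_cache_size = '3GB';
ALTER SYSTEM SET work_mem = '64MB';
-- shared_buffers still needs a restart; reloadable settings apply with:
SELECT pg_reload_conf();
```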
# postgresql.conf - tune for a server with 4GB RAM:
shared_buffers = 1GB # 25% of RAM
effective_cache_size = 3GB # 75% of RAM (used for query planning estimates)
work_mem = 64MB # per sort/hash operation (lower if many connections)
maintenance_work_mem = 256MB # for VACUUM, CREATE INDEX, etc.
# Restart PostgreSQL after changing these:
sudo systemctl restart postgresql
Problem 5: Queries on unindexed text columns with LIKE
WHERE name LIKE '%search%' cannot use a standard B-tree index. The leading wildcard forces a full table scan. If your app has a search feature running this pattern, it will get painful at scale.
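If you must keep the LIKE '%term%' pattern itself, the pg_trgm extension makes it indexable with trigram indexes; the products table here is illustrative:

```sql
-- Trigram index: lets LIKE/ILIKE '%term%' use an index scan
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_products_name_trgm
ON products USING GIN (name gin_trgm_ops);

-- This can now use the index instead of scanning the whole table:
SELECT * FROM products WHERE name ILIKE '%search%';
```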
For simple prefix search (LIKE 'search%'), a standard index works. For full-text or substring search, use PostgreSQL's built-in full-text search:
-- Add a tsvector column for full-text search
ALTER TABLE products ADD COLUMN search_vector tsvector;
UPDATE products SET search_vector = to_tsvector('english', name || ' ' || coalesce(description, ''));
-- Note: this backfills existing rows only; keep the column current with a
-- trigger, or on PostgreSQL 12+ declare it as a STORED generated column
CREATE INDEX idx_products_search ON products USING GIN(search_vector);
-- Query using full-text search instead of LIKE
SELECT * FROM products
WHERE search_vector @@ plainto_tsquery('english', 'search term');
Problem 6: VACUUM is not running regularly
PostgreSQL uses MVCC (multi-version concurrency control). When a row is updated or deleted, the old version is marked dead but stays on disk until VACUUM cleans it up. Tables with high write volume accumulate dead tuples, which bloat the table on disk, slow sequential scans, and waste cache space, dragging performance down as the bloat grows.
Autovacuum handles this automatically, but it may run too infrequently for high-write tables by default:
-- Check which tables have accumulated dead tuples
SELECT
relname,
n_dead_tup,
n_live_tup,
round(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 2) AS dead_pct,
last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
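If the same tables keep topping that list, autovacuum can be made more aggressive for them individually rather than globally; the thresholds here are illustrative starting points:

```sql
-- Default scale factor is 0.2 (vacuum once ~20% of rows are dead);
-- lower it for a hot table so dead tuples are cleaned up sooner
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_threshold = 1000
);
```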
-- Run VACUUM manually on a table with too many dead tuples:
VACUUM ANALYZE orders;
Where to start
Enable pg_stat_statements, find the 5 slowest queries, run EXPLAIN ANALYZE on each, and look for sequential scans on large tables. Fix those first. In most codebases I look at, fixing N+1 queries and adding 2-3 indexes cuts average response time by 60-80%.
If your database is the thing slowing everything down and you want to know exactly what to fix, I can go through it.