PgBouncer and Rails: Connection Pooling That Doesn't Blow Up Your Database
The alert came in on a Tuesday afternoon. Our client’s Rails app—doing fine all morning—started throwing PG::ConnectionBad: FATAL: sorry, too many clients already across every endpoint. Their Heroku dynos had auto-scaled from 4 to 12 under a traffic spike. Each dyno ran Puma with 5 threads. Each thread holds a database connection. That’s 60 connections. Their Postgres plan allowed 25.
Math: terminal.
After nineteen years of Rails, I’ve seen this pattern more times than I can count. The fix is PgBouncer. The trap is thinking PgBouncer is simple.
Why Rails Connection Pooling Isn’t Enough
Rails has built-in connection pooling through ActiveRecord. The pool setting in database.yml controls how many connections each process maintains. But this is a per-process pool. Each Puma worker, each Sidekiq process, each job runner—every one of them holds its own connections open to Postgres.
At low scale, this is fine. At higher scale, you do the multiplication:
- 10 Puma workers × 5 threads = 50 connections
- 4 Sidekiq processes × 10 concurrency = 40 connections
- Total: 90 connections, before you’ve run a single query
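The multiplication above fits in a few lines of Ruby; the worker and thread counts are the hypothetical ones from the list:

```ruby
# Hypothetical process and thread counts from the list above
puma    = 10 * 5   # Puma workers x threads per worker
sidekiq = 4 * 10   # Sidekiq processes x concurrency
total   = puma + sidekiq

puts total  # 90 connections held open before a single query runs
```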
Postgres isn’t free with connections. Each one spawns a backend process on the server, consuming roughly 5–10 MB in practice. A Postgres instance with max_connections = 100 can exhaust itself before your application even starts handling real traffic.
PgBouncer sits between your application and Postgres and reuses connections. When a Rails thread finishes a query, PgBouncer returns that physical connection to a pool, available for the next request—even if it comes from a completely different process.
The Three Modes
PgBouncer offers three pooling modes. Picking the wrong one will cause subtle, maddening bugs.
Session pooling: A client gets a server connection for the duration of their session. This barely helps Rails apps; Rails connections are long-lived by design.
Transaction pooling: A client gets a server connection for the duration of a single transaction. After COMMIT or ROLLBACK, the connection returns to the pool. This is what you want for Rails.
Statement pooling: One connection per statement. Never use this with Rails. It doesn’t work with multi-statement transactions and will break in ways that are genuinely hard to diagnose.
Use transaction pooling.
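To make the mode difference concrete, here is a toy Ruby simulation — not PgBouncer itself, just a sketch of the checkout/checkin pattern — where three pretend server connections serve twelve short transactions:

```ruby
# Toy model of transaction pooling (not PgBouncer itself): 3 pretend
# "server connections" serve 12 short transactions. Each transaction
# checks a backend out and returns it at COMMIT, so many clients
# share a few physical connections.
pool = Thread::Queue.new
3.times { |i| pool << "backend-#{i}" }

log   = []
mutex = Mutex.new

threads = 12.times.map do
  Thread.new do
    conn = pool.pop                    # checkout for one transaction
    mutex.synchronize { log << conn }  # record which backend served us
    sleep 0.001                        # pretend to run queries
    pool << conn                       # checkin at COMMIT/ROLLBACK
  end
end
threads.each(&:join)

puts "#{log.size} transactions served by #{log.uniq.size} backends"
```

In session mode, each of the twelve clients would hold a backend for its whole lifetime; in transaction mode, the backend goes back in the queue at every commit.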
Installing PgBouncer
On Ubuntu/Debian:
```shell
apt-get install pgbouncer
```
If you’re running Kamal, add it as a service in your deploy.yml:
```yaml
accessories:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    host: db-proxy.yourdomain.com
    env:
      clear:
        POSTGRESQL_HOST: your-postgres-host
        POSTGRESQL_PORT: "5432"
        PGBOUNCER_POOL_MODE: transaction
        PGBOUNCER_MAX_CLIENT_CONN: "1000"
        PGBOUNCER_DEFAULT_POOL_SIZE: "20"
        PGBOUNCER_DATABASE: "*"
    ports:
      - "6432:6432"
```
The pgbouncer.ini File
If you’re configuring PgBouncer directly rather than via environment variables:
```ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp_production

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
log_connections = 0
log_disconnections = 0
```
Key settings:
- max_client_conn: How many total client connections PgBouncer accepts. Set this high (1000+); client connections are cheap.
- default_pool_size: How many real Postgres connections PgBouncer maintains per database/user combination. This is the expensive number; tune it based on your Postgres max_connections.
- reserve_pool_size: Extra connections kept in reserve for traffic spikes.
Wiring It Into Rails
Point your database.yml at PgBouncer instead of Postgres directly:
```yaml
production:
  adapter: postgresql
  host: pgbouncer-host
  port: 6432
  database: myapp_production
  username: myapp
  password: <%= ENV["DB_PASSWORD"] %>
  pool: 5
  prepared_statements: false
  advisory_locks: false
```
Those last two settings are non-negotiable with PgBouncer in transaction mode. If you skip them, you will have a bad day.
The Transaction Mode Gotchas
Transaction pooling is powerful and it will absolutely wreck you if you don’t understand what it breaks.
Prepared statements. By default, Rails caches query plans on the server side using prepared statements. In session mode this works fine—each client has its own server connection. In transaction mode you get a different server connection each time. Prepared statements from connection A aren’t visible on connection B. Rails will throw PG::InvalidSqlStatementName errors intermittently, under load, when you least expect them.
Fix: prepared_statements: false in database.yml.
Advisory locks. Rails uses Postgres advisory locks internally—for migrations, among other things. Advisory locks are session-scoped: they belong to a specific backend connection. With transaction pooling you’re not guaranteed the same backend between calls.
Fix: advisory_locks: false in database.yml. Rails will fall back to database-level locking using a migration lock table.
SET variables. If you’re using SET LOCAL or SET to configure session-level parameters—search_path, timezone, anything—these won’t persist across transactions when PgBouncer rotates the underlying connection. The setting is gone the moment the transaction commits.
If you need per-request configuration, set it at the start of every transaction, not once at connection open.
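One way to do that is a small wrapper that issues SET LOCAL at the top of each transaction. This is a sketch, not a Rails API; with_time_zone and the stand-in connection class are names invented here for illustration:

```ruby
# Sketch: apply per-request session settings at the start of every
# transaction. SET LOCAL expires at COMMIT, which is exactly what we
# want, since the next transaction may land on a different backend.
def with_time_zone(conn, tz)
  conn.transaction do
    conn.execute("SET LOCAL timezone = '#{tz}'")  # real code should quote tz
    yield
  end
end

# Tiny stand-in for an ActiveRecord connection, just to show ordering.
class FakeConnection
  attr_reader :log

  def initialize
    @log = []
  end

  def transaction
    @log << "BEGIN"
    yield
    @log << "COMMIT"
  end

  def execute(sql)
    @log << sql
  end
end

conn = FakeConnection.new
with_time_zone(conn, "UTC") { conn.execute("SELECT now()") }
puts conn.log.inspect
# => ["BEGIN", "SET LOCAL timezone = 'UTC'", "SELECT now()", "COMMIT"]
```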
LISTEN/NOTIFY. Async notifications are session-scoped. Don’t use LISTEN through PgBouncer. If you’re using Action Cable with a Postgres adapter, point it directly at Postgres, bypassing PgBouncer entirely:
```yaml
# config/cable.yml
production:
  adapter: postgresql
  # Point directly at Postgres, not PgBouncer
  url: <%= ENV["DIRECT_DATABASE_URL"] %>
```
Monitoring Your Pool
Once PgBouncer is running, connect to its admin console to see what’s happening:
```shell
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
```
Then:
```sql
SHOW POOLS;
```
You’ll see cl_active (client connections in use), sv_active (server connections in use), and cl_waiting (clients waiting for a connection). If cl_waiting is consistently above zero, your default_pool_size is too small.
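As a sketch of how you might turn SHOW POOLS output into an alert condition — the row data below is hypothetical, and in production you would read real rows from the admin console or pgbouncer_exporter:

```ruby
# Hypothetical SHOW POOLS rows; keys mirror PgBouncer's column names.
pools = [
  { "database" => "myapp",      "cl_active" => 42, "sv_active" => 18, "cl_waiting" => 0 },
  { "database" => "myapp_jobs", "cl_active" => 30, "sv_active" => 20, "cl_waiting" => 7 },
]

# Flag any pool where clients are queueing for a server connection;
# sustained cl_waiting > 0 means default_pool_size is too small.
starved = pools.select { |p| p["cl_waiting"] > 0 }
starved.each { |p| puts "#{p['database']}: #{p['cl_waiting']} clients waiting" }
```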
```sql
SHOW STATS;
```
Shows request rates, average query times, and total bytes transferred. Useful for spotting anomalies and capacity planning.
Expose these metrics to your observability stack. The pgbouncer_exporter for Prometheus works well and surfaces everything from SHOW POOLS as Prometheus metrics you can alert on.
Tuning default_pool_size
This is the number that actually matters. Set it too low and you’ll see connection waiting under load. Set it too high and you’ll exhaust Postgres max_connections.
A reasonable starting formula:
```
default_pool_size = (max_connections - superuser_reserved_connections) / number_of_pools
```
If Postgres has max_connections = 100, you reserve 3 for superuser connections, and you have one app database, that’s 97. In practice leave more headroom—you want room for direct connections during migrations and admin work.
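Plugging those numbers into the formula as a quick Ruby sketch — the result is an upper bound, and the 80% headroom factor is my own rule of thumb, not a PgBouncer default:

```ruby
max_connections   = 100  # Postgres setting
superuser_reserve = 3    # superuser_reserved_connections default
pools             = 1    # one app database / user pair

available = max_connections - superuser_reserve  # 97
per_pool  = available / pools                    # 97, the hard ceiling
# Leave headroom for migrations and admin psql sessions;
# 80% is an assumed rule of thumb, not a PgBouncer default.
ceiling = (per_pool * 0.8).floor
puts ceiling  # 77; in practice you'd pick something well below this
```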
For a typical production app on a moderate Postgres instance:
- Postgres: max_connections = 100
- PgBouncer: default_pool_size = 20, reserve_pool_size = 5
- Rails/Puma: pool: 5
- Sidekiq: concurrency: 10
This means your application can have hundreds of virtual connections sharing 20 physical ones. At any given moment, no more than 20 queries run simultaneously against Postgres—which is more than enough for most workloads.
The pool Setting in database.yml Still Matters
One thing people miss: even with PgBouncer, your Rails pool setting controls how many connections each process keeps to PgBouncer. Set it equal to your Puma thread count. If Puma runs 5 threads, set pool: 5. Setting it higher just wastes resources.
```yaml
production:
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```
Reading pool from the same env var that controls Puma threads is idiomatic and correct. If you change concurrency, the pool size follows automatically.
Testing the Setup
Before shipping to production, verify the configuration is actually working:
```ruby
# In rails console, run this several times
ActiveRecord::Base.connection_pool.with_connection do |conn|
  result = conn.execute("SELECT pg_backend_pid()")
  puts result.first["pg_backend_pid"]
end
```
With PgBouncer in transaction mode you should see different PIDs across calls. If you see the same PID every time, you’re either not routing through PgBouncer (check port—6432 vs 5432) or you’re in session mode.
You can also verify the gotchas are properly disabled:
```ruby
conn = ActiveRecord::Base.connection
puts conn.prepared_statements # should be false
```
When PgBouncer Isn’t Enough
PgBouncer solves connection exhaustion beautifully. But it doesn’t help if your queries are slow. If every request spends 500ms waiting on a query, you’ll still exhaust your pool—just more slowly.
PgBouncer buys you headroom. It doesn’t replace good query performance. Those two problems need separate solutions: PgBouncer for connection management, proper indexing and query optimization for throughput.
The Tuesday afternoon client I mentioned at the top? We added PgBouncer, dropped their physical connection count from 60 to 15, and their application handled the traffic spike without breaking a sweat. Then we went back and looked at their slow query log. That’s a separate story.
Frequently Asked Questions
Do I still need PgBouncer if I use a managed Postgres service with high connection limits?
Higher connection limits just let you exhaust Postgres more slowly. You still want PgBouncer for the actual connection reuse—it reduces memory pressure on the Postgres server and lets you scale application processes freely. Some managed services (Heroku, Supabase) offer PgBouncer as a built-in feature. Use it.
Can I use PgBouncer with Rails database migrations?
Migrations need advisory locks, which are session-scoped. Either run migrations with a direct Postgres connection (set DATABASE_URL to bypass PgBouncer during deploys), or keep advisory_locks: false and let Rails use its fallback mechanism. Most teams use a separate DATABASE_URL for migrations and the PgBouncer URL for the application.
Does PgBouncer work with Rails multi-db?
Yes, but you need a PgBouncer pool configured for each database. Define multiple entries in your [databases] section of pgbouncer.ini, one per Rails database. Each gets its own default_pool_size.
What’s the performance overhead of PgBouncer itself?
Minimal. PgBouncer is written in C and handles tens of thousands of connections per second on modest hardware. It adds under 1ms of latency for connection routing. The tradeoff is worth it at any non-trivial scale.
Running into connection exhaustion or database performance issues? TTB Software specializes in Rails infrastructure and Postgres tuning. Nineteen years in the trenches—we’ve fixed this exact problem more times than we can count.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.