Rails Postgres Advisory Locks: Stop Cron Overlap and Race Conditions in Production
A logistics client paged me on a Tuesday morning because their inventory was off by thousands of units. Not slightly off — meaningfully off, the kind of off that makes a CFO walk over to your desk. We dug in for an hour and found the cause: their five-minute “reconcile stock from supplier feed” cron had, at some point during a Kamal rollout, ended up running on two app pods simultaneously. Both pods read the same supplier batch. Both decremented the same orders. Both wrote results back. Whenever the windows overlapped, every line was double-counted.
The fix took eight lines. We wrapped the entire job in a Postgres advisory lock. After nineteen years of Rails I have seen more outages caused by jobs that ran twice than by jobs that did not run at all, and Rails Postgres advisory locks are still the cleanest weapon against that whole class of bug. This is the production playbook.
What Rails Postgres Advisory Locks Actually Are
A Postgres advisory lock is a named lock that you, the application developer, define and acquire by your own convention. Postgres tracks whether the lock is held, blocks anyone else who tries to acquire it, and releases it when you ask — or when your session goes away.
Crucially, advisory locks do not lock any rows or tables. They lock a number of your choosing. That is the whole trick. You pick a 64-bit integer (usually derived from a string like "reconcile-stock"), and Postgres uses it as a coordination point across every connection pointed at the database. That is everything you need to coordinate Rails processes across pods, regions, or deploys.
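To make the trick concrete, here is one way to derive that 64-bit key from a name. This is a sketch of the general idea, not the exact algorithm any particular gem uses:

```ruby
require "digest"

# Sketch: derive a stable signed 64-bit lock key from a lock name.
# (Assumption: libraries that hash lock names do something similar,
# but this is not with_advisory_lock's exact algorithm.)
def advisory_lock_key(name)
  # First 8 bytes of SHA-256, read as a big-endian signed 64-bit integer.
  Digest::SHA256.digest(name)[0, 8].unpack1("q>")
end

key = advisory_lock_key("reconcile-stock")
# The key is deterministic, so every pod computes the same number and
# "SELECT pg_try_advisory_lock(#{key})" coordinates all of them.
```

Because the derivation is pure, any process pointed at the same database agrees on the key without any shared state beyond Postgres itself.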
Compared to row locks taken with SELECT ... FOR UPDATE, advisory locks have three advantages worth memorising:
- They do not depend on a record existing — you can lock the concept of “this user’s onboarding flow” before any row is written.
- They are cheap. No row reads, no autovacuum interaction, no lock-table bloat.
- They have a session-level mode that survives across transactions, which is exactly what you need for long-running jobs.
SELECT pg_try_advisory_lock(42);
-- t (lock acquired)
SELECT pg_try_advisory_lock(42);
-- f (someone else holds it)
SELECT pg_advisory_unlock(42);
-- t (released)
That is the entire primitive. Everything below is just patterns built on top.
Session-Level vs Transaction-Level Rails Postgres Advisory Locks
Postgres gives you two flavours and Rails developers conflate them constantly.
Transaction-level locks are acquired with pg_advisory_xact_lock and released automatically when the transaction commits or rolls back. You cannot release them manually. They are perfect for “make this code path inside a single transaction mutually exclusive.”
Session-level locks are acquired with pg_advisory_lock and held until you explicitly release them with pg_advisory_unlock or your database session ends. They are what you want for long-running jobs that span many transactions.
The trap: with PgBouncer in transaction-pooling mode, session-level advisory locks are dangerous. Your “session” is actually a borrowed backend connection that PgBouncer hands to the next client the moment your transaction ends, lock and all. Either use transaction-level locks behind PgBouncer, or route lock-holding work through a direct connection. I wrote about this gotcha in more detail in the Postgres connection pooling guide.
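If you do need session-level locks behind PgBouncer, one workable pattern is a second connection config that points straight at Postgres. The `primary_direct` name and the multi-database wiring below are assumptions about your setup, not something the gem provides:

```ruby
# app/models/direct_record.rb
# Assumes config/database.yml defines a "primary_direct" entry whose
# host/port reach Postgres directly, bypassing PgBouncer.
class DirectRecord < ApplicationRecord
  self.abstract_class = true
  connects_to database: { writing: :primary_direct }
end

# Session-level locks taken through DirectRecord live on a real Postgres
# session, so transaction pooling can never hand the lock to another client.
DirectRecord.with_advisory_lock("nightly-rebuild") do
  # long-running, multi-transaction work
end
```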
The with_advisory_lock Gem Is Worth the Dependency
You can roll your own, and for one or two call sites you should. Past that, use with_advisory_lock. It hashes a string into a 64-bit integer for you, handles transaction-vs-session correctly, supports timeouts, and reads beautifully.
# Gemfile
gem "with_advisory_lock", "~> 5.2"

class Inventory::ReconcileJob < ApplicationJob
  queue_as :default

  def perform(account_id)
    # with_advisory_lock! raises when the lock cannot be acquired;
    # the non-bang variant returns false instead of raising.
    Account.with_advisory_lock!("reconcile-stock-#{account_id}", timeout_seconds: 0) do
      Inventory::Reconciler.new(account_id).run
    end
  rescue WithAdvisoryLock::FailedToAcquireLock
    Rails.logger.info("reconcile-stock-#{account_id} already running, skipping")
  end
end
timeout_seconds: 0 is the line that turns this from “wait forever” into “skip if already running.” That is the right default for cron-style jobs where the next tick will pick up where you left off. For workflows where you must run, raise the timeout to a few seconds, but never to infinity. A blocked job holding a worker is worse than a skipped one.
Use Case 1: Stop Cron Overlap Across Pods and Deploys
This is the most common reason teams adopt Rails Postgres advisory locks, and it was the inventory bug that started this post.
class DailyDigestJob < ApplicationJob
  def perform
    # Non-bang form returns false (and skips the block) if the lock is busy.
    ApplicationRecord.with_advisory_lock("daily-digest", timeout_seconds: 0) do
      Account.find_each do |account|
        DigestMailer.with(account: account).daily.deliver_later
      end
    end
  end
end
Whether your scheduler is whenever, GoodJob’s recurring jobs, Solid Queue’s cron, or a Kubernetes CronJob, you should assume it can fire twice. A pod restart, a deploy, a clock drift, a “manual run from the console” by a junior engineer — the only thing standing between you and a duplicate run is a lock. If you are coordinating background work specifically with Solid Queue, the Solid Queue background jobs guide walks through how to wire this into recurring tasks cleanly.
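With Solid Queue, for instance, the recurring entry lives in config/recurring.yml. The job still needs the lock, because nothing stops a second scheduler process from firing the same entry during a rollout (the schedule below is illustrative):

```yaml
# config/recurring.yml (Solid Queue recurring tasks)
production:
  daily_digest:
    class: DailyDigestJob
    schedule: every day at 6am
```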
Use Case 2: Race Conditions in Checkout and Inventory
The other classic. Two requests arrive at the same instant, both check stock >= 1, both pass, both create an order. Now you have sold one of something twice.
SELECT FOR UPDATE works here, but it requires the row to exist and locks the row for the entire transaction window. An advisory lock keyed on "checkout-#{sku}" is often clearer and lets you guard work that touches multiple tables.
class CheckoutService
  def initialize(sku:, user:)
    @sku = sku
    @user = user
  end

  def call
    # transaction: true uses a transaction-level advisory lock, so it needs
    # an open transaction and auto-releases when it commits or rolls back.
    Product.transaction do
      Product.with_advisory_lock!("checkout-#{@sku}", transaction: true, timeout_seconds: 5) do
        product = Product.find_by!(sku: @sku)
        raise OutOfStock unless product.stock_available >= 1

        product.decrement!(:stock_available)
        Order.create!(user: @user, product: product, status: :placed)
      end
    end
  end
end
Note transaction: true. That switches with_advisory_lock to pg_advisory_xact_lock, which means the lock auto-releases when the surrounding transaction ends — even if your code raises. That is exactly what you want here: no chance of a lock leak if Order.create! blows up.
Use Case 3: Leader Election for One-Off Work
You have N app servers, all running the same code, and you want exactly one of them to run a piece of work. Maybe you are warming a cache. Maybe you are running a backfill on boot. Maybe you are launching a single websocket subscriber.
# config/initializers/leader_tasks.rb
Rails.application.config.after_initialize do
  next if Rails.env.test?

  Thread.new do
    # executor.wrap checks out and returns the AR connection cleanly.
    Rails.application.executor.wrap do
      ApplicationRecord.with_advisory_lock!("backfill-leader", timeout_seconds: 0) do
        Backfill::ImportancesRecalculator.new.run
      end
    end
  rescue WithAdvisoryLock::FailedToAcquireLock
    # someone else is the leader, fine
  end
end
Whichever pod boots first grabs the lock and runs the work. The others fail to acquire and move on. No Redis, no Zookeeper, no leader election library. Just one Postgres call.
Use Case 4: Idempotent Webhooks and External Callbacks
Webhook providers retry. Stripe, GitHub, Shopify — they all assume your endpoint can be hit twice for the same event. Combining an advisory lock with an idempotency record gives you the two layers you need.
class StripeWebhooksController < ApplicationController
  def create
    event_id = params[:id]

    # transaction: true needs an open transaction for the xact-level lock,
    # and the lock auto-releases when that transaction ends.
    ApplicationRecord.transaction do
      ApplicationRecord.with_advisory_lock!("stripe-event-#{event_id}", transaction: true, timeout_seconds: 5) do
        unless WebhookEvent.exists?(provider: "stripe", external_id: event_id)
          Stripe::EventProcessor.new(params).call
          WebhookEvent.create!(provider: "stripe", external_id: event_id)
        end
      end
    end

    head :ok
  end
end
The lock prevents two concurrent retries from both seeing “no record yet” and both processing. The WebhookEvent row is the durable proof for retries that arrive after the lock is released. I covered the broader webhook security and idempotency patterns in the Rails webhook processing guide.
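The durable half of that pattern only works if the database enforces it. A sketch of the backing migration, with table and column names assumed from the controller above:

```ruby
class CreateWebhookEvents < ActiveRecord::Migration[7.1]
  def change
    create_table :webhook_events do |t|
      t.string :provider, null: false
      t.string :external_id, null: false
      t.timestamps
    end

    # The unique index is the layer that survives lock release, deploys, and
    # restarts: a second insert for the same event raises RecordNotUnique.
    add_index :webhook_events, [:provider, :external_id], unique: true
  end
end
```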
Production Traps Nobody Warns You About
These are the four that have bitten me or my clients in the last two years.
Lock leak with session-level locks under PgBouncer. Already mentioned. If you must use session-level locks, make sure those connections bypass PgBouncer or use session pooling.
Acquiring a lock inside a transaction and then doing slow work. A transaction-level advisory lock keeps the transaction open, which keeps idle in transaction connections alive, which makes autovacuum sad. Keep the work inside the lock short, or use a session-level lock and a short transaction inside it. The autovacuum tuning guide explains why long transactions are so corrosive.
Lock-name collisions across the codebase. with_advisory_lock hashes your string into 64 bits, so accidental hash collisions are astronomically unlikely. The realistic risk is two unrelated call sites choosing the same lock string — a bare "sync" in two different services — and silently serialising each other. Prefix your lock names with the domain (“checkout-“, “reconcile-stock-“) so every name is unambiguous.
Forgetting timeout_seconds. The default is to wait forever. In a job system that means workers stack up behind a stuck lock and your queue depth explodes. Always set a timeout. For “skip if running” workflows that timeout is 0. For “wait briefly then give up,” it is two to five seconds.
Monitoring Rails Postgres Advisory Locks
The pg_locks view shows you exactly what is held and by whom. This query is worth pinning to your monitoring dashboard:
SELECT pid,
       locktype,
       classid,
       objid,
       mode,
       granted,
       now() - state_change AS held_for,
       query
  FROM pg_locks
  JOIN pg_stat_activity USING (pid)
 WHERE locktype = 'advisory'
 ORDER BY held_for DESC;
If you see the same advisory lock held for hours, somebody leaked one. If you see queues of waiters, someone is acquiring without a timeout. Both are findable in seconds with this query and effectively invisible without it.
When Not to Reach for Advisory Locks
Advisory locks are the right tool for coordination. They are the wrong tool for durable serialization. If you need “exactly one of these jobs has run, ever, even across deploys,” you want a uniqueness constraint on a job-runs table, not a lock. If you need “this user can only have one open session,” you want a row in a sessions table with a unique index. If you need “this credit card cannot be charged twice for the same intent,” you want an idempotency key persisted to a row, with the lock as a thin layer of concurrency control on top.
The mental model: advisory locks coordinate concurrent runs. Database constraints prevent duplicate runs across time. You usually want both.
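Concretely, the two layers compose in a few lines. This sketch assumes an ImportRun model with a unique index on batch_id; the names are illustrative:

```ruby
# Layer 1: the advisory lock serialises runs happening right now.
# Layer 2: the unique index rejects a re-run that arrives later.
def run_once(batch_id)
  ApplicationRecord.with_advisory_lock!("import-#{batch_id}", timeout_seconds: 0) do
    ImportRun.create!(batch_id: batch_id) # unique index on batch_id
    Importer.new(batch_id).run
  end
rescue WithAdvisoryLock::FailedToAcquireLock
  Rails.logger.info("import #{batch_id} is running elsewhere right now")
rescue ActiveRecord::RecordNotUnique
  Rails.logger.info("import #{batch_id} already ran")
end
```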
Frequently Asked Questions
Are Rails Postgres advisory locks safe with PgBouncer?
Transaction-level advisory locks (transaction: true) are safe with PgBouncer in any pooling mode because they are released at transaction end before the connection is returned to the pool. Session-level advisory locks are only safe with session pooling, never with transaction pooling — the connection gets handed to another client while still holding your lock.
How do I prevent two cron jobs from running at the same time in Rails?
Wrap the job body in with_advisory_lock!("job-name", timeout_seconds: 0) and rescue WithAdvisoryLock::FailedToAcquireLock (only the bang variant raises; plain with_advisory_lock returns false instead). The first invocation grabs the lock, every overlapping invocation skips. This works the same whether your scheduler is whenever, Solid Queue cron, GoodJob recurring, or a Kubernetes CronJob.
What is the difference between pg_advisory_lock and pg_try_advisory_lock?
pg_advisory_lock blocks until the lock is acquired. pg_try_advisory_lock returns immediately with true or false. For background jobs you almost always want the try variant or a bounded timeout — blocking forever inside a worker is how queue depth blows up.
Can advisory locks deadlock?
Yes, if you acquire two of them in inconsistent orders across processes. The fix is the same as for row locks: pick a canonical order (alphabetical lock name, ascending ID) and always acquire in that order. Postgres will detect the deadlock and abort one transaction, but you would rather not get there at all.
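The canonical-order rule fits in a small helper. The sorting logic below is plain Ruby and the lock names are illustrative:

```ruby
# Sort lock names so every process acquires multi-lock sets in the same
# order, which removes the possibility of an A-waits-B / B-waits-A cycle.
def ordered_lock_names(*names)
  names.sort
end

ordered_lock_names("transfer-to-42", "transfer-from-7")
# => ["transfer-from-7", "transfer-to-42"]
#
# Then acquire in that order, e.g. with nested with_advisory_lock! calls:
#   first, second = ordered_lock_names(a, b)
#   ApplicationRecord.with_advisory_lock!(first, timeout_seconds: 5) do
#     ApplicationRecord.with_advisory_lock!(second, timeout_seconds: 5) { ... }
#   end
```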
After nineteen years of Rails the lesson keeps repeating: most production correctness bugs are concurrency bugs in disguise, and most concurrency bugs are solved by one well-placed lock. Rails Postgres advisory locks are the lightest, cheapest tool in that toolbox. Use them.
Need help hardening a Rails system against race conditions, cron overlap, or duplicate processing? TTB Software ships production-grade Rails infrastructure. We have been doing this for nineteen years.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.