Rails Logging You Can Actually Debug With
Last month I got pulled into a production incident at 11 PM. Users couldn’t check out. The error was somewhere in a 200-line request cycle that touched three services, two queues, and a payment gateway. I opened the logs and found this:
Started POST "/orders" for 10.0.1.52 at 2026-01-14 23:02:17 +0100
Processing by OrdersController#create as HTML
Parameters: {"order"=>{"product_id"=>"482", "quantity"=>"1"}, "commit"=>"Place Order"}
User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1
Product Load (0.3ms) SELECT "products".* FROM "products" WHERE "products"."id" = $1
TRANSACTION (0.1ms) BEGIN
Order Create (0.6ms) INSERT INTO "orders" ...
TRANSACTION (0.2ms) COMMIT
Completed 500 Internal Server Error in 3241ms
Three seconds of wall time. Dozens of SQL lines. No clue what actually failed. The stack trace pointed to a Stripe call, but which one? With what parameters? What was the response?
This is where most Rails apps live: logging everything and communicating nothing.
The Problem with Default Rails Logs
Rails ships with ActiveSupport::Logger configured to be helpful during development. Every SQL query, every parameter hash, every rendered partial — all dumped to the log. In development, fine. You’re watching one request at a time.
In production, with fifty requests per second, those multi-line entries become noise. You can’t grep them reliably. You can’t pipe them into a log aggregator without parsing headaches. You definitely can’t correlate a failed request across your background jobs and external service calls.
Step One: Lograge
Lograge condenses each request into a single line. One request, one log entry.
# Gemfile
gem 'lograge'
# config/environments/production.rb
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Json.new
Now that multi-line mess becomes:
{"method":"POST","path":"/orders","format":"html","controller":"OrdersController","action":"create","status":500,"duration":3241.08,"db":1.6,"view":0.0}
One line. Parseable. Greppable. But still missing context. Who was this user? What order? Let’s fix that.
Adding Business Context
Lograge lets you append custom data through custom_options:
# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.custom_options = lambda do |event|
    exceptions = %w[controller action format id]
    {
      params: event.payload[:params]
                .except(*exceptions)
                .reject { |_, v| v.blank? },
      user_id: event.payload[:user_id],
      request_id: event.payload[:request_id]
    }
  end
end
And in your ApplicationController:
class ApplicationController < ActionController::Base
  def append_info_to_payload(payload)
    super
    payload[:user_id] = current_user&.id
    payload[:request_id] = request.request_id
  end
end
Now every log line carries the user ID and a request ID you can trace across services.
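With those two pieces in place, the request line picks up the extra fields. Roughly like this, reusing values from the other examples in this post:
{"method":"POST","path":"/orders","status":500,"duration":3241.08,"user_id":891,"request_id":"abc-123","params":{"order":{"product_id":"482","quantity":"1"}}}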
Step Two: Semantic Logger for Everything Else
Lograge handles request logs. But your application does plenty of logging outside the request cycle — in background jobs, rake tasks, service objects. For those, rails_semantic_logger gives you structured output everywhere.
# Gemfile
gem 'rails_semantic_logger'
# config/environments/production.rb
config.log_level = :info
config.rails_semantic_logger.format = :json
config.semantic_logger.add_appender(
  io: $stdout,
  formatter: :json
)
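If stdout is your only destination, you will probably also want to turn off the gem's default file appender so you are not double-writing to log/production.log. To the best of my knowledge rails_semantic_logger exposes a flag for this; verify it against the version you run:
# config/environments/production.rb
# Assumption: add_file_appender is the documented rails_semantic_logger option
# for disabling the default log file; confirm before relying on it.
config.rails_semantic_logger.add_file_appender = false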
Then in your code:
class PaymentService
  include SemanticLogger::Loggable

  def charge(order)
    logger.info("Charging customer", order_id: order.id, amount: order.total, currency: "EUR")

    response = Stripe::Charge.create(
      amount: order.total_cents,
      currency: 'eur',
      customer: order.user.stripe_customer_id
    )

    logger.info("Charge succeeded", order_id: order.id, charge_id: response.id)
    response
  rescue Stripe::CardError => e
    logger.error("Charge failed", order_id: order.id, error: e.message, code: e.code)
    raise
  end
end
Each of those log calls produces a JSON line with timestamps, log level, class name, and your custom fields. When the 11 PM call comes, you search for order_id: 9482 and see every step that order took — from controller to service to background job.
Request ID Propagation
The X-Request-Id header is your best friend. Rails generates one automatically, or you can pass one in from your load balancer. The key is carrying it everywhere.
# Propagate to background jobs
class ApplicationJob < ActiveJob::Base
  around_perform do |job, block|
    # Assumes jobs are enqueued with a request_id keyword argument;
    # fall back gracefully when one wasn't passed.
    options = job.arguments.last.is_a?(Hash) ? job.arguments.last : {}
    SemanticLogger.tagged(request_id: options[:request_id]) do
      block.call
    end
  end
end
# Propagate to HTTP calls (using the http gem)
class ApiClient
  def get(url, request_id:)
    HTTP.headers("X-Request-Id" => request_id).get(url)
  end
end
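The job callback above only works if the enqueuing side actually passes the ID along. A minimal sketch, with OrderConfirmationJob as a hypothetical job name:
# In the controller (or wherever you enqueue), forward the current request ID
OrderConfirmationJob.perform_later(order, request_id: request.request_id)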
One request ID, traceable from the user’s browser through your entire stack. When something breaks, you search for that ID and get the complete story.
What to Log, What to Skip
Structured logging tempts you to log everything. Resist. Every log line costs storage, processing, and attention.
Log these:
- Business events (order placed, payment processed, user signed up)
- Errors with full context (what was attempted, what failed, what the input was)
- External service calls with duration and response status
- State transitions (order pending → confirmed → shipped)
- Feature flag evaluations alongside request context, so you know which code path a user hit
Skip these:
- SQL queries (your APM tool handles this better)
- Successful cache hits (too noisy, too frequent)
- Request parameters that duplicate what Lograge already captures
- Anything containing PII you don’t need for debugging
A good rule: if you’d want to see it at 11 PM while debugging, log it. If you’d scroll past it, don’t.
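To make the "log these" list concrete, here is a rough sketch of a state transition logged with enough context to reconstruct it later (the order states and mark_confirmed! method are illustrative assumptions, not from a real codebase):
class OrderConfirmation
  include SemanticLogger::Loggable

  def confirm(order)
    order.mark_confirmed!
    # One structured line per transition: what changed, for which record,
    # and nothing that counts as PII.
    logger.info("Order state changed", order_id: order.id, from: "pending", to: "confirmed")
  end
end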
The Log Levels Nobody Uses Right
Most codebases treat log levels as a binary: info for normal stuff, error for bad stuff. The middle levels exist for a reason.
- debug: Development only. SQL, cache details, variable dumps.
- info: Business events. Things that happened and matter.
- warn: Something unexpected but handled. A retry that succeeded. A fallback that kicked in. A deprecation you should fix before it breaks.
- error: Something failed and a user or process was affected.
- fatal: The application is going down.
The warn level is particularly underused. It’s your early warning system — the canary logs that tell you something is degrading before it fails outright.
if retries > 0
  logger.warn("Payment retry succeeded", order_id: order.id, retries: retries, duration_ms: elapsed)
else
  logger.info("Payment succeeded", order_id: order.id, duration_ms: elapsed)
end
Putting It Together
Here’s the production setup I use for most client Rails apps:
# Gemfile
gem 'lograge'
gem 'rails_semantic_logger'

# config/environments/production.rb
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Json.new
config.lograge.custom_options = lambda do |event|
  {
    user_id: event.payload[:user_id],
    request_id: event.payload[:request_id],
    host: event.payload[:host]
  }
end
config.log_level = :info
config.rails_semantic_logger.format = :json
Pipe stdout to your log aggregator of choice — CloudWatch, Datadog, Loki, even a self-hosted ELK stack. The JSON format means no custom parsers, no grok patterns, no regex nightmares.
The Debugging Session, Revisited
With this setup, that 11 PM incident goes differently. I search for the failing user’s request ID. I get a timeline:
{"level":"info","name":"OrdersController","message":"Creating order","user_id":891,"request_id":"abc-123"}
{"level":"info","name":"PaymentService","message":"Charging customer","order_id":9482,"amount":89.99,"request_id":"abc-123"}
{"level":"error","name":"PaymentService","message":"Charge failed","order_id":9482,"error":"Your card was declined","code":"card_declined","request_id":"abc-123"}
Three lines. Complete story. The card was declined, the error wasn’t handled gracefully in the controller, and the user got a 500 instead of a helpful message. Fix the error handling, deploy, go back to sleep.
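The fix itself is small. A hedged sketch of the controller-side handling (the flash copy and checkout_path are assumptions, not taken from the actual incident):
class OrdersController < ApplicationController
  # Turn a declined card into a helpful message instead of a 500
  rescue_from Stripe::CardError do |e|
    logger.warn("Card declined at checkout", error: e.message, code: e.code)
    redirect_to checkout_path, alert: "Your card was declined. Please try another payment method."
  end
end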
That’s what logging should do. Not document every SQL query your ORM generates — just tell you what happened and why. And if you’re running background jobs, the same structured approach applies — propagate the request ID into every job so failures are traceable end-to-end.
Frequently Asked Questions
What’s the difference between Lograge and Rails Semantic Logger?
Lograge reformats Rails request logs into single-line entries (one line per request). Semantic Logger replaces the entire Rails logging framework with structured output everywhere — controllers, models, jobs, and custom code. Use both together: Lograge for clean request logs, Semantic Logger for everything else.
How much does structured logging cost in storage?
JSON logs are larger than plain text per line, but you typically log fewer lines because you stop dumping SQL queries and partial renders. In practice, most teams see similar or reduced log volume after switching. The real cost consideration is your log aggregator’s pricing — services like Datadog charge per ingested GB, so be selective about what you log.
Should I log in development the same way as production?
No. Keep verbose, human-readable logs in development — you want SQL queries, render times, and parameter details when working locally. Switch to structured JSON only in production and staging. Configure this in your environment files so developers get a good experience locally.
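A rough sketch of the development side, assuming the same two gems (the :color formatter is Semantic Logger's human-readable, colorized output):
# config/environments/development.rb
config.lograge.enabled = false               # keep the default verbose request logs locally
config.rails_semantic_logger.format = :color # human-readable output while you work
config.log_level = :debug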
How do I add logging to background jobs without repeating boilerplate?
Create an around_perform callback in your ApplicationJob base class that wraps every job in a Semantic Logger tagged block. Include the job class, job ID, and any relevant business identifiers. This gives every job automatic structured context without touching individual job classes.
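A minimal sketch of that callback with the job class and job ID as named tags (combine it with the request-ID tagging shown earlier; the tag names are my own choice):
class ApplicationJob < ActiveJob::Base
  around_perform do |job, block|
    SemanticLogger.tagged(job: job.class.name, job_id: job.job_id) do
      block.call
    end
  end
end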
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.