Rails Caching: What They Don't Tell You in the Tutorials
Every Rails caching tutorial shows you the same thing: wrap some HTML in a cache block, watch your response times drop. Simple. Clean. And about 20% of what you actually need to know when things get real.
I’ve spent the last decade untangling caching problems in Rails applications. The pattern repeats: a team adds caching, performance improves, and then six months later they’re debugging phantom stale data or wondering why their Redis memory keeps climbing. Here’s what I wish someone had told me earlier.
The Fragment Cache Footgun
Fragment caching looks innocent:
<% @products.each do |product| %>
  <% cache product do %>
    <%= render product %>
  <% end %>
<% end %>
Rails generates a cache key from the product’s cache_key_with_version, which includes the updated_at timestamp. Update the product, cache invalidates automatically. Beautiful.
Until someone adds a pricing rule that depends on the current user’s membership tier. Or until a related object changes. Or until you realize that updated_at doesn’t change when associations change.
The problem isn’t fragment caching itself. The problem is assuming that updated_at captures all the reasons a cached fragment might need to change.
Building Cache Keys That Actually Work
Here’s a cache key pattern I use constantly:
# app/helpers/cache_key_helper.rb
module CacheKeyHelper
  def product_cache_key(product, user)
    [
      product,
      product.category,
      user&.membership_tier,
      PricingRule.maximum(:updated_at),
      "v2" # bump this when the template changes
    ]
  end
end

<% cache product_cache_key(product, current_user) do %>
  <%= render product %>
<% end %>
Now the cache invalidates when:
- The product changes
- The product’s category changes
- The user’s tier changes (different user = different cache entry)
- Any pricing rule changes
- You deploy a template change and bump the version
The array gets normalized into a cache key automatically. Rails handles this well.
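Under the hood, each element contributes its cache key (for models) or its string form, and the parts are joined into one key. A minimal plain-Ruby sketch of that expansion (the real logic lives in ActiveSupport::Cache.expand_cache_key; the Product struct here is a hypothetical stand-in, not an ActiveRecord model):

```ruby
# Hypothetical stand-in for a model that exposes cache_key_with_version.
Product = Struct.new(:id, :updated_at) do
  def cache_key_with_version
    "products/#{id}-#{updated_at}"
  end
end

# Sketch of array-to-key expansion: nils drop out, models contribute their
# versioned cache key, everything else is stringified, parts join with "/".
def expand_key(parts)
  parts.compact.map { |part|
    part.respond_to?(:cache_key_with_version) ? part.cache_key_with_version : part.to_s
  }.join("/")
end

product = Product.new(42, "20240101120000")
key = expand_key([product, "gold", "v2"])
puts key # => products/42-20240101120000/gold/v2
```

Because a nil element (say, an anonymous user's tier) simply drops out, anonymous and signed-in visitors naturally get distinct cache entries.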
Russian Doll Caching: The Real Version
Tutorials explain Russian Doll caching as nested fragments where inner caches survive outer invalidation. True. But the implementation details matter.
<% cache ["products-list", @products.maximum(:updated_at), current_user&.id] do %>
  <% @products.each do |product| %>
    <% cache product do %>
      <% cache [product, :details] do %>
        <%= product.description %>
        <%= render product.specifications %>
      <% end %>
      <% cache [product, :pricing, current_user&.membership_tier] do %>
        <%= render partial: "pricing", locals: { product: product } %>
      <% end %>
    <% end %>
  <% end %>
<% end %>
When a product updates, only that product’s fragment rebuilds. The outer list cache still invalidates (because maximum(:updated_at) changed), but rebuilding it just means reassembling the inner cached fragments. Fast.
But here’s what trips people up: Rails folds a template digest into fragment cache keys, so direct edits to the cached template invalidate automatically. Changes to helpers, decorators, or anything else the digest tracker can’t see do not. When output-affecting logic lives outside the template, include an explicit version in the cache key.
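One lightweight way to manage those versions (a convention I'm sketching here, not a Rails built-in) is to centralize them so a structural change needs exactly one bump:

```ruby
# Sketch: per-fragment versions in a single frozen hash. FRAGMENT_VERSIONS
# and fragment_key are made-up names for illustration.
FRAGMENT_VERSIONS = {
  product_details: "v3",
  product_pricing: "v5"
}.freeze

# Build the array that the cache helper will expand into a key.
# Fails loudly (KeyError) if a template forgets to register a version.
def fragment_key(record_key, template)
  [record_key, template, FRAGMENT_VERSIONS.fetch(template)]
end

p fragment_key("products/42", :product_pricing)
# => ["products/42", :product_pricing, "v5"]
```

The fetch-with-raise is deliberate: a missing entry surfaces in development instead of silently serving an unversioned key.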
Low-Level Caching for Expensive Computations
Fragment caching handles view rendering. Low-level caching handles everything else:
class Product < ApplicationRecord
  def recommended_products
    Rails.cache.fetch([self, :recommendations, Recommendation.maximum(:updated_at)], expires_in: 1.hour) do
      Recommendation.compute_for(self) # expensive ML call or complex query
    end
  end
end
The cache key includes the product itself and the latest recommendation update. The expires_in acts as a safety valve—even if the key logic misses an invalidation trigger, data refreshes within an hour.
For really expensive computations, I often add race condition protection:
Rails.cache.fetch(cache_key, expires_in: 1.hour, race_condition_ttl: 10.seconds) do
  expensive_computation
end
This prevents cache stampedes where multiple processes simultaneously try to recompute an expired value.
Cache Invalidation: The Hard Problem
Phil Karlton’s quote about cache invalidation being one of the two hard problems in computer science gets repeated so often that people forget to actually solve it.
Here are the strategies that work:
Touch propagation:
class LineItem < ApplicationRecord
  belongs_to :order, touch: true
end

class Order < ApplicationRecord
  belongs_to :customer, touch: true
end
Updating a line item touches the order, which touches the customer. Cache keys based on updated_at invalidate up the chain.
But touch: true can cascade badly. I’ve seen saves that touch hundreds of records, each firing callbacks and running validations. Be intentional about your touch graph.
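One way to keep that graph intentional is to propagate only when a cache-relevant attribute actually changed. The guard is the point, so it's sketched in plain Ruby below; in a real model this would be an after_commit callback using the saved_change_to_*? predicates:

```ruby
# Plain-Ruby sketch of a guarded touch: the parent is only touched when a
# field that affects cached output changed. Order and LineItem here are
# simplified stand-ins, not ActiveRecord models.
class Order
  attr_reader :touched

  def initialize
    @touched = false
  end

  def touch
    @touched = true
  end
end

class LineItem
  CACHE_RELEVANT = %i[quantity unit_price].freeze

  def initialize(order)
    @order = order
  end

  def update(changes)
    # Only propagate when the change intersects the cache-relevant set.
    @order.touch if (changes.keys & CACHE_RELEVANT).any?
  end
end

order = Order.new
item = LineItem.new(order)
item.update(internal_note: "restocked") # not cache-relevant
puts order.touched # => false
item.update(quantity: 3)                # cache-relevant
puts order.touched # => true
```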
Version columns:
class Product < ApplicationRecord
  belongs_to :category

  # Compose timestamps and the integer version into one string;
  # calling .max across mixed Time and Integer values would raise.
  def cache_version
    "#{[updated_at, category.updated_at].max.to_i}-#{inventory_version}"
  end

  def increment_inventory_version!
    update_column(:inventory_version, inventory_version + 1)
  end
end
Separate version tracking for different concerns. Price changes, inventory updates, and content edits might need to invalidate different caches.
Explicit invalidation:
class InventoryService
  def update_stock(product, quantity)
    product.update!(stock: quantity)
    Rails.cache.delete_matched("products/#{product.id}/*")
    Rails.cache.delete_matched("categories/#{product.category_id}/*")
  end
end
Sometimes explicit is better than clever. delete_matched is slow on large cache stores, but for targeted invalidation it’s fine.
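An alternative when delete_matched gets too slow is namespace versioning: put a per-product version number in every key and bump it to invalidate, letting the orphaned entries age out via TTL. A sketch with a plain hash standing in for Rails.cache (product_namespace and friends are made-up helpers, not Rails API):

```ruby
# Version-bump invalidation: bumping the namespace orphans old entries
# instead of scanning for them. CACHE is an in-memory stand-in for a real
# cache store with TTL-based expiry.
CACHE = {}

def product_namespace(product_id)
  CACHE["products/#{product_id}/ns"] ||= 1
end

def bump_product_namespace!(product_id)
  CACHE["products/#{product_id}/ns"] = product_namespace(product_id) + 1
end

def product_key(product_id, suffix)
  "products/#{product_id}/v#{product_namespace(product_id)}/#{suffix}"
end

CACHE[product_key(7, "pricing")] = "$10"
bump_product_namespace!(7)         # invalidate without delete_matched
p CACHE[product_key(7, "pricing")] # => nil (the key now points at v2)
```

The trade-off: one extra cache read per request to resolve the namespace, in exchange for O(1) invalidation regardless of how many keys share it.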
Debugging Cache Problems
When something looks stale but shouldn’t be:
# Check what's actually in the cache (fragment keys need a controller/view context)
key = controller.fragment_cache_key([:product, @product])
Rails.cache.exist?(key) # Is it cached?
Rails.cache.read(key)   # What's cached?

# In a Rails console, log every cache operation
ActiveSupport::Cache::Store.logger = Logger.new(STDOUT)
For fragment caches specifically:
<% cache product do %>
  <!-- DEBUG: <%= product.cache_key_with_version %> -->
  <%= render product %>
<% end %>
That HTML comment shows exactly what cache key is being used. Check if it matches what you expect.
Redis Memory and Cache Eviction
By default, Rails with Redis uses whatever eviction policy your Redis has configured (often noeviction). This can cause writes to fail when Redis fills up.
Configure Redis with maxmemory-policy allkeys-lru for cache workloads. Least-recently-used eviction keeps the hot data.
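In redis.conf, that comes down to two directives (assuming a Redis instance dedicated to caching; LRU eviction is the wrong policy for a Redis that also holds Sidekiq queues or Action Cable state, where evicting arbitrary keys means losing jobs):

```
maxmemory 2gb
maxmemory-policy allkeys-lru
```

The same settings can be applied at runtime with redis-cli's CONFIG SET command.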
But also: set reasonable expires_in values. Cached data without expiration accumulates forever:
# config/environments/production.rb (the cache store is wired up during
# boot, before files in config/initializers run, so configure it here)
config.cache_store = :redis_cache_store, {
  url: ENV['REDIS_URL'],
  expires_in: 1.day # default expiration for all keys
}
Fragment caches accept expiration too: pass expires_in directly to the cache helper and the store applies it to that fragment.

<% cache product, expires_in: 12.hours do %>
  <%= render product %>
<% end %>
When Caching Hurts More Than It Helps
Not everything should be cached. Rules of thumb:
Skip caching when:
- The data changes more often than it’s read
- Cache key computation approaches query time
- The cached content is highly personalized (you’re caching per-user, cache hit rate tanks)
- The underlying query is already fast and the view is simple
Invest in caching when:
- The same data gets requested repeatedly
- Computation or rendering is genuinely expensive
- Cache keys can be kept simple and reliable
- You have monitoring to detect stale data problems
The best caching is boring caching. Predictable keys, explicit invalidation, sensible expiration. The clever approaches usually become debugging nightmares.
Production Lessons
A few things I learned the hard way:
Warm the cache on deploy. When you deploy and cache keys change (new asset fingerprints, version bumps), your cache is cold. Either accept the temporary slowdown or run a cache warming job:
# lib/tasks/cache.rake — run with `rake cache:warm`
namespace :cache do
  task warm: :environment do
    Product.find_each do |product|
      ProductSerializer.new(product).to_json # touches the cache
    end
  end
end
Monitor cache hit rates. Set up structured logging to track cache performance in production. A cache with a 10% hit rate is worse than no cache: you pay the read overhead on every request, plus the write overhead on the 90% of requests that miss. Track this.
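As a rough break-even model (illustrative numbers, not benchmarks): caching pays off when the time saved on hits outweighs the overhead added to every request:

```ruby
# Back-of-envelope check: average time saved per request vs. overhead added.
# All numbers are made up for illustration.
def cache_worth_it?(hit_rate:, compute_ms:, read_ms:, write_ms:)
  saved    = hit_rate * compute_ms               # hits skip the computation
  overhead = read_ms + (1 - hit_rate) * write_ms # every request reads; misses also write
  saved > overhead
end

puts cache_worth_it?(hit_rate: 0.10, compute_ms: 5,  read_ms: 1, write_ms: 1) # => false
puts cache_worth_it?(hit_rate: 0.90, compute_ms: 50, read_ms: 1, write_ms: 1) # => true
```

Plug in your own p95 numbers; the exercise mostly exists to make the read-overhead cost visible.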
Test cache invalidation explicitly. Not just “does the cache work” but “when I change this, does the cache actually invalidate.” Write tests for your invalidation logic.
If you’re running cache warming jobs on deploy, those are a great fit for Solid Queue or Sidekiq — they can run asynchronously without slowing down the deploy itself.
Caching in Rails isn’t hard. Caching correctly takes practice and attention. The good news: once you understand the patterns, they transfer to every project.
Frequently Asked Questions
When should I use Redis vs Memcached for Rails caching?
Redis is the better default for most Rails apps. It supports data structures beyond simple key-value pairs, persists data across restarts, and you likely already run it for background jobs or Action Cable. Memcached is simpler and slightly faster for pure cache workloads, but Redis’s versatility usually wins.
How do I debug stale cache data in production?
Start by checking the cache key — add an HTML comment with cache_key_with_version to confirm the key matches what you expect. Then verify the cache entry exists with Rails.cache.exist?(key) and read it with Rails.cache.read(key). Enable cache logging temporarily with ActiveSupport::Cache::Store.logger = Logger.new(STDOUT) in a console session.
Does Russian Doll caching work with Turbo/Hotwire?
Yes. Russian Doll caching works at the fragment level during server-side rendering, which happens before Turbo processes the response. The cached fragments produce the same HTML regardless of whether it’s delivered as a full page or a Turbo Frame. The two technologies complement each other well.
How do I prevent cache stampedes when a popular cache key expires?
Use race_condition_ttl in your Rails.cache.fetch call. This extends the expiring entry briefly so only one process recomputes the value while others continue serving the stale version. Set it to 10-30 seconds for most workloads.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.