Rails Solid Cache: Database-Backed Caching in Rails 8 Without Redis or Memcached
Last summer a client handed me their AWS bill and pointed at the ElastiCache line. “Why is our cache more expensive than our application servers?” It was a fair question. They were running a three-node Redis cluster with replication, snapshots, and cross-AZ traffic, and they were using it to cache rendered fragments and a handful of ActiveRecord query results. Their hit rate was high. Their traffic was modest. Their operational overhead was not.
After nineteen years of Rails I have watched the caching story evolve from file stores nobody trusted to Memcached to Redis to now Rails Solid Cache — and for the first time I can honestly tell most clients that their cache does not need to live outside their database. This post is the practical guide I give teams when we rip out Redis and move to Rails Solid Cache in production.
What Rails Solid Cache Actually Is
Rails Solid Cache is a disk-backed cache store for Active Support that uses a relational database — usually Postgres or MySQL — as its backend. It ships by default in new Rails 8 applications alongside its siblings Solid Queue and Solid Cable. The three of them together are the reason a fresh Rails 8 app can run in production without Redis at all.
The interesting word in that description is disk-backed. Memcached and Redis are memory caches that spill to disk reluctantly. Rails Solid Cache is a disk cache that benefits from Postgres’s buffer cache. That sounds like a downgrade and it is not. Modern NVMe storage with a well-tuned Postgres is fast enough that the latency difference disappears into network overhead anyway, and you get to cache vastly more data per euro than you can in RAM.
One of my clients runs a 400GB Rails Solid Cache against a Postgres instance with 32GB of RAM. They would need a Redis cluster ten times more expensive to hold the same working set in memory. Their p99 cache read latency is under three milliseconds.
When to Choose Rails Solid Cache Over Redis or Memcached
I give clients a short decision rule: if your cache reads are under a few thousand per second and your cache is larger than your RAM budget, use Rails Solid Cache. If you need sub-millisecond reads at tens of thousands of requests per second, stay on Redis.
Rails Solid Cache is the right choice when:
- Your working set is large — tens of gigabytes to terabytes — and you want to cache more than a Redis node can hold.
- You already run Postgres and want one fewer service to operate, patch, and page on.
- Your cache reads are single-digit-millisecond tolerant, which for most web applications they are.
- You want at-rest encryption, easy backups, and point-in-time recovery without building it yourself.
- You are on Rails 7.1+ and want to adopt the Rails 8 operational model.
Stay on Redis or Memcached when:
- You need microsecond cache hits — ad tech, real-time bidding, exchange matching.
- You have a cache working set that fits comfortably in memory and churns faster than Postgres vacuum can keep up with.
- You are using Redis for things a cache store cannot do — pub/sub, sorted sets, rate limiting, leaderboards. Keep Redis for those; just stop using it as a fragment cache.
Most Rails apps I see do not hit the “stay on Redis” bullets. They are running Redis because that is what the 2014 blog post said to do. In 2026, that answer has changed.
Installing Rails Solid Cache
On a fresh Rails 8 app, it is already configured. For an existing app on Rails 7.1 or newer:
# Gemfile
gem "solid_cache"

bundle install
bin/rails solid_cache:install
bin/rails db:migrate
The installer generates a migration for the solid_cache_entries table and updates config/environments/production.rb to use the new store. I strongly recommend running the cache in a separate database from your primary application data — the same way I recommend for Solid Queue. A runaway cache backfill should never block your order table.
# config/database.yml
production:
  primary:
    <<: *default
    database: myapp_production
    username: myapp
    password: <%= ENV["DATABASE_PASSWORD"] %>
  cache:
    <<: *default
    database: myapp_cache_production
    migrations_paths: db/cache_migrate
# config/environments/production.rb
config.cache_store = :solid_cache_store
# config/solid_cache.yml
production:
  database: cache
  store_options:
    max_age: <%= 30.days.to_i %>
    max_size: <%= 256.gigabytes %>
    namespace: myapp_production
Three settings carry most of the weight here: max_age, max_size, and namespace. Get those right and the cache mostly runs itself.
How Rails Solid Cache Stores and Evicts Data
Under the hood, the cache table looks almost boring:
create_table :solid_cache_entries do |t|
  t.binary :key, null: false, limit: 1024
  t.binary :value, null: false, limit: 512.megabytes
  t.datetime :created_at, null: false
  t.integer :key_hash, null: false, limit: 8
  t.integer :byte_size, null: false, limit: 4
end

add_index :solid_cache_entries, :key_hash, unique: true
add_index :solid_cache_entries, [:key_hash, :byte_size]
add_index :solid_cache_entries, :byte_size
Keys are stored as binary along with a 64-bit hash for fast lookups. Values are serialized exactly the way Active Support’s cache would serialize them to Redis — including compression if you enable it. The byte_size column drives the size-based eviction logic.
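To make the key-to-hash step concrete, here is a plausible sketch of deriving a signed 64-bit hash from a cache key. The exact digest and byte order Solid Cache uses internally may differ; this only illustrates how a binary key can be reduced to a value that fits the 8-byte `key_hash` column.

```ruby
require "digest"

# Hypothetical sketch: reduce a cache key to a signed 64-bit integer,
# in the spirit of Solid Cache's key_hash column. The gem's actual
# digest and byte order may differ.
def key_hash_for(key)
  # Take the first 8 bytes of a SHA-256 digest and unpack them as a
  # signed 64-bit big-endian integer, so the result fits an 8-byte
  # integer column with a unique index.
  Digest::SHA256.digest(key.to_s)[0, 8].unpack1("q>")
end
```

The point of hashing is that lookups hit a compact unique integer index instead of comparing kilobyte-long binary keys.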
Eviction is not LRU. Rails Solid Cache uses a FIFO-with-trim model: the oldest entries get removed first once the cache exceeds max_size. This sounds worse than LRU but works fine in practice for web caches, because hot keys get rewritten regularly anyway, whenever they expire, miss, or their underlying content changes, which moves them back to the front of the queue. Background work inside the Rails process runs the trimming, so eviction happens incrementally, not as a stop-the-world batch.
The tradeoff is real but acceptable: a key that was cached a week ago and just got asked for once might not survive if the cache is full of more recent writes. For fragment caches, that is exactly what you want.
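To make the FIFO-with-trim model concrete, here is a toy pure-Ruby simulation. This is not Solid Cache's actual implementation, just the shape of the policy: entries are kept in insertion order and deleted oldest-first once the total byte size exceeds the limit.

```ruby
# Toy simulation of FIFO-with-trim eviction (not Solid Cache's real code).
# Entries are kept in insertion order; once total size exceeds max_size,
# the oldest entries are deleted first until the cache fits again.
Entry = Struct.new(:key, :byte_size)

def trim(entries, max_size)
  total = entries.sum(&:byte_size)
  while total > max_size && !entries.empty?
    evicted = entries.shift # oldest entry goes first
    total -= evicted.byte_size
  end
  entries
end

cache = [Entry.new("a", 40), Entry.new("b", 40), Entry.new("c", 40)]
trim(cache, 100)
# "a" is evicted; "b" and "c" survive
```

A recently rewritten hot key sits at the back of the queue, which is why FIFO approximates LRU well for fragment caches.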
A Realistic Rails Solid Cache Configuration
Here is the configuration I actually ship for medium-traffic clients. Tune the numbers to your working set.
# config/environments/production.rb
config.cache_store = :solid_cache_store, {
  expires_in: 7.days,
  compress: true,
  compress_threshold: 1.kilobyte,
  error_handler: ->(method:, returning:, exception:) do
    Rails.error.report(exception, handled: true, context: { method: method })
    returning
  end
}
# config/solid_cache.yml
default: &default
  store_options:
    max_age: <%= 30.days.to_i %>
    max_size: <%= 512.gigabytes %>
    max_entries: <%= 50_000_000 %>
    namespace: <%= ENV.fetch("SOLID_CACHE_NAMESPACE", "myapp") %>
    cluster:
      shards: [cache_primary, cache_secondary]

production:
  <<: *default
  database: cache
The cluster key is how Rails Solid Cache does sharding. If you point it at two Postgres databases, it consistent-hashes keys across them. You probably will not need this on day one — a single well-provisioned Postgres handles most workloads — but it is nice to know the scale-up path exists without touching application code.
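The routing itself lives inside the gem, but the idea is easy to sketch. The following is a hypothetical illustration of hash-based shard selection, not Solid Cache's actual code; real consistent hashing also minimizes key movement when shards are added, which plain modulo does not.

```ruby
require "digest"

# Hypothetical sketch of hash-based shard routing for a clustered cache.
# Solid Cache's actual routing is internal to the gem; this only shows
# why the same key always lands on the same database.
SHARDS = [:cache_primary, :cache_secondary].freeze

def shard_for(key)
  # Stable hash of the key, reduced modulo the shard count. Reads and
  # writes for a given key therefore always hit the same shard.
  digest = Digest::SHA256.digest(key.to_s)[0, 8].unpack1("Q>")
  SHARDS[digest % SHARDS.size]
end
```

Because routing is deterministic, no coordination table is needed: any app process can compute the shard for any key.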
Two settings in that snippet catch people out:
- compress_threshold: 1.kilobyte pays off fast. Compressed values count toward byte_size like everything else, so compression lets you fit more keys per gigabyte. I have seen 3–5x effective capacity gains on fragment-heavy caches.
- error_handler is essential. Without it, a single flaky cache write can raise into a controller action. With it, cache errors degrade into cache misses.
Using It in Real Code
Application-level usage is identical to any other Active Support cache. That is the point.
class DashboardsController < ApplicationController
  def show
    @summary = Rails.cache.fetch(
      ["user_dashboard", current_user.id, current_user.updated_at.to_i],
      expires_in: 10.minutes
    ) do
      DashboardSummary.build(current_user)
    end
  end
end
Fragment caching works the same:
<% cache [@product, I18n.locale], expires_in: 1.hour do %>
  <%= render @product %>
<% end %>
What changes is what you can get away with. With Redis you worry about cache keys because each one costs memory. With Rails Solid Cache you can cache aggressively — rendered pages, computed reports, expensive JSON payloads — because disk is cheap and the backend scales by extending a Postgres volume.
One pattern I now use freely: caching heavyweight API responses for logged-out users.
class Api::V1::ProductsController < ApiController
  def index
    key = ["api_v1_products", params.permit!.to_h.sort, Product.maximum(:updated_at).to_i]
    json = Rails.cache.fetch(key, expires_in: 1.hour) do
      ProductSerializer.render_as_hash(filtered_products).to_json
    end
    render json: json
  end
end
On Redis this would have been expensive and fragile. On Rails Solid Cache it is disposable. If it gets evicted, it gets regenerated.
Encryption at Rest
This is the part of Rails Solid Cache that most teams do not realize they wanted until they see it. Because cache entries are just rows in Postgres, they inherit all the security controls you already have — at-rest encryption on the database volume, network TLS, IAM policies, audit logs. With Redis, setting up at-rest encryption is either “pay for the managed version” or “operate your own TLS between every hop.”
If you want an extra layer — and you should if your cache holds serialized ActiveRecord objects that contain PII — Active Record Encryption works on the cache columns the same way it works on any other binary column. I configure it like this:
# config/initializers/solid_cache.rb
Rails.application.config.to_prepare do
  next unless defined?(SolidCache::Entry)

  SolidCache::Entry.encrypts :value
end
There is a real CPU cost — I measured it at about 8% on a hot API — but it is the right default for regulated workloads. I have a client running HIPAA workloads this way, and the audit conversation went from multi-hour to fifteen minutes.
Sizing and Monitoring
The two questions I get asked most about Rails Solid Cache in production:
How big should I make it? Start at 10–20% of your primary database size. Watch the hit rate and the trim frequency for two weeks. Grow if either number says you should.
How do I monitor it? One instrumentation hook and two queries tell you almost everything.
# Hit rate — instrument via Active Support notifications
ActiveSupport::Notifications.subscribe("cache_read.active_support") do |_, _, _, _, payload|
  StatsD.increment("cache.#{payload[:hit] ? 'hit' : 'miss'}")
end

# Size in bytes
SolidCache::Entry.connection.select_value(
  "SELECT SUM(byte_size) FROM solid_cache_entries"
)

# Oldest surviving entry — a proxy for trim pressure
SolidCache::Entry.minimum(:created_at)
If the oldest entry is younger than your max_age, you are trimming by size and should either grow the cache or lower your write volume. If it equals your max_age, you are coasting on capacity and can probably shrink.
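That rule of thumb fits in a few lines. Here it is as a small helper (a hypothetical function of my own, not part of Solid Cache), comparing the age of the oldest surviving entry against max_age, both in seconds.

```ruby
# Hypothetical helper encoding the sizing rule of thumb above.
# oldest_entry_age and max_age are both in seconds.
def cache_sizing_advice(oldest_entry_age:, max_age:)
  if oldest_entry_age < max_age
    # Entries are dying before they reach max_age, so size-based
    # trimming is active: grow the cache or reduce write volume.
    :grow_or_reduce_writes
  else
    # Entries survive all the way to max_age: spare capacity exists.
    :consider_shrinking
  end
end

cache_sizing_advice(oldest_entry_age: 10 * 86_400, max_age: 30 * 86_400)
# => :grow_or_reduce_writes
```

Wire it to the `SolidCache::Entry.minimum(:created_at)` query above and you have a one-line capacity check for a dashboard or cron alert.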
For most teams this is enough. If you want a web UI, Mission Control’s job dashboard has experimental cache integration in the Rails 8 roadmap.
What Breaks, and How to Handle It
In two years of running Rails Solid Cache across eight client apps, the only real incidents I have seen fall into three categories.
Vacuum pressure on the cache table. Cache writes are a churn workload. Postgres autovacuum needs to run aggressively on solid_cache_entries or the table bloats. I covered the tuning in detail in the autovacuum guide, but the short version is: set autovacuum_vacuum_scale_factor = 0.05 and autovacuum_vacuum_cost_limit = 2000 for the cache table. Do not skip this.
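One way to apply those per-table settings is from a migration against the cache database. This is a sketch under my own naming (the migration class name is made up); the two storage parameters are standard Postgres per-table autovacuum options.

```ruby
# Sketch: per-table autovacuum tuning applied via a migration on the
# cache database. The class name is illustrative; the ALTER TABLE
# parameters are standard Postgres storage parameters.
class TuneCacheAutovacuum < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      ALTER TABLE solid_cache_entries SET (
        autovacuum_vacuum_scale_factor = 0.05,
        autovacuum_vacuum_cost_limit = 2000
      );
    SQL
  end

  def down
    execute <<~SQL
      ALTER TABLE solid_cache_entries RESET (
        autovacuum_vacuum_scale_factor,
        autovacuum_vacuum_cost_limit
      );
    SQL
  end
end
```

Putting it in a migration keeps the tuning versioned and reproducible across environments, instead of living as a forgotten one-off `psql` command.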
Cold start after a deploy. A fresh cache returns misses until it warms. For fragment caches this is fine. For critical-path queries you may want a small warming job that pre-populates known hot keys. I usually wire this into the post-deploy hook.
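A warming job can stay tiny. Here is a minimal sketch under an invented class name (`CacheWarmer` is not part of Rails or Solid Cache); it works against anything with a `Rails.cache`-style `fetch`, so in an app you would pass `Rails.cache` and run it from your post-deploy hook.

```ruby
# Minimal cache-warming sketch (hypothetical class, not part of Rails).
# It accepts any object with a Rails.cache-style #fetch, so in an app
# you would hand it Rails.cache from a post-deploy hook.
class CacheWarmer
  def initialize(cache)
    @cache = cache
  end

  # keys maps each cache key to a callable that computes its value.
  # fetch populates the key only if it is missing, so re-running the
  # warmer after a partial warm is cheap. Returns the key count.
  def warm(keys)
    keys.each do |key, compute|
      @cache.fetch(key) { compute.call }
    end
    keys.size
  end
end
```

Keep the key list short and deliberate: warm only the handful of critical-path entries whose first miss would hurt, and let everything else warm organically.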
Accidental caching of enormous values. The default compress_threshold helps, but someone will eventually try to cache a 200MB CSV. Set a hard size_limit on the store and watch for errors:
config.cache_store = :solid_cache_store, { size_limit: 10.megabytes }
None of these are dealbreakers. All of them are things you have to handle with Redis too, just in different shapes.
The Bigger Picture
Rails Solid Cache is not just a cache store. It is part of a deliberate move by the Rails team to let small teams run serious applications with fewer services in the stack. I wrote last week about Rails Solid Queue replacing Sidekiq. Solid Cache replaces Redis for caching. Solid Cable replaces Redis for Action Cable. Add in Kamal 2 for deploys and the whole operational story fits on one page.
That matters for fractional CTOs and small teams more than for anyone else. Every service you do not run is a pager rotation you do not staff. The Solid trinity is not the fastest answer to every caching problem. It is the simplest one that holds up in production, and simple compounds over years.
FAQ
Is Rails Solid Cache faster than Redis?
No. Redis is faster for raw point reads — typically sub-millisecond versus 1–3ms for Rails Solid Cache on Postgres. The reason to use Rails Solid Cache is not raw speed, it is operational simplicity, larger capacity per euro, and one fewer service to run. For the vast majority of web applications, the latency difference is invisible behind network and serialization overhead.
Can I use Rails Solid Cache with Rails 7?
Yes. Rails Solid Cache requires Rails 7.1 or newer. It ships by default with Rails 8 but works fine on 7.1 and 7.2 once you add the gem and run the installer. I have production deployments on all three versions.
How much disk space does Rails Solid Cache use?
Roughly the size of the cached values plus about 10–15% index overhead. With compression enabled above 1KB, expect 3–5x effective capacity versus raw storage. On NVMe-backed Postgres this is cheap — a 256GB cache costs a few dollars a month in storage.
Do I still need Redis if I use Rails Solid Cache?
Only if you use Redis for something other than caching — pub/sub, sorted sets, rate limiting, Sidekiq. If Redis is purely a fragment cache for you, Rails Solid Cache can replace it entirely. In a Rails 8 app that also uses Solid Queue and Solid Cable, Redis often disappears from the stack completely.
Thinking about killing your Redis bill or moving a cache into Postgres? TTB Software specializes in operational simplification for Rails applications, from caching and background jobs to deploys and observability. We’ve been doing this for nineteen years.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.