Ruby Memoization Patterns: Beyond the Basic ||= Operator
Every Ruby developer learns ||= in their first month. It’s clean, it’s idiomatic, and it handles about 60% of memoization needs. The other 40% is where things get interesting — and where bugs hide.
Here’s what ||= actually does, where it breaks, and what to use instead.
How ||= Actually Works
def current_user
  @current_user ||= User.find(session[:user_id])
end
This expands to @current_user || (@current_user = User.find(session[:user_id])). The lookup runs once, and subsequent calls return the cached instance variable.
Simple. But there’s a well-known trap: ||= cannot memoize nil or false.
def admin?
  @is_admin ||= User.find(id).admin? # Re-queries every call if admin? returns false
end
If admin? returns false, @is_admin evaluates as falsy, and the query runs again on every call. In a request that checks permissions 8 times, that’s 8 database queries instead of 1.
The defined? Pattern for Nil/False Values
The classic fix uses defined? to check whether the variable has been assigned at all:
def admin?
  return @is_admin if defined?(@is_admin)
  @is_admin = User.find(id).admin?
end
This works because defined?(@is_admin) returns "instance-variable" once assigned, regardless of the value. It correctly memoizes nil, false, 0 — anything.
I used this pattern for years in production. It’s reliable, but it’s verbose. Three lines instead of one, and the intent isn’t immediately obvious to junior developers reading the code.
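To make the trap concrete, here's a runnable sketch comparing the two patterns on a method that returns false. PermissionCheck and its counter are hypothetical stand-ins for the database call:

```ruby
# Hypothetical stand-in for a real permission check; the counter
# tracks how many times the "query" actually runs.
class PermissionCheck
  attr_reader :query_count

  def initialize
    @query_count = 0
  end

  def admin_with_or_equals?
    @a ||= run_query # a false result defeats ||=, so this re-runs
  end

  def admin_with_defined?
    return @b if defined?(@b)
    @b = run_query # runs exactly once, even for a false result
  end

  private

  def run_query
    @query_count += 1
    false # simulate a non-admin user
  end
end

check = PermissionCheck.new
3.times { check.admin_with_or_equals? }
check.query_count # => 3: every call hit the "database"

check = PermissionCheck.new
3.times { check.admin_with_defined? }
check.query_count # => 1: memoized despite returning false
```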
Memoizing Methods With Arguments
||= falls apart completely when arguments are involved:
# Broken: ignores the argument after first call
def user_permissions(role)
  @permissions ||= Permission.where(role: role).pluck(:name)
end
Call user_permissions(:editor) then user_permissions(:admin) — you get editor permissions both times. The fix is a hash-based approach:
def user_permissions(role)
  @permissions ||= {}
  @permissions.fetch(role) do
    @permissions[role] = Permission.where(role: role).pluck(:name)
  end
end
Hash#fetch with a block only executes the block when the key is missing. This handles nil values correctly too, since the key exists in the hash after assignment.
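A quick way to convince yourself that nil really is cached: the NicknameLookup class below (a hypothetical example, with a counter in place of a real query) counts how many times the fetch block actually runs.

```ruby
# Sketch: a lookup whose result can legitimately be nil. The counter
# verifies the block runs only once per distinct argument.
class NicknameLookup
  attr_reader :lookups

  def initialize(data)
    @data = data
    @lookups = 0
  end

  def nickname_for(user)
    @cache ||= {}
    @cache.fetch(user) do
      @lookups += 1
      @cache[user] = @data[user] # may be nil, still cached
    end
  end
end

lookup = NicknameLookup.new("alice" => "Al")
lookup.nickname_for("bob") # => nil, runs the block
lookup.nickname_for("bob") # => nil, cache hit: the key exists
lookup.lookups             # => 1
```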
For methods with multiple arguments, use an array key:
def shipping_cost(weight, destination)
  @shipping_cache ||= {}
  key = [weight, destination]
  @shipping_cache.fetch(key) { @shipping_cache[key] = calculate_shipping(weight, destination) }
end
Watch the Memory
Hash-based memoization grows without bound. If your method gets called with thousands of unique argument combinations, you're building a memory leak. In a long-running Rails process, this matters.
One pattern I’ve used in production to cap this:
def geocode(address)
  @geo_cache ||= {}
  @geo_cache.clear if @geo_cache.size > 1000
  @geo_cache.fetch(address) { @geo_cache[address] = Geocoder.search(address).first }
end
Crude but effective. For something more sophisticated, look at the lru_redux gem, which gives you an LRU cache with a configurable max size. Sam Saffron (Discourse co-founder) built it specifically for this kind of use case.
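If you want something between clearing everything and pulling in a gem, Ruby's guaranteed hash insertion order makes a tiny LRU possible. The TinyLru class below is a rough sketch of the idea, not a tuned implementation:

```ruby
# Minimal LRU-ish cache built on Ruby's hash insertion-order guarantee.
# Re-inserting a key on every hit moves it to the back; when the cache
# is full, the first key is the least recently used.
class TinyLru
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  def fetch(key)
    if @store.key?(key)
      @store[key] = @store.delete(key) # move to most-recently-used position
    else
      @store.shift if @store.size >= @max_size # evict least recently used
      @store[key] = yield
    end
  end
end

cache = TinyLru.new(2)
cache.fetch(:a) { 1 }
cache.fetch(:b) { 2 }
cache.fetch(:a) { 1 } # touch :a, so :b is now least recently used
cache.fetch(:c) { 3 } # evicts :b
```

The same fetch-with-block shape as before, so it drops into the geocode example with one changed line.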
Thread Safety: Where Memoization Gets Dangerous
None of the patterns above are thread-safe. In a multi-threaded server like Puma (the Rails default since 5.0), two threads can hit an unmemoized method simultaneously:
Thread A: checks @result → nil
Thread B: checks @result → nil
Thread A: computes expensive_operation, assigns @result
Thread B: computes expensive_operation, assigns @result (wasted work)
For most Rails memoization, this is fine. The worst case is duplicate computation, and instance variables on request-scoped objects are naturally thread-isolated. But for memoization on shared objects (singletons, class-level caches, objects in Rails.application.config), you need synchronization.
require 'singleton'

class ExchangeRateService
  include Singleton

  def initialize
    @mutex = Mutex.new
    @rates = {}
  end

  def rate_for(currency)
    @mutex.synchronize do
      @rates.fetch(currency) do
        @rates[currency] = fetch_rate_from_api(currency)
      end
    end
  end
end
The Mutex#synchronize block ensures only one thread computes a missing rate. Other threads wait and then get the cached result.
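One refinement worth knowing about: the version above takes the lock even on cache hits. A double-checked variant reads the hash first and only locks on a miss. The sketch below uses a hypothetical stubbed fetch_rate_from_api with an api_calls counter for instrumentation, and its lock-free fast path relies on MRI's GVL making a single Hash read safe, so it is not portable to JRuby or TruffleRuby:

```ruby
class ExchangeRateService
  attr_reader :api_calls # instrumentation for the demo below

  def initialize
    @mutex = Mutex.new
    @rates = {}
    @api_calls = 0
  end

  def rate_for(currency)
    # Fast path: no lock on a cache hit. A bare Hash read is safe on MRI
    # because of the GVL; other Ruby implementations should keep the lock.
    cached = @rates[currency]
    return cached unless cached.nil?

    @mutex.synchronize do
      # Re-check inside the lock: another thread may have just filled it.
      @rates.fetch(currency) { @rates[currency] = fetch_rate_from_api(currency) }
    end
  end

  private

  # Hypothetical stand-in for the real API call in the article's example.
  def fetch_rate_from_api(currency)
    @api_calls += 1
    sleep 0.01 # simulate network latency
    1.08
  end
end

svc = ExchangeRateService.new
8.times.map { Thread.new { svc.rate_for(:usd) } }.each(&:join)
svc.api_calls # => 1: only one thread hit the API
```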
Alternatively, Concurrent::Map from the concurrent-ruby gem is a cleaner option: a thread-safe hash that doesn't require explicit locking.
require 'concurrent'

class ExchangeRateService
  def initialize
    @rates = Concurrent::Map.new
  end

  def rate_for(currency)
    @rates.compute_if_absent(currency) { fetch_rate_from_api(currency) }
  end
end
compute_if_absent is atomic — exactly one thread runs the block for a given key.
Class-Level Memoization
Sometimes you want to memoize at the class level, not the instance level. Common for configuration, schema lookups, or anything that’s the same across all instances:
class Product < ApplicationRecord
  def self.category_names
    @category_names ||= Category.pluck(:name).freeze
  end
end
The .freeze is intentional — it prevents accidental mutation of shared state. Without it, one piece of code could push onto the array and affect every caller.
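The protection is easy to demonstrate: appending to a frozen array raises FrozenError instead of silently mutating shared state (illustrative values below).

```ruby
# Frozen shared state: mutation attempts raise instead of corrupting data.
CATEGORIES = ["Books", "Games"].freeze

begin
  CATEGORIES << "Music"
rescue FrozenError => e
  e.class # => FrozenError; the shared array stays intact
end

CATEGORIES # => ["Books", "Games"]
```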
The gotcha: class-level memoization persists across requests in production. This is usually what you want, but means stale data if the underlying records change. In Rails, you can bust the cache on relevant model changes:
class Category < ApplicationRecord
  after_commit :clear_product_cache

  private

  def clear_product_cache
    Product.instance_variable_set(:@category_names, nil)
  end
end
It’s not pretty. If you need this kind of reactive invalidation often, you’re better off using Rails’ built-in caching with Rails.cache.fetch and explicit expiration.
The Memery Gem
If you write a lot of memoized methods, the memery gem (Ruby 3.0+) extracts the boilerplate:
require 'memery'

class WeatherService
  include Memery

  memoize def forecast(city)
    WeatherAPI.fetch(city)
  end

  memoize def current_temp(city)
    WeatherAPI.current(city)
  end
end
Under the hood, Memery uses Module#prepend to wrap your method with hash-based memoization. It handles arguments, and you can pass condition: to control when memoization applies. The gem is small (~200 lines) and has no dependencies.
One caveat: Memery memoizes per-instance. If you create 1000 WeatherService instances, each gets its own cache. For shared caching, you still need a class-level or singleton approach.
When Not to Memoize
Memoization isn’t free. Each memoized value is an instance variable that lives as long as the object. In a typical Rails request:
- Controller actions: Objects are short-lived. Memoize freely.
- Models loaded via associations: Watch for memoizing on objects that get loaded in bulk. 500 Order instances each memoizing a computed total is fine; 500 instances each memoizing an associated customer query is 500 cached ActiveRecord objects in memory.
- Background jobs: in both Sidekiq and Solid Queue the job object itself is created fresh for each job, so instance-level memoization is safe there. The worker process, however, is long-lived, so class-level or global caches persist across jobs and leak state between them unless you reset them explicitly.
A quick benchmark on Ruby 3.3.0 (YJIT enabled) shows the overhead of different approaches:
Plain method call (no memo): 48.2M iterations/sec
||= memoization: 45.7M iterations/sec
defined? pattern: 39.1M iterations/sec
Hash#fetch pattern: 28.3M iterations/sec
Memery gem: 22.6M iterations/sec
Mutex + Hash#fetch: 14.8M iterations/sec
The takeaway: ||= adds almost zero overhead. Hash-based patterns are roughly 40% slower for the cache-hit path, which still means ~28 million lookups per second — unlikely to be your bottleneck. The Mutex version is measurably slower, but you only pay that cost on shared mutable state where correctness requires it.
Picking the Right Pattern
| Situation | Pattern |
|---|---|
| Simple value, never nil/false | ||= |
| Value might be nil or false | defined? check |
| Method has arguments | Hash + fetch |
| Shared across threads | Mutex or Concurrent::Map |
| Lots of memoized methods | Memery gem |
| Need expiration/invalidation | Rails.cache.fetch instead |
Start with ||=. Upgrade when it breaks. Don’t reach for thread-safe patterns on request-scoped objects — you’re adding complexity for a problem that doesn’t exist there.
FAQ
Does Ruby’s ||= operator work with nil values?
No. ||= treats both nil and false as “not yet assigned” because it uses the || operator internally. If your method legitimately returns nil or false, use the defined?(@var) pattern instead to check whether the variable has been set regardless of its value.
Is memoization in Ruby thread-safe?
Instance-level memoization on request-scoped objects (controllers, service objects created per-request) is effectively thread-safe because each thread works with its own object instances. Memoization on shared objects like singletons, class variables, or objects stored in global configuration requires explicit synchronization with Mutex or Concurrent::Map.
Should I use a gem for memoization or write it manually?
For one or two memoized methods, write it manually — the patterns are simple enough. If you have a class with five or more memoized methods, consider the memery gem to reduce boilerplate. Avoid the older memoist gem on Ruby 3.x as it monkey-patches methods and can cause issues with method visibility changes in Ruby 3.0+.
How do I clear memoized values in tests?
The simplest approach is to create fresh instances in each test. If testing class-level memoization, add a reset! class method that sets the memoized variables to nil, and call it in your test setup or before block. The memery gem provides clear_memery_cache! on instances for this purpose.
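A minimal sketch of that reset! approach, using a plain Ruby class so it runs outside Rails (load_category_names is a hypothetical stand-in for the Category.pluck call from earlier):

```ruby
class Product
  def self.category_names
    @category_names ||= load_category_names.freeze
  end

  # Call from test setup / before blocks to drop memoized state.
  def self.reset!
    @category_names = nil
  end

  # Hypothetical stand-in for Category.pluck(:name) in the article's example.
  def self.load_category_names
    ["Books", "Games"]
  end
end

Product.category_names # => ["Books", "Games"]
Product.reset!         # next call recomputes from scratch
```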
When should I use Rails.cache.fetch instead of memoization?
Use Rails.cache.fetch when you need time-based expiration, cross-process sharing (multiple Puma workers or servers), or cache invalidation tied to model changes. Memoization is per-object, in-memory only, and dies with the object. If the cached value needs to survive beyond a single request or be shared across processes, it’s not a memoization problem — it’s a caching problem.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.