Ruby Ractors: Real Parallel Processing Without the GVL
Ruby’s Global VM Lock (GVL, formerly GIL) has been the standard excuse for reaching for Go or Elixir when you need true parallelism. Ractors, introduced in Ruby 3.0, break that constraint. They give you parallel execution across CPU cores within a single Ruby process — no external tools, no forking tricks.
Here’s what Ractors actually look like in production-relevant code, where they help, and where they’ll burn you.
What Ractors Are (and Aren’t)
A Ractor is an actor-model concurrency primitive. Each Ractor gets its own GVL, which means multiple Ractors execute Ruby code simultaneously on different CPU cores. This is genuine parallelism, not the cooperative multitasking you get with Threads under the GVL.
The trade-off: Ractors cannot share mutable objects. Communication happens through message passing (sending and receiving), or by explicitly sharing frozen (immutable) objects.
# Ruby 3.3+
ractor = Ractor.new do
  msg = Ractor.receive
  msg.upcase
end

ractor.send("hello")
puts ractor.take # => "HELLO"
If you’ve used Erlang processes or Elixir’s GenServer, this model will feel familiar.
When Ractors Actually Help
Ractors shine for CPU-bound work that can be isolated. Think:
- Image processing or transformation pipelines
- Heavy computation (cryptographic hashing, data aggregation)
- Parsing large files where each chunk is independent
- Mathematical simulations
They do not help with I/O-bound work. Ruby Threads already release the GVL during I/O operations (network calls, file reads, database queries), so Threads handle I/O concurrency fine. Adding Ractors for I/O just introduces complexity with no speed gain.
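You can see this for yourself with a small sketch. Here sleep stands in for a network call or file read (like real I/O, it releases the GVL); the 0.2-second wait and the io_task name are just illustrative choices:

```ruby
require "benchmark"

# sleep releases the GVL, just like a real network call or file read
io_task = -> { sleep 0.2 }

sequential = Benchmark.realtime { 4.times { io_task.call } }

threaded = Benchmark.realtime do
  Array.new(4) { Thread.new { io_task.call } }.each(&:join)
end

puts "sequential: #{sequential.round(2)}s" # ~0.8s
puts "threaded:   #{threaded.round(2)}s"   # ~0.2s: the waits overlap
```

Four threads finish in roughly the time of one wait, which is why plain Threads remain the right tool for I/O concurrency.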
A Practical Benchmark: CPU-Bound Work
Let’s hash 100,000 strings using SHA256 — pure CPU work — and compare sequential, threaded, and Ractor-based approaches.
require "digest"
require "benchmark"

DATA = Array.new(100_000) { |i| "string_#{i}" }.freeze

def hash_batch(batch)
  batch.map { |s| Digest::SHA256.hexdigest(s) }
end

# Sequential
sequential_time = Benchmark.realtime do
  hash_batch(DATA)
end

# Threads (4)
threaded_time = Benchmark.realtime do
  threads = DATA.each_slice(25_000).map do |batch|
    Thread.new { hash_batch(batch) }
  end
  threads.map(&:value)
end

# Ractors (4)
ractor_time = Benchmark.realtime do
  ractors = DATA.each_slice(25_000).map do |batch|
    frozen_batch = batch.map(&:freeze).freeze
    Ractor.new(frozen_batch) do |b|
      # Digest is already loaded in the main Ractor; requiring libraries
      # inside a Ractor is fragile, so load everything up front
      b.map { |s| Digest::SHA256.hexdigest(s) }
    end
  end
  ractors.map(&:take)
end

puts "Sequential: #{sequential_time.round(3)}s"
puts "Threaded:   #{threaded_time.round(3)}s"
puts "Ractors:    #{ractor_time.round(3)}s"
On a 4-core machine running Ruby 3.3.0, typical results:
| Approach | Time | Speedup |
|---|---|---|
| Sequential | 0.42s | 1x |
| Threaded (4) | 0.41s | ~1x |
| Ractors (4) | 0.13s | ~3.2x |
Threads barely move the needle because the GVL serializes the SHA256 computation. Ractors bypass it entirely, delivering near-linear scaling with core count.
The Shareable Object Rules
This is where most people hit walls. Ractors enforce strict isolation, and Ruby 3.3 is pickier than you’d expect.
Objects you can send between Ractors:
- Frozen strings, arrays, hashes (and nested frozen structures)
- Numeric types (Integer, Float, Rational, Complex)
- Symbols, true, false, nil
- Ractor objects themselves
Objects you cannot share:
- Unfrozen strings or collections
- Procs and lambdas (they capture binding context)
- Most gem objects (ActiveRecord models, HTTP clients, etc.)
# This works — frozen data passed as an argument is shared directly
Ractor.new(["a".freeze, "b".freeze].freeze) do |arr|
  arr.map(&:upcase)
end

# Unfrozen arguments don't raise — they are deep-copied on the way in,
# which works but costs time and memory for large objects
Ractor.new(["a", "b"]) do |arr|
  arr.map(&:upcase)
end

# What actually fails: the block touching unshareable outer state
words = ["a", "b"]
Ractor.new { words.map(&:upcase) }
# => ArgumentError: can not isolate a Proc because it accesses outer variables
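Before sending anything across a boundary, you can ask the VM directly. Ractor.shareable? reports whether an object would be shared as-is or copied:

```ruby
# Ractor.shareable? reports whether an object can cross a Ractor
# boundary without being copied
puts Ractor.shareable?("hello")           # false: mutable string
puts Ractor.shareable?("hello".freeze)    # true
puts Ractor.shareable?([1, 2, 3])         # false: the array itself is mutable
puts Ractor.shareable?([1, 2, 3].freeze)  # true: frozen, with shareable elements
puts Ractor.shareable?(42)                # true: integers are always shareable
```

Checking this up front in tests is a cheap way to catch isolation surprises before they become runtime errors.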
You can also move objects (transfer ownership) instead of copying:
str = "hello"

ractor = Ractor.new do
  Ractor.receive
end

ractor.send(str, move: true)
# str is now inaccessible in the sending Ractor
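What "inaccessible" means in practice: the sender's reference becomes a Ractor::MovedObject, and any method call on it raises Ractor::MovedError. A small sketch (the buffer contents are arbitrary):

```ruby
buffer = "payload " * 100
worker = Ractor.new { Ractor.receive.bytesize }

worker.send(buffer, move: true)
puts worker.take # => 800

# The sender's reference now points at a Ractor::MovedObject;
# calling any method on it raises Ractor::MovedError
begin
  buffer.bytesize
rescue Ractor::MovedError
  puts "buffer was moved away"
end
```

Moving avoids the deep-copy cost for large payloads, at the price of giving up the object entirely on the sending side.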
Building a Worker Pool
For processing queues or batch jobs, a Ractor pool pattern keeps things manageable:
require "digest"

WORKER_COUNT = 4

def ractor_pool(items, worker_count: WORKER_COUNT)
  # Distributor: takes items off its inbox and yields each one to
  # whichever worker asks next
  pipe = Ractor.new do
    loop do
      Ractor.yield(Ractor.receive)
    end
  end

  workers = Array.new(worker_count) do
    Ractor.new(pipe) do |source|
      loop do
        item = source.take
        break if item == :done
        # Blocks can't cross Ractor boundaries, so the work is inlined here
        # (SHA256 hashing as a stand-in for your CPU-bound task)
        Ractor.yield(Digest::SHA256.hexdigest(item))
      end
    end
  end

  items.each { |item| pipe.send(item.freeze) }
  worker_count.times { pipe.send(:done) }

  workers.flat_map do |w|
    results = []
    loop do
      results << w.take
    rescue Ractor::ClosedError
      break
    end
    results.compact # drop each worker's nil return value after :done
  end
end
This distributes work across a fixed pool of Ractors, similar to how Sidekiq manages thread pools for background jobs. The difference: each worker runs on its own core.
Known Limitations in Ruby 3.3
Ractors are still marked as experimental. Some real constraints you’ll hit:
- require inside Ractors is fragile. Many gems fail when required inside a Ractor because they set constants or use global state. Require everything before spawning Ractors.
- No shared database connections. ActiveRecord connections can't cross Ractor boundaries. Each Ractor needs its own connection, which means you'll exhaust your connection pool fast. For Rails database work, stick with Threads.
- Debugging is painful. Stack traces from crashed Ractors are minimal. Ractor::RemoteError wraps the original exception, but you lose context.
- Constant access is restricted. Ractors can't access mutable constants from the main Ractor. Use Ractor.make_shareable for frozen constant data.
CONFIG = Ractor.make_shareable({
  timeout: 30,
  retries: 3,
  batch_size: 1000
})

# Now accessible from any Ractor
Ractor.new do
  puts CONFIG[:timeout] # => 30
end
- Performance overhead per Ractor. Creating a Ractor is heavier than creating a Thread — roughly 10-50x the startup cost in Ruby 3.3. Don’t create thousands of short-lived Ractors. Pool them.
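That startup gap is easy to measure yourself. A rough, machine-dependent sketch; the absolute numbers will vary widely, but the ordering should not:

```ruby
require "benchmark"

n = 100

# Spawn-and-finish cost: n short-lived threads vs. n short-lived ractors
thread_time = Benchmark.realtime do
  n.times { Thread.new {}.join }
end

ractor_time = Benchmark.realtime do
  n.times { Ractor.new {}.take }
end

puts "#{n} threads: #{thread_time.round(4)}s"
puts "#{n} ractors: #{ractor_time.round(4)}s"
```

If a profile like this shows Ractor creation dominating your runtime, that is the signal to switch to a long-lived pool.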
Ractors vs Threads vs Processes
| | Threads | Ractors | Processes (fork) |
|---|---|---|---|
| True parallelism | No (GVL) | Yes | Yes |
| Memory isolation | Shared | Isolated | Isolated (CoW) |
| Communication | Shared state | Message passing | IPC/pipes |
| Startup cost | Low (~10μs) | Medium (~500μs) | High (~10ms) |
| Best for | I/O-bound | CPU-bound | Full isolation |
| Gem compatibility | Full | Limited | Full |
For production Ruby debugging, Threads remain simpler. Ractors are the right choice when you’ve profiled and confirmed CPU is the bottleneck.
Getting Started Today
If you want to experiment with Ractors in an existing project:
- Identify a CPU-bound hotspot using ruby-prof or stackprof
- Extract the computation into a pure function that takes frozen input and returns frozen output
- Benchmark it with Benchmark.ips — compare sequential vs Ractor
- Start with 2-4 Ractors matching your core count; more isn't better
- Keep Ractors long-lived by using a pool pattern rather than creating per-task
The Ruby core team (particularly Koichi Sasada, Ractor’s creator) is actively improving stability. Ruby 3.4 promises better constant handling and reduced overhead. For now, Ractors work well for isolated, CPU-heavy tasks where you control the data flow.
FAQ
Are Ruby Ractors production-ready?
As of Ruby 3.3, Ractors remain marked as experimental. They work reliably for isolated CPU-bound tasks with simple data types. Complex applications with heavy gem dependencies will hit compatibility issues. Several companies use them in production for specific workloads (batch processing, data transformation) while keeping the rest of their stack on Threads.
Can I use Ractors with Ruby on Rails?
Not directly for request handling — ActiveRecord, ActionController, and most Rails internals rely on shared mutable state that violates Ractor isolation rules. You can use Ractors in background jobs or standalone scripts that process data independently of the Rails framework, but you’ll need to handle database connections carefully.
How many Ractors should I create?
Match your CPU core count. On a 4-core machine, 4 Ractors gives you near-optimal throughput for CPU-bound work. Going higher adds scheduling overhead without benefit since the OS can’t run more truly parallel threads than physical cores. Use Etc.nprocessors in Ruby to detect available cores programmatically.
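A minimal sketch of that detection (the worker_count variable is just an illustrative name):

```ruby
require "etc"

# Size the pool from the hardware rather than hard-coding it
cores = Etc.nprocessors
puts "#{cores} cores available"

worker_count = cores
```

On containerized deployments, keep in mind that Etc.nprocessors reports what the OS exposes, which may exceed your actual CPU quota.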
What’s the difference between Ractors and Fibers?
Fibers provide cooperative concurrency within a single thread — they yield control explicitly and never run in parallel. Ractors provide true parallelism across CPU cores. Fibers are ideal for managing many concurrent I/O operations (like async database queries), while Ractors handle CPU-intensive computation.
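The contrast fits in a few lines. This sketch shows a Fiber doing nothing until explicitly resumed, and pausing wherever it yields:

```ruby
# A Fiber runs only when resumed, and pauses wherever it yields
fiber = Fiber.new do
  puts "step 1"
  Fiber.yield          # hand control back to the caller
  puts "step 2"
end

fiber.resume           # prints "step 1", then pauses at Fiber.yield
puts "caller runs between resumes"
fiber.resume           # prints "step 2"
```

No parallelism is involved: everything above runs on one thread, with control passed back and forth explicitly.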
Will Ractors replace Threads in Ruby?
No. They serve different purposes. Threads handle I/O concurrency well because the GVL releases during I/O operations. Ractors handle CPU parallelism. Most Ruby applications are I/O-bound (web requests, database queries), so Threads will remain the primary concurrency tool. Ractors fill the gap for the subset of work that’s genuinely CPU-limited.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.