Ruby Ractors: Actual Parallel Processing Without Fighting the GVL

TTB Software
ruby
How to use Ruby Ractors for true parallel CPU-bound work in Ruby 3.3+. Practical examples, benchmarks, and pitfalls from production use.

Ruby Ractors give you true parallel execution on multiple CPU cores — something Threads alone cannot do because of the Global VM Lock (GVL, formerly GIL). If you have CPU-bound work and you’re still reaching for Sidekiq or shelling out to separate processes, Ractors might be what you need instead.

Here’s how they work, where they break, and when they’re actually worth using in Ruby 3.3.

What the GVL Problem Actually Is

Ruby threads are concurrent but not parallel for CPU work. The GVL ensures only one thread executes Ruby code at a time. Threads still help with I/O-bound work (network calls, file reads) because the GVL is released during I/O waits. But if you need to crunch numbers, parse large datasets, or do image processing — threads give you zero speedup.

# This does NOT run faster with threads (CPU-bound)
results = 4.times.map do
  Thread.new { (1..10_000_000).reduce(:+) }
end.map(&:value)

Each thread takes turns holding the GVL. Four threads, one core. You’re paying thread overhead for sequential execution.
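
The flip side is worth seeing too: for I/O-bound work the GVL is released during the wait, so threads do overlap. A quick sketch, using sleep as a stand-in for a network call:

require 'benchmark'

# I/O-bound: the GVL is released while each thread waits, so the waits overlap
elapsed = Benchmark.realtime do
  4.times.map { Thread.new { sleep 0.5 } }.each(&:join)
end
puts elapsed.round(2)  # ~0.5 rather than ~2.0 sequentially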

Ractors: The Fix

Ractors (Ruby Actors) run in isolated memory spaces, each with its own GVL. This means they actually run on separate CPU cores simultaneously.

# This DOES run in parallel
ractors = 4.times.map do
  Ractor.new do
    (1..10_000_000).reduce(:+)
  end
end

results = ractors.map(&:take)

On a 4-core machine running Ruby 3.3.6, here’s what I measured:

Approach      Time (seconds)   Speedup
Sequential    2.41             1x
4 Threads     2.38             ~1x
4 Ractors     0.64             3.8x

Nearly linear scaling. That’s real parallelism.
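
If you want to sanity-check numbers like these on your own machine, a minimal harness along these lines works; exact timings vary with core count and Ruby build:

require 'benchmark'

N = 10_000_000

sequential = Benchmark.realtime { 4.times { (1..N).reduce(:+) } }

threads = Benchmark.realtime do
  4.times.map { Thread.new { (1..N).reduce(:+) } }.each(&:join)
end

ractors = Benchmark.realtime do
  # Pass N in as an argument (Integers are shareable)
  4.times.map { Ractor.new(N) { |n| (1..n).reduce(:+) } }.map(&:take)
end

puts format("sequential: %.2fs  threads: %.2fs  ractors: %.2fs", sequential, threads, ractors)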

The Isolation Rule

Ractors achieve thread safety by forbidding shared mutable state. Each Ractor gets its own heap. You communicate between Ractors by sending and receiving messages — not by sharing objects.

# This works — sending a value
r = Ractor.new do
  name = Ractor.receive
  "Hello, #{name}"
end

r.send("Ruby")
puts r.take  # => "Hello, Ruby"

# This FAILS — the block captures outer mutable state
shared_array = [1, 2, 3]
Ractor.new do
  # raises at Ractor.new: "can not isolate a Proc because it accesses outer variables"
  shared_array << 4
end

Objects sent to a Ractor are either moved (the sender loses access) or copied (a deep copy, similar to a Marshal round-trip). Shareable objects (Integers, Symbols, frozen Strings, and other deeply frozen values) can cross Ractor boundaries without copying.

# Moved — sender can't use it anymore
data = [1, 2, 3]
r = Ractor.new do
  received = Ractor.receive
  received.sum
end
r.send(data, move: true)
puts r.take  # => 6
# data is now unusable in the main Ractor (touching it raises Ractor::MovedError)
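
For comparison, the default copy semantics and the shareability check look like this; note the Ractor mutates its own copy while the original stays untouched:

# Copied (the default) — the Ractor receives a deep copy
data = [1, 2, 3]
r = Ractor.new(data) do |arr|
  arr << 4
  arr
end
puts r.take.inspect  # => [1, 2, 3, 4]
puts data.inspect    # => [1, 2, 3] — unchanged

# Checking shareability before sending
Ractor.shareable?(42)            # => true  (Integers are shareable)
Ractor.shareable?("hi")          # => false (mutable String)
Ractor.shareable?("hi".freeze)   # => true  (frozen String)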

A Practical Example: Parallel CSV Processing

Say you have a 2GB CSV file and need to compute aggregates. Splitting the work across Ractors:

def process_chunk(lines)
  total = 0
  count = 0
  lines.each do |line|
    cols = line.split(',')
    amount = cols[3].to_f
    total += amount
    count += 1
  end
  { total: total, count: count }
end

# Read and split the file in the main Ractor
lines = File.readlines('transactions.csv')
header = lines.shift
chunk_size = (lines.size / 4.0).ceil  # round up so no trailing lines get dropped

ractors = 4.times.map do |i|
  chunk = lines[i * chunk_size, chunk_size] || []
  # Methods defined before the Ractor starts (like process_chunk above) are
  # callable inside it, as long as they don't touch non-shareable global state
  Ractor.new(chunk) do |data|
    process_chunk(data)
  end
end

results = ractors.map(&:take)
grand_total = results.sum { |r| r[:total] }
grand_count = results.sum { |r| r[:count] }

puts "Processed #{grand_count} records, total: #{grand_total}"

Note that I’m reading the file in the main Ractor and sending string chunks. You can’t pass a File handle or CSV::Row objects into a Ractor because they aren’t shareable.

Where Ractors Break Down

Ractors are powerful but limited. Here’s what trips people up:

Most gems don’t work inside Ractors. Any gem that uses global state, class variables, or mutable constants will raise Ractor::IsolationError. As of Ruby 3.3, Rails itself is completely incompatible with Ractors. ActiveRecord, ActiveSupport — none of it works inside a Ractor.

Debugging is painful. Stack traces from Ractor crashes aren’t always clear. A Ractor::RemoteError wraps the original exception, and you need to call .cause to get the real error:

r = Ractor.new do
  raise "something broke"
end

begin
  r.take
rescue Ractor::RemoteError => e
  puts e.cause.message  # => "something broke"
  puts e.cause.backtrace
end

Startup cost is real. Creating a Ractor is heavier than creating a Thread. For trivial workloads, the overhead eats the gains. In my benchmarks, Ractors only beat threads when each unit of work takes at least 10-50ms.
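
A rough way to see that overhead on your own machine (an illustration, not a rigorous benchmark; absolute numbers vary widely):

require 'benchmark'

n = 100
thread_ms = Benchmark.realtime { n.times { Thread.new { :noop }.join } } / n * 1000
ractor_ms = Benchmark.realtime { n.times { Ractor.new { :noop }.take } } / n * 1000

puts format("Thread startup: ~%.3f ms, Ractor startup: ~%.3f ms", thread_ms, ractor_ms)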

The API is still marked experimental. The first time you call Ractor.new, Ruby prints a warning that Ractor is experimental (silence it with -W:no-experimental or Warning[:experimental] = false). The core team has stated Ractors will remain in Ruby’s future, but the API might change between versions.

Ractor Pools for Reuse

Creating Ractors per-task is wasteful for repeated short jobs. Build a pool:

pool_size = 4

# Dispatcher: anything sent to the pipe is made available for a worker to take
pipe = Ractor.new do
  loop do
    Ractor.yield(Ractor.receive)
  end
end

# Workers pull jobs from the pipe and yield results
workers = pool_size.times.map do
  Ractor.new(pipe) do |input|
    loop do
      n = input.take
      Ractor.yield(n * n)
    end
  end
end

# Submit work — send plain values (Procs aren't shareable, so ship data, not lambdas)
20.times { |i| pipe.send(i) }

# Collect results in completion order
20.times do
  _ractor, result = Ractor.select(*workers)
  puts result
end

Ractor.select waits for any Ractor in the set to produce a value — similar to IO.select for file descriptors. This avoids blocking on a single slow worker.

When to Use Ractors vs Alternatives

Use Ractors when:

  • The work is CPU-bound and can be split into independent chunks
  • You control all the code (no gem dependencies inside the Ractor)
  • Work per chunk justifies the ~1ms Ractor creation overhead
  • You’re on Ruby 3.2+ and accept the experimental status

Use Threads when:

  • The work is I/O-bound (HTTP calls, database queries, file operations)
  • You need access to gems and shared libraries
  • Background jobs in Rails handle your concurrency needs

Use processes (fork or Sidekiq) when:

  • You need full Rails context in parallel workers
  • Isolation must include memory protection against segfaults
  • You’re running on a platform that supports fork (not Windows, not JRuby)

Ruby 3.3 Ractor Improvements

Ruby 3.3 brought several Ractor fixes worth noting (see the Ruby 3.3 release notes):

  • Ractor.select timeout parameter added
  • Reduced memory overhead per Ractor (down ~15% from 3.2)
  • Better error messages for isolation violations
  • Ractor.shareable? method for checking if an object can be shared

The core team (Koichi Sasada in particular) has been iterating on Ractor internals. Each Ruby release makes them more stable, but the experimental label persists because the API hasn’t been frozen yet.

FAQ

Can I use ActiveRecord inside a Ractor?

No. ActiveRecord relies heavily on class-level mutable state (connection pools, query caches, schema caches) that violates Ractor isolation rules. If you need database access in parallel, use Threads or separate processes. Ractors are for pure computation.

How many Ractors should I create?

Match your CPU core count for CPU-bound work. Creating more Ractors than cores adds scheduling overhead without speedup. Use Etc.nprocessors to detect available cores at runtime. For I/O-mixed workloads, you might benefit from slightly more, but at that point Threads are usually the better tool.
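
For example, a trivial sizing sketch (the per-worker computation here is just a placeholder):

require 'etc'

pool_size = Etc.nprocessors  # number of logical CPU cores, e.g. 8
ractors = pool_size.times.map do |i|
  Ractor.new(i) { |n| n * n }  # placeholder for real CPU-bound work
end
results = ractors.map(&:take)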

Are Ractors production-ready?

They’re stable enough for specific use cases — data processing pipelines, computation workers, parallel parsing. Companies like Shopify have experimented with them. But the experimental warning means you should pin your Ruby version, write thorough tests, and be prepared for API changes on upgrade. Don’t build your entire architecture around them yet.

What’s the difference between Ractor.send and Ractor.yield?

send pushes a message to a specific Ractor’s incoming port. yield makes the current Ractor’s result available to whoever calls take on it. Think of send as “push to” and yield as “make available for pull.” The pipe pattern above uses both: the dispatcher receives via send and workers output via yield.
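
A minimal illustration of both directions:

r = Ractor.new do
  msg = Ractor.receive           # pulls whatever was pushed in via send
  Ractor.yield("got #{msg}")     # makes a value available for take
  :done                          # the block's final value can also be taken
end

r.send("ping")
puts r.take  # => "got ping"
puts r.take  # => :done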

Will Ractors replace the GVL entirely?

Unlikely. The GVL exists to protect C extensions and internal VM state. Removing it would break every native gem. Instead, Ractors give each actor its own GVL, sidestepping the problem for code that can live within isolation constraints. The long-term vision (per Matz’s Ruby 3x3 goal) is that Ractors provide the parallel execution path while Threads remain for concurrent I/O.


About the Author

Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.
