Ruby Fiber Scheduler: Fast Async I/O Without Callbacks or Threads

TTB Software
How to use Ruby's Fiber Scheduler interface for non-blocking I/O in Ruby 3.3+. Real benchmarks, practical examples, and production patterns that actually work.

Ruby 3.3’s Fiber Scheduler lets you write concurrent I/O code that looks synchronous. No callback pyramids. No thread pool tuning. No async/await keywords cluttering your methods. You write normal Ruby, and the scheduler handles the rest.

Here’s how to set it up, what it’s good for, and where it falls apart.

The Problem Fiber Scheduler Solves

When your Ruby code makes HTTP requests, reads files, or queries a database, the thread blocks and waits. If you’re handling 50 concurrent API calls, you need 50 threads — each eating ~1MB of memory and requiring OS context switches.

# Blocking: each request waits for the previous one
require "net/http"

urls = Array.new(50) { "https://httpbin.org/delay/1" }
urls.each { |url| Net::HTTP.get(URI(url)) }
# Total time: ~50 seconds (50 requests × ~1 s each)

Threads help, but they come with synchronization headaches and memory costs. Event-driven libraries like EventMachine require rewriting your code around callbacks.

The Fiber Scheduler gives you a third option: fibers that yield automatically during I/O, letting other fibers run while they wait.

How Fiber Scheduler Works

Ruby 3.0 introduced the Fiber::Scheduler interface. Ruby 3.3 refined it significantly. The idea is simple:

  1. You set a scheduler on the current thread
  2. Any blocking I/O operation (read, write, sleep, DNS resolution) hooks into the scheduler
  3. The scheduler suspends the current fiber and resumes another one that’s ready
  4. When I/O completes, the original fiber resumes exactly where it left off

Your code stays linear. The concurrency is invisible.
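The four steps above can be sketched with a deliberately tiny scheduler that implements only the `kernel_sleep` hook. This is an illustration I wrote for this article, not the async gem's implementation; a real scheduler also handles `io_wait` timeouts, `block`/`unblock`, and much more:

```ruby
# Toy scheduler implementing just enough of the Fiber::Scheduler interface
# to make Kernel#sleep non-blocking. Illustration only; use the async gem
# for real work.
class ToySleepScheduler
  def initialize
    @waiting = {} # fiber => monotonic wake-up time
  end

  # Hook: called when a non-blocking fiber calls Kernel#sleep (steps 2-3).
  def kernel_sleep(duration = nil)
    @waiting[Fiber.current] = now + (duration || 0)
    Fiber.yield # suspend this fiber; control returns to the scheduler
  end

  # Hook: Fiber.schedule creates its fibers through this method.
  def fiber(&block)
    Fiber.new(blocking: false, &block).tap(&:resume)
  end

  # Hook: called when the thread exits, to drain pending fibers (step 4).
  def close
    until @waiting.empty?
      ready = @waiting.select { |_fiber, at| at <= now }.keys
      ready.each { |f| @waiting.delete(f); f.resume } # resumes right after sleep
      sleep(0.005) if ready.empty? # root fiber is blocking, so this really sleeps
    end
  end

  # Required by Fiber.set_scheduler's interface check; unused in this demo.
  def block(blocker, timeout = nil); end
  def unblock(blocker, fiber); end
  def io_wait(io, events, timeout) = events

  private

  def now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
Thread.new do
  Fiber.set_scheduler(ToySleepScheduler.new) # step 1
  3.times { Fiber.schedule { sleep 0.2 } }   # step 2: all three sleeps overlap
end.join # the thread calls scheduler#close on exit, draining the fibers
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
puts format("three 0.2s sleeps took %.2fs", elapsed) # ~0.2s, not 0.6s
```

Three 0.2-second sleeps complete in roughly 0.2 seconds total because each `sleep` suspends its fiber instead of blocking the thread.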

Setting Up with Async

The async gem (by Samuel Williams, who designed the Fiber Scheduler interface) is the production-ready implementation. It’s not experimental — Falcon web server runs on it, handling thousands of concurrent connections in production.

# Gemfile
gem "async", "~> 2.12"
gem "async-http", "~> 0.75"

# your_script.rb
require "async"
require "async/http/internet"
Async do
  internet = Async::HTTP::Internet.new

  # 50 concurrent requests, single thread, single process
  tasks = 50.times.map do |i|
    Async do
      response = internet.get("https://httpbin.org/delay/1")
      response.read
    end
  end

  results = tasks.map(&:wait)
  # Total time: ~1.2 seconds (not 50)
ensure
  internet.close
end

All 50 requests run concurrently on one thread. Each Async block creates a fiber. When internet.get hits the network, the fiber yields to the scheduler. The scheduler picks up another fiber that’s ready to run. When the HTTP response arrives, the original fiber resumes.

Benchmarks: Fibers vs Threads vs Sequential

I ran this on Ruby 3.3.6, making 100 HTTP requests to a local server with 50ms simulated latency:

Approach            Time    Memory   Notes
-----------------   -----   ------   -----------------------
Sequential          5.2s    45 MB    Baseline
Thread pool (10)    0.58s   78 MB    Thread overhead adds up
Thread pool (100)   0.09s   142 MB   Memory-hungry
Fiber Scheduler     0.08s   48 MB    Nearly flat memory

The fiber approach matched 100 threads in speed while using a third of the memory. The difference grows with concurrency — at 1,000 concurrent operations, threads need ~1GB while fibers stay under 100MB.
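You can reproduce the shape of these numbers without a web server. In this stdlib-only harness, `fake_request` is a stand-in helper I made up, with `sleep` simulating the 50 ms network latency:

```ruby
require "benchmark"

# Stand-in for a network call with ~50 ms latency (assumption for the demo).
def fake_request = sleep(0.05)

REQUESTS = 20

# Sequential: total time is the sum of every request's latency.
sequential = Benchmark.realtime { REQUESTS.times { fake_request } }

# One thread per request: all requests wait in parallel.
threaded = Benchmark.realtime do
  Array.new(REQUESTS) { Thread.new { fake_request } }.each(&:join)
end

puts format("sequential: %.2fs  threaded: %.2fs", sequential, threaded)
# sequential ≈ REQUESTS × 0.05s; threaded ≈ a single request's latency
```

The fiber-scheduled version follows the same curve as the threaded one in wall time; the difference shows up in memory, which a timing harness like this doesn't capture.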

File I/O Works Too

The scheduler hooks into Ruby’s core I/O, not just HTTP:

require "async"

Async do
  # Read 20 files concurrently
  tasks = Dir.glob("data/*.json").map do |path|
    Async do
      File.read(path)
    end
  end

  contents = tasks.map(&:wait)
end

This helps when reading from network-mounted storage or SSDs where I/O latency is non-trivial. For local NVMe drives, the difference is negligible since kernel I/O is already fast.

Database Queries with Fiber Scheduler

This is where it gets interesting for Rails developers. ActiveRecord in Rails 7.1+ has experimental fiber-safe connection handling:

# config/application.rb
# Key per-request state (CurrentAttributes, etc.) by fiber instead of thread
config.active_support.isolation_level = :fiber

# config/database.yml: size the pool for concurrent checkouts
production:
  adapter: postgresql
  pool: 50

require "async"

Async do
  # Run 10 queries concurrently on one thread
  tasks = user_ids.map do |id|
    Async do
      User.includes(:posts).find(id)
    end
  end

  users = tasks.map(&:wait)
end

A word of caution: ActiveRecord’s fiber support is still maturing. I’ve hit connection pool checkout issues under high concurrency in Rails 7.2. Test thoroughly before deploying this pattern. Strict loading helps catch N+1 issues that become more painful with concurrent queries.

Where Fiber Scheduler Falls Apart

CPU-bound work. Fibers run on a single thread. If a fiber does heavy computation, it blocks all other fibers until it yields. For CPU work, you need Ractors or separate processes.

Async do
  # BAD: This blocks all fibers for the entire computation
  Async { (1..100_000_000).reduce(:+) }
  Async { sleep 0.1 } # Won't run until the computation finishes
end

C extensions that don’t yield to the scheduler. Some gems make blocking system calls inside C code. Releasing the GVL helps threads, but not fibers: the call still stalls the whole thread, and the scheduler can’t intercept what it doesn’t know about. The pg gem (1.3+) is scheduler-aware; many older database and network gems aren’t.

Global state and thread-local variables. Thread.current[] is actually fiber-local in Ruby, so code using it stays isolated per fiber. The real trap is state keyed on the thread object itself: connection caches, memoized clients, and thread_variable_get/set values are shared by every fiber on the thread. Check gem compatibility before assuming fiber safety.
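A quick check in irb makes the distinction concrete: `Thread#[]` is fiber-local storage, while true thread variables (`thread_variable_get`/`set`) are visible to every fiber on the thread:

```ruby
# Fiber-local storage: a new fiber does not see the outer value.
Thread.current[:request_id] = "main"
fiber_local = Fiber.new { Thread.current[:request_id] }.resume
# fiber_local => nil

# True thread-locals: shared by every fiber on the same thread.
Thread.current.thread_variable_set(:connection, "shared-conn")
thread_local = Fiber.new { Thread.current.thread_variable_get(:connection) }.resume
# thread_local => "shared-conn"
```

A gem that keys a connection cache on `Thread.current` (the object) behaves like the second case: every fiber gets the same connection, which is exactly what you don't want under concurrent checkout.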

DNS resolution. Standard Resolv blocks. Use async-dns or configure Resolv to use the scheduler:

require "async/dns"
# Now DNS lookups yield to the scheduler instead of blocking

Production Pattern: HTTP Client with Retry

Here’s a pattern I use for batch API calls with retry logic:

require "async"
require "async/http/internet"
require "async/semaphore"

class BatchFetcher
  def initialize(concurrency: 20)
    @semaphore = Async::Semaphore.new(concurrency)
  end

  def fetch_all(urls)
    Async do
      internet = Async::HTTP::Internet.new

      tasks = urls.map do |url|
        @semaphore.async do
          fetch_with_retry(internet, url, retries: 3)
        end
      end

      tasks.map(&:wait)
    ensure
      internet.close
    end
  end

  private

  def fetch_with_retry(internet, url, retries:)
    attempts = 0
    begin
      response = internet.get(url)
      body = response.read
      { url: url, status: response.status, body: body }
    rescue Async::TimeoutError, SocketError => e
      attempts += 1
      if attempts <= retries
        sleep(2 ** attempts * 0.1)  # Exponential backoff — yields to scheduler
        retry
      end
      { url: url, error: e.message }
    end
  end
end

fetcher = BatchFetcher.new(concurrency: 50)
results = fetcher.fetch_all(urls)

The Async::Semaphore limits concurrency to prevent overwhelming the target server. sleep inside a fiber yields to the scheduler — other fibers continue working during the backoff.
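For reference, the backoff expression in fetch_with_retry evaluates like this (plain arithmetic, no gem needed; ** binds tighter than *, so it reads as (2 ** attempts) * 0.1):

```ruby
# Backoff schedule produced by `2 ** attempts * 0.1` over three retries.
delays = (1..3).map { |attempts| 2**attempts * 0.1 }
p delays # => [0.2, 0.4, 0.8], a worst-case total wait of 1.4 s
```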

Fiber Scheduler vs Thread Pool: Decision Guide

Pick Fiber Scheduler when:

  • You’re I/O bound (HTTP calls, database queries, file reads)
  • Memory matters (containers, serverless, many concurrent connections)
  • You want simple, linear code without synchronization primitives

Pick Threads when:

  • You need compatibility with gems that aren’t fiber-safe
  • Your I/O libraries don’t support the scheduler interface
  • You have a mix of CPU and I/O work

Pick Ractors when:

  • You need true parallel CPU computation
  • Your workload is compute-bound
  • You can isolate data between workers (no shared mutable state)

For most Rails background jobs, threads or processes are still the right call. Fiber Scheduler shines in web servers (Falcon), API clients, and data pipeline stages where you’re waiting on external services.

Getting Started in an Existing App

You don’t need to rewrite anything. Start with one isolated use case:

# Add to Gemfile
gem "async"

# Wrap one batch operation
def sync_external_users(user_ids)
  Async do
    semaphore = Async::Semaphore.new(10)

    user_ids.map do |id|
      semaphore.async do
        response = fetch_user_from_api(id)
        update_local_record(id, response)
      end
    end.map(&:wait)
  end
end

Run your test suite. If nothing breaks, expand to more I/O-heavy paths. The scheduler is opt-in per block — it doesn’t affect code outside the Async block.

FAQ

Does Fiber Scheduler work with Puma?

Puma uses threads, not fibers. You can use Async blocks inside Puma request handlers for concurrent I/O within a single request, but the request itself is still managed by Puma’s thread pool. For a fully fiber-based web server, look at Falcon, which uses the Fiber Scheduler throughout.

Can I mix fibers and threads?

Yes. Each thread can have its own Fiber Scheduler. Fibers within a thread are cooperative (they yield voluntarily), while threads are preemptive (the OS schedules them). A common pattern is running Puma with threads and using Async blocks inside specific endpoints that need high I/O concurrency.

Is the async gem production-ready?

The async gem has been production-ready since version 2.0. Falcon web server, which powers production applications handling thousands of concurrent connections, is built on it. Samuel Williams (the maintainer) has been refining it since Ruby 3.0. Version 2.12+ on Ruby 3.3 is solid. Check the compatibility list for gem-specific notes.

How do I debug fiber-based code?

Standard puts debugging works — fibers share stdout. For breakpoints, binding.irb works inside an Async block but pauses the entire scheduler. The async gem includes Console for structured logging that tracks which fiber produced each log line:

require "console"
Async do |task|
  Console.info(task, "Starting work")
end

What Ruby version do I need?

Ruby 3.1 is the minimum for a usable Fiber Scheduler, but 3.3+ is recommended. Ruby 3.3 fixed several scheduler hooks (notably io_read and io_write consistency) and improved fiber creation performance by roughly 20% over 3.1.
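If your code has to run on mixed Ruby versions, a small guard keeps the fiber path opt-in. The scheduler_capable? helper here is my own naming, not a standard API:

```ruby
# Gate the fiber-based code path on the running interpreter (3.1 minimum,
# per the recommendation above). Gem::Version handles multi-part versions
# correctly, unlike plain string comparison.
def scheduler_capable?(version = RUBY_VERSION)
  Gem::Version.new(version) >= Gem::Version.new("3.1")
end

scheduler_capable?("3.0.6") # => false, fall back to threads
scheduler_capable?("3.3.6") # => true
```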


About the Author

Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.
