Rails Database Indexing: Fix Slow Queries With the Right Index Strategy
A missing database index is the most common cause of slow Rails queries. Add the wrong index and you waste disk space while writes get slower. Add the right one and a 3-second query drops to 2 milliseconds.
This guide covers how to pick the right index type for your Rails queries, how to find missing indexes in production, and when not to index.
How Rails Migrations Create Indexes
When you call add_index in a migration, Rails creates a B-tree index by default:
# db/migrate/20260226090000_add_index_to_orders.rb
class AddIndexToOrders < ActiveRecord::Migration[8.0]
def change
add_index :orders, :customer_id
end
end
This covers the simple case: looking up orders by customer_id. But most production queries aren’t this simple.
Composite Indexes: Column Order Matters
When a query filters on multiple columns, you need a composite index. The column order determines which queries it can serve.
add_index :orders, [:customer_id, :status, :created_at]
PostgreSQL uses composite indexes left-to-right. This index helps:
- WHERE customer_id = ? — yes
- WHERE customer_id = ? AND status = ? — yes
- WHERE customer_id = ? AND status = ? AND created_at > ? — yes
- WHERE status = ? — no (skips the leftmost column)
- WHERE customer_id = ? AND created_at > ? — partially (uses customer_id, then filters the matching entries by created_at)
The rule: put equality conditions first, range conditions last. If you frequently query WHERE status = ? AND created_at > ? without customer_id, you need a separate index.
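The left-to-right rule is mechanical enough to sketch in plain Ruby. This is a simplification (the real planner can still apply trailing columns as in-index filters), and the function name is illustrative:

```ruby
# Sketch: which leading prefix of a composite index a query can use,
# per PostgreSQL's left-to-right rule. Predicates are :eq or :range.
def usable_prefix(index_columns, predicates)
  prefix = []
  index_columns.each do |col|
    kind = predicates[col]
    break if kind.nil?        # gap: remaining columns can't be used
    prefix << col
    break if kind == :range   # a range condition ends the usable prefix
  end
  prefix
end

index = [:customer_id, :status, :created_at]

usable_prefix(index, customer_id: :eq, status: :eq, created_at: :range)
# => [:customer_id, :status, :created_at]
usable_prefix(index, status: :eq)
# => [] -- leftmost column missing
usable_prefix(index, customer_id: :eq, created_at: :range)
# => [:customer_id] -- gap at :status stops the match
```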
I’ve seen teams create six single-column indexes on one table when two well-designed composite indexes would cover every query pattern. Each unnecessary index slows down INSERT and UPDATE operations. On a table processing 10,000 writes per second, that overhead adds up.
Partial Indexes: Index Only What You Query
If 95% of your orders have status: 'completed' and you only query active ones:
add_index :orders, :customer_id, where: "status != 'completed'", name: 'idx_orders_active'
This partial index is a fraction of the size and faster to scan. In one project, replacing a full index on a 40-million-row table with a partial index reduced the index size from 860MB to 12MB. Query time dropped because PostgreSQL had less data to traverse.
Rails scopes work with partial indexes if the query conditions match:
class Order < ApplicationRecord
scope :active, -> { where.not(status: 'completed') }
end
# This query uses the partial index
Order.active.where(customer_id: 42)
Expression Indexes for Computed Lookups
When you search by a computed value, a regular index won’t help:
# This triggers a full table scan even with an index on 'email'
User.where("LOWER(email) = ?", email.downcase)
Create an expression index:
add_index :users, 'LOWER(email)', name: 'idx_users_lower_email', unique: true
In Rails 8 with PostgreSQL, expression indexes also let you index JSONB paths:
add_index :events, "(metadata->>'event_type')", name: 'idx_events_type'
This makes querying JSONB columns practical at scale without switching to a dedicated column.
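One caveat worth a sketch: the planner only considers an expression index when the query repeats the indexed expression. Assuming the idx_events_type index above and an illustrative Event model:

```ruby
# Matches the indexed expression, so the planner can use idx_events_type
Event.where("metadata->>'event_type' = ?", 'signup')

# A different expression (here wrapped in lower()) does NOT match the
# index, and falls back to a scan unless you index that expression too
Event.where("lower(metadata->>'event_type') = ?", 'signup')
```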
GIN Indexes for Full-Text Search and Arrays
B-tree indexes don’t work for array containment or full-text search. Use GIN (Generalized Inverted Index):
# For array columns
add_index :articles, :tags, using: :gin
# For full-text search
execute <<-SQL
CREATE INDEX idx_articles_search ON articles
USING gin(to_tsvector('english', title || ' ' || body));
SQL
GIN indexes are slower to update than B-tree indexes but much faster for containment queries (@>, &&) and full-text search (@@).
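The "inverted" part is easy to demystify with a toy Ruby model: map each element to the rows that contain it, and containment becomes an intersection of posting lists. This is an illustration of the idea, not PostgreSQL's internals:

```ruby
# Toy inverted index: tag => sorted list of row ids containing that tag.
rows = {
  1 => %w[rails postgres],
  2 => %w[rails performance],
  3 => %w[postgres indexing],
}

inverted = Hash.new { |h, k| h[k] = [] }
rows.each { |id, tags| tags.each { |t| inverted[t] << id } }

# Rows whose tags array contains ALL query tags (like tags @> ARRAY[...]):
# intersect the posting lists instead of scanning every row.
def contains_all(inverted, query_tags)
  lists = query_tags.map { |t| inverted.fetch(t, []) }
  lists.reduce { |a, b| a & b } || []
end

contains_all(inverted, %w[rails postgres])  # => [1]
contains_all(inverted, %w[postgres])        # => [1, 3]
```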
Finding Missing Indexes in Production
pg_stat_user_tables: Sequential Scan Detection
SELECT schemaname, relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_tup_read DESC
LIMIT 20;
Tables with high seq_scan counts and high seq_tup_read relative to idx_scan are candidates for missing indexes. A sequential scan on a 50-row lookup table is fine. A sequential scan on a 10-million-row table is a problem.
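To make that triage concrete, here is a hypothetical Ruby sketch of the "many sequential scans, many rows per scan" heuristic. The thresholds are arbitrary; tune them to your workload:

```ruby
# One row of pg_stat_user_tables output, reduced to the fields we need.
Stat = Struct.new(:relname, :seq_scan, :seq_tup_read, :idx_scan)

# Flag tables that are sequentially scanned often AND read many rows
# per scan on average -- small lookup tables fall out naturally.
def index_candidates(stats, min_scans: 1_000, min_avg_rows: 10_000)
  stats.select do |s|
    s.seq_scan >= min_scans &&
      (s.seq_tup_read.to_f / s.seq_scan) >= min_avg_rows
  end.map(&:relname)
end

stats = [
  Stat.new('orders',  50_000, 900_000_000, 120), # 18,000 rows per scan
  Stat.new('lookups', 80_000, 4_000_000,   0),   # 50 rows per scan: fine
]
index_candidates(stats)  # => ["orders"]
```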
pg_stat_user_indexes: Unused Index Detection
SELECT indexrelname, idx_scan, pg_size_pretty(pg_relation_size(indexrelid))
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND schemaname = 'public'
ORDER BY pg_relation_size(indexrelid) DESC;
Indexes with zero scans are wasting space and slowing writes. Before dropping one, reset the statistics (SELECT pg_stat_reset();) and wait a full business cycle, since some indexes are only used by monthly reports or batch jobs.
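Once an index is confirmed dead, a minimal migration sketch for the drop (the index name here is illustrative); algorithm: :concurrently makes Rails emit DROP INDEX CONCURRENTLY so writes aren't blocked:

```ruby
class DropUnusedOrdersIndex < ActiveRecord::Migration[8.0]
  # Required: concurrent index operations can't run inside a transaction
  disable_ddl_transaction!

  def change
    remove_index :orders, name: 'idx_orders_legacy_status',
                          algorithm: :concurrently
  end
end
```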
The active_record_doctor Gem
For automated detection in Rails:
# Gemfile
gem 'active_record_doctor', group: :development
# Then run:
# bundle exec rake active_record_doctor:unindexed_foreign_keys
# bundle exec rake active_record_doctor:extraneous_indexes
This catches the most common miss: foreign keys without indexes. Every belongs_to association should have an index on the foreign key column. Rails generators add them by default since Rails 5, but legacy apps often have gaps.
EXPLAIN ANALYZE: Verify Your Index Is Used
Adding an index doesn’t guarantee PostgreSQL uses it. The query planner makes cost-based decisions. Verify with EXPLAIN ANALYZE:
# In Rails console
Order.where(customer_id: 42, status: 'pending').explain(:analyze)
Look for Index Scan or Index Only Scan in the output. If you see Seq Scan despite having an index, common causes:
- Table is too small — PostgreSQL decides a sequential scan is faster (usually correct for tables under ~1,000 rows)
- Statistics are stale — run ANALYZE orders;
- Query doesn’t match index shape — check column order in composite indexes
- Type mismatch — comparing a string to an integer column bypasses the index
Covering Indexes (Index-Only Scans)
PostgreSQL 11+ supports INCLUDE columns in indexes. This lets PostgreSQL answer a query entirely from the index without touching the table:
# PostgreSQL-specific syntax in migration
execute <<-SQL
CREATE INDEX idx_orders_covering ON orders (customer_id, status)
INCLUDE (total_amount, created_at);
SQL
For a query like SELECT total_amount, created_at FROM orders WHERE customer_id = ? AND status = ?, PostgreSQL reads only the index. No table heap access. On I/O-bound workloads, this cuts query time significantly.
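If you would rather avoid raw SQL, newer Rails versions (7.1+, on the PostgreSQL adapter) expose the same clause through add_index's include: option. A sketch using the same hypothetical names:

```ruby
# Same covering index as the raw SQL above, via the migration DSL.
# The INCLUDE columns are stored in the index but are not key columns.
add_index :orders, [:customer_id, :status],
          include: [:total_amount, :created_at],
          name: 'idx_orders_covering'
```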
When Not to Index
Not every column needs an index:
- Low-cardinality boolean columns — an index on a boolean column with a 50/50 distribution rarely helps; PostgreSQL will sequential scan anyway. Fold the column into a partial or composite index instead.
- Write-heavy tables with rare reads — each index adds write overhead. A logging table that’s rarely queried doesn’t need five indexes.
- Columns only used in SELECT lists — indexes help WHERE, JOIN, ORDER BY, and GROUP BY. Indexing a column you only display doesn’t help.
- Small tables — under a few thousand rows, a sequential scan is often faster than an index lookup because of the overhead of traversing the B-tree.
Monitoring Index Bloat
PostgreSQL indexes bloat over time as rows are updated and deleted. Bloated indexes are larger than necessary and slower to scan.
Start by checking index sizes; the biggest indexes on heavily updated tables are the usual bloat suspects (for an exact bloat measurement, use the pgstattuple extension):
SELECT schemaname, relname, indexrelname,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
  idx_scan
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;
For heavily updated tables, schedule periodic REINDEX CONCURRENTLY (PostgreSQL 12+):
REINDEX INDEX CONCURRENTLY idx_orders_active;
The CONCURRENTLY option rebuilds the index without locking the table. Your zero-downtime migration strategy should include index maintenance.
Practical Checklist
- Audit foreign keys — every _id column used in associations needs an index
- Check your slow query log — PostgreSQL’s log_min_duration_statement set to 100ms catches the worst offenders
- Design composite indexes for your actual queries — don’t guess; check EXPLAIN ANALYZE
- Use partial indexes — if you have status-filtered queries, index only the rows that matter
- Remove unused indexes — they’re pure overhead
- Monitor bloat — especially on high-write tables
- Test with production-scale data — an index that’s unnecessary on 1,000 rows becomes critical at 1,000,000
FAQ
How many indexes is too many on a single table?
There’s no hard limit, but each index adds write overhead. I typically aim for under 8 indexes per table. If you have more, you probably have redundant indexes — a composite index on (a, b) makes a single-column index on a redundant. Use pg_stat_user_indexes to find indexes with zero scans and drop them.
Should I add indexes to columns used in ORDER BY?
Yes, if the ORDER BY is on a large result set. PostgreSQL can use a B-tree index to return rows in sorted order without a separate sort step. For queries like ORDER BY created_at DESC LIMIT 20, an index on created_at turns an expensive sort into a simple index scan. Combine it with your WHERE conditions in a composite index for best results.
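A migration sketch of that combination, with illustrative names; the order: option stores created_at descending so the index order matches the query's sort:

```ruby
# Serves queries like:
#   Order.where(customer_id: 42).order(created_at: :desc).limit(20)
# The filter column leads, the sort column follows.
add_index :orders, [:customer_id, :created_at],
          order: { created_at: :desc },
          name: 'idx_orders_customer_recent'
```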
Do I need to add indexes manually if I use Rails references?
Since Rails 5, t.references :customer in a migration automatically adds an index on customer_id. But if you’re working with a legacy app or created columns manually with add_column, the index won’t be there. Run active_record_doctor to catch these gaps.
Can I create indexes without downtime?
Yes. Use algorithm: :concurrently in Rails migrations:
class AddIndexConcurrently < ActiveRecord::Migration[8.0]
disable_ddl_transaction!
def change
add_index :orders, :customer_id, algorithm: :concurrently
end
end
The disable_ddl_transaction! is required because concurrent index creation can’t run inside a transaction. This builds the index while the table remains fully accessible for reads and writes. It takes longer than a regular index build, but your app stays up. See our guide on zero-downtime database migrations for more on this.
How do I know if PostgreSQL is using my index?
Run EXPLAIN ANALYZE on your query. Look for Index Scan or Index Only Scan in the plan output. If PostgreSQL chooses Seq Scan on a large table despite your index, check that your query conditions match the index columns in the correct order, and that table statistics are up to date (ANALYZE table_name).
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.