Rails Full-Text Search: pg_search, tsvector and Postgres Without Elasticsearch
The first production incident I ever got paged for that involved search was also the most unnecessary one. A client’s staging environment was completely down — not slow, down — because their three-node Elasticsearch cluster had crashed and the Rails app had no fallback path. The dataset was fifty thousand product records. I’ve seen SQLite handle fifty thousand rows faster than that cluster handled queries when it was healthy. We replaced it with Postgres full-text search that weekend. That was eight years ago. The pg_search-backed search is still running without a single maintenance incident.
Elasticsearch has legitimate uses. If you’re building a search engine, need cross-language stemming across dozens of locales, or run autocomplete on tens of millions of rows, it earns its operational weight. For the other ninety percent of Rails applications — the ones with under a million rows, a single Postgres database, and a team of fewer than ten engineers — Postgres handles the job, and pg_search makes it approachable.
Setting Up Rails Full-Text Search with pg_search
Add the gem:
# Gemfile
gem "pg_search", "~> 2.3"
No additional services, no separate process to keep running, no index synchronization layer. pg_search generates SQL that Postgres executes natively using its built-in full-text search engine.
Include it in any model you want to search:
class Article < ApplicationRecord
  include PgSearch::Model

  pg_search_scope :search,
                  against: [:title, :body],
                  using: {
                    tsearch: { prefix: true }
                  }
end
against declares which columns to search. tsearch uses Postgres’s native tokenizer — it breaks text into lexemes and applies stemming (“deploying”, “deployed”, and “deployment” all reduce to the lexeme “deploy”), so queries match on stems rather than exact strings. Without a dedicated index, Postgres computes these vectors at query time; the inverted GIN index that makes this fast at scale comes later in this article. The prefix: true option makes “rails” match “railscast” and “railsconf” — necessary for any search-as-you-type interface.
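The stemming idea can be sketched in a few lines of plain Ruby. This naive suffix-stripper is only a toy stand-in for the Snowball stemmer Postgres actually uses, but it shows why those three words collide on a single lexeme:

```ruby
# Toy stemmer: strip a few common English suffixes.
# (Postgres uses a real Snowball stemmer; this is just the intuition.)
def naive_stem(word)
  word.downcase.sub(/(ing|ed|ment|s)\z/, "")
end

%w[deploying deployed deployment].map { |w| naive_stem(w) }
# => ["deploy", "deploy", "deploy"]
```

Because both the indexed text and the query pass through the same stemmer, a search for any of the three forms finds documents containing any other.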
Querying looks exactly like any other scope:
Article.search("turbo streams")
# Generates SQL along these lines (simplified; the real query wraps
# each column in coalesce and, with prefix: true, appends :* to terms):
# WHERE to_tsvector('english', articles.title || ' ' || articles.body)
#       @@ to_tsquery('english', 'turbo:* & streams:*')
Chainable with standard scopes:
Article.published.where(category: "rails").search("background jobs")
Weighted Columns and Ranked Results
Not all matches carry equal weight. A search for “sidekiq” that hits the article title signals higher relevance than one buried in the body. pg_search maps columns to Postgres’s four weight tiers — A, B, C, D — whose default multipliers are 1.0, 0.4, 0.2, and 0.1, so an A match scores ten times higher than a D match.
pg_search_scope :search,
                against: {
                  title: "A",
                  subtitle: "B",
                  body: "C"
                },
                using: {
                  tsearch: { prefix: true, dictionary: "english" }
                }
pg_search scopes already sort results by relevance, most relevant first, so no explicit ordering is needed. To display or inspect the score, expose the virtual rank column:
Article.search("kamal deploy").with_pg_search_rank
with_pg_search_rank adds a pg_search_rank float column to the SELECT, readable as an attribute on each result. For a search results page where users expect the most relevant result first, the default ranking is usually all the logic you need. Custom scoring functions are possible but rarely worth the complexity until you have evidence that users are unsatisfied with the default ranking.
Trigram Similarity for Typo-Tolerant Rails Full-Text Search
Full-text search is exact on lexemes. Search for “deplyo” and you get nothing. Users make typos, mobile keyboards autocorrect poorly, and product names get mangled. pg_search supports trigram similarity via Postgres’s pg_trgm extension, which breaks strings into overlapping three-character sequences and computes a similarity score.
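The trigram decomposition is simple enough to sketch in plain Ruby. This is a simplified version of what pg_trgm does (the real extension pads each word separately and has more edge-case handling), but the scoring idea is the same: shared trigrams divided by total distinct trigrams.

```ruby
require "set"

# Slice a lowercased, space-padded string into overlapping
# three-character windows, as pg_trgm does.
def trigrams(str)
  padded = "  #{str.downcase} "
  (0..padded.length - 3).map { |i| padded[i, 3] }.to_set
end

# Similarity = shared trigrams / total distinct trigrams.
def similarity(a, b)
  ta, tb = trigrams(a), trigrams(b)
  (ta & tb).size.to_f / (ta | tb).size
end

trigrams("deploy").to_a
# => ["  d", " de", "dep", "epl", "plo", "loy", "oy "]

similarity("deploy", "deplyo") # 0.4: a typo still shares most trigrams
similarity("deploy", "banana") # 0.0: no overlap at all
```

This is why trigram search survives typos: transposing two letters destroys only the trigrams that cross the transposition point, leaving plenty of overlap for a meaningful score.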
Enable the extension in a migration:
class AddPgTrgmExtension < ActiveRecord::Migration[8.0]
  def change
    enable_extension "pg_trgm"
  end
end
Combine trigram with full-text in a single scope:
pg_search_scope :search,
                against: [:title, :body],
                using: {
                  tsearch: { prefix: true },
                  trigram: { word_similarity: true }
                }
With both engines combined, “deplyo” matches “deploy” via trigram similarity, while “deploying rails 8” matches via prefix full-text search. By default pg_search ranks results with the tsearch score alone; pass ranked_by: ":tsearch + :trigram" to blend both scores. You get typo tolerance without any additional infrastructure.
The word_similarity option — available since PostgreSQL 9.6 — matches individual words rather than the full query string against the full column. This matters when searching a title like “Deploying Rails 8 to Production with Kamal 2” for the query “kamal” — word similarity finds it; full-string similarity would score it near zero.
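A rough pure-Ruby intuition for the difference (pg_trgm's real algorithm pads per word and scans trigram runs, so this simplified sketch only captures why word similarity wins on long columns):

```ruby
require "set"

def trigrams(str)
  padded = "  #{str.downcase} "
  (0..padded.length - 3).map { |i| padded[i, 3] }.to_set
end

def similarity(a, b)
  ta, tb = trigrams(a), trigrams(b)
  (ta & tb).size.to_f / (ta | tb).size
end

# word_similarity (simplified): best match of the query
# against any single word of the text.
def word_similarity(query, text)
  text.split.map { |word| similarity(query, word) }.max
end

title = "Deploying Rails 8 to Production with Kamal 2"
word_similarity("kamal", title) # 1.0: exact match on one word
similarity("kamal", title)      # low: diluted by the rest of the title
```

Against a long title, the query's handful of trigrams drowns in the full string's trigram set; comparing word by word keeps the denominator small.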
tsvector Columns: Pre-Computing the Index for Scale
The default pg_search configuration builds the tsvector on every query using to_tsvector(column). For tables under a hundred thousand rows, this is fast enough. For larger datasets, or for endpoints under sustained load, pre-compute the vector and let Postgres use a GIN index.
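The speedup comes from the inverted-index structure a GIN index provides. A minimal sketch of the idea, with hypothetical data: map each lexeme to the list of rows containing it, so a query walks short posting lists instead of scanning and tokenizing every row.

```ruby
# Hypothetical pre-tokenized documents: row id => lexemes.
docs = {
  1 => %w[deploy rails kamal],
  2 => %w[rails turbo stream],
  3 => %w[deploy docker]
}

# Build the inverted index: lexeme => list of row ids.
index = Hash.new { |h, k| h[k] = [] }
docs.each { |id, lexemes| lexemes.each { |lex| index[lex] << id } }

index["deploy"]                  # => [1, 3]
index["rails"] & index["deploy"] # => [1]  (AND query via list intersection)
```

A GIN index on a tsvector column is this structure maintained by Postgres, which is why an `@@` query against an indexed column no longer needs to touch rows that share no lexemes with the query.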
Add the column and index:
class AddSearchVectorToArticles < ActiveRecord::Migration[8.0]
  def change
    add_column :articles, :search_vector, :tsvector
    add_index :articles, :search_vector,
              using: :gin,
              name: "index_articles_on_search_vector"
  end
end
Maintain it with a Postgres trigger so it stays in sync without touching application code:
class AddSearchVectorTrigger < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      CREATE OR REPLACE FUNCTION articles_search_vector_update()
      RETURNS trigger AS $$
      BEGIN
        NEW.search_vector :=
          setweight(to_tsvector('english', coalesce(NEW.title, '')), 'A') ||
          setweight(to_tsvector('english', coalesce(NEW.body, '')), 'C');
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER articles_search_vector_trigger
      BEFORE INSERT OR UPDATE ON articles
      FOR EACH ROW EXECUTE FUNCTION articles_search_vector_update();
    SQL
  end

  def down
    execute <<~SQL
      DROP TRIGGER IF EXISTS articles_search_vector_trigger ON articles;
      DROP FUNCTION IF EXISTS articles_search_vector_update();
    SQL
  end
end
Tell pg_search to use the pre-computed column instead of the function:
pg_search_scope :search,
                against: [:title, :body],
                using: {
                  tsearch: {
                    prefix: true,
                    tsvector_column: "search_vector"
                  }
                }
Queries now hit the GIN index directly. On a half-million-row table, this brings search response times from 200–400ms down to 5–20ms.
Backfill existing rows in batches to avoid locking the table:
class BackfillArticlesSearchVector < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    Article.in_batches(of: 1_000) do |batch|
      batch.touch_all
      sleep 0.05
    end
  end
end
touch_all updates updated_at on each row, which fires the trigger and populates search_vector. The sleep keeps it gentle on production databases. For the general principles behind zero-downtime schema changes like this, the database migrations zero-downtime guide is worth reading before you run this on a live table.
Multi-Model Search Across Your Rails App
A common requirement: search articles, products, and users in one query and render a unified results page. pg_search’s multisearch handles this without joining unrelated tables.
Declare each model as multisearchable:
class Article < ApplicationRecord
  include PgSearch::Model
  multisearchable against: [:title, :body]
end

class Product < ApplicationRecord
  include PgSearch::Model
  multisearchable against: [:name, :description]
end
pg_search maintains a pg_search_documents table — one row per searchable record — and keeps it in sync via callbacks on save and destroy. Query it:
results = PgSearch.multisearch("rails deployment")
results.each do |result|
  record = result.searchable # the original Article or Product
  puts "#{record.class.name}: #{record.try(:title) || record.try(:name)}"
end
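On a unified results page you usually group the mixed result set by model before rendering each section. A minimal sketch with hypothetical stand-in objects for pg_search_documents rows (no database needed):

```ruby
# Stand-ins for pg_search_documents rows, illustrating the
# grouping pattern; real results respond to the same methods.
Doc = Struct.new(:searchable_type, :searchable_id)

results = [
  Doc.new("Article", 1),
  Doc.new("Product", 7),
  Doc.new("Article", 3)
]

sections = results.group_by(&:searchable_type)
# sections.keys => ["Article", "Product"]
# sections["Article"].map(&:searchable_id) => [1, 3]
```

In a view this gives you one heading per model with its matches underneath, without separate queries per type.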
Filter by model type when needed:
PgSearch.multisearch("kamal").where(searchable_type: "Article")
Add indexes on the documents table to prevent slow scans as it grows:
add_index :pg_search_documents, :searchable_type
add_index :pg_search_documents, [:searchable_type, :searchable_id], unique: true
Rebuild the index after adding a model to multisearch for the first time:
bin/rails pg_search:multisearch:rebuild[Article]
bin/rails pg_search:multisearch:rebuild[Product]
When Postgres Full-Text Search Is Not Enough
After nineteen years of Rails, I’ve genuinely needed a dedicated search service exactly four times. Each time, at least one of these was true:
Cross-language stemming on a single index. Dutch and English records in the same table, with language-specific stop words and stemmers. Postgres tsvector supports multiple dictionaries, but routing each document through the correct one based on a language column, and then querying correctly across the mix, gets fragile fast.
Autocomplete at serious scale. Trigram similarity on a GIN index handles a few million rows well. Beyond that — think product catalogs with tens of millions of SKUs and fifty millisecond SLAs — you want a purpose-built tool like Typesense or Algolia.
Faceted search with aggregations. If you need live counts of results per category, filtered by a date range, sorted by a custom business score, you are essentially rebuilding Elasticsearch’s aggregation pipeline in SQL. At some point the query becomes unmaintainable.
Search analytics. Knowing what users searched for, which results they clicked, and what returned zero results is genuinely valuable product data. The Elasticsearch ecosystem ships this instrumentation out of the box; with Postgres you build it yourself.
If none of those conditions apply — and for most applications they don’t — ship pg_search. You can always migrate to a dedicated service later when you actually hit the limits. Deferring that complexity until it earns its place is the same discipline as incremental Rails upgrades: don’t solve tomorrow’s problems today.
pg_search gives you ranked, weighted, typo-tolerant full-text search backed by your existing Postgres database in about twenty minutes of setup. Pre-compute a tsvector column and add a GIN index when your data grows past a hundred thousand rows. Add trigram similarity when users start complaining about typos. Reach for a dedicated search service when you genuinely hit the use cases above — not before.
Building search into a Rails app and unsure whether to reach for Elasticsearch? TTB Software has been shipping Rails for nineteen years. We’ll give you an honest answer about what you actually need — and build it properly.
Frequently Asked Questions
What is the difference between pg_search and Elasticsearch for Rails?
pg_search uses Postgres’s built-in full-text search engine — tsvector, tsquery, GIN indexes — running directly in your existing database. No separate service to run or sync. Elasticsearch is a standalone JVM-based search engine with its own HTTP API, separate index, and operational overhead. pg_search is the right default for applications with under a million searchable rows. Elasticsearch earns its complexity when you need faceted aggregations, cross-language stemming, or autocomplete on tens of millions of rows.
How do I add a GIN index for full-text search in a Rails migration?
Use add_index :table_name, :search_vector_column, using: :gin. If you’re not pre-computing the tsvector (no dedicated column), you can index the function expression: add_index :articles, "to_tsvector('english', title)", using: :gin, name: "index_articles_title_fts". GIN indexes are the standard choice for tsvector columns — fast for lookups, slightly slower to update than GiST. For most read-heavy search workloads, GIN is the right call.
How does pg_search handle multiple languages in Rails?
Configure the dictionary per scope: using: { tsearch: { dictionary: "dutch" } } for a Dutch-language app. For multilingual content where a single record may contain multiple languages, compute the tsvector column manually in a trigger that switches dictionaries based on a locale column on the record. For a bilingual site serving separate locales, it is simpler to maintain two separate search scopes targeting the translated column variants.
Can I use pg_search with Rails scopes and pagination?
Yes — pg_search scopes return standard ActiveRecord relations. Chain them freely: Article.published.search("kamal").with_pg_search_rank.order(pg_search_rank: :desc).page(params[:page]).per(20). Both pagy and kaminari work without modification. If you’re using select to limit columns, include pg_search_rank in your select list explicitly — otherwise with_pg_search_rank adds a virtual column that may not appear in a custom SELECT.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.