AI Coding Assistants for Rails: What Actually Works and What Wastes Your Time

TTB Software
A practical breakdown of using AI coding assistants (Copilot, Cursor, Claude) in Rails projects. What tasks they handle well, where they fail, and how to structure prompts for Rails-specific code.

AI coding assistants can cut hours off your Rails workday — but only if you know which tasks to hand them and which to keep for yourself. After six months of using Copilot, Cursor, and Claude across multiple Rails 7.2 and 8.0 projects, here’s what I’ve found actually moves the needle versus what just generates plausible-looking garbage you’ll delete.

Where AI Assistants Excel in Rails

Test Generation

This is the single highest-ROI use case. Hand an AI your model or controller code and ask for tests. Rails conventions are well-represented in training data, so the output is usually 80-90% correct on the first pass.

# Give the AI this model:
class Subscription < ApplicationRecord
  belongs_to :user
  belongs_to :plan

  validates :starts_at, presence: true
  validates :status, inclusion: { in: %w[active paused cancelled expired] }

  scope :active, -> { where(status: "active") }
  scope :expiring_soon, -> { where(status: "active").where("ends_at < ?", 3.days.from_now) }

  def active?
    status == "active"
  end

  def renewable?
    active? && ends_at.present? && ends_at > Time.current
  end
end

A good AI assistant will generate model specs covering validations, scopes, associations, and the renewable? method — including edge cases around time boundaries that you might skip when writing tests manually at 4 PM on a Friday.
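Those time-boundary cases are worth pinning down explicitly. A minimal plain-Ruby stand-in for the renewable? logic (no Rails, so Time.now instead of Time.current; the Struct is illustrative) shows the boundary a good generated spec should cover:

```ruby
# Plain-Ruby stand-in for Subscription#renewable?, so the time-boundary
# edge case is visible without booting Rails.
Subscription = Struct.new(:status, :ends_at, keyword_init: true) do
  def active?
    status == "active"
  end

  def renewable?
    active? && !ends_at.nil? && ends_at > Time.now
  end
end

now = Time.now
# Strict `>` means a subscription ending exactly now is NOT renewable —
# the boundary a generated spec should assert explicitly.
boundary = Subscription.new(status: "active", ends_at: now)
future   = Subscription.new(status: "active", ends_at: now + 3600)
```

The cases to assert: a future ends_at is renewable, the exact boundary and a nil ends_at are not, and a non-active status is never renewable regardless of dates.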

The key prompt pattern: “Write RSpec tests for this model. Include edge cases. Use let declarations and described_class. Use FactoryBot.”

Specifying the testing style matters. Without it, you’ll get a mix of let/let!, inline object creation, and maybe even fixtures.

Migration Generation

Describing what you want in plain English and getting a migration back works surprisingly well:

"Add a jsonb column called preferences to users with a default empty hash,
add an index on it using GIN, and add a check constraint ensuring it's not null"

Produces:

class AddPreferencesToUsers < ActiveRecord::Migration[8.0]
  def change
    add_column :users, :preferences, :jsonb, default: {}, null: false
    add_index :users, :preferences, using: :gin
  end
end

The constraint handling is correct for PostgreSQL. The AI picked null: false over a separate check constraint, which is the right call here. It also knew to use GIN over GiST for jsonb — a detail that trips up developers who don’t work with PostgreSQL daily.

Boilerplate and CRUD

Generating standard Rails controller actions, form objects, service classes, and serializers. The patterns are repetitive and well-documented in training data. Let the AI handle the typing.

Documentation and Comments

Asking an AI to document a complex method or generate YARD docs is almost always faster than writing them yourself. The output quality is high because documentation follows predictable patterns.
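For a concrete sense of that predictability, here is the YARD shape an assistant typically produces for a small helper (the method itself is an illustrative example, not code from this article):

```ruby
# Calculates the prorated charge when a subscription changes plans mid-cycle.
#
# @param old_price [Integer] current plan price in cents
# @param new_price [Integer] new plan price in cents
# @param days_remaining [Integer] days left in the billing cycle
# @param cycle_days [Integer] total days in the billing cycle (default 30)
# @return [Integer] amount to charge (or credit, if negative) in cents
# @example
#   prorated_charge(1000, 2000, 15)  #=> 500
def prorated_charge(old_price, new_price, days_remaining, cycle_days = 30)
  ((new_price - old_price) * days_remaining / cycle_days.to_f).round
end
```

The @param types, @return, and @example tags are exactly the repetitive structure AI handles well; your review job is checking the types and the example value, not the formatting.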

Where AI Assistants Fail in Rails

Complex ActiveRecord Queries

Anything beyond basic where/joins/includes gets unreliable fast. I’ve seen AI assistants generate queries that:

  • Use pluck inside a scope chain, breaking further chaining
  • Confuse joins and includes semantics (N+1 vs. filtering)
  • Generate Arel syntax when a simple where with a subquery would work
  • Produce queries that work on SQLite but blow up on PostgreSQL

# AI-generated — looks correct, silently wrong
scope :with_recent_orders, -> {
  joins(:orders).where("orders.created_at > ?", 30.days.ago).distinct
}

# The problem: the join multiplies rows (one per matching order), and
# DISTINCT over that inflated row set is expensive. Better:
scope :with_recent_orders, -> {
  where(id: Order.where("created_at > ?", 30.days.ago).select(:user_id))
}

The subquery version lets PostgreSQL optimize the execution plan. The AI’s version works on 1,000 rows and chokes on 1,000,000.
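Both scopes select the same users; only the database-side execution differs. Modeling the two strategies over plain arrays (illustrative data, 30-day threshold as in the scope) makes the shape of that difference concrete:

```ruby
# Plain-Ruby model of the two query strategies. Both return the same
# users; the difference is how much work happens before deduplication.
users  = [{ id: 1 }, { id: 2 }, { id: 3 }]
orders = [
  { user_id: 1, days_ago: 5 },
  { user_id: 1, days_ago: 10 }, # second recent order => duplicate join row
  { user_id: 2, days_ago: 45 }, # too old to match
]

# joins(...).distinct: one row per matching order, THEN deduplicate
joined = users.flat_map { |u|
  orders.select { |o| o[:user_id] == u[:id] && o[:days_ago] < 30 }.map { u }
}.uniq

# where(id: subquery): build the id set once, then a cheap membership test
recent_ids = orders.select { |o| o[:days_ago] < 30 }.map { |o| o[:user_id] }.uniq
subqueried = users.select { |u| recent_ids.include?(u[:id]) }

joined == subqueried # => same result set either way
```

In the join version the intermediate row count grows with orders-per-user before DISTINCT prunes it; in the subquery version the planner can reduce the order side to a set of ids first.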

Anything Involving Your Specific Business Logic

AI assistants don’t know your domain. They can’t know that your Order model has a state machine with 7 states and specific transition rules, or that your multi-tenant setup routes through a Current.account context. When the AI fills in gaps in its understanding, it invents plausible-sounding but wrong behavior.

Security-Sensitive Code

Authentication flows, authorization logic, payment processing, API token handling. These are areas where “almost correct” is dangerous. I’ve seen AI-generated code that:

  • Skipped before_action :authenticate_user! on sensitive endpoints
  • Used params.permit! (permits everything — a mass assignment vulnerability)
  • Generated JWT handling that didn’t validate expiration
  • Built password reset flows without rate limiting

Always write security code yourself or treat AI output as a first draft that needs line-by-line review.
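As one concrete example from the list above, the expiration check the generated JWT code skipped is a single comparison. A hand-rolled sketch (no jwt gem; `payload` stands in for an already-decoded, signature-verified claims hash — in practice the jwt gem performs this check when you enable expiration verification):

```ruby
# The check AI-generated JWT handling omitted: reject expired tokens,
# and treat a missing "exp" claim as expired rather than valid.
def token_expired?(payload, now: Time.now)
  exp = payload["exp"] # JWT exp claim: seconds since epoch
  exp.nil? || exp <= now.to_i
end
```

The failure mode to watch for is the inverted default: generated code that only rejects tokens when exp is present and past, silently accepting tokens with no expiration at all.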

Rails Version-Specific Features

If you’re on Rails 8.0 using Solid Queue, Solid Cache, or the new authentication generator, AI assistants trained before late 2024 will give you outdated patterns. Copilot in particular tends to suggest config.active_job.queue_adapter = :sidekiq even when your Gemfile has solid_queue.
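For reference, the adapter line a solid_queue app expects instead of the Sidekiq suggestion — this is the shape a default Rails 8 install generates in config/environments/production.rb (the connects_to line assumes the separate queue database the installer sets up):

```ruby
# config/environments/production.rb — Solid Queue, not :sidekiq
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = { database: { writing: :queue } }
```

If Copilot keeps emitting the Sidekiq line, that is your signal its pattern memory predates your stack.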

Check the AI’s training data cutoff. For Rails 8 specifics, you’re better off reading the official Rails guides directly.

Prompt Engineering That Works for Rails

Generic prompts produce generic output. Here’s what I’ve found makes a difference:

Specify Your Stack

Rails 8.0, Ruby 3.3, PostgreSQL 16, RSpec, FactoryBot,
Hotwire (Turbo + Stimulus), Tailwind CSS

Put this at the top of every conversation. It prevents the AI from suggesting jQuery solutions, ERB when you want ViewComponent, or MySQL-specific syntax.

Give Context, Get Quality

Bad prompt:

“Write a service class for processing payments”

Good prompt:

“Write a service class for processing Stripe payments in a Rails 8 app. The app uses the stripe gem v12. Subscriptions have a plan_id and user_id. Handle card_declined, expired_card, and processing_error exceptions separately. Return a Result object with success/failure and an error message. The class should be testable without hitting Stripe’s API.”

The second prompt produces code you can actually use. The first produces a skeleton you’ll rewrite.
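A sketch of the service shape that detailed prompt tends to yield. The gateway is injected so specs never hit Stripe's API; the error class names here are illustrative stand-ins for this sketch, not the stripe gem's own classes:

```ruby
# Illustrative error types for the sketch (the real stripe gem raises
# its own exception classes, which a production version would rescue).
class CardDeclinedError < StandardError; end
class ExpiredCardError  < StandardError; end
class ProcessingError   < StandardError; end

# Result object requested by the prompt: success flag plus error message.
Result = Struct.new(:success, :error, keyword_init: true) do
  def success?
    success
  end
end

class ProcessPayment
  def initialize(gateway:)
    @gateway = gateway # anything responding to #charge — a test double in specs
  end

  # subscription must respond to #plan_id and #user_id, per the prompt
  def call(subscription)
    @gateway.charge(plan_id: subscription.plan_id, user_id: subscription.user_id)
    Result.new(success: true)
  rescue CardDeclinedError
    Result.new(success: false, error: "Card was declined")
  rescue ExpiredCardError
    Result.new(success: false, error: "Card has expired")
  rescue ProcessingError
    Result.new(success: false, error: "Payment could not be processed")
  end
end
```

In specs you pass a stub for gateway:; in production you pass a thin wrapper around the stripe gem. That injection point is exactly what makes the class "testable without hitting Stripe's API."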

Ask for Alternatives

When the AI gives you a solution, ask: “What are two other ways to implement this, and what are the tradeoffs?” This is where AI assistants shine as thinking partners. You’ll often get a comparison between, say, a service object, a concern, and a Turbo Stream approach — each with genuine pros and cons.

Setting Up Your Project for Better AI Output

A few structural choices make AI assistants significantly more effective:

  1. Keep a .ai-context or CONVENTIONS.md file in your repo root documenting your patterns: “We use service objects in app/services/, form objects in app/forms/, queries in app/queries/.”
  2. Consistent naming: If your background jobs follow a VerbNounJob pattern, document it. The AI will follow the convention if it can see examples.
  3. Type signatures via RBS or Sorbet: AI assistants produce better code when they can see type information. Even partial type coverage helps.
  4. Good test coverage: AI assistants that can see your existing tests will match the style. Cursor and Claude are both good at inferring test patterns from existing specs.

Measuring the Actual Impact

After tracking my own output across three projects over four months:

  • Test writing: 40-60% faster with AI assistance. The generated tests caught 3 bugs I likely would have missed writing tests manually.
  • Boilerplate/CRUD: 50-70% faster. Nearly all output usable without modification.
  • Complex features: 10-20% faster at best. Most time goes to explaining context and fixing the AI’s misunderstandings.
  • Debugging: Roughly break-even. AI is helpful for “what does this error mean” but poor at tracing bugs through multi-layer Rails stacks.

The pattern is clear: the more conventional the task, the more AI helps. The more your code depends on project-specific context, the less useful the AI becomes.

Which Tool for Which Job

GitHub Copilot works best for inline completion while you’re writing code. It’s fast, it’s unobtrusive, and its Rails pattern recognition is solid for standard CRUD.

Cursor shines when you need the AI to understand multiple files at once. Its codebase indexing means you can say “write a controller that follows the same patterns as UsersController” and get consistent output.

Claude (via API or chat) is strongest for architectural discussions, complex refactoring plans, and generating comprehensive test suites. Give it a full model with associations and it’ll produce thorough specs.

None of them replace understanding Rails. They’re multipliers: 10x engineer × AI = productive. 0x engineer × AI = confident but wrong.

FAQ

Can AI assistants generate entire Rails features end-to-end?

For simple CRUD features with standard associations, yes — you can get a working model, migration, controller, views, and tests from a detailed prompt. For anything involving business logic, state machines, or cross-model coordination, you’ll get a starting point that needs significant rework. The more your feature deviates from “standard Rails blog tutorial,” the less complete the AI’s output will be.

Do AI coding assistants work well with Hotwire and Turbo?

They’re getting better but still inconsistent. Stimulus controllers are usually generated correctly. Turbo Streams and Turbo Frames are hit-or-miss — the AI often confuses when to use turbo_stream.replace vs. turbo_stream.update, and it sometimes generates Turbo Frame tags with incorrect DOM IDs. Always test the interactive behavior in a browser.

Should I worry about AI-generated code introducing security vulnerabilities?

Yes. Treat all AI-generated code as untrusted input that needs security review. Common issues include missing authorization checks, overly permissive Strong Parameters, SQL injection through string interpolation in queries, and insecure default configurations. Run brakeman on your codebase after incorporating AI-generated code.

How do I keep AI assistants from suggesting outdated Rails patterns?

Pin your Rails and Ruby versions in every prompt or context file. For Copilot, include a .github/copilot-instructions.md file specifying your stack. For Cursor, use the .cursorrules file. For chat-based assistants, start each conversation with your stack details. Even then, verify version-specific APIs against the official Rails documentation.


About the Author

Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.
