Rails Active Storage S3: Direct Uploads, Variants and Production Configuration
The first time I saw a Rails app fall over from file uploads, it wasn’t a traffic spike. It was a single sales rep uploading a 40MB PowerPoint through an admin panel. One request. Forty megabytes landing in a Puma worker’s memory. Every other request queued behind it. The thread sat there reading from the socket while the browser trickled the file in. For a fourteen-second upload, you’ve blocked that Puma worker for fourteen seconds.
That was a long time ago. Rails Active Storage, released with Rails 5.2, gives you a better answer. Add S3 as the storage backend, enable direct uploads, and file data never touches your app server. Your Puma workers return to being what they should be — handlers of logic, not pipes for bytes. Here’s how to set it up correctly, the variants and background processing patterns that actually work, and the production configuration details nobody puts in the README.
Setting Up Rails Active Storage S3
Active Storage ships with Rails. You opt in:
bin/rails active_storage:install
bin/rails db:migrate
This creates active_storage_blobs, active_storage_attachments, and active_storage_variant_records tables. Rails manages these — you rarely query them directly.
Add the AWS SDK:
# Gemfile
gem "aws-sdk-s3", "~> 1.177", require: false
gem "image_processing", "~> 1.2" # for variants
Configure the S3 service in config/storage.yml:
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: eu-west-1
  bucket: <%= Rails.application.credentials.dig(:aws, :s3_bucket) %>
  upload:
    server_side_encryption: "AES256"
Set the service per environment:
# config/environments/production.rb
config.active_storage.service = :amazon
# config/environments/development.rb
config.active_storage.service = :local # disk storage, no S3 needed locally
Attach files to your model:
class Document < ApplicationRecord
  has_one_attached :file
  has_many_attached :images
end
That’s the basics. Most tutorials stop here. The interesting part starts now.
Rails Active Storage Direct Uploads: Skip the App Server Entirely
The default Active Storage flow sends files through your Rails server to S3. Your Puma worker receives the upload, buffers it, then forwards it to S3. For a 100KB profile picture, this is fine. For a 50MB video, you’re using a Puma worker as an expensive relay.
Direct uploads work differently. The browser asks your Rails app for a presigned S3 URL, your app issues it, and the browser uploads directly to S3. Your server handles two small JSON requests — issue the URL, confirm the blob — instead of proxying fifty megabytes.
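The only server-side work in that exchange is metadata. One piece worth understanding is the checksum: Active Storage identifies blobs by a base64-encoded MD5 of the file body, which S3 verifies against the Content-MD5 header of the presigned PUT. A self-contained sketch (the helper name is illustrative, not Rails API):

```ruby
require "digest"
require "base64"

# Hypothetical helper mirroring what the @rails/activestorage client
# computes before requesting a presigned URL: a base64-encoded MD5
# digest of the file body. S3 rejects the PUT if the bytes don't match.
def direct_upload_checksum(data)
  Base64.strict_encode64(Digest::MD5.digest(data))
end

puts direct_upload_checksum("hello world")
```

This is why a corrupted or tampered-with upload fails at S3 rather than silently producing a bad blob record.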
Enable it with the JavaScript library that ships with Rails:
// app/javascript/application.js
import * as ActiveStorage from "@rails/activestorage"
ActiveStorage.start()
In your form, add direct_upload: true:
<%= form_with model: @document do |form| %>
  <%= form.file_field :file, direct_upload: true %>
  <%= form.submit "Upload" %>
<% end %>
That’s the entire client-side change. The @rails/activestorage library intercepts the form submission, exchanges the file for a presigned URL via a POST to /rails/active_storage/direct_uploads, uploads to S3, then substitutes the signed blob ID back into the form before submitting.
For larger files or better UX, listen to the progress events:
import { DirectUpload } from "@rails/activestorage"

const input = document.querySelector('input[type=file]')
const url = input.dataset.directUploadUrl

input.addEventListener('change', (event) => {
  const file = event.target.files[0]

  const upload = new DirectUpload(file, url, {
    directUploadWillStoreFileWithXHR(xhr) {
      xhr.upload.addEventListener("progress", (event) => {
        const percent = (event.loaded / event.total * 100).toFixed(0)
        console.log(`Upload progress: ${percent}%`)
        // update a progress bar here
      })
    }
  })

  upload.create((error, blob) => {
    if (error) {
      console.error("Upload failed:", error)
    } else {
      const hiddenField = document.createElement("input")
      hiddenField.type = "hidden"
      hiddenField.name = input.name
      hiddenField.value = blob.signed_id
      input.form.appendChild(hiddenField)
    }
  })
})
CORS Configuration for S3 Direct Uploads
Direct uploads require your S3 bucket to accept requests from your domain. Without CORS, the browser blocks the upload before it starts. Configure the bucket CORS policy in AWS (or Terraform, or wherever you manage infrastructure):
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST"],
    "AllowedOrigins": ["https://yourapp.com"],
    "ExposeHeaders": ["Origin", "Content-Type", "Content-MD5", "Content-Disposition"],
    "MaxAgeSeconds": 3600
  }
]
Lock AllowedOrigins to your actual domain. * works locally but leaves your bucket accepting PUT requests from anywhere in production — including someone crafting requests to upload data to your bucket at your expense.
Image Variants: Transformations on Demand
Active Storage variants let you resize, crop, and convert images without storing multiple copies upfront. The variant is generated on first access and cached in S3 automatically.
class User < ApplicationRecord
  has_one_attached :avatar

  def avatar_thumbnail
    avatar.variant(resize_to_fill: [200, 200], format: :webp)
  end
end
In a view:
<%= image_tag current_user.avatar_thumbnail if current_user.avatar.attached? %>
The image_processing gem uses libvips (fast) or ImageMagick (universal). For production, libvips is markedly faster than ImageMagick (often several times faster on typical web images) and uses significantly less memory. Make sure it's in your Dockerfile:
RUN apt-get update && apt-get install -y libvips
Named Variants for Consistency
For consistent sizing across the app, define named variants directly on the model:
class Product < ApplicationRecord
  has_one_attached :photo do |attachable|
    attachable.variant :thumb, resize_to_fill: [120, 120], format: :webp
    attachable.variant :medium, resize_to_limit: [600, 600], format: :webp
    attachable.variant :large, resize_to_limit: [1200, nil], format: :webp
  end
end
Use them with:
<%= image_tag product.photo.variant(:thumb) %>
Named variants also fail fast: if you request a variant name that was never declared, Rails raises an ArgumentError the moment the variant is requested, rather than silently serving a broken image in production.
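That fail-fast behavior is the registry-plus-fetch pattern. A plain-Ruby sketch of the idea (hypothetical names, not Rails internals):

```ruby
# Transformations live in one declared registry; Hash#fetch raises on
# a typo instead of quietly returning nil and serving nothing.
NAMED_VARIANTS = {
  thumb:  { resize_to_fill:  [120, 120], format: :webp },
  medium: { resize_to_limit: [600, 600], format: :webp }
}.freeze

def transformations_for(variant_name)
  NAMED_VARIANTS.fetch(variant_name) do
    raise ArgumentError, "Cannot find variant :#{variant_name}"
  end
end

puts transformations_for(:thumb)
```

A typo like `transformations_for(:thumbb)` raises immediately, which is exactly the failure mode you want for a misnamed variant.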
Background Variant Processing
By default, variants are processed synchronously on first request. Under load, the first user to hit a newly uploaded image waits for libvips to run. Make sure the fast processor is selected (:vips is already the default on Rails 7 and later), then move the processing itself to the background:
# config/application.rb
config.active_storage.variant_processor = :vips
# app/jobs/process_image_variants_job.rb
class ProcessImageVariantsJob < ApplicationJob
  queue_as :default

  def perform(attachment_id)
    # find_by: the attachment may have been purged before the job runs
    attachment = ActiveStorage::Attachment.find_by(id: attachment_id)
    return unless attachment&.blob&.image?

    # Named variants declared on the model live on the attachment
    # reflection (Rails 7+); force-process each one so no user pays
    # the libvips cost on first request.
    reflection = attachment.record.class.reflect_on_attachment(attachment.name)
    reflection.named_variants.each_key do |variant_name|
      attachment.variant(variant_name).processed
    end
  end
end
Trigger it after upload:
class Document < ApplicationRecord
  has_one_attached :cover_image

  after_commit :process_variants, on: [:create, :update]

  private

  def process_variants
    ProcessImageVariantsJob.perform_later(cover_image.attachment.id) if cover_image.attached?
  end
end
For more on background job patterns, see the Solid Queue and Sidekiq guide.
Presigned URLs and CDN: Serving Files Without Touching Your App
By default, Active Storage routes file serving through your Rails app: url_for(user.avatar) returns a short-lived Rails URL, and requesting it redirects the browser to a short-lived presigned S3 URL. That's a redirect on every image request, and your Rails server handles each one even though the bytes come from S3.
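If you want Rails to stay in the serving path but without the redirect hop (so a CDN in front of your app can cache stable Rails URLs, even for private files), Rails 6.1 and later also ship a proxy mode. A config fragment:

```ruby
# config/environments/production.rb
# Proxy mode: Rails streams the file body itself instead of redirecting
# to a signed S3 URL, so the Rails URL is stable and CDN-cacheable.
config.active_storage.resolve_model_to_route = :rails_storage_proxy
```

Note that proxy mode routes bytes back through your app servers, so it only makes sense when a CDN absorbs most of the traffic.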
For a fully public bucket served via CloudFront, set public: true in storage.yml:
# config/storage.yml
amazon_public:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: eu-west-1
  bucket: <%= Rails.application.credentials.dig(:aws, :s3_bucket) %>
  public: true
With public: true, the attachment URL (user.avatar.url) is a permanent, unsigned S3 URL with no expiry. Point the attachment at the service with has_one_attached :avatar, service: :amazon_public, put a CloudFront distribution in front of the bucket, and you get global CDN caching without a single byte passing through Puma.
For private files — contracts, invoices, anything user-specific — keep the default (Rails-proxied URLs with short expiry) or generate presigned URLs directly:
# app/helpers/application_helper.rb
def presigned_url(blob, expires_in: 15.minutes)
  blob.service.url(
    blob.key,
    expires_in: expires_in,
    filename: blob.filename,
    disposition: :inline,
    content_type: blob.content_type
  )
end
Configure the global expiry in your environment file:
# config/environments/production.rb
config.active_storage.service_urls_expire_in = 1.hour
Rails Active Storage Production Configuration: The Details That Bite You
After nineteen years of shipping Rails apps, these are the Active Storage gotchas I see most often in production:
Content type verification. Rails identifies the content type of uploaded files with the Marcel gem, which inspects the file's magic bytes rather than trusting the browser-supplied Content-Type header. If a user renames a .exe to .jpg, Rails catches it. Keep your allowed inline types locked down:
# config/initializers/active_storage.rb
Rails.application.config.active_storage.content_types_allowed_inline = %w[
  image/png image/jpeg image/gif image/webp
  application/pdf
]
# image/svg+xml is deliberately absent: inline SVG can execute scripts,
# so only allow it if you sanitize uploads first.
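To make the magic-byte idea concrete, here is a self-contained toy sniffer. It is a simplified sketch of what Marcel does; the signature table and helper name are illustrative, not Rails API:

```ruby
# Toy content-type sniffer: trust the leading bytes of the file,
# never the extension or the client-sent Content-Type header.
MAGIC_BYTES = {
  "\x89PNG\r\n\x1a\n".b => "image/png",
  "\xFF\xD8\xFF".b      => "image/jpeg",
  "GIF8".b              => "image/gif",
  "%PDF".b              => "application/pdf"
}.freeze

def sniff_content_type(data)
  MAGIC_BYTES.each do |signature, type|
    return type if data.b.start_with?(signature)
  end
  "application/octet-stream" # unknown: treat as opaque binary
end

puts sniff_content_type("%PDF-1.7 sample") # prints "application/pdf"
```

A renamed .exe starts with the bytes "MZ", matches nothing in the table, and falls through to application/octet-stream regardless of its filename.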
File size limits. Set them at the controller level before the upload reaches Active Storage, not in a model validation that runs after the bytes are already in memory. (With direct uploads the file bypasses your controllers entirely, so for that path validate the client-reported blob.byte_size at the point where the presigned URL is issued, for example in a subclassed direct uploads controller.)
class DocumentsController < ApplicationController
  MAX_FILE_SIZE = 50.megabytes

  before_action :check_file_size, only: [:create]

  private

  def check_file_size
    file = params.dig(:document, :file)
    return if file.nil? || file.size <= MAX_FILE_SIZE

    render json: { error: "File exceeds 50MB limit" }, status: :unprocessable_entity
  end
end
Cleaning up orphaned blobs. Direct uploads create a blob record before the form is submitted. If the user closes the tab mid-upload, you have an orphaned blob sitting in S3 forever, and you keep paying for its storage. Rails ships a cleanup task:
bin/rails active_storage:purge_unattached
Run it in a scheduled job. Weekly is usually enough:
# app/jobs/purge_orphaned_blobs_job.rb
class PurgeOrphanedBlobsJob < ApplicationJob
  def perform
    ActiveStorage::Blob.unattached.where(created_at: ..2.days.ago).find_each(&:purge_later)
  end
end
The 2.days.ago buffer prevents purging blobs from legitimate in-progress multi-step forms where someone saved draft state before uploading.
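The two conditions matter equally. An in-memory sketch of the filter (hypothetical Blob struct, not ActiveStorage::Blob): a blob is purged only when it is both unattached and older than the safety window, so a fresh direct upload from an in-progress form is never swept away.

```ruby
# Simplified purge-window filter: unattached AND old enough.
Blob = Struct.new(:attached, :created_at)

DAY = 24 * 60 * 60

def purgeable(blobs, window: 2 * DAY, now: Time.now)
  blobs.select { |b| !b.attached && b.created_at <= now - window }
end

now = Time.now
blobs = [
  Blob.new(false, now - 3 * DAY),  # orphan, old enough -> purged
  Blob.new(false, now - 3600),     # orphan, but fresh  -> kept
  Blob.new(true,  now - 30 * DAY)  # attached           -> kept
]
puts purgeable(blobs, now: now).size # prints 1
```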
Testing without hitting S3. In tests, use the :test service. In the generated config/storage.yml it is a Disk service rooted at tmp/storage, so it needs no AWS credentials and its files vanish with the tmp directory:
# config/environments/test.rb
config.active_storage.service = :test

# spec/models/document_spec.rb
RSpec.describe Document, type: :model do
  it "attaches a file" do
    document = Document.new(title: "Spec")
    document.file.attach(
      io: File.open(Rails.root.join("spec/fixtures/files/sample.pdf")),
      filename: "sample.pdf",
      content_type: "application/pdf"
    )

    expect(document.file).to be_attached
  end
end
Never use the S3 service in tests. Even with a dedicated test bucket, you’ll hit rate limits, add network latency, and create cross-run side effects when specs run in parallel.
Active Storage with S3 direct uploads takes fifteen minutes to configure and protects you from a whole class of production problems that have nothing to do with your application logic — blocked Puma workers, memory spikes from large uploads, timeout cascades under load. Set it up this way once, define variants as named declarations on the model, process them in the background, and you have a file handling stack that scales without maintenance.
For deployment patterns that keep your Rails app running cleanly during infrastructure changes, zero-downtime database migrations covers the same discipline of backwards compatibility — worth reading if you’re ever renaming attachment keys or changing variant configurations on a live database.
Dealing with a file upload stack that’s causing production headaches? TTB Software has been building Rails applications for nineteen years. We’ve seen every S3 configuration permutation and variant processing failure mode. We’ll get yours right.
Frequently Asked Questions
How does Rails Active Storage direct upload work with S3?
When a user selects a file, the @rails/activestorage JavaScript library sends a POST to /rails/active_storage/direct_uploads with the file metadata (name, size, content type, checksum). Rails creates an ActiveStorage::Blob record and returns a presigned S3 PUT URL. The browser uploads the file body directly to that URL. Once the XHR upload completes, the blob’s signed ID is substituted back into the original form field and the form submits normally. The file bytes never pass through your Rails server.
What is the difference between has_one_attached and has_many_attached in Rails?
has_one_attached associates a single file with a record — a user’s avatar, a product’s cover image. Calling attach on a has_one_attached association replaces the existing file. has_many_attached associates multiple files — a post’s image gallery, a report’s attachments. Calling attach appends rather than replaces. Both store file metadata in active_storage_blobs and use active_storage_attachments as the polymorphic join table.
How do I serve private S3 files in Rails without exposing the bucket directly?
Keep the default Active Storage URL strategy, which routes through your Rails app and issues time-limited redirects to presigned S3 URLs. Set config.active_storage.service_urls_expire_in to a duration appropriate for your use case — 15 minutes for download links, 1 hour for embedded images. For download links that should expire immediately after delivery, generate the presigned URL with blob.service.url(blob.key, expires_in: 5.minutes, ...) and hand that directly to the response.
How do I configure Rails Active Storage S3 in CI without real AWS credentials?
Use config.active_storage.service = :test in config/environments/test.rb. The generated :test service writes file data to local disk under tmp/storage for the duration of the test run and requires zero AWS setup. For integration tests that need to verify actual S3 behavior — presigned URL generation, content-type detection — use a dedicated test S3 bucket with a non-production IAM user scoped to write-only access on that bucket alone. Never share test and production buckets.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.