Deploying Rails with Kamal 2: Zero-Downtime Deployments Without Kubernetes
Kubernetes solves problems most Rails teams don’t have. You’re running two app servers, a database, and Redis. You don’t need a container orchestration platform designed for thousands of microservices.
Kamal 2, the deployment tool that ships with Rails 8, handles zero-downtime deploys to any server you can SSH into. No cluster management, no YAML sprawl, no managed Kubernetes bill that costs more than your servers.
I’ve migrated three client apps from Capistrano to Kamal in the past year. Each migration took less than a day. The deploys got faster, the configuration got simpler, and the teams stopped needing a “deployment expert” to ship code.
What Kamal Actually Does
Kamal uses Docker and kamal-proxy (a lightweight HTTP proxy) to deploy containers to your servers. During a deploy, it:
- Builds your Docker image locally or on a remote builder
- Pushes it to a container registry
- Boots the new container on each server
- Waits for health checks to pass
- Switches kamal-proxy to route traffic to the new container
- Stops the old container
The switchover happens per-server with no dropped requests. No load balancer reconfiguration, no rolling restart scripts.
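The per-server flow above can be sketched conceptually in Ruby. This is illustrative pseudologic, not Kamal's actual implementation; the helper methods stand in for the real Docker and kamal-proxy calls:

```ruby
# Conceptual sketch of Kamal's per-server rollout (not Kamal internals).
class Rollout
  attr_reader :log

  def initialize(hosts)
    @hosts = hosts
    @log = []
  end

  def run
    @hosts.each do |host|
      boot_new_container(host)
      raise "health check failed on #{host}" unless healthy?(host)
      switch_traffic(host)    # kamal-proxy starts routing to the new container
      stop_old_container(host)
    end
  end

  private

  def boot_new_container(host)
    @log << [:boot, host]     # would docker-run the new image on the host
  end

  def healthy?(_host)
    true                      # would poll GET /up until it returns 200
  end

  def switch_traffic(host)
    @log << [:switch, host]
  end

  def stop_old_container(host)
    @log << [:stop, host]
  end
end

rollout = Rollout.new(["203.0.113.10", "203.0.113.11"])
rollout.run
puts rollout.log.inspect
```

Because each host completes its boot/check/switch/stop cycle before the next one begins, at least one healthy container is serving traffic at every moment.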
Prerequisites
You need:
- A server running Ubuntu 22.04+ (or any Linux with Docker support)
- SSH access with key-based authentication
- A container registry (Docker Hub, GitHub Container Registry, or a private registry)
- Ruby 3.2+ and Rails 8+ locally
Installation and Initial Setup
Kamal ships with new Rails 8 apps. For existing apps:

```bash
bundle add kamal
bundle exec kamal init
```
This generates two files: config/deploy.yml and .kamal/secrets.
Configuring deploy.yml
Here’s a production-ready configuration for a typical Rails app with Sidekiq:
```yaml
service: myapp
image: your-registry/myapp

servers:
  web:
    hosts:
      - 203.0.113.10
      - 203.0.113.11
    options:
      memory: 1g
  worker:
    hosts:
      - 203.0.113.12
    cmd: bundle exec sidekiq -q default -q mailers
    options:
      memory: 2g

proxy:
  ssl: true
  host: myapp.com
  app_port: 3000
  response_timeout: 30
  healthcheck:
    interval: 3
    path: /up
    timeout: 5

registry:
  server: ghcr.io
  username: your-github-user
  password:
    - KAMAL_REGISTRY_PASSWORD

builder:
  arch: amd64
  cache:
    type: registry
    image: your-registry/myapp-build-cache

env:
  clear:
    RAILS_LOG_TO_STDOUT: "1"
    RAILS_SERVE_STATIC_FILES: "true"
  secret:
    - RAILS_MASTER_KEY
    - DATABASE_URL
    - REDIS_URL

accessories:
  redis:
    image: redis:7-alpine
    host: 203.0.113.12
    port: 6379
    directories:
      - data:/data
    cmd: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
```
A few things to note about this config:
The proxy block replaces Traefik from Kamal 1. kamal-proxy handles SSL certificates through Let's Encrypt automatically: set ssl: true, point your DNS at the server, and it handles the rest. Note that kamal-proxy options like response timeouts now live in this block rather than in Traefik-style container labels.
The builder cache saves significant time on subsequent deploys. Without registry caching, every deploy rebuilds all Docker layers from scratch. With it, only changed layers get rebuilt. This cut deploy times from 8 minutes to under 2 minutes for a mid-sized Rails app.
Memory limits prevent runaway processes from taking down the server. Set them based on your app’s actual usage, then add 20% headroom.
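On the application side, the Redis accessory is reached through the REDIS_URL secret declared above. A minimal sketch of the wiring, assuming Sidekiq (the defined? guard keeps the snippet inert when Sidekiq isn't loaded):

```ruby
# config/initializers/sidekiq.rb (sketch)
# REDIS_URL comes from the env/secret block in deploy.yml and points at the
# Redis accessory host, e.g. redis://203.0.113.12:6379/0.
redis_url = ENV.fetch("REDIS_URL", "redis://localhost:6379/0")

if defined?(Sidekiq)
  Sidekiq.configure_server { |config| config.redis = { url: redis_url } }
  Sidekiq.configure_client { |config| config.redis = { url: redis_url } }
end

puts redis_url
```

Recent Sidekiq versions read REDIS_URL by default, so the initializer is optional; spelling it out makes the dependency on the accessory visible.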
Setting Up Secrets
Edit .kamal/secrets to pull credentials from your environment or a secret manager:
```bash
KAMAL_REGISTRY_PASSWORD=$GITHUB_TOKEN
RAILS_MASTER_KEY=$(cat config/master.key)
DATABASE_URL=$PRODUCTION_DATABASE_URL
REDIS_URL=$PRODUCTION_REDIS_URL
```
Kamal reads this file during deployment and injects the values as environment variables into your containers. The file itself never gets copied to the server.
Your Dockerfile
Rails 8 generates a production-ready Dockerfile. If you’re upgrading, here’s what a solid one looks like:
```dockerfile
FROM ruby:3.3-slim AS base

WORKDIR /rails

ENV RAILS_ENV="production" \
    BUNDLE_DEPLOYMENT="1" \
    BUNDLE_PATH="/usr/local/bundle"

# Build stage: compile gems and assets
FROM base AS build

RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y \
      build-essential git libpq-dev node-gyp pkg-config python-is-python3

COPY Gemfile Gemfile.lock ./
RUN bundle install && \
    rm -rf ~/.bundle/ "${BUNDLE_PATH}"/ruby/*/cache

COPY . .

# SECRET_KEY_BASE_DUMMY lets precompilation run without the real master key
RUN SECRET_KEY_BASE_DUMMY=1 ./bin/rails assets:precompile
RUN rm -rf node_modules

# Final stage: runtime dependencies only
FROM base

RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y curl libpq5 && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

COPY --from=build /usr/local/bundle /usr/local/bundle
COPY --from=build /rails /rails

# Run as a non-root user
RUN groupadd --system --gid 1000 rails && \
    useradd rails --uid 1000 --gid 1000 --create-home --shell /bin/bash && \
    chown -R rails:rails db log storage tmp

USER 1000:1000

ENTRYPOINT ["./bin/docker-entrypoint"]

EXPOSE 3000
CMD ["bundle", "exec", "thrust", "./bin/rails", "server"]
```
The multi-stage build keeps the final image small. The build stage installs compilation dependencies, builds everything, then the production stage only copies the compiled artifacts. A typical Rails app image comes out around 250-350MB this way.
First Deploy
```bash
kamal setup
```
This command does everything: installs Docker on your servers if needed, sets up kamal-proxy, pulls your image, and starts the app. First deploy takes longer because it provisions the server.
Watch the output. Kamal shows you exactly what it runs on each server. If something fails, you’ll see the exact command and error.
Subsequent Deploys
```bash
kamal deploy
```
That’s it. Build, push, boot new container, health check, switch traffic, stop old container. Typical deploy time for a Rails app: 1-3 minutes depending on image size and build cache hits.
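Kamal also runs executable hook scripts from .kamal/hooks at points in the deploy (pre-build, pre-deploy, post-deploy, and so on); `kamal init` generates samples. A hook can be any executable, including Ruby. A sketch of a post-deploy notification hook; the KAMAL_* variable names follow Kamal's hook docs (verify against your version's samples), and DEPLOY_WEBHOOK_URL is a hypothetical variable of your own:

```ruby
#!/usr/bin/env ruby
# .kamal/hooks/post-deploy -- runs after every successful deploy.

# Deploy metadata Kamal passes to hooks via environment variables.
performer = ENV.fetch("KAMAL_PERFORMER", "unknown")
version   = ENV.fetch("KAMAL_VERSION", "unknown")
runtime   = ENV.fetch("KAMAL_RUNTIME", "?")

message = "#{performer} deployed #{version} in #{runtime}s"
puts message

# Post to a chat webhook if one is configured (hypothetical env var).
if (webhook = ENV["DEPLOY_WEBHOOK_URL"])
  require "net/http"
  require "json"
  require "uri"
  Net::HTTP.post(URI(webhook), { text: message }.to_json,
                 "Content-Type" => "application/json")
end
```

Make the file executable (`chmod +x .kamal/hooks/post-deploy`) and Kamal picks it up automatically.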
Health Checks Matter
Kamal won’t switch traffic until the health check passes. The default Rails health check endpoint at /up works:
```ruby
# config/routes.rb
Rails.application.routes.draw do
  get "up" => "rails/health#show", as: :rails_health_check
end
```
If your app needs database connectivity to be “healthy,” customize the health check:
```ruby
# app/controllers/health_controller.rb
class HealthController < ApplicationController
  def show
    ActiveRecord::Base.connection.execute("SELECT 1")
    render plain: "OK"
  rescue StandardError => e
    render plain: "FAIL: #{e.message}", status: :service_unavailable
  end
end
```
A bad health check configuration is the most common source of deployment problems. If the check is too strict (requires all external services), deploys will fail when any dependency hiccups. If it’s too loose (just returns 200), you’ll route traffic to broken containers.
Check that the app boots and can serve requests. That’s the right threshold.
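If you swap in a custom controller like the one shown earlier, point the /up route at it so kamal-proxy's healthcheck path exercises your check instead of the built-in one. A sketch, assuming that HealthController:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Route kamal-proxy's healthcheck path (path: /up in deploy.yml) to the
  # custom controller instead of the built-in rails/health#show endpoint.
  get "up" => "health#show", as: :rails_health_check
end
```

If your app has authentication callbacks in ApplicationController, remember to skip them in the health controller, or the proxy will see redirects instead of 200s.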
Running Migrations
Kamal doesn’t run migrations automatically, and that’s intentional. Database migrations and code deployments are separate concerns.
```bash
kamal app exec --roles=web "bin/rails db:migrate"
kamal deploy
```
Run migrations before deploying when adding new tables or columns. Run them after deploying when removing columns (the old code still references them during the switchover).
For zero-downtime migrations specifically, use the strong_migrations gem to catch dangerous migrations before they hit production.
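For the column-removal case specifically, the usual two-step pattern (which strong_migrations will also steer you toward) is to stop referencing the column in one deploy, then drop it in the next. A sketch with placeholder model, column, and migration names:

```ruby
# Step 1 -- deploy code that no longer touches the column.
# app/models/user.rb (placeholder names)
class User < ApplicationRecord
  # Old containers keep running during the switchover, so the column must
  # stay in the schema until this code is live everywhere.
  self.ignored_columns += ["legacy_notes"]
end

# Step 2 -- after that deploy, run the removal migration.
# db/migrate/20240101000000_remove_legacy_notes_from_users.rb
class RemoveLegacyNotesFromUsers < ActiveRecord::Migration[8.0]
  def change
    # safety_assured is strong_migrations' acknowledgment wrapper
    safety_assured { remove_column :users, :legacy_notes, :text }
  end
end
```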
Debugging Production Issues
Kamal gives you direct access to your running containers:
```bash
# View logs
kamal app logs -f

# Open a Rails console
kamal app exec -i "bin/rails console"

# Check container resource usage
kamal app exec "cat /proc/1/status | grep VmRSS"

# Run a one-off command
kamal app exec "bin/rails runner 'puts User.count'"
```
Compare this to Kubernetes, where you’d need kubectl exec -it deployment/myapp -- bin/rails console after configuring contexts, namespaces, and RBAC. Kamal keeps the mental overhead low.
When Kamal Isn’t Enough
Kamal works well for most Rails deployments. It starts to strain when you need:
- Auto-scaling: Kamal deploys to a fixed set of servers. If you need to scale from 2 to 20 instances based on traffic, you need something else (or scripting around Kamal + your cloud provider’s API).
- Multi-region failover: Kamal doesn’t handle geographic routing. You’d add a CDN or DNS-based failover in front.
- Complex service meshes: If your architecture has 15 services that need service discovery and circuit breaking, Kubernetes earns its complexity.
For a Rails monolith with a background job processor and maybe a separate service or two, Kamal handles everything you need.
Migrating from Capistrano
The migration path is straightforward:
- Add a Dockerfile and verify it builds locally
- Create config/deploy.yml with your existing server IPs
- Run kamal setup on a staging server first
- Test thoroughly, especially background jobs and file storage
- Point DNS to the Kamal-managed servers
- Run kamal setup on production
The biggest adjustment is mental. With Capistrano, your code lives on the server and you think in terms of releases, shared directories, and symlinks. With Kamal, your app is a container image, and the server is just a host that runs it. Logs go to stdout. Files go to object storage. Environment variables replace shared config files.
Wrapping Up
Kamal gives Rails teams a deployment tool that matches the framework’s philosophy: convention over configuration, sensible defaults, and staying out of your way. You get zero-downtime deploys, automatic SSL, and container-based reproducibility without the operational tax of Kubernetes.
Set it up once. Deploy with a single command. Spend your time building features instead of managing infrastructure.
About the Author
Roger Heykoop is a senior Ruby on Rails developer with 19+ years of Rails experience and 35+ years in software development. He specializes in Rails modernization, performance optimization, and AI-assisted development.