Time to start a new Rails app. Wanting to live on the bleeding edge, let’s use the absolute latest Ruby and Rails. Uh-oh! The go-to environment manager, asdf, doesn’t seem to have Ruby 3.0.1 available yet. And we’ll probably want to use Docker anyway: Docker Compose can spin up a production-like local environment with the app server, database, Redis if needed, and so on. So why have Ruby, Rails, and all the rest installed on the host machine at all?

What we want is a script of some kind that will use Docker to create an image, install Rails on that image, and ideally remain useful for further work. It turns out the Docker folks themselves tackled this problem some time ago. (How long ago is hard to say, given there are no dates on the article. Don’t you hate that?) With a few tweaks to their Dockerfile and docker-compose.yml, we can get where we need to go.

Dockerfile

Let’s start with the Dockerfile. Create a new directory for the app and add the code below to Dockerfile.dev.

FROM ruby:3.0-alpine AS runtime

LABEL app-name="project-web"

# Install packages used for dev
RUN apk add --no-cache \
    bash \
    build-base \
    curl-dev \
    git \
    linux-headers \
    nodejs \
    yarn \
    postgresql-client \
    postgresql-dev \
    tzdata \
    tini

ENV HOME /root
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME

FROM runtime AS application

EXPOSE 3000

# Run with tini to ensure zombies are cleaned and that signals are
# forwarded correctly for `docker stop`, etc.
ENTRYPOINT [ "tini", "--" ]
CMD [ "rails", "server", "-b", "0.0.0.0" ]

Some changes beyond simple updates from Docker’s original:

- The base image is ruby:3.0-alpine, so the dev dependencies (bash, build-base and linux-headers for native gems, git, the PostgreSQL client and headers, nodejs and yarn, tzdata) come from apk.
- The build is split into a runtime stage and an application stage.
- Nothing is copied into the image and no bundle install runs at build time; the source is mounted and the gems live in a volume instead (more on that below).
- tini is the ENTRYPOINT so zombies are reaped and signals from docker stop and friends get forwarded to the Rails server.
- The file is named Dockerfile.dev to make its development-only purpose clear.

docker-compose.yml

Now in the same directory, we add the code below as docker-compose.yml.

version: '3'

volumes:
  app_bundle:

services:
  app_server:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/usr/src/app
      - app_bundle:/bundle
    ports:
      - "3000:3000"
    environment:
      BUNDLE_PATH: /bundle
      BINDING: 0.0.0.0
      PORT: 3000
    networks:
      default:
        aliases:
          - app.local
    depends_on:
      - postgres13
  
  postgres13:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: app_dev
      POSTGRES_PASSWORD: dev_password
    networks:
      default:
        aliases:
          - db.local

Other than basic updates, the only significant difference here is the named volume for the bundled gems: BUNDLE_PATH points Bundler at /bundle, and the app_bundle volume mounted there keeps installed gems around between container runs.

Helpful Script for QoL

Who wants to type docker-compose run ... --rm app_server ... every time? Not me. So we encapsulate the common commands in a script. Save the code below as dev in the project directory and make it executable (chmod +x dev).

#!/bin/bash

export COMPOSE_FILE=docker-compose.yml

case "$1" in
  bash)
    docker-compose run --service-ports --rm app_server /bin/bash
    ;;
  build)
    docker-compose build
    ;;
  down)
    docker-compose down
    ;;
  new)
    if [ -f ./Gemfile ]; then
      echo "Gemfile exists! Will not create new rails app over old app."
      exit 1
    fi
    cat > Gemfile << EOF
source 'https://rubygems.org'
gem 'rails'
EOF
    touch Gemfile.lock
    docker-compose run --rm app_server bundle install
    docker-compose run --rm app_server bundle exec rails new . --force --database=postgresql
    ;;
  setup)
    docker-compose run --rm app_server /bin/bash -c "bin/setup"
    ;;
  test)
    docker-compose run -e RAILS_ENV=test --rm app_server /bin/bash -c "bundle exec rspec"
    ;;
  up)
    docker-compose run --service-ports --rm app_server
    ;;
esac

The new command (or case) is where the magic happens. It creates a bare-bones Gemfile and refuses to run again if that file already exists, so we can’t clobber an existing app. Note that it spins up the app server container twice: once to run bundle install and get the Rails gem into place, and a second time to run rails new, which initializes the app and overwrites our Gemfile (and Gemfile.lock) in the process. Ideally we would start the container only once. Moving those two commands into a separate script run inside a single container would be relatively simple, but other approaches I’ve tried have not worked yet, and keeping the file count down seems like a good thing.
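For the curious, one possible shape of that single-container variant is sketched below. The bootstrap.sh name and this whole arrangement are illustration only, not something the three files here rely on.

#!/bin/bash
# Hypothetical bootstrap.sh in the project root -- a sketch, not part of the setup above.
# The `new` case would then start the container just once with:
#   docker-compose run --rm app_server /bin/bash bootstrap.sh
set -e
bundle install
bundle exec rails new . --force --database=postgresql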

We’ve also got:

- bash, which opens a shell in the app container with the service ports published (handy for one-off commands and generators),
- build and down, thin wrappers around docker-compose build and docker-compose down,
- setup, which runs the generated bin/setup inside the container,
- test, which runs the RSpec suite with RAILS_ENV=test, and
- up, which starts the app server in the foreground.

It’s worth noting that all of this can work with another database after a few small tweaks. PostgreSQL was in Docker’s original example, and it’s a good, scalable database for most work, so it stays as our default.
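As a rough sketch of those tweaks, assuming MySQL were the target (the mysql:8 image, Alpine’s mariadb packages, and the service name here are assumptions for illustration, not part of the files above):

# docker-compose.yml: replace the postgres13 service (and the depends_on entry) with something like
  mysql8:
    image: mysql:8
    environment:
      MYSQL_USER: app_dev
      MYSQL_PASSWORD: dev_password
      MYSQL_ROOT_PASSWORD: dev_root_password
    networks:
      default:
        aliases:
          - db.local

# Dockerfile.dev: swap postgresql-client and postgresql-dev for mariadb-client and mariadb-dev.
# dev script: change --database=postgresql to --database=mysql in the `new` case.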

Last Steps

Once we run ./dev new and wait a few minutes for it to finish, we’re not 100% done. At a minimum, we need to edit the generated config/database.yml so it connects to the database we configured in docker-compose.yml. Keep in mind that hard-coding the username and password like this could and should be handled much more securely, but for quickly spinning up an app it’ll do for now.
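The generated file’s layout varies a bit by Rails version, but the default section needs to end up with something like this, with host, username, and password taken straight from docker-compose.yml (the database names elsewhere in the file are whatever rails new generated):

# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: db.local
  username: app_dev
  password: dev_password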

Once the DB config is updated, we can run ./dev setup to initialize the development and test databases. At that point, the server should start via ./dev up.
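Putting the whole flow together, starting from an empty directory containing just our three files, it looks roughly like this:

./dev build    # build the dev image from Dockerfile.dev
./dev new      # bundle install, then rails new . --database=postgresql
               # (now edit config/database.yml as described above)
./dev setup    # run the generated bin/setup to create the databases
./dev up       # start the Rails server on http://localhost:3000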

Last but not least, if we want to use that ./dev test command, we will want to add the following to our newly generated Gemfile.

group :test, :development do
  gem 'rspec-rails', '~> 5.0.0' # Or whatever the current version is
end

Then run ./dev bash to get a shell in the container, where we can bundle install the new gem and run the RSpec installer.
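Inside that container shell, that’s just:

bundle install
bin/rails generate rspec:install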

Summary

There’s an old saying about laziness being a virtue in programming. This may seem like a fair bit of work for something most folks don’t do often. But now we can be very lazy and reuse these three files (Dockerfile.dev, docker-compose.yml, and dev) the next time we need a Rails project and don’t have the tools and libraries installed on the host.

To make it so we can be even lazier in the future, all of this is available in a GitHub repository.