Engineering · November 13, 2023 · 11 min read

Ruby on Rails CI/CD with Bitbucket Pipelines


In today’s fast-paced software development landscape, and here at Runtime Revolution, delivering high-quality applications with speed and efficiency is crucial.

Continuous Integration (CI) and Continuous Delivery (CD) play a vital role in achieving this goal. By automating the testing, building, and deployment processes, you can ensure that your Ruby on Rails application is always in a releasable state.

In this blog post, we’ll explore how to implement CI/CD using Bitbucket Pipelines to streamline your Ruby on Rails development workflow in 6 core steps.

Step 1

Enable Pipelines

The first thing we need to do before defining our CI/CD process is to open our repository web page in the browser and enable Pipelines. To do that, click Repository settings in the side menu, scroll down to the PIPELINES section, and click Settings. There you will find a switch you need to turn on.

Step 2

Create the bitbucket-pipelines.yml

Navigate to the root of your Ruby on Rails application repository and create a new file named bitbucket-pipelines.yml. This file drives Bitbucket Pipelines and will contain the logic for both CI and CD.

Step 3

Choosing an image

Bitbucket Pipelines is an integrated service built into Bitbucket Cloud that lets us run commands inside containers. You can run your commands inside the default image atlassian/default-image:latest, or you can specify any public or private image that isn't hosted on a private network. The image can be set at the global level and overridden for individual steps.

If you want to know more about the default environments and tools Bitbucket provides you can visit: https://support.atlassian.com/bitbucket-cloud/docs/use-docker-images-as-build-environments/

To avoid configuring every service and tool needed to run our Ruby on Rails commands, let's use an official Ruby image matching our project's version.

A note about these images: depending on the tag (slim-bullseye, slim-bookworm, slim, bullseye, or bookworm), different packages come pre-installed and the image size varies. For example, comparing ruby:3.2.2-slim and ruby:3.2.2, the slim version is only 74.23MB while the regular one is 365.67MB; on the other hand, the slim version doesn't include packages such as git, which would require extra commands in our .yml to install it if we want to use it.
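If you do pick a slim tag, missing packages can be installed at the start of a step. A minimal sketch, assuming a slim image and a step that needs git (the step name is just an example):

```yaml
image: ruby:3.2.2-slim

pipelines:
  default:
    - step:
        name: Example step on a slim image
        script:
          # slim images ship without git, so install it before any command that needs it
          - apt-get update && apt-get install -y git
          - git --version
```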

With all this in mind, let's start defining our bitbucket-pipelines.yml file with a global image ruby:3.2.2.

Code
image: ruby:3.2.2

Step 4

Common definitions

In Bitbucket Pipelines we have access to the definitions property to define common resources for our steps. We will take advantage of it in our CI process for two things:

  • First, we are splitting our commands across steps, and since each step runs in its own container we need a way to optimize the setup of our dependencies.
  • Secondly, our project uses PostgreSQL to store user data, and it doesn't exist inside the image we are using.

Step 4.1 Caching

By defining a common cache we avoid re-downloading our project dependencies (if they have been downloaded before) every time we run our CI process, in every step that needs to run Ruby on Rails commands.

Code
image: ruby:3.2.2

definitions:
  caches:
    bundler:
      key:
        files:
          - Gemfile.lock
      path: /usr/local/bundle

A few notes: using Gemfile.lock as the key for the cache guarantees it is invalidated whenever the dependencies change. As for the path, /usr/local/bundle is the default location where dependencies are installed; if you changed the bundler path, update it to wherever it was defined.

Troubleshooting: if you are having problems with the location of your dependencies, you can temporarily add the command bundle config path to print where the dependencies have been saved.
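As a concrete example of that troubleshooting, the setup step could temporarily print bundler's configured path before installing (a sketch; remove the debug line once the cache behaves as expected):

```yaml
- step:
    name: Setup dependencies
    caches:
      - bundler
    script:
      # temporary debug output: prints where bundler is configured to install gems
      - bundle config path
      - bundle install
```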

Step 4.2 PostgreSQL

To combine additional services, like PostgreSQL, MySQL, Redis, or even Docker, you need to specify them in the definitions property. This makes those services available to use in your steps. To add them you use the services property, under which you add a custom name for reference, and at that level you specify an image. Depending on the service, you may need to specify its required environment variables. For this PostgreSQL example they are POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD.

Code
image: ruby:3.2.2

definitions:
  #...
  services:
    postgres:
      image: postgres:latest
      environment:
        POSTGRES_DB: $POSTGRES_DB
        POSTGRES_USER: $POSTGRES_USER
        POSTGRES_PASSWORD: $POSTGRES_PASSWORD

The previous snippet uses Repository variables to set the required values, to demonstrate how you can use them. But since we are at a testing level, you can hard-code them in this bitbucket-pipelines.yml file, as long as your values are not confidential. For example, you could set POSTGRES_DB: postgres, POSTGRES_USER: postgres, and POSTGRES_PASSWORD: postgres, and keep a database.yml.ci in your repository prepared just for CI (if you don't want to modify the original database.yml); right after checking out the project you would overwrite the original database.yml with it, using a terminal command such as mv database.yml.ci database.yml.
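For illustration, a hypothetical database.yml.ci matching those hard-coded values could look like this (the localhost host is based on Bitbucket Pipelines exposing service containers on localhost; adjust to your setup):

```yaml
# database.yml.ci -- CI-only config, copied over database.yml after checkout
test:
  adapter: postgresql
  host: localhost   # Bitbucket Pipelines services are reachable on localhost
  database: postgres
  username: postgres
  password: postgres
```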

To define Repository variables, open your repository web page in the browser and select Repository settings from the side menu, then scroll down to Repository variables and click it. Now click Add and fill in the text fields with the required information.

Now, before we go into the next step and start defining our bitbucket-pipelines.yml let's discuss Git branching strategies. Git branching strategies are rules that developers follow to stipulate how they interact with a shared codebase. This is necessary as it helps keep repositories organized to avoid errors and conflicts when merging work.

If you are not familiar with them, there are already several well-known strategies defined by the major platforms, such as Git Flow, GitHub Flow, and GitLab Flow.

We could discuss all of them, but let's keep it simple. Despite their different rules, they share one thing in common: you never work directly on the production branch; you make your changes on a dedicated branch kept up to date with production.

Step 5

Defining the Continuous Integration phase

In this phase, we want to handle the following topics:

A — When should it run;

B — Setting up dependencies;

C — Validate the project.

Step 5.A — When should it run

Based on the previous discussion, and taking a simpler approach, we want to run our CI steps every time a developer updates a working branch. Those working branches are never the production branch, which in this guide is named main.

The pipelines property supports several options, but for this case we can use the default property. For now this will trigger on every change to the repository, even when the production branch gets updated, but we will fix that in the CD phase.

Code
image: ruby:3.2.2

definitions:
# ...

pipelines:
  default:
    # ...

About the pipelines property, if you want to specify more complex rules, please check the Pipeline start conditions at: https://support.atlassian.com/bitbucket-cloud/docs/pipeline-start-conditions/

Step 5.B — Setting up dependencies

The first step we will add to our default property sets up our Ruby on Rails dependencies. In this step we also include the bundler cache from the common definitions. Bitbucket Pipelines will restore an existing cache if it is valid, and will update the cache at the end of the step if needed.

Code
image: ruby:3.2.2

definitions:
# ...

pipelines:
  default:
    - step:
        name: Setup dependencies
        caches:
          - bundler
        script:
          - bundle install

  # ...

Step 5.C — Validate the project

In this guide, to validate our project, we want to check 3 points:

1. The result of our tests;

2. If the project respects a defined code styling;

3. And if our tests have a good enough coverage of the project code.

The first two points, tests and code styling, don't share any dependency between them, and since Bitbucket Pipelines can run steps in parallel through the parallel property, we will take advantage of it here.

About coverage: the way we have implemented it requires the tests to run first. For this we will include a step after the parallel steps, resulting in the following structure for the default property:

Code
image: ruby:3.2.2

definitions:
# ...

pipelines:
  default:
    - step:
        name: Setup dependencies
        # ...
    - parallel:
        - step:
            name: Run tests
            # ...
        - step:
            name: Check code style
            # ...
    - step:
        name: Check coverage
        # ...

# ...

Before we continue defining the bitbucket-pipelines.yml file, let's talk about how we will check the test coverage. For that, we will use two additional gems in our project and a few terminal commands.

The main gem we will use to calculate test coverage is named simplecov. If you need to install it, please visit: https://github.com/simplecov-ruby/simplecov

The second gem we will use, to simplify the parsing/reading of the result values, is named simplecov-json. It allows us to configure the main simplecov gem with an additional formatter output. For more information about this gem please visit: https://github.com/vicentllongo/simplecov-json

With both gems added to your project, you can update your spec/spec_helper.rb with the following code at the beginning of the file:

Code
# frozen_string_literal: true

require 'simplecov'
require 'simplecov-json'

module SimpleCov
  module Formatter
    class MergedFormatter
      def format(result)
        SimpleCov::Formatter::HTMLFormatter.new.format(result)
        SimpleCov::Formatter::JSONFormatter.new.format(result)
      end
    end
  end
end

SimpleCov.formatter = SimpleCov::Formatter::MergedFormatter

SimpleCov.start

# ...

We could simply replace the default formatter with the simplecov-json one, but this demonstrates how you can have both working.

Now if you run bundle exec rspec, a new folder named coverage will appear at the root of your project. In it, alongside the index.html page, we will find a coverage.json file we can easily parse.
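As a quick illustration of that parsing, here is a small Ruby sketch; the JSON below is a made-up sample that only mirrors the shape of the real coverage.json (a metrics.covered_percent key):

```ruby
require 'json'

# Made-up sample with the same shape as coverage/coverage.json
sample = '{"metrics":{"covered_percent":87.5}}'

covered = JSON.parse(sample).dig('metrics', 'covered_percent')
puts covered # prints 87.5
```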

Step 5.C.1 — Running tests

In this step, the first thing we want to include is the bundler cache, which makes the required dependencies available to the step.

Then, as shown above, running the tests with simplecov configured generates files with the detailed coverage. And remember what was already discussed in this guide: each step runs in its own container. So we need a way to pass those files to the last step, the one that checks the coverage; for that we will take advantage of artifacts. Artifacts not only allow you to pass data from one step to another, they also let you download those files if you need them after the process ends.

In this step we also need to indicate the PostgreSQL service for our tests to run, resulting in:

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  default:
    - step:
        # ...

    - parallel:
        - step:
            name: Run tests
            caches:
              - bundler
            services:
              - postgres
            script:
              - bundle exec rspec
            artifacts:
              - coverage/coverage.json

  # ...

Step 5.C.2 — Code styling

In this step, like the tests step, the first thing we include is the bundler cache, which makes the required dependencies available.

While the tests run, we can check our code styling. For that we will use RuboCop, a static code analyser and code formatter. Out of the box it enforces many of the guidelines outlined in the community Ruby Style Guide. If you need to install it, please visit: https://github.com/rubocop/rubocop

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  default:
    - step:
        # ...

    - parallel:
        - step:
            name: Run tests
            # ...

        - step:
            name: Check code style
            caches:
              - bundler
            script:
              - bundle exec rubocop

# ...
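If RuboCop's defaults are too strict for an existing codebase, it can be tuned through a .rubocop.yml at the project root. A hypothetical starting point (every value here is just an example, not part of this guide's project):

```yaml
# .rubocop.yml
AllCops:
  TargetRubyVersion: 3.2
  NewCops: enable
  Exclude:
    - 'db/**/*'
    - 'bin/*'

Metrics/MethodLength:
  Max: 20
```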

Step 5.C.3 — Test coverage

Finally, let's add the last step of our CI to the bitbucket-pipelines.yml. At this point we already have everything we need to check the test coverage; the only thing missing is processing the files. For that we just need some terminal commands. It can be a bit verbose, but don't be scared.

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  default:
    - step:
        # ...

    - parallel:
        # ...

    - step:
        name: Check coverage
        script:
          - >
            if ! command -v jq &> /dev/null; then
              apt-get update && apt-get install -y jq
            fi
          - covered_percent=$(jq -r '.metrics.covered_percent' coverage/coverage.json)
          - re='^[+-]?[0-9]+([.,][0-9]+)?$'
          - >
            if ! [[ $covered_percent =~ $re ]]; then
              echo "WARNING :: Couldn't get coverage from artifact.";
              exit 0
            fi
          - required_coverage=$MINIMUM_COVERAGE
          - >
            if awk "BEGIN { exit !($covered_percent <= $required_coverage) }"; then
              echo "Coverage ($covered_percent%) is below the required threshold of $required_coverage%.";
              exit 1
            else
              echo "Coverage ($covered_percent%) passed the required threshold of $required_coverage%."
            fi

Some notes for this step. At the beginning of the script we manually install the jq package, which makes parsing the coverage.json file much easier. We do this because the image we are using, ruby:3.2.2, doesn't include it.

The coverage check doesn't mark the step as failed when the current coverage percentage couldn't be retrieved, which may simply be caused by a failure already reported by the tests. If you think that should be a reason to fail, change the exit code in that branch from 0 to 1.
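One detail worth knowing if you adapt this script: the shell's [ ... -le ... ] test compares integers only, while covered_percent is typically a float such as 87.5. A portable way to compare floats is to delegate the comparison to awk, as in this standalone sketch:

```shell
covered_percent=87.5
required_coverage=80

# awk exits with status 0 when the condition holds, so the "if" branch
# runs only when coverage is strictly above the threshold
if awk "BEGIN { exit !($covered_percent > $required_coverage) }"; then
  echo "Coverage ($covered_percent%) passed the required threshold of $required_coverage%."
else
  echo "Coverage ($covered_percent%) is below the required threshold of $required_coverage%."
fi
```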

Step 6

Define the continuous delivery steps

In this guide, we will use Heroku to deploy our Ruby on Rails application. Depending on the service you use, you may or may not have a pipe to handle this process.

In this phase, we want to handle the following topic:

A — When should it run;

B — Prepare project;

C — Deploy the project.

Step 6.A — When should it run

Based on the previous discussion of Git strategies, and again taking a simpler approach, we want to run our CD steps every time our production branch main gets updated. In our approach those changes were already tested on a developer branch, so there is no need to run the CI on the production branch.

To do that we will include one more property below the pipelines property: the branches property. Under branches we write our production branch name, and by doing so the default steps will no longer trigger for our production branch.

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  branches:
    main:
      # ...

  default:
    # ...

Step 6.B — Prepare the project

We have this step because we will use the official Bitbucket pipe for Heroku. Under the hood this pipe uses the Heroku API, and for that we need to package our project into an archive.

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  branches:
    main:
      - step:
          name: "Prepare zip"
          script:
            - tar --exclude='.git' -cvzf /tmp/app.tar.gz .
            - mv /tmp/app.tar.gz .
          artifacts:
            - app.tar.gz

# ...

To pass the archive to the next step, and also to make it available for download after the process ends, we take advantage of the artifacts property.

Step 6.C — Deploy the project

To deploy our project using the official Bitbucket pipe for Heroku, we just need to configure the required Repository variables and we are ready to go:

Code
image: ruby:3.2.2

definitions:
  # ...

pipelines:
  branches:
    main:
      - step:
          name: Prepare
          script:
            - tar --exclude='.git' -cvzf /tmp/app.tar.gz .
            - mv /tmp/app.tar.gz .
          artifacts:
            - app.tar.gz

      - step:
          name: Deploy
          script:
            - pipe: atlassian/heroku-deploy:2.1.0
              variables:
                HEROKU_API_KEY: $HEROKU_API_KEY
                HEROKU_APP_NAME: $HEROKU_APP_NAME
                ZIP_FILE: app.tar.gz
                WAIT: 'true'

# ...

In this case, enable the Secured option for these Repository variables, and never hard-code production keys in your CI/CD files.

Conclusion

In this blog post, we explored the concepts of Continuous Integration and Continuous Delivery and learned how to implement them using Bitbucket Pipelines for a Ruby on Rails application. By automating the testing and deployment processes, you can ensure your application is always in a reliable state and ready for release. Bitbucket Pipelines’ flexibility and integration with your existing repositories make it an ideal choice for streamlining your development workflow.

Remember, CI/CD is not just a one-time setup; it’s an ongoing process. As your application evolves, keep iterating and refining your workflows to accommodate new requirements and improve overall efficiency. Happy coding!

Final file

bitbucket-pipelines.yml