Code Coverage for Rust Projects with GitHub Actions

I particularly don’t like using code coverage as a metric to drive the tests I write. Nevertheless, there is a lot of value in knowing which parts of your system are covered by tests and which are not.

Rust projects have great test tools out of the box, like `cargo test`, but do not include a code coverage tool. In this post, I will show how I introduced one to my project NunDb.
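
For reference, the built-in test runner needs no extra setup at all:

cargo test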

The tarpaulin tool

Tarpaulin is the tool I chose. Tarpaulin is not in a stable version yet, so make sure to pin a version that works for your case. Read more about it in the tarpaulin repo.

Install tarpaulin

cargo install cargo-tarpaulin

In my case, I had to install the tool with the nightly toolchain, so I ran the following:

cargo +nightly install cargo-tarpaulin
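
If you do not have the nightly toolchain installed yet, you can add it with rustup:

rustup toolchain install nightly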

At the time I am writing this post, the latest version of tarpaulin is v0.30.0.
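
If you want a reproducible setup, you can pin an exact version on install; for example, pinning the version above (`--locked` keeps the dependency versions the tool was published with):

cargo install cargo-tarpaulin --version 0.30.0 --locked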

Running it locally

Starting from the simplest case, to run all the tests, run the following command at the root of your project:

cargo tarpaulin

That may work for your case, but it did not work in mine. I have many tests that are not well suited to running in parallel because they write to disk (NunDb is a database, and all tests run with the default config, saving to the same place), so I had to run the tests with a single thread, like this:

cargo tarpaulin -- --test-threads=1
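
A tip for local runs: tarpaulin can also produce an HTML report, which makes it easy to browse uncovered lines file by file (this is the same `--out` flag the CI command uses later with xml):

cargo tarpaulin --out html -- --test-threads=1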

Ignoring performance tests

It still did not run all tests successfully, because tarpaulin has to calculate the coverage, which slows down the tests quite a bit. Therefore, I had to skip some performance tests by adding `#[cfg(not(tarpaulin))]` to the tests I wanted to skip, like the following:

    #[test]
    #[cfg(not(tarpaulin))] // tarpaulin sets the `tarpaulin` cfg flag when it builds, so this test is compiled out during coverage runs
    fn restore_should_be_fast() {
        // ...
    }

Checking the E2E tests

That was progress, but my E2E tests were still not working, because in order to run all tests for NunDb, I had to run multiple instances of the database and kill the process at the end of each test. To support that, you have to enable the `--engine llvm` flag in the tarpaulin command, since the default engine “relies on the sigtrap signal to catch when the instrumentation points are hit” (see the tarpaulin docs for more).

Additionally, to report the code coverage from the processes my E2E tests spawn, I had to add the `--follow-exec` flag.
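
To make it concrete, here is a minimal sketch of that E2E pattern; this is not NunDb’s actual test code, and the binary name `nun-db` is my assumption:

    use std::process::{Child, Command};

    // Hypothetical helper: spawn the binary cargo built for this package.
    // Cargo sets CARGO_BIN_EXE_<name> for integration tests.
    fn start_db() -> Child {
        Command::new(env!("CARGO_BIN_EXE_nun-db")) // assumed binary name
            .spawn()
            .expect("failed to start the database process")
    }

    #[test]
    fn e2e_sketch() {
        let mut db = start_db();
        // ... connect to the instance and make assertions here ...
        // Without --follow-exec, coverage of the code that runs inside
        // this child process would not be reported.
        db.kill().expect("failed to kill the database process");
    }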

Here is the final command I used to run the tests:

cargo tarpaulin --engine llvm --follow-exec -- --test-threads=1

All tests were now running locally as expected \o/.

By default, tarpaulin will force a clean build, which is most likely what you want for CI, but it slows things down quite a bit when testing locally, so you may want to add the `--skip-clean` flag to speed up local runs.

It would look like the following:

cargo tarpaulin --engine llvm --follow-exec --skip-clean -- --test-threads=1
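
Since this command is getting long, you could save it as a cargo alias in `.cargo/config.toml`; the alias name `cov` here is just my choice:

    [alias]
    cov = "tarpaulin --engine llvm --follow-exec --skip-clean -- --test-threads=1"

After that, running the local coverage becomes `cargo cov`.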

Running it on GitHub Actions

At this point, I was happy locally. Let’s start the work to get it to run on GitHub Actions.

The first step is to create a .github/workflows/coverage.yml. That file will be used to run the tests and push the result to Codecov.

For that, you will need three steps:

1. Checkout the repository

This is the most straightforward step. You need to add the following lines to your workflow file:

      - name: Checkout repository
        uses: actions/checkout@v2

2. Run the tests with tarpaulin (where you can pass the flags you want)

This one is a little more complex. First, I set the environment variables I need to run the tests, TIME_TO_START and NUN_ELECTION_TIMEOUT, and then I use the command we came up with before (the `mkdir dbs` prefix is NunDb-specific setup; it creates the directory the tests write their data to):

      - name: Generate code coverage
        env:
          TIME_TO_START: 4
          NUN_ELECTION_TIMEOUT: 3000
        run: |
          mkdir dbs && cargo +nightly tarpaulin --follow-exec --engine llvm --verbose --all-features --workspace --timeout 120 --out xml -- --test-threads=1

3. Upload the results to Codecov (this requires prior setup on the Codecov website)

Now we are close; it is time to upload the result to Codecov. For that, you will need to add the following lines to your workflow file:

      - name: Upload to codecov.io
        uses: codecov/codecov-action@v2
        with:
          token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos
          fail_ci_if_error: true

To get the CODECOV_TOKEN, go to the Codecov website, authenticate with your GitHub account, find your repo in the list, and click Configure; on the next page, you will see the token to copy into your GitHub secrets.

To set up the secret in GitHub, go to your repository, click on Settings -> Secrets and variables -> Actions, and add a new secret.
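
If you prefer the terminal, the GitHub CLI can set the secret as well (assuming `gh` is installed and authenticated for the repository; it will prompt you to paste the value):

gh secret set CODECOV_TOKEN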

The final file should look like this:

name: Code Coverage
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  coverage:
    name: coverage
    runs-on: ubuntu-latest
    container:
      image: xd009642/tarpaulin:develop-nightly
      options: --security-opt seccomp=unconfined
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Generate code coverage
        env:
          TIME_TO_START: 4
          NUN_ELECTION_TIMEOUT: 3000
        run: |
          mkdir dbs && cargo +nightly tarpaulin --follow-exec --engine llvm --verbose --all-features --workspace --timeout 120 --out xml -- --test-threads=1
      - name: Upload to codecov.io
        uses: codecov/codecov-action@v2
        with:
          token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos
          fail_ci_if_error: true

Now, when you push code, you should see the tests running and the coverage being uploaded to Codecov.

[Screenshot: GitHub UI]

[Screenshot: Codecov UI]

Adding the badge to the README

On your Codecov project page, go to Settings and then Badge; there you will find the markdown snippet to add to your README file.

Something like this:

[![codecov](https://codecov.io/gh/YOUR_REPO/branch/master/graph/badge.svg?token=YOUR_TOKEN)](https://codecov.io/gh/YOUR_REPO)

Conclusion

Adding code coverage to your project can help you monitor the modules your automated tests are neglecting. Keep in mind that a single number does not define the quality of your project. Enjoy the coding, and use the tools for what they are good at.

Written on June 15, 2024