Continuous monitoring of test coverage with OpenCover and Coveralls

This entry is part of a 10-part series: Exploring modern .NET development with ConTabs

After hooking my ConTabs project up to AppVeyor for continuous integration, the next thing I want to explore is automating test coverage reporting. In this post, I’ll talk briefly about the importance of measuring code coverage. I’ll then introduce OpenCover and Coveralls. Finally, I’ll go through my experience of wiring it all together, using AppVeyor as the platform.

Why monitor code coverage?

Before we get stuck in, a quick word on why code coverage is important. I’m assuming readers of my blog already understand that unit tests are a good thing. (If you’re not sold on this, read the answers to this Stack Overflow question.) Having unit tests that pass is great, but how can you be confident that the code you’re about to refactor is actually covered by one of those tests? In other words, how can you safely refactor your code without risking an untested regression? This is where test coverage can help.

Now, don’t get me wrong, good test coverage is not a panacea. Just because a line of code is covered by a test, it doesn’t mean the test captures the business requirements adequately. Nor is 100% code coverage necessarily worth striving for in complex projects. You may find that the first 80% of the code is easy to test with unit tests, but that the last 20% is more challenging (perhaps due to interactions with other systems) and that other forms of testing may be more appropriate.

Nevertheless, establishing and maintaining a good level of code coverage is a useful indicator that should give you some confidence when refactoring code. It’s also helpful to be able to measure changes in coverage over time and to identify regions of the code (even down to specific lines) that have poor test coverage.

Why OpenCover and Coveralls?

My test coverage pipeline consists of two tools: OpenCover and Coveralls. OpenCover is a free (as in speech) utility for doing the heavy lifting involved in analysing the codebase and determining which lines are covered by tests. Coveralls is a cloud-based service that accepts data from utilities like OpenCover and presents it in a useful way.

Coveralls is great because it makes complex coverage data easy to interpret and track (e.g. “has coverage gone up or down?”). As it’s online, the data is always available for review by you and your contributors. Plus, GitHub badges give an immediate signal to potential collaborators and consumers. Best of all, it’s relatively straightforward to use AppVeyor to run OpenCover and send the data to Coveralls as part of the build process.

Running OpenCover

Before we automate anything, let’s just figure out how to get everything working manually. The first step is to install OpenCover from NuGet. Then we can run the following command to tell OpenCover to use NUnit (other testing frameworks are available) to run the tests, then trace through the PDB files to determine which lines those tests cover.
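The original command isn’t reproduced here, but assuming OpenCover and the NUnit console runner were installed from NuGet, it would look something like this (package versions and paths are illustrative, so adjust them to match your own packages folder):

```
.\packages\OpenCover.4.6.519\tools\OpenCover.Console.exe ^
  -register:user ^
  -target:".\packages\NUnit.ConsoleRunner.3.7.0\tools\nunit3-console.exe" ^
  -targetargs:".\ConTabs.Tests\bin\Debug\ConTabs.Tests.dll" ^
  -filter:"+[ConTabs]*" ^
  -output:"coverage.xml"
```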

Breaking this down, the meaningful parts are:

  • Filtering onto the module called “ConTabs”, including all (“*”) namespaces inside.
  • Specifying that the target is the NUnit runner.
  • Telling the NUnit runner which assembly contains the tests (“ConTabs.Tests.dll”).

There are a couple of details to be mindful of here. Firstly, it’s important to get the tests and the tested code the right way around: -filter selects the application code whose coverage you care about, while -targetargs points at the assembly containing the tests that cover it.

Secondly, if you’re using .NET Standard (as I am in this project), you’ll need to ensure that the generated PDB files are “full” rather than “portable”. OpenCover doesn’t understand portable PDBs and will skip any modules that use them, complaining quietly about “missing PDBs”.

Once everything is working nicely, running the above command will generate an XML file. We can plug this file into ReportGenerator to get a quick human-readable report and validate that everything is working as expected.
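A minimal ReportGenerator invocation might look like this (again, the package version and output directory are illustrative):

```
.\packages\ReportGenerator.3.0.2\tools\ReportGenerator.exe ^
  -reports:"coverage.xml" ^
  -targetdir:".\coverage-report"
```

Opening index.htm in the target directory gives a browsable, per-line view of what is and isn’t covered.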

Sending the data to Coveralls

Setting up a repo in Coveralls is the easiest thing in the world, at least if you’ve got a public GitHub repo. Simply log in using your GitHub account, head to the repositories tab, and toggle the status of the repo you’re interested in. Then go to the repository page and make a note of the token.

Once we’ve got our coverage data in an XML file and a Coveralls repo ready to go, we can start to send the former to the latter. To do this, I’ve used coveralls.net. (Be advised there are a couple of packages with the same name – the one that works is currently at version 1.4.2.) Add coveralls.net via NuGet, then run the following command:
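The command itself isn’t shown above, but with the coveralls.net package installed it would be along these lines (the package path is illustrative; the executable in the coveralls.net package is csmacnz.Coveralls.exe):

```
.\packages\coveralls.net.1.4.2\tools\csmacnz.Coveralls.exe ^
  --opencover -i coverage.xml --repoToken %TOKEN%
```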

Replace %TOKEN% with the Coveralls token you noted earlier.

What could be easier? You should now be able to go to Coveralls and see your coverage statistics.

Automating with AppVeyor

OK, now we’re cooking on gas! We’ve got all our commands working, so now we can bake them into our automated build process.

Since we don’t want to share our secret Coveralls token with the world, the first step is to use AppVeyor’s encrypted variable functionality. Simply click on the down arrow next to your username in the top right corner, then select “Encrypt Data” from the tools section, near the bottom of the menu. You can then paste in your secret value and AppVeyor will return an encrypted version, which can only be accessed from your account. This makes it safe to commit to Git(Hub), as no-one can do anything with the encrypted version.

Once we’ve got an encrypted token, we can create a variable in our appveyor.yml file:
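Assuming we call the variable COVERALLS_REPO_TOKEN (the name itself is arbitrary), the relevant section of appveyor.yml might look like this, with the placeholder replaced by the encrypted value AppVeyor gave us:

```yaml
environment:
  COVERALLS_REPO_TOKEN:
    secure: <your-encrypted-token-here>
```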

And finally add the commands from earlier to the after_test section:
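Pulling the earlier commands together, and assuming the encrypted token is stored in a variable called COVERALLS_REPO_TOKEN, the after_test section might look something like this (paths and package versions are illustrative, with each command on a single line):

```yaml
after_test:
  - .\packages\OpenCover.4.6.519\tools\OpenCover.Console.exe -register:user -target:".\packages\NUnit.ConsoleRunner.3.7.0\tools\nunit3-console.exe" -targetargs:".\ConTabs.Tests\bin\Debug\ConTabs.Tests.dll" -filter:"+[ConTabs]*" -output:"coverage.xml"
  - .\packages\coveralls.net.1.4.2\tools\csmacnz.Coveralls.exe --opencover -i coverage.xml --repoToken %COVERALLS_REPO_TOKEN%
```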

Then, when we push to GitHub, our next build will include the generation of test coverage statistics and the sending of this data to Coveralls.

Summary

In this post, we’ve pulled together several different technologies into an automated test coverage pipeline. We used OpenCover to generate the raw data, ReportGenerator to turn this into a human-readable report, coveralls.net to send the raw data to Coveralls, and AppVeyor to automate the whole thing.

Part of the beauty of using AppVeyor is that we can include appveyor.yml in the repo. For me, this has the advantage of keeping the build process under source control. For anyone playing along at home, you can see the whole file in all its glory – there’s really no magic here.

The result is that our test coverage data is available for all to see on Coveralls. This means we can keep an eye on how our test coverage changes as the project develops, as well as homing in on areas of the codebase that may be problematic. Finally, we can also add a badge to the ConTabs GitHub repo, to show visitors that test coverage is something we take seriously.
