

Alexis Dixon, Bryan Finster, Nathan Nicholson, Preston Gibbs

Code Coverage

A measure of how many lines, branches, and functions are executed when the automated tests run. The industry average is around 80%.

What is the intended behavior?

Notify the team of risky or complicated portions of the code that are not sufficiently covered by tests.

How is it improved?

  • Write tests for code that SHOULD be covered but isn’t.
  • Refactor the application to improve testability (see the sketch after this list).
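For example, logic that is entangled with I/O is hard to exercise from a test. The sketch below (the printOrderTotal and orderTotal functions are hypothetical, not part of this playbook) shows one way to extract the calculation into a pure function so a test can cover it directly:

// Hard to test: calculation and output are mixed in one function.
function printOrderTotal(items) {
  const total = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  console.log(`Total: ${total}`);
}

// Easier to test: the calculation is a pure function with no side effects.
function orderTotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

it('Should total the order items', () => {
  expect(orderTotal([{ price: 2, quantity: 3 }])).to.equal(6);
})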

How is it gamed?

  • Tests are written for code that receives no value from testing.
  • Test code is written that does not check for failures.
  • Code is inappropriately excluded from test coverage reporting (see the sketch below).
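For instance, most coverage tools honor exclusion hints in comments. A minimal sketch, assuming Istanbul/nyc as the coverage tool, where an exclusion comment hides an untested error branch:

function divide(a, b) {
  // Excluding the branch means the untested error path no longer counts against coverage.
  /* istanbul ignore if */
  if (b === 0) {
    throw new Error('Cannot divide by zero');
  }
  return a / b;
}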

Example: The following test will result in 100% function, branch, and line coverage with no behavior tested.

/* Returns the sum of two integers */
/* Returns NaN for non-integers */
function addWholeNumbers(a, b) {
  if (a % 1 === 0 && b % 1 === 0) {
    return a + b;
  } else {
    return NaN;
  }
}

it('Should add two whole numbers', () => {
  expect(addWholeNumbers(2, 2)).to.not.be.NaN;
  expect(addWholeNumbers(1.1, 0)).to.not.be.null;
})

The following test will report the same code coverage results:

it('Should add two whole numbers', () => {
  addWholeNumbers(2, 2)
  addWholeNumbers(1.1, 0)
})
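By contrast, a test that asserts the behavior described in the comments on addWholeNumbers (the sum for whole numbers, NaN otherwise) reports the same coverage while actually verifying behavior:

it('Should add two whole numbers and return NaN for non-integers', () => {
  expect(addWholeNumbers(2, 2)).to.equal(4);
  expect(addWholeNumbers(1.1, 0)).to.be.NaN;
})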

Guardrail Metrics

The following metrics could degrade if they are not tracked alongside this metric:

  • Development Cycle Time increases with additional development time dedicated to chasing the coverage metric.
  • Quality decreases as poor-quality tests hide the lack of real code coverage.