Is code coverage a good criterion for the DoD?
Code coverage is a measurement of how much of the product code is exercised by tests.
Many of the articles found in the "Agile world" focus on the disadvantages of code coverage.
But fundamentally, code coverage provides transparent test results for the Development Team (DT) to inspect, and raises alerts for the DT to adapt to.
Is it a criterion in your teams' DoD?
How do you use it to improve the value of a sprint’s delivery?
Thanks for any input!
In my experience, it is a valuable metric:
The definition of done should ensure that the product is shippable, which means it has to be tested.
How do you know that you have a complete test harness if you don't measure code coverage?
In my development teams there are usually different opinions about which kind of coverage is best to measure (class coverage, branch coverage, line coverage) and what a good threshold is.
In a retrospective we identified a quality problem, gained insight into its causes, and decided to add a bullet point to the definition of done:
"The number of missed branches must not rise."
This means branch coverage needs to be measured, and if you check in new functionality that is not 100% covered, you have to cover some of the existing functionality to keep the number of missed branches constant. Since the total number of branches grows while the missed count stays flat, the coverage percentage rises.
This is the first step in paying back technical debt.
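To make the rule enforceable, a small build step can compare the current missed-branch count against a committed baseline. Here is a minimal sketch in Java, assuming a JaCoCo XML report in the Maven default location and a baseline file in the repository (the paths, the baseline file, and the ratchet mechanism are illustrative, not something JaCoCo provides out of the box):

    import java.nio.file.*;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;

    public class BranchRatchet {
        public static void main(String[] args) throws Exception {
            Path report = Path.of("target/site/jacoco/jacoco.xml");  // Maven default report location
            Path baseline = Path.of("missed-branches.baseline");     // illustrative; committed to the repo

            var factory = DocumentBuilderFactory.newInstance();
            // Don't try to fetch JaCoCo's report.dtd while parsing.
            factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
            Document doc = factory.newDocumentBuilder().parse(report.toFile());

            // The report-wide totals are <counter> elements directly under <report>.
            int missed = -1;
            NodeList children = doc.getDocumentElement().getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                if (children.item(i) instanceof Element e
                        && "counter".equals(e.getTagName())
                        && "BRANCH".equals(e.getAttribute("type"))) {
                    missed = Integer.parseInt(e.getAttribute("missed"));
                }
            }
            if (missed < 0) throw new IllegalStateException("no BRANCH counter found in report");

            int limit = Integer.parseInt(Files.readString(baseline).trim());
            if (missed > limit) {
                System.err.printf("FAIL: missed branches rose from %d to %d%n", limit, missed);
                System.exit(1);
            }
            // Ratchet: whenever the count drops, the lower number becomes the new limit.
            Files.writeString(baseline, Integer.toString(missed));
            System.out.printf("OK: %d missed branches (limit was %d)%n", missed, limit);
        }
    }

Run after the tests in CI, this fails the build whenever the missed-branch count rises, and tightens the limit automatically whenever it falls.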
A team may choose to include several stipulations of this nature in its DoD. For example, coverage by BDD tests might be included and distinguished from TDD coverage. Also, there may be assertions that other "code quality" measures (e.g. cyclomatic complexity, brittleness) will not increase.
Hi Ludwig,
Thanks for your input.
Your experience is valuable and worth sharing widely.
In my company, agile teams measure Acceptance Test coverage (using JaCoCo as an agent where the apps are deployed) and Unit Test coverage.
Immature teams use "Test OK" as a criterion in their DoD.
More mature teams use "Test coverage > XX" (with XX ranging from 50 to 80 for Java code).
And the most mature teams use "Coverage in Sprint N > coverage in Sprint N-1".
Ludwig is right, but we don't go as deep as line/branch details; a global metric is enough for us (a sketch of such a gate follows below).
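As an illustration only (the threshold, the file name, and the way the percentage is extracted from the report are invented, not our actual setup), both maturity levels could be wired into a build step like this:

    import java.nio.file.*;

    public class CoverageGate {
        static final double MIN_COVERAGE = 70.0;  // the team's "XX" threshold; illustrative

        public static void main(String[] args) throws Exception {
            double current = Double.parseDouble(args[0]);      // e.g. "73.4", extracted upstream
            Path last = Path.of("coverage-last-sprint.txt");   // illustrative file name
            double previous = Files.exists(last)
                    ? Double.parseDouble(Files.readString(last).trim())
                    : 0.0;

            if (current < MIN_COVERAGE)
                fail("coverage %.1f%% is below the threshold %.1f%%", current, MIN_COVERAGE);
            if (current < previous)
                fail("coverage dropped from %.1f%% (sprint N-1) to %.1f%%", previous, current);

            Files.writeString(last, Double.toString(current)); // becomes the floor for sprint N+1
            System.out.printf("OK: coverage %.1f%%%n", current);
        }

        static void fail(String fmt, Object... args) {
            System.err.printf("FAIL: " + fmt + "%n", args);
            System.exit(1);
        }
    }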
Non-agile teams just don't pay any attention to automated test coverage :-(
Posted By Olivier LEDRU on 31 Jul 2015 07:49 AM
Non-agile teams just don't pay any attention to automated test coverage :-(
No, they do pay attention to code coverage.
But most non-agile teams just focus on the fxxking coverage number rather than on its meaning and value.
You are facing Campbell's law:
https://en.wikipedia.org/wiki/Campbell%27s_law
It is important to understand what a metric means and that the goal is not to improve the metric but to improve quality.
Olivier, when you say global metric, do the developers understand how this global metric is calculated?
JaCoCo is a great tool; another one is SonarQube, which can even calculate your total technical debt in person-days, including the metrics Ian mentioned.
Of course that metric is even more global and you shouldn't take it too seriously, but it can be a good indicator of accumulating technical debt.
Posted By Ludwig Harsch on 05 Aug 2015 03:07 AM
It is important to understand what a metric means and that the goal is not to improve the metric but to improve quality.
I could not agree with you more.
Code coverage analysis is a must-do task in my teams' DoD.
The DT must INSPECT and analyze the untested code. Sometimes they might do some refactoring to improve quality and coverage; sometimes they might just document the reasons why certain lines were not tested.
@Ludwig, I'm not sure the Dev Teams around me understand how the coverage is calculated :-)
Actually, we do use JaCoCo with SonarQube, so line coverage and branch coverage are shown, but they are also mixed into what I call the "global" coverage value.
To keep teams from taking coverage too seriously, treating it as a goal rather than a means, and racing blindly for a high number, I use brown-bag lunches to show the power of "mutation testing" (with pitest.org in my context).
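A typical demo from those sessions, with an invented class and a deliberately weak test: it earns 100% line coverage of isAdult(), yet PIT reports a surviving mutant.

    class AgeCheck {
        static boolean isAdult(int age) {
            return age >= 18;
        }
    }

    class AgeCheckTest {
        @org.junit.jupiter.api.Test
        void exercisesEveryLineButChecksAlmostNothing() {
            // This single call gives 100% line coverage of isAdult(), but PIT's
            // conditionals-boundary mutator can change ">=" to ">" and the test
            // still passes: the mutant survives, exposing the weak assertion.
            org.junit.jupiter.api.Assertions.assertTrue(AgeCheck.isAdult(30));
            // Adding assertTrue(AgeCheck.isAdult(18)) would kill that mutant.
        }
    }

That surviving mutant is exactly the conversation starter: the coverage number said the code was tested, while mutation testing showed the test barely checks anything.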
I'm glad that all of us agree that coverage is a useful tool for finding untested parts of a codebase and improving quality, not a numeric statement of how good the tests are.
Actually, it's stupid to say "you can't go into production with less than 90% coverage."
I always ask my teams, "Why is it 90%?" They must understand the reason behind the number.
There is an excellent article on the misuse of code coverage.
http://www.exampler.com/testing-com/writings/coverage.pdf
Code coverage is indeed a good element, since it brings discipline and cultivates best practices in a team for delivering production-quality code. In the agile world, where an accepted delivery should be potentially shippable to production, the code delivered sprint after sprint must be of the utmost quality and executable in all environments with NO extra effort. Continuous delivery is NOT possible without continuous integration, and to make that happen, effective unit tests must be written that pass ALL the time; therefore code coverage should be 100%.