Our team has to refactor a rather large legacy C++ code base. Bringing the whole beast under test has turned out to be difficult, but possible; however, we have by now figured out that it is not feasible to aim for code coverage equal to or even close to 100%. Thus, we need to define some reasonable criteria for deciding which code to cover.
Our current idea is as follows: we would execute our code under a couple of realistic scenarios (in fact, we could use real data for this) and collect the code coverage data resulting from each of these scenarios. We would then merge the coverage results such that a line counts as covered if it is covered in at least one of the scenarios. Finally, we would try to come up with tests such that at least the code identified in the previous step is covered by our tests. The hope is that we would then have covered at least the "most important" parts of our code base.
Technically, we are on Visual Studio Premium 2012 (2013 would be an option, though); our code base consists of a large chunk of C++ code plus some C# code; we use Google Test and Google Mock as test frameworks.
And here's my question: is there any tooling out there to support this approach? In particular, I am looking for a way to merge several coverage results for the same code base into one (as described above), and to compute the difference between that merged coverage and the coverage produced by our current test suite.
As a side note, I'm also interested in any other opinion on how to define reasonable coverage criteria.