When we launched Insight, we thought the medals we granted to projects would be used to certify that a project had reached a certain quality level. A project couldn't possibly show a Silver medal if it had ignored major alerts - that would jeopardize the credibility of our medal grading and make project quality benchmarks meaningless. Therefore, as soon as an analysis was altered (the project configuration was tweaked, or a violation was ignored), it received a chocolate medal.
It turns out that about 11% of all our analyses ended up with chocolate medals. That's much higher than we expected. Also, our users took their analysis medals quite seriously. They complained very loudly if a false positive violation prevented them from reaching a given medal. "No problem," we used to reply, "just ignore the false positive and you'll get the medal you want." The invariable answer was something like: "But that will give me a chocolate medal. Chocolate medals are cheap - everyone can get one for free in a cereal box. I want a genuine medal!"
That revealed that our medals had great value in the eyes of our users. But it was also a huge burden. A static code analysis tool always produces false positives. For Insight, every false positive had to be fixed in the analysis engine to prevent chocolate medals. We ended up tweaking our engine very thoroughly, completely rewriting some rules to account for all the false positives reported by our users.
In retrospect, the chocolate medal idea was a mistake. When users consider a given violation a false positive, we can't possibly interfere with that - they know their codebase; we don't. When users tweak or disable some rules because different coding standards apply in their organization, we can't tell them they're wrong. Dealing with false positives took us away from adding new rules to the analysis engine and more features to Insight. Our rules must aim to cover 80% of cases, not 100% - chasing that last 20% is just way too expensive.
Today, we're retiring chocolate medals. Even if you tweak project configurations, even if you ignore violations, you will always get a genuine medal. And if you want to benchmark your projects against other projects, don't stop at the medal. Also look at the score, the number of ignored violations, and the project configuration.
That doesn't mean that we'll stop fixing false positives in our rules. But we'll certainly spend less time doing that, and more time implementing new rules and new features.