On Thu, 13 Jul 2017 14:26:35 +0300 Andrey Karpov <kar...@viva64.com> wrote:
Could you:

1. Include this as a bugs-per-1k-lines-of-code or similar metric? Total bugs is not that useful without knowing the total size of the code looked at. At least in the summary.

2. Include metrics calculated similarly for other major projects (Linux kernel, etc.)?

Why? The below is like saying "you're doing 120km/h!!!!!!" ... but if it's on a freeway and the speed limit is 130km/h, in context it's very different. This here lacks context.

I haven't used PVS-Studio before (it's on my list of things to try out and see if it's good), but I do know Coverity's scan service very well, so I'll do some back-of-a-napkin numbers:

1. In my experience, about 10-15% of issues reported by Coverity are false positives.

2. Coverity says the Linux kernel gets 0.48 issues per 1k lines of code; applying the above false-positive rate, let's call that 0.40. Qt gets 0.72, so let's call that 0.61 adjusting for false positives. Glib gets 0.45, so 0.38 accounting for false positives.

With your numbers, Tizen sees 900 issues in 2.4 million lines of code. That comes out at 0.38. So:

Linux kernel = 0.40
Qt = 0.61
Glib = 0.38
Tizen = 0.38

Yes, PVS-Studio is a different tool to Coverity. I'm making an assumption (much like you do too, in many ways) that these two tools are in the same ballpark and will report similar issues and numbers, though they may be disjoint sets. I'm going with this assumption because you didn't provide other numbers to go by, and it'd be nice to have some.

My conclusion is that Tizen code quality is pretty decent in the scheme of things. Its bug rate is pretty low-ish.

Now, on the other side, it's always great to have tools point out possible errors. Another tool is another weapon in a war chest to improve code quality. That's a good thing. Bugs should be looked into and addressed accordingly based on actual severity and context.
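For what it's worth, here is the napkin math above as a small Python sketch. The 15% false-positive rate is my own rough assumption from experience with Coverity scan, not a measured figure, and the per-project densities are the Coverity scan numbers quoted above:

```python
# Back-of-the-napkin defect density comparison.
# ASSUMPTION: a flat ~15% false-positive rate, taken from my rough
# experience with Coverity scan; real rates vary per project and tool.
FALSE_POSITIVE_RATE = 0.15

def density_per_kloc(issues, lines_of_code):
    """Raw reported issues per 1,000 lines of code."""
    return issues / (lines_of_code / 1000.0)

def adjusted(density):
    """Density after discounting assumed false positives."""
    return density * (1 - FALSE_POSITIVE_RATE)

# Tizen: 900 PVS-Studio issues in 2.4 million lines (raw, unadjusted).
tizen = density_per_kloc(900, 2_400_000)   # 0.375, i.e. ~0.38

# Coverity scan figures quoted above are already per-kLOC densities,
# so they only need the false-positive adjustment.
linux = adjusted(0.48)   # ~0.41 (the email rounds this to 0.40)
qt    = adjusted(0.72)   # ~0.61
glib  = adjusted(0.45)   # ~0.38

for name, d in [("Tizen", tizen), ("Linux", linux), ("Qt", qt), ("Glib", glib)]:
    print(f"{name:6s} {d:.2f}")
```

Running it reproduces the table above to two decimal places; the point is only that Tizen's raw rate sits in the same band as the adjusted rates of the other projects.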
Just blindly fixing issues will result in misallocation of time and resources, because an issue may be in a debug tool that is rarely used and only for gathering quick information by a developer when something goes wrong... or it may be a seriously exploitable bug in code that can always be triggered remotely. So context is important.

Knowing issues are there, and what a tool thinks they are, is a great speedup vs. a full code review. PVS-Studio is indeed such a tool. There are others too. We have tools of our own that we're using more and more.

> Hello All,
>
> This article will demonstrate that during the development of large
> projects static analysis is not just useful, but a completely
> necessary part of the development process. This article is the first
> one in a series of posts devoted to the ability to use the PVS-Studio
> static analyzer to improve the quality and reliability of the Tizen
> operating system. For a start, I checked a small part of the code of
> the operating system (3.3%) and noted down about 900 warnings
> pointing to real errors. If we extrapolate the results, we will see
> that our team is able to detect and fix about 27000 errors in Tizen.
> Using the results of the conducted study, I made a presentation for
> the demonstration to the Samsung representatives with the offers
> about possible cooperation. The meeting was postponed, that is why I
> decided not to waste time and transform the material of the
> presentation to an article: https://www.viva64.com/en/b/0519/
>
> ----
> Best regards,
> Andrey Karpov, Microsoft MVP,
> Ph.D. in Mathematics, CTO
> "Program Verification Systems" Co Ltd.
>
> _______________________________________________
> Dev mailing list
> Dev@lists.tizen.org
> https://lists.tizen.org/listinfo/dev

_______________________________________________
Dev mailing list
Dev@lists.tizen.org
https://lists.tizen.org/listinfo/dev