On Mon, 17 Jul 2017 16:15:53 +0300
Andrey Karpov <kar...@viva64.com> wrote:

Hi Andrey.

Before I get into this... I need to clarify something. I wear 2 hats. A
Samsung hat, and an Open Source developer hat. The first is directly
related to company projects like Tizen.

My other hat is the open source dev leader/founder etc. The open source
projects are contributed to by lots of people and I've written a vast
amount of the code myself. These projects have their own rules
irrespective of my Samsung hat, and have existed long before I was at
Samsung. Their long term health is my primary focus with this hat on.

They also may be used by Samsung, e.g. in Tizen. What I do see is that
they get heavily modified and forked off old codebases. The code quality
in Tizen's forks of enlightenment and efl would be light years better if
Tizen didn't fork as much (upstream, coverity complains about 0.06 bugs
per 1k lines for EFL and actually 0.00 for enlightenment, for example).

The open source projects have used coverity for a very long time
independently of anything to do with Samsung. This is why I know
coverity well and have been pleased with its reports and the results
overall.

PVS Studio is new to me. I knew of it and ignored it for years because
it was a Windows/Visual Studio only thing. That changed towards the end
of last year. It's on my list of things to check out. For now my
assessment is that it's roughly similar to coverity, but coverity
is leaps and bounds ahead with their scan user interface vs. your text
reports. That certainly makes coverity the more desirable tool right
now, BUT... I'm willing to look at PVS and see what it says. I just
have to take the time to do it, and I'm very busy.

So I wear 2 hats. What I respond with may be under either of those
hats.

> Hello Carsten,
> 
> Today I published another note related to Tizen.
> 
> Exploring Microoptimizations Using Tizen Code as an Example - 
> https://www.viva64.com/en/b/0520/

I just read it. I'd say about 50-75% of the issues are useful and
valuable. I think you split some that are essentially the same issue,
like pass by value vs. reference/ptr (in various situations). Some
issues seem specific to C++ land and not to C. Also be aware I spent 6
years doing pure assembly... I tend to view everything through those
glasses. Some of the issues, like struct size/packing, I just tend to
get right anyway unless it just isn't important or the layout is needed
for ABI. A checker would be good though for larger teams. Things like
passing by ptr I do by default in C, pretty much always except for
basic types (and then it depends on how many of them there are and
whether passing a ptr to them might be better). A certain number of
args get passed in registers, so sometimes passing by reference for
smallish sets is worse... depending on architecture/ABI... :)
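
To make the struct packing and pass-by-pointer points concrete, here's a
minimal C sketch (purely illustrative - the struct and function names are
made up, not taken from Tizen or EFL code):

  #include <stdio.h>

  /* Poor member ordering: padding after 'flag' and after 'id' typically
   * inflates this to 24 bytes on a common 64-bit ABI. */
  struct padded {
     char   flag;
     double weight;
     int    id;
  };

  /* Largest members first: usually 16 bytes, no wasted padding. */
  struct reordered {
     double weight;
     int    id;
     char   flag;
  };

  /* Passing a struct by value copies it on every call (bigger structs
   * spill to the stack); passing a const pointer copies one word. */
  static double scale_by_value(struct reordered s)      { return s.weight * s.id; }
  static double scale_by_ptr(const struct reordered *s) { return s->weight * s->id; }

  /* For one or two small scalar args, by-value is fine (and often
   * cheaper) because they travel in registers anyway - ABI dependent. */
  static int add_small(int a, int b) { return a + b; }

  int main(void)
  {
     struct reordered s = { 2.5, 4, 1 };

     printf("padded: %zu bytes, reordered: %zu bytes\n",
            sizeof(struct padded), sizeof(struct reordered));
     printf("%f %f %d\n", scale_by_value(s), scale_by_ptr(&s), add_small(2, 3));
     return 0;
  }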

But all in all I agree with many of the things you point out there.
Some of them I think have been obsoleted by compiler warnings like
"unused variables"... :)

> I wanted to show that the PVS-Studio analyzer can not only find
> errors, but also help make code slightly more compact and
> efficient. I just want to say that I am not calling for correcting
> code in tens of thousands of fragments. The old code is better left as
> it is. Nevertheless, when you write new code, these recommendations
> will be useful.

I think it's good to slowly go over old code and "fix it". That way any
bugs in the fixing can get caught in a "small region of development"
and easily addressed. If you just go fix every one at once... it gets
harder in "real life" to find.

> Now some comments on your letter.
> 
> I understand that there are always more important priorities than
> fixing potential errors. However, the trouble is that over time a
> potential error may suddenly become real and harmful. That is why I

Aaah now I wear my Samsung hat. At Samsung we have an internal static
analysis tool (SVACE). We're using that at the moment to point out
issues. We have in the past used prevent (coverity) too.

> suggest that Samsung not just consider buying our analyzer but hire us
> to do the "dirty work". We have experience with such engagements. For

Now here I put my open source hat on... my experience tells me that
the less well you know the code, the worse your fixes will be. Even
if you know it well, fixes sometimes add bugs. I think it's best to
have the teams that know their code fix THEIR code. They have the
context and big picture.

> example, we worked on the Unreal Engine project. Or we ported a big
> application to a 64-bit platform: How to Port a 9 Million Code Line
> Project to 64 bits? - https://www.viva64.com/en/b/0342/

I am sure you did. :)

> Generally, please take this the right way: we are not imposing a
> tool that you then have to fret over and spend time on. Our
> offer: for the money, we improve the code. Yes, anyone can use a
> license for PVS-Studio, but that's not the point. The main thing is
> that we improve the code.

Actually IMHO the tool is of more value than the service, but I need to
spend some time playing with it first (several months of using the
reports to find and fix issues) and then I'll have an idea of how good
a tool it is. I haven't gotten that far yet. I'd put on my open source
hat and do this on the upstream open source projects and see. I'd want
to do the fixes myself (or within the development team). Hands dirty.
That's my style. This is with my open source hat on, of course.

I'm open to PVS studio being a good tool. I just haven't tried it yet.

> At the same time we're not saying that you should give up using
> Coverity and other tools. We simply offer to use our analyzer
> additionally. :)
> 
>   ----
> Best regards,
> Andrey Karpov, Microsoft MVP,
> Ph.D. in Mathematics, CTO
> "Program Verification Systems" Co Ltd.
> URL: www.viva64.com
> E-Mail: kar...@viva64.com
> 
> 
> On 17.07.2017 7:17, Carsten Haitzler wrote:
> > On Fri, 14 Jul 2017 17:19:36 +0300
> > Andrey Karpov <kar...@viva64.com> wrote:
> >
> >> Hello Carsten,
> >>
> >> After publishing my article, several unreasonable news stories
> >> appeared on the Internet. That's why I'd like to note that in my
> >> article I didn't write about the bad/good quality of Tizen code or
> >> that the PVS-Studio analyzer is magic, best of the best. I only gave
> >> the numbers that I got. I can't talk about the quality of Tizen
> >> code, since I have insufficient data for this purpose. I understand
> >> clearly what you are talking about and agree with you.
> > Indeed that is one reason I ask for numbers that can be compared,
> > because when you publish some number, and that number is large (in
> > this case because the codebase you extrapolated for is large) ... a
> > lot of people can't see the forest for the trees.
> >
> > Don't get me wrong. Code quality is important. Fixing bugs is
> > important. Tools to point out "here is a potential issue" are very
> > useful. But you do need to stand back and see the possible urgency
> > in light of everything else, and that requires comparative numbers.
> >
> >> My objective was to show that despite the techniques already in
> >> use, the PVS-Studio analyzer can help make Tizen code better and
> >> more reliable. I think I managed to demonstrate this by pointing to
> >> the 900 fragments of code which, in my opinion, deserve attention,
> >> fixing and refactoring.
> > At this point, for me at least, the jury is out on PVS Studio. We
> > have internal tools and we're using those first on Tizen's codebase.
> > Whether PVS Studio is better than those tools or not, I'm not going
> > to comment, as I am not in a good position to do so.
> >
> > For the upstream projects Tizen depends on that I look after, I'm
> > VERY familiar with Coverity scan and, to a lesser extent, clang's
> > static analyser. I can say the text reports PVS studio produces pale
> > in comparison to what Coverity provides in the Web UI. It provides
> > code flow analysis (what code path was used to get there), which is
> > very useful in learning the "why", a full browsable view of the
> > code (so I don't keep having to match it back to the source
> > file/line when reading), and a collaborative environment where many
> > developers can work together live on the issue list and see what is
> > triaged by whom and when, with a log about it. BUT it offers
> > something I have not seen so far in the information you provided for
> > PVS Studio in the blog references here... the ability to say "No.
> > False positive. Tool - you're wrong. Here's why". Coverity lets you
> > do this... which lets you move on and not modify code that works
> > fine and is correct.
> >
> > My experience shows that a decent percentage of "fixes for warnings
> > from coverity" lead to ADDING bugs. I've seen it happen many times.
> > It made things WORSE. It went from a "theoretical bug but actually
> > due to environment will never happen" to "I wake up, update code
> > and suddenly app x, y, z are broken in some way and it was a fix
> > for a warning". This is also why I'm wary of having people
> > unfamiliar with code "fix it" based on such warnings. Even people
> > who know the code well mess up.
> >
> >> Unfortunately, the question "Include this as a bugs per 1 k lines
> >> of code or similar metric?" is not very clear.
> > It's simple. "This code has an average of 0.123 bugs per 1k lines of
> > code". But measure that on everything you analyse. You at least
> > have a ROUGH measuring stick. They may be all major exploitable
> > issues, maybe all minor unlikely ones or maybe some combination so
> > a rating system ultimately would score them based on that, BUT
> > let's keep it simple. If you quote a number, quote numbers for
> > everything you look at and to make it comparable... make it
> > proportional to size.
> >
> >> In my opinion, the article presents all the necessary data. I've
> >> got:
> >>
> >>    * The density of detected errors in code (c) 2015 Samsung
> >> Electronics: 0.41 errors per 1000 lines of code.
> > Your numbers say it's 0.375. 900 in 2.4 million lines of code.
> > 900/2400 = 0.375. The 0.41 is including false positives? Now we
> > have a report disagreeing with itself. It's small, but important.
> >
> > OK - found it now. I missed that paragraph towards the end (it is a
> > long article), but these numbers disagree with others you give
> > right at the top and in the headline (the headline number is an
> > extrapolation). Why do your numbers differ?
> >
> >>    * The density of detected errors in the third-party libraries:
> >> 0.36 errors per 1000 lines of code.
> >>
> >> (I did not count comments as lines of code).
> >>
> >> Can these data be incorrect? Yes, they can. This is not
> >> scientific research; it is a practical demonstration that the
> >> tool may be useful.
> >>
> >> Moreover, some errors may, in the Tizen developers' opinion, not be
> >> that erroneous. Well, at least there is no sense in fixing them.
> >> Then the density of detected errors will diminish.
> > This is another issue entirely - if the error is "worth worrying
> > about".
> >
> >> On the other hand, I might not have highlighted all the errors. I
> >> approached the study of the report very carefully, but without
> >> fanaticism. For example, I was a bit lazy about studying the V730
> >> warnings - https://www.viva64.com/en/w/V730/. This is very
> >> time-consuming and thankless work when you are dealing with someone
> >> else's code. It is always unclear whether it is dangerous or not
> >> that some class member has been left uninitialized. It is tedious,
> >> long labour that needs to be done carefully. So, perhaps, with more
> >> careful review of the log, other errors can be found.
> > Indeed it is time consuming to examine them all. I think you did a
> > good job of explaining what errors are there and why, in the general
> > sense, these are potential issues.
> >
> >> About the comparison with the quality of other projects...
> >> Difficult question. Please understand that when writing these
> >> articles we do not aim to compare which code is better or
> >> worse. Therefore, we usually stop when we have found enough
> > The problem is - when you publish numbers like you did, it becomes a
> > comparison thing. That's how people respond. Headlines like "27000
> > bugs found in Tizen!" are very much designed to catch attention with
> > numbers but when there is no context provided... the only thing
> > people see is a headline.
> >
> >> quite interesting bugs to write an article about. Performing a
> >> careful analysis of all warnings for a large project would take a
> >> lot of time. Please also consider that it is difficult and slow to
> >> deal with unfamiliar code. Therefore, we sometimes mention the
> >> density of errors only for small projects, since it is not very
> >> difficult to review the whole report. Examples:
> >>
> >>    * Notepad++: we detect about 2 errors per 1000 lines of code.
> >>      https://www.viva64.com/en/b/0511/
> > WOW. That's a lot. Thanks for the number.
> >
> >>    * Far Manager for Linux: we detect about 0.464 errors per 1000
> >> lines of code. https://www.viva64.com/en/b/0478/
> > Thanks. Your numbers are in line with what coverity indicates too.
> >
> >>    * Tor project: we did not find anything. Density:
> >>      0. https://www.viva64.com/en/b/0507/
> > Indeed that is good. I also noted openssl, last I checked on
> > coverity scan, had a density of 0 too.
> >
> >> As we can see, the results are different. However, it seems to me
> >> they are not worth concentrating on. A static analyzer is a tool for
> > They are a tool for prioritization. As I said - bugs are bugs and
> > should be fixed. Knowing if your bug count is particularly bad or
> > not compared to "the industry at large" etc. lets you know how much
> > effort or money or time to spend on these issues.
> >
> > I have taken a position in upstream projects to just slowly fix the
> > reported issues over a long period of time. Every release, fix some
> > more, so every release has the bug rate go down. So you mix feature
> > addition and static analysis triage, but focus on the feature
> > development etc. and less so on triage.
> >
> >> finding bugs in fresh code. Yes, old mistakes are also worth
> >> fixing, but generally they are not as critical as new ones.
> >> Actually, if an error has been in the code for several years, it
> >> means that it rarely reveals itself or interferes with no one.
> >> That's why it is more interesting to look to the future than the
> >> past. Sure, the PVS-Studio analyzer can be a good assistant for a
> >> programmer.
> > Indeed I agree there.
> >
> >> About the percentage of false positives: it makes no sense to talk
> >> about them without first configuring the analyzer. That is a lot of
> >> work, which we are ready to undertake if a cooperation begins
> >> someday. Can we deal with it? Yes, we can:
> > That sounds like a fair bit of work.
> >
> >>    *
> >> https://www.unrealengine.com/en-US/blog/how-pvs-studio-team-improved-unreal-engines-code
> >>    *
> >> https://www.unrealengine.com/en-US/blog/static-analysis-as-part-of-the-process
> >>
> >> I think when it comes to projects as large as Tizen, it makes sense
> >> to talk not only about product licensing, but also about substantial
> >> support provided by our team.
> >>
> >> P.S. On Monday I will demonstrate that the analyzer can be useful
> >> not only for finding bugs, but also for
> >> micro-optimizations. :)
> > Actually ... that does interest me. That's a new use of static
> > analysis. I think that this would be generally well received in
> > Tizen too. If PVS Studio can do this too... it just bumped up in
> > value for me. I'd like to see what these reports are and what they
> > point out etc.
> >
> >>    ----
> >> Best regards,
> >> Andrey Karpov, Microsoft MVP,
> >> Ph.D. in Mathematics, CTO
> >> "Program Verification Systems" Co Ltd.
> >>
> >>
> >> On 14.07.2017 4:35, Carsten Haitzler wrote:
> >>> On Thu, 13 Jul 2017 14:26:35 +0300
> >>> Andrey Karpov <kar...@viva64.com> wrote:
> >>>
> >>> Could you:
> >>>
> >>> 1. Include this as a bugs per 1k lines of code or similar metric?
> >>> Total bugs is not that useful without knowing total size of code
> >>> looked at. At least in the summary.
> >>> 2. Include metrics calculated similarly for other major projects
> >>> (Linux kernel, etc. etc.).
> >>>
> >>> Why? The below is like saying "you're doing 120km/h!!!!!!" ... but
> >>> if it's on a freeway and the speed limit is 130km/h ... in context
> >>> it's very different. This here lacks context.
> >>>
> >>> As I haven't used PVS studio before (it's on a list of things to
> >>> try out and see if it's good), but I do know Coverity's scan
> >>> service very well, I'll do some back of a napkin numbers:
> >>>
> >>> 1. In my experience about ~10-15% of bugs are false positives etc.
> >>> with coverity.
> >>> 2. Coverity says the Linux kernel gets 0.48 issues per 1k lines of
> >>> code. Applying the above false positive rate, let's call that
> >>> 0.40. Qt gets 0.72, so let's call that 0.61 adjusting for false
> >>> positives. Glib gets 0.45, so 0.38 accounting for false
> >>> positives. So:
> >>>
> >>> With your numbers, Tizen sees 900 issues in 2.4 million lines of
> >>> code. That comes out at 0.38.
> >>>
> >>>      Linux kernel = 0.40
> >>>      Qt           = 0.61
> >>>      Glib         = 0.38
> >>>      Tizen        = 0.38
> >>>
> >>> Yes PVS studio is a different tool to coverity. I'm making an
> >>> assumption (much like you do too in many ways) that these two
> >>> tools are in the same ballpark and will report similar issues and
> >>> numbers, but may be disjoint sets. I'm going with this assumption
> >>> because you didn't provide other numbers to go by, and it'd be
> >>> nice to.
> >>>
> >>> My conclusion is that Tizen code quality is pretty decent in the
> >>> scheme of things. Its bug rate is pretty low-ish.
> >>>
> >>> Now on the other side, it's always great to have tools point out
> >>> possible errors. Another tool is another weapon in a war chest to
> >>> improve code quality. That's a good thing. Bugs should be looked
> >>> into and addressed accordingly based on actual severity and
> >>> context. Just blindly fixing issues will result in misallocation
> >>> of time and resources, because it may be an issue in a debug tool
> >>> that is rarely used and only for gathering quick information by a
> >>> developer when something goes wrong... or it may be a seriously
> >>> exploitable bug in code that is always able to be triggered
> >>> remotely. So context is important. Knowing issues are there and
> >>> what a tool thinks they are is a great speedup vs. full code
> >>> review. PVS Studio is indeed such a tool. There are others too.
> >>> We have tools of our own we're using more and more.
> >>>
> >>>
> >>>
> >>>> Hello All,
> >>>>
> >>>> This article will demonstrate that during the development of
> >>>> large projects static analysis is not just useful, but a
> >>>> completely necessary part of the development process. This
> >>>> article is the first in a series of posts devoted to the
> >>>> ability of the PVS-Studio static analyzer to improve the quality
> >>>> and reliability of the Tizen operating system. For a start, I
> >>>> checked a small part of the code of the operating system (3.3%)
> >>>> and noted down about 900 warnings pointing to real errors. If we
> >>>> extrapolate the results, we will see that our team is able to
> >>>> detect and fix about 27000 errors in Tizen. Using the results of
> >>>> the study, I made a presentation to show to the Samsung
> >>>> representatives, with offers about possible cooperation. The
> >>>> meeting was postponed, which is why I decided not to waste time
> >>>> and turned the material of the presentation into an article:
> >>>> https://www.viva64.com/en/b/0519/
> >>>>
> >>>> ----
> >>>> Best regards,
> >>>> Andrey Karpov, Microsoft MVP,
> >>>> Ph.D. in Mathematics, CTO
> >>>> "Program Verification Systems" Co Ltd.
> >>>>
> >
> 

_______________________________________________
Dev mailing list
Dev@lists.tizen.org
https://lists.tizen.org/listinfo/dev
