Re: Is finding security holes a good idea?
On Wed, Jun 16, 2004 at 02:12:18PM -0700, Eric Rescorla wrote:
>
> Let's assume for the sake of argument that two people auditing
> the same code section will find the same set of bugs. So, how
> to account for the fact that obvious errors persist for long
> periods of time in popular code bases? It must be that those
> sections were never properly audited, since by hypothesis
> the bugs are obvious and yet were not found. However, this
> happens fairly often, which suggests that coverage must
> be pretty bad. Accordingly, it's easy to see how you could
> get low re-finding rates even if people roughly think alike.

Actually, I think that in this regard the answer lies in a sort of critical mass of interest. Ideas about where to look for what sort of bug (or which sections of code to revisit, perhaps because a relevant new fundamental technique has appeared in the literature, or someone else's parser has added a popular feature, or because a protocol specification has changed) tend to propagate through the population of experts who might find bugs: slowly at first, and then faster and faster, probably exponentially.

This suggests to me that if I happen to be out in front of the curve in casting my eyes over code fragment _X_ or _Y_ (either because of random luck or because I'm at the front of the wave of interest), I'd really better fix what I can, and make that fix public; because trailing out behind me are many, many more people, each of whom has _some_ chance -- correlated in a nonzero way with my own discovery -- of finding the same bug.

I think that this sort of thing is going to turn out to be _very_ hard to tease out evidence for or against using naive studies of bug commission, discovery, or rediscovery rates; but it is my intuition, based on many years of making, finding, and fixing bugs, and of watching others eventually redo my work in the cases where I'd failed to let them know about it. I would argue that in fact this pattern is not the exception; it is the rule.

Thor

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
Thor Lancelot Simon <[EMAIL PROTECTED]> writes:
> On Tue, Jun 15, 2004 at 09:37:42PM -0700, Eric Rescorla wrote:
> If you won't grant that humans experienced in a given field tend to think
> in similar ways, fine. We'll just have to agree to disagree; but I think
> you'll have a hard time making your case to anyone who _does_ believe that,
> which I think is most people. If you do grant it, I think it behooves you
> to explain why you don't believe that's the case as regards finding bugs;
> or to withdraw your original claim, which is contingent upon it.

I'm sorry, but I don't think this follows at all.

Let's assume for the sake of argument that two people auditing the same code section will find the same set of bugs. So, how to account for the fact that obvious errors persist for long periods of time in popular code bases? It must be that those sections were never properly audited, since by hypothesis the bugs are obvious and yet were not found. However, this happens fairly often, which suggests that coverage must be pretty bad. Accordingly, it's easy to see how you could get low re-finding rates even if people roughly think alike.

Now, you could argue that because people think alike, everyone looks at the exact same sections of the code, but I think that this is belied by the fact that many of these self-same obvious bugs are found in obvious places, such as protocol parsers.

So, while I think it's almost certainly not true that bug finding order is completely random, I think it's quite plausible that it's mostly random. Ultimately, however, it's an empirical question and I'd be quite interested in seeing some studies on it.

I think I've said enough on this general topic. If you'd like to have the last word, feel free.

-Ekr

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
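A toy Monte Carlo sketch of the bad-coverage argument above (my own illustration; every number here is hypothetical, not from the thread). Grant Ekr's assumption that anyone who examines a section finds its bug, but let each auditor examine only a small slice of a large code base: the rate at which a second, independent auditor re-finds the first one's bugs stays low even though the two "think alike" perfectly.

import random

SECTIONS = 2000    # sections of code, each hiding one latent bug (hypothetical)
COVERAGE = 100     # sections a single auditor has time to examine (hypothetical)
P_FIND   = 1.0     # per the assumption above: a bug in an examined section is always found
TRIALS   = 10000

def audit():
    """Return the set of bugs one auditor finds with random, shallow coverage."""
    examined = random.sample(range(SECTIONS), COVERAGE)
    return {s for s in examined if random.random() < P_FIND}

found_first, refound = 0, 0
for _ in range(TRIALS):
    a, b = audit(), audit()
    found_first += len(a)
    refound += len(a & b)

print("bugs found per auditor (avg): %.1f" % (found_first / TRIALS))
print("share re-found by an independent second auditor: %.1f%%"
      % (100.0 * refound / found_first))

With these made-up numbers each auditor finds about 100 bugs, yet only around 5% of them are re-found by the second auditor; lowering COVERAGE or P_FIND pushes the overlap down further.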
Hiawatha's research
"Hiawatha's Research" Jason Holt <[EMAIL PROTECTED]> June, 2004, released into the public domain. Dedicated to Eric Rescorla, with apologies to Longfellow. ("E. Rescorla" may be substituted for "Hiawatha" throughout.) Hiawatha, academic, he could start ten research papers, start them with such mighty study, that the last had left his printer, ere the first deadline extended. Then, to serve the greater purpose, he would post these master papers, post them with such speed and swiftness, to gain feedback from his cohorts, for their mighty learned comments. from his printer, Hiawatha took his publication paper, sent it to the preprint archive, sent it out to all the newsgroups Then he waited, watching, listening, for the erudite discussion, for the kudos and the errors, that the others soon would send him. But in this my Hiawatha was most cruelly mistaken, for not one did read his papers, not one got past the simple abstract. Still did they all grab their keyboards, writing with great flaming fury of the folly of his venture, of his paper's great misgiving. Of his obvious omissions, of his great misunderstandings, of his utter lack of vision, of his blatant plagiarism. (This last point he found most galling, found it really quite dumbfounding, since for prior art, he'd listed ninety-three related papers.) Now the mighty Hiawatha, in his office still is sitting, contemplating on his research, thinking on his chosen topic. Wondering, in idle moments, if he had not chosen wrongly, the position he had taken as a research paper author And he thinks, my Hiawatha, if he might not have been better served by a more lowly station, as a cashier at McDonalds, as a washer at the car wash, as a cleaner of the bathrooms. Thus departs my Hiawatha. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
At 08:40 AM 6/16/04 -0700, Eric Rescorla wrote:
>> the search patterns used by blackhats - we are all human and are likely
>> to be drawn to similar bugs.

Prof. Nancy Leveson once did a study where separate teams coded solutions to the same problem. The different teams' code often erred in the same places (e.g., corner cases). This was taken as an argument against N-version programming, IIRC. It supports the argument that H. sapiens are susceptible to common cognitive flaws.

While this was a code *generation and test* experiment, it does bear on the "evaluate for bugs" question too.

As far as whether finding holes is a good idea, remember that the Pros do not report what they find. No means or methods, remember?

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
On Tue, Jun 15, 2004 at 09:37:42PM -0700, Eric Rescorla wrote:
> "Arnold G. Reinhold" <[EMAIL PROTECTED]> writes:
> > My other concern with the thesis that finding security holes is a bad
> > idea is that it treats the Black Hats as a monolithic group. I would
> > divide them into three categories: ego hackers, petty criminals, and
> > high-threat attackers (terrorists, organized criminals and evil
> > governments). The high-threat attackers are likely accumulating
> > vulnerabilities for later use. With the spread of programming
> > knowledge to places where labor is cheap, one can imagine very
> > dangerous systematic efforts to find security holes. In this context
> > the mere ego hackers might be thought of as beta testers for IT
> > security. We'd better keep fixing the bugs.
>
> This only follows if there's a high degree of overlap between the
> bugs that the black hats find and the bugs that white hats would
> find in their auditing efforts. That's precisely what is at
> issue.

Indeed it is -- and unless I misunderstand, you're claiming that there is _not_ such a degree of overlap.

I think most people would tend to agree that humans working in the same field generally work in similar ways; some, of course, are innovative and exceptional, but in general most run-of-the-mill system programmers have a lot of the same tools in their mental toolboxes and use them in much the same way; and some of the time, even the innovative and exceptional ones work in the same way as us drudges.

This, to me, makes your claim extremely counterintuitive and questionable; it contradicts not only my intuition but my experience. I can't even begin to count the number of bugs I've found by inspection of code (with some other purpose in mind), forgotten to tell coworkers about or to fix "right" such that the fixes could be committed, and then seen others discover when they happened to cast their eyes over the same code fragment days, weeks, or months later. And I have deliberately audited large sections of code, prepared fixes, paused a couple of days or weeks to test my results, and seen others deliberately or accidentally find and fix (or, worse, exploit) the same bugs I'd laboriously churned up.

If you won't grant that humans experienced in a given field tend to think in similar ways, fine. We'll just have to agree to disagree; but I think you'll have a hard time making your case to anyone who _does_ believe that, which I think is most people. If you do grant it, I think it behooves you to explain why you don't believe that's the case as regards finding bugs; or to withdraw your original claim, which is contingent upon it.

Thor

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
Damien Miller <[EMAIL PROTECTED]> writes:
> Eric Rescorla wrote:
>> I don't think that's clear at all. It could be purely stochastic.
>> I.e. you look at a section of code, you find the bug with some
>> probability. However, there's a lot of code and the auditing
>> coverage isn't very deep so bugs persist for a long time.
>
> I suspect that auditing coverage is usually going to be very similar to
> the search patterns used by blackhats - we are all human and are likely
> to be drawn to similar bugs. Auditing may therefore yield a superlinear
> return on effort. Is that enough to make it a "good idea"?

I agree that this is a possibility. We'd need further research to know if it's in fact correct.

-Ekr

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
Eric Rescorla wrote:
>> I don't find that argument at all convincing. After all, these bugs *are*
>> being found!
>
> Well, SOME bugs are being found. I don't know what you mean by
> "these" bugs. We don't have any real good information about
> the bugs that haven't been found. What makes you think that
> there aren't 5x as many bugs still in the code that are basically
> like the ones you've found?

If developers are just treating bugs as isolated defects, and not searching for typologies of problems, then this may be true. If we search for common problems and repair all that we find, then we will do better. In many cases, this doesn't take as much additional work as finding the first instance did. Sometimes it can be as little work as writing a regexp.

Of course, not all bugs are as easy as replacing strcpy (e.g. integer overflows are subtle), but this approach is working. What was the last trivial buffer overflow in a *BSD?

> I don't think that's clear at all. It could be purely stochastic.
> I.e. you look at a section of code, you find the bug with some
> probability. However, there's a lot of code and the auditing
> coverage isn't very deep so bugs persist for a long time.

I suspect that auditing coverage is usually going to be very similar to the search patterns used by blackhats - we are all human and are likely to be drawn to similar bugs. Auditing may therefore yield a superlinear return on effort. Is that enough to make it a "good idea"?

-d

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
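A rough sketch of the kind of "typology" sweep Damien describes -- one regexp applied across an entire source tree to surface every instance of a bug class at once. The file-extension filter and the particular list of unbounded C string functions are illustrative assumptions, not anything from his message.

import os
import re
import sys

# Classic unbounded C string functions; any match is a candidate for review.
RISKY_CALL = re.compile(r'\b(strcpy|strcat|sprintf|gets)\s*\(')

def scan_tree(root):
    """Yield (path, line number, line) for every risky-looking call under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(('.c', '.h')):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors='replace') as src:
                for lineno, line in enumerate(src, 1):
                    if RISKY_CALL.search(line):
                        yield path, lineno, line.rstrip()

if __name__ == '__main__':
    root = sys.argv[1] if len(sys.argv) > 1 else '.'
    for path, lineno, line in scan_tree(root):
        print('%s:%d: %s' % (path, lineno, line))

Run over a source tree, this prints every match with its location; the payoff Damien points to is that once one instance of the pattern has been understood, the rest of the class can be found mechanically and repaired together.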
Interview with Glenn Henry, founder of VIA processor subsidiary Centaur (fwd from [EMAIL PROTECTED])
From: Eugen Leitl <[EMAIL PROTECTED]>
Subject: Interview with Glenn Henry, founder of VIA processor subsidiary Centaur
To: [EMAIL PROTECTED]
Date: Tue, 15 Jun 2004 18:51:21 +0200

http://linuxdevices.com/articles/AT2656883479.html

[ker-snip]

The third one is one you haven't asked me about -- this is actually my pet hobby, here -- we've added these fully sophisticated and very powerful security instructions into the...

Q19: That was my last question!

A19: So the classic question is, hey, you built some hardware, who's going to use it? Well, the answer is, six months after we first started shipping our product with encryption in it [story], we have three or four operating systems, including Linux, OpenBSD, and FreeBSD, directly supporting our security features in the kernel. Getting support that quickly can't happen in the Microsoft world. Maybe they'll support it someday, maybe they won't. Quite honestly, if you want to build it, and hope that someone will come, you've got to count on something like the free software world. Free software makes it very easy for people to add functionality. You've got extremely talented, motivated people in the free software world who, if they think it's right to do it, will do it. That was my strategy with security. We didn't have to justify it, because it's my hobby, so we did it. But, it would have been hard to justify these new hardware things without a software plan. My theory was simple: if we do it, and we do it right, it will appeal to the really knowledgeable security guys, most of whom live in the free software world. And those guys, if they like it, and see it's right, then they will support it. And they have the wherewithal to support it, because of the way open software works.

So those are my three themes, ignoring the fourth one, that's obvious: that without competition, Windows would cost even more. To summarize, for our business, [Linux is] important because it allows us to build lower-cost PC platforms, it allows people to build new, more sophisticated embedded applications easier, and it allows us, without any software costs, to add new features that we think are important to the world. Our next processor -- I haven't ever told anyone, so I won't say what it is -- but our next processor has even more things in it that I think will be just as quickly adopted by the open source software world, and provide even more value.

It's always bothered me that hardware can do so many things relatively easily and fast that aren't done today because there's no software to support it. We just decided to try to break the mold. We were going to do hardware that, literally, had no software support at the start. And now the software is there, in several variations, and people are starting to use it. I actually think that's only going to happen in the open source world.

Q20: We'd like a few words from you about your security strategy, how you've been putting security in the chips, and so on.

A20: Securing one's information and data is sort of fundamental to the human need -- it's certainly fundamental to business needs. With the current world, in which everyone's attached to the Internet -- with most peoples' machines having back-door holes in them, whether they know it or not -- and with all the wireless stuff going on, people's data, whether they know it or not, is relatively insecure. The people who know that are using secure operating systems, and they're encrypting their data. Encrypting of data's been around for a long time.
We believe, though, that this should be a pervasive thing that should appear on all platforms, and should be built into all things.

It turns out, though, that security features are all computationally intensive. That's what they do. They take the bits and grind them up using computations, in a way that makes it hard to un-grind them. So, we said, they're a perfect candidate for hardware. They're well-defined, they're not very big, they run much faster in hardware than in software -- 10 to 30 times, in the examples we use. And, they are so fundamental, that we should add the basic primitives to our processor.

How did we know what to add? We added government standards. The U.S. government has done extensive work on standardizing the encryption protocols, secure digital signature protocols, secure hash protocols. We used the most modern of government standards, built the basic functions into our chip, and did it in such a way that made it very easy for software to use.

Every time you send an email, every time you send a file to someone, that data should be encrypted. It's going out on the Internet, where anyone with half a brain can steal it. Second, if you really care about not letting people have access to certain data that's on your hard drive, it ought to be encrypted, because half the PCs these days have some, I don't know what the right word is, some "spy" built into it, through a virus or worm, that can steal data and pass it back. You
Re: Is finding security holes a good idea?
"Arnold G. Reinhold" <[EMAIL PROTECTED]> writes: > My other concern with the thesis that finding security holes is a bad > idea is that it treats the Black Hats as a monolithic group. I would > divide them into three categories: ego hackers, petty criminals, and > high-threat attackers (terrorists, organized criminals and evil > governments). The high-threat attackers are likely accumulating > vulnerabilities for later use. With the spread of programming > knowledge to places where labor is cheap, one can imagine very > dangerous systematic efforts to find security holes. In this context > the mere ego hackers might be thought of as beta testers for IT > security. We'd better keep fixing the bugs. This only follows if there's a high degree of overlap between the bugs that the black hats find and the bugs that white hats would find in their auditing efforts. That's precisely what is at issue. -Ekr - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
Jerrold Leichter <[EMAIL PROTECTED]> writes:
> | Thor Lancelot Simon <[EMAIL PROTECTED]> writes:
> |
> | > On Mon, Jun 14, 2004 at 08:07:11AM -0700, Eric Rescorla wrote:
> | >> Roughly speaking:
> | >> If I as a White Hat find a bug and then don't tell anyone, there's no
> | >> reason to believe it will result in any intrusions. The bug has to
> | >
> | > I don't believe that the premise above is valid. To believe it, I think
> | > I'd have to hold that there were no correlation between bugs I found and
> | > bugs that others were likely to find; and a lot of experience tells me
> | > very much the opposite.
> |
> | The extent to which bugs are independently rediscovered is certainly
> | an open question which hasn't received enough study. However, the
> | fact that relatively obvious and serious bugs seem to persist for
> | long periods of time (years) in code bases without being found
> | in the open literature, suggests that there's a fair amount of
> | independence.
>
> I don't find that argument at all convincing. After all, these bugs *are*
> being found!

Well, SOME bugs are being found. I don't know what you mean by "these" bugs. We don't have any real good information about the bugs that haven't been found. What makes you think that there aren't 5x as many bugs still in the code that are basically like the ones you've found?

> It's clear that having access to the sources is not, in and of itself,
> sufficient to make these bugs visible (else the developers of closed-source
> software would find them long before independent white- or black-hats).

I don't think that's clear at all. It could be purely stochastic. I.e. you look at a section of code, you find the bug with some probability. However, there's a lot of code and the auditing coverage isn't very deep, so bugs persist for a long time.

-Ekr

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Is finding security holes a good idea?
"The Mythical Man-Month" is a great book, but it's almost 30 years old. Brooks considered OS/360 to be hopelessly bloated. My favorite quote (from Chapter 5, The Second System Effect, p. 56): "For example, OS/360 devotes 26 bytes of the permanently resident date-turnover routine to the proper handling of December 31 on leap years (when it is Day 366). That might have been left to the operator." Modern operating system are 2 to 3 orders of magnitude larger than OS/360.. They are far more reliable than OS/360 was in its early days and do not presume the availability of an on-site team of operators and system programmers. For the most part they are still maintained one bug at a time The bug fixing process has not reached Brook's predicted crisis. My other concern with the thesis that finding security holes is a bad idea is that it treats the Black Hats as a monolithic group. I would divide them into three categories: ego hackers, petty criminals, and high-threat attackers (terrorists, organized criminals and evil governments). The high-threat attackers are likely accumulating vulnerabilities for later use. With the spread of programming knowledge to places where labor is cheap, one can imagine very dangerous systematic efforts to find security holes. In this context the mere ego hackers might be thought of as beta testers for IT security. We'd better keep fixing the bugs. Arnold Reinhold At 5:10 PM -0400 6/14/04, Steven M. Bellovin wrote: In message <[EMAIL PROTECTED]>, Ben Laurie writes: What you _may_ have shown is that there's an infinite number of bugs in any particularly piece of s/w. I find that hard to believe, too :-) Or rather, that the patch process introduces new bugs. Let me quote from Fred Brooks' "Mythical Man-Month", Chapter 11: The fundamental problem with program administration is that fixing a defect has a substantial (20-50 percent) chance of introducing another. So the whole process is two steps forward and one step back. Why arene't defects fixed more cleanly? First, even a subtle defect shows itself as a local failure of some kind. In fact it often has system-wide ramifications, usually nonobvious. Any attempt to fix it with minimum effort will repair the local and obvious, but unless the structure is pure or the documentation very fine, the far-reaching effects of the repair will be overlooked. Second, the repairer is usually not the man who wrote the code, and often he is a junior programmer or trainee. As a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement written than any other programming. ... Lehman and Belady have studied the history of successive release in a large operating system. They find that the total number of modules increases linearly with release number, but that the number of modules affected increases exponentially with release number. All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes. As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Etc. 
In other words, though the original code may not have had an infinite number of bugs, the code over time will produce an infinite series - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
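A back-of-the-envelope reading of Brooks's 20-50 percent figure (a toy model of mine, not anything from the thread): if each fix independently introduces a new defect with probability p, then clearing one original defect costs an expected 1 + p + p^2 + ... = 1/(1 - p) fixes, i.e. two fixes per defect at the top of Brooks's range. The sketch below just checks that arithmetic by simulation; Brooks's darker point is that p effectively climbs as repeated repairs destroy the structure, which is when the series stops converging.

import random

def fixes_needed(p):
    """Count the fixes needed to clear one defect plus the defects those fixes introduce."""
    open_defects, fixes = 1, 0
    while open_defects:
        open_defects -= 1
        fixes += 1
        if random.random() < p:   # this fix introduced a fresh defect
            open_defects += 1
    return fixes

for p in (0.2, 0.35, 0.5):
    trials = [fixes_needed(p) for _ in range(100000)]
    print("p = %.2f: %.2f fixes per original defect (theory %.2f)"
          % (p, sum(trials) / len(trials), 1.0 / (1.0 - p)))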