Re: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]
| On 8/28/06, Ondrej Mikle <[EMAIL PROTECTED]> wrote: | > Take as an example group of Z_p* with p prime (in another words: DLP). | > The triplet (Z, p, generator g) is a compression of a string of p-1 | > numbers, each number about log2(p) bits. | | Pardon my mathematical ignorance, but isn't Z just a notation to indicate | a ring, as opposed to a parameter that you'd have to store?

Z is universally used to represent the integers. (From Zahlen, German for numbers.) In printed mathematics, Z used this way is taken from a special "blackboard bold" font. A common representation uses two parallel strokes for the Z, with somewhat thickened horizontal bars. (Back when math was typed on a typewriter, you produced this by typing Z, backspacing almost but not quite all the way, then typing it again.)

The same font is also used for the reals (R), the rationals (Q - from quotient?) and the complexes (C). The quaternions are less common, but you'll sometimes see an H from this font (for Hamilton) to name them. N is sometimes used for the natural numbers (positive integers). (The naturals are not much used beyond elementary-school texts.) The other letters in the font have no universal meaning, but get used in specialized areas. I think I've seen a blackboard bold A used for an affine space, for example.

In all cases, the "usual" operations are assumed, so R is the reals as a complete ordered field, Z is the ring of integers under the usual addition and multiplication (with the usual ordering, though there is no common formal name I know of for a ring with an associated ordering), and so on.

There are a bunch of associated notions, like Z_n (_ for subscript - TeX notation) for the group of integers mod n under addition. When p is a prime, Z_p^* (^ for superscript) denotes the group of integers 1..p-1 under multiplication mod p. Z_n is actually a ring under addition and multiplication mod n, and Z_p a field, and where appropriate, they are taken to be so. 
Q_p is the p-adics, but that's getting specialized. In ASCII, we don't of course have blackboard bold fonts, but Z is mainly taken to be the integers, and Z_p is universally taken to be the integers mod p, in discussions even vaguely related to integer properties. R and the others are less commonly used, and you'd have to understand the context. Mr. Mikle's notation is ... a bit odd. What else might one conceivably substitute for the integers in (Z, p, generator g)? If it has to be the integers, why describe this as a triplet? -- Jerry - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
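A quick sketch of the Z_n / Z_p distinction Jerry describes: Z_n is a ring under addition and multiplication mod n, but Z_p is a field exactly when p is prime, because then every nonzero element has a multiplicative inverse. (The function name `units` here is just an illustrative choice.)

```python
# Z_n is a ring mod n; Z_p is a field exactly when p is prime,
# because then every nonzero element has a multiplicative inverse.
def units(n):
    """Elements of Z_n that have a multiplicative inverse mod n."""
    return [a for a in range(1, n) if any((a * b) % n == 1 for b in range(1, n))]

print(units(7))   # [1, 2, 3, 4, 5, 6] -> every nonzero element invertible: Z_7 is a field
print(units(6))   # [1, 5]             -> 2, 3, 4 have no inverse mod 6: Z_6 is not a field
```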
Re: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]
On 8/28/06, Ondrej Mikle <[EMAIL PROTECTED]> wrote: Take as an example group of Z_p* with p prime (in another words: DLP). The triplet (Z, p, generator g) is a compression of a string of p-1 numbers, each number about log2(p) bits. Pardon my mathematical ignorance, but isn't Z just a notation to indicate a ring, as opposed to a parameter that you'd have to store? -- "If you're not part of the solution, you're part of the precipitate." Unix "guru" for rent or hire -><- http://www.lightconsulting.com/~travis/ GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484
Re: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]
Dave Korn asked: > Is it *necessarily* the case that /any/ > polynomial of log N /necessarily/ grows slower than N? Yes. Hint: L'Hôpital's rule. > if P(x)==e^(2x) That's not a polynomial. x^Q is a polynomial. Q^x is not.
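A sketch of the limit argument the L'Hôpital hint points at, for any fixed power k of log N:

```latex
% Substitute N = e^x; then repeated application of L'Hopital's rule gives
\lim_{N\to\infty} \frac{(\log N)^k}{N}
  = \lim_{x\to\infty} \frac{x^k}{e^x}
  = \lim_{x\to\infty} \frac{k\,x^{k-1}}{e^x}
  = \cdots
  = \lim_{x\to\infty} \frac{k!}{e^x} = 0.
% A polynomial P(log N) is a finite sum of such terms (times constants),
% so P(log N)/N -> 0: every polynomial of log N grows slower than N.
```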
Re: Hypothesis: PGP backdoor (was: A security bug in PGP products?)
On 8/23/06, Ondrej Mikle <[EMAIL PROTECTED]> wrote: We discussed with V. Klima about the "recent" bug in PGPdisk that allowed extraction of key and data without the knowledge of passphrase. I skimmed the URL and it appears this claim was answered several times in the original thread. Did you not read it, or not understand it? You have to have a valid passphrase from before the change, because the passphrase unlocks the disk key which doesn't change, unless you explicitly tell it to. -- "If you're not part of the solution, you're part of the precipitate." Unix "guru" for rent or hire -><- http://www.lightconsulting.com/~travis/ GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484
RE: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]
On 28 August 2006 17:12, Ondrej Mikle wrote: > We are both talking about the same thing :-) Oh! > I am not saying there is a finite deterministic algorithm to compress > every string into "small space", there isn't. BTW, thanks for "There > is ***NO*** way round the counting theory." :-) > > All I wanted to say is: > For a specific structure (e.g. movie, picture, sound) there is some > "good" compression algorithm. > > E.g.: if you take a GIF 65536x65536, all white, with just one pixel > black, it can be compressed into 35 bytes, see here: > http://i.iinfo.cz/urs-att/gif3_6-115626056883166.gif > If you wanted to compress the same picture using JPEG (i.e. discrete > cosine transform), then two things would happen: > > The compressed jpeg file would be a) much bigger b) decompressed image > would have "artifacts", because Fourier transform of a pulse is sync > (infinitely many frequencies). Sure, JPEG is a lossy compression, but > good enough for photos and images that don't have a lot of high > frequencies. Absolutely, absolutely! A compression algorithm achieves the best results if it is designed with statistical knowledge of the target domain taken into account. In any compression scheme you're balancing the set of inputs that grow smaller on compression against the necessary counterpart of inputs that grow larger. Whatever you gain in the first set, you lose in the second. The secret is to arrange that the inputs that tend to grow larger are the ones that are less-common in real-world usage, and thus that the ones that are more common tend to grow smaller. In practice, this means eliminating 'redundancy', where 'redundancy' is defined as 'whatever similar properties you can tease out from the more-common-real-world cases'. Of course, I could point out that there is precisely *1* bit of information in that huge GIF, so even compressing it to 35 bytes isn't a great achievement... 
it's one of the set of less-common inputs that grow bigger as a compromise so that real pictures, which tend to have at least *some* variation, grow smaller. cheers, DaveK -- Can't think of a witty .sigline today
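The counting argument Dave keeps returning to can be checked directly: there are 2^n bit strings of length n, but only 2^n - 1 strings of length strictly less than n, so no injective (lossless) compressor can shrink every n-bit input. A minimal demonstration:

```python
# Pigeonhole check: there are 2**n bit strings of length n, but only
# 2**n - 1 strings shorter than n bits (sum of 2**k for k < n, including
# the empty string). So some n-bit input must map to an output of n or
# more bits under any lossless compressor.
def strings_of_length(n):
    return 2 ** n

def strings_shorter_than(n):
    return sum(2 ** k for k in range(n))  # counts the empty string too

for n in range(1, 16):
    assert strings_shorter_than(n) == 2 ** n - 1
    assert strings_shorter_than(n) < strings_of_length(n)

print("verified: fewer short outputs than inputs, for every n checked")
```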
Re: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]
We are both talking about the same thing :-) I am not saying there is a finite deterministic algorithm to compress every string into "small space", there isn't. BTW, thanks for "There is ***NO*** way round the counting theory." :-) All I wanted to say is: For a specific structure (e.g. movie, picture, sound) there is some "good" compression algorithm. E.g.: if you take a GIF 65536x65536, all white, with just one pixel black, it can be compressed into 35 bytes, see here: http://i.iinfo.cz/urs-att/gif3_6-115626056883166.gif If you wanted to compress the same picture using JPEG (i.e. discrete cosine transform), then two things would happen: The compressed jpeg file would be a) much bigger b) the decompressed image would have "artifacts", because the Fourier transform of a pulse is a sinc (infinitely many frequencies). Sure, JPEG is lossy compression, but good enough for photos and images that don't have a lot of high frequencies. Cheers, OM On 8/28/06, Dave Korn <[EMAIL PROTECTED]> wrote: On 28 August 2006 15:30, Ondrej Mikle wrote: > Ad. compression algorithm: I conjecture there exists an algorithm (not > necessarily *finite*) that can compress large numbers > (strings/files/...) into "small" space, more precisely, it can > compress number that is N bytes long into O(P(log N)) bytes, for some > polynomial P. [ My maths isn't quite up to this. Is it *necessarily* the case that /any/ polynomial of log N /necessarily/ grows slower than N? If not, there's nothing to discuss, because this is then a conventional compression scheme in which some inputs lead to larger, not smaller, outputs. Hmm. It would seem to me that if P(x)==e^(2x), P(log n) will grow exponentially faster than N. I'm using log to mean natural log here, substitute 10 for e in that formula if we're talking about log10, the base isn't important. However, assuming that we're only talking about P that *do* grow more slowly, I'll discuss the problem with this theory anyway. 
] There are many, but there are no algorithms that can compress *all* large numbers into small space; for all compression algorithms, some sets of input data must result in *larger* output than input. There is *no* way round the sheer counting theory aspect of this. There are only 2^N unique files of N bits. If you compress large files of M bits into small files of N bits, and your decompression algorithm produces deterministic output, then you can only possibly generate 2^N files from the compressed ones. > Take as an example group of Z_p* with p prime (in another words: DLP). > The triplet (Z, p, generator g) is a compression of a string of p-1 > numbers, each number about log2(p) bits. > > (legend: DTM - deterministic Turing machine, NTM - nondeterministic > Turing machine) > > There exists a way (not necessarily fast/polynomial with DTM) that a > lot of strings can be compressed into the mentioned triplets. There > are \phi(p-1) different strings that can be compressed with these > triplets. Exact number of course depends on factorization of p-1. > > Decompression is simple: take generator g and compute g, g^2, g^3, > g^4, ... in Z_p* This theory has been advanced many times before. The "Oh, if I can just find the right parameters for a PRNG, maybe I can get it to reconstruct my file as if by magic". (Or substitute FFT, or wavelet transform, or key-expansion algorithm, or ... etc.) However, there are only as many unique generators as (2 to the power of the number of bits it takes to specify your generator) in this scheme. And that is the maximum number of unique output files you can generate. There is ***NO*** way round the counting theory. > I conjecture that for every permutation on 1..N there exists a > function that compresses the permutation into a "short" > representation. I'm afraid to tell you that, as always, you will find the compression stage easy and the decompression impossible. 
There are many many many more possible permutations of 1..N than the number of unique "short representations" in the scheme. There is no way that the smaller number of "unique representations" can possibly produce any more than the same (smaller) number of permutations once expanded. There is no way to represent the other (vast majority) of permutations. > It is possible that only NTM, possibly with infinite > algorithm (e.g. a human) can do it in some "short finite time". Then it's not really an algorithm, it's an ad-hoc collection of different schemes. If you're allowed to use a completely different encryption scheme for every file, I can do better than that: for every file, define an encryption scheme where the bit '1' stands for the content of that file, and everything else is represented by a '0' bit followed by the file itself. Sure, most files grow 1 bit bigger, but the only file we care about is compressed to just a single bit! Great! Unfortunately, all you've done is moved information around. The amount of information you'd have to have in the decompressor to know which file to expand any particular '1' bit into is equal to the amount you've saved by compressing the original to a single bit.
Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]
On 28 August 2006 15:30, Ondrej Mikle wrote: > Ad. compression algorithm: I conjecture there exists an algorithm (not > necessarily *finite*) that can compress large numbers > (strings/files/...) into "small" space, more precisely, it can > compress number that is N bytes long into O(P(log N)) bytes, for some > polynomial P. [ My maths isn't quite up to this. Is it *necessarily* the case that /any/ polynomial of log N /necessarily/ grows slower than N? If not, there's nothing to discuss, because this is then a conventional compression scheme in which some inputs lead to larger, not smaller, outputs. Hmm. It would seem to me that if P(x)==e^(2x), P(log n) will grow exponentially faster than N. I'm using log to mean natural log here, substitute 10 for e in that formula if we're talking about log10, the base isn't important. However, assuming that we're only talking about P that *do* grow more slowly, I'll discuss the problem with this theory anyway. ] There are many, but there are no algorithms that can compress *all* large numbers into small space; for all compression algorithms, some sets of input data must result in *larger* output than input. There is *no* way round the sheer counting theory aspect of this. There are only 2^N unique files of N bits. If you compress large files of M bits into small files of N bits, and your decompression algorithm produces deterministic output, then you can only possibly generate 2^N files from the compressed ones. > Take as an example group of Z_p* with p prime (in another words: DLP). > The triplet (Z, p, generator g) is a compression of a string of p-1 > numbers, each number about log2(p) bits. > > (legend: DTM - deterministic Turing machine, NTM - nondeterministic > Turing machine) > > There exists a way (not necessarily fast/polynomial with DTM) that a > lot of strings can be compressed into the mentioned triplets. There > are \phi(p-1) different strings that can be compressed with these > triplets. 
Exact number of course depends on factorization of p-1. > > Decompression is simple: take generator g and compute g, g^2, g^3, > g^4, ... in Z_p* This theory has been advanced many times before. The "Oh, if I can just find the right parameters for a PRNG, maybe I can get it to reconstruct my file as if by magic". (Or substitute FFT, or wavelet transform, or key-expansion algorithm, or ... etc.) However, there are only as many unique generators as (2 to the power of the number of bits it takes to specify your generator) in this scheme. And that is the maximum number of unique output files you can generate. There is ***NO*** way round the counting theory. > I conjecture that for every permutation on 1..N there exists a > function that compresses the permutation into a "short" > representation. I'm afraid to tell you that, as always, you will find the compression stage easy and the decompression impossible. There are many many many more possible permutations of 1..N than the number of unique "short representations" in the scheme. There is no way that the smaller number of "unique representations" can possibly produce any more than the same (smaller) number of permutations once expanded. There is no way to represent the other (vast majority) of permutations. > It is possible that only NTM, possibly with infinite > algorithm (e.g. a human) can do it in some "short finite time". Then it's not really an algorithm, it's an ad-hoc collection of different schemes. If you're allowed to use a completely different encryption scheme for every file, I can do better than that: for every file, define an encryption scheme where the bit '1' stands for the content of that file, and everything else is represented by a '0' bit followed by the file itself. Sure, most files grow 1 bit bigger, but the only file we care about is compressed to just a single bit! Great! Unfortunately, all you've done is moved information around. 
The amount of information you'd have to have in the decompressor to know which file to expand any particular '1' bit into is equal to the amount you've saved by compressing the original to a single bit. There is ***NO*** way round the counting theory. > Again, > I could've/should've proven or disproven the conjecture, I just don't > want to do it - yet ;-) I seriously advise you not to waste much time on it. Because ... There is ***NO*** way round the counting theory. cheers, DaveK -- Can't think of a witty .sigline today
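Dave's counting objection to the (p, g) scheme can be made concrete with a toy example: "decompressing" a generator g of Z_p^* into the sequence g, g^2, ..., g^(p-1) mod p can only ever produce phi(p-1) distinct sequences, while there are (p-1)! permutations of 1..p-1. The values below are tiny and purely illustrative:

```python
# Toy illustration: expanding (p, g) into the sequence g^1..g^(p-1) mod p
# reaches only phi(p-1) distinct sequences -- far fewer than the (p-1)!
# permutations of 1..p-1 that would need representing.
from math import factorial, gcd

def decompress(p, g):
    """Expand g into the sequence g^1, g^2, ..., g^(p-1) mod p."""
    x, seq = 1, []
    for _ in range(p - 1):
        x = (x * g) % p
        seq.append(x)
    return seq

def phi(n):
    """Euler's totient, by brute force (fine for toy sizes)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p = 11
sequences = set()
for g in range(2, p):
    seq = tuple(decompress(p, g))
    if len(set(seq)) == p - 1:        # g is a generator: sequence covers all of 1..p-1
        sequences.add(seq)

# phi(10) = 4 reachable sequences versus 10! = 3628800 permutations.
print(len(sequences), phi(p - 1), factorial(p - 1))
```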
Re: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]
On 8/28/06, Dave Korn <[EMAIL PROTECTED]> wrote: The author has made the *exact* same error as when someone comes up with a magical compression algorithm that they say can compress absolutely any data down to a tiny size. They always get the data to compress, sure, but they always have problems with the decompression stage. They tend to think that this is just a minor bug in their decompressor they need more time to work on, but no amount of time will let them work around the fundamental mathematical fact that they've not got sufficient information in their compressed file to reconstruct the original. Thanks, sorry for the hassle, I (and others) should've checked it before asking. Ad. compression algorithm: I conjecture there exists an algorithm (not necessarily *finite*) that can compress large numbers (strings/files/...) into "small" space, more precisely, it can compress a number that is N bytes long into O(P(log N)) bytes, for some polynomial P. Here's a lead: Take as an example the group Z_p* with p prime (in other words: DLP). The triplet (Z, p, generator g) is a compression of a string of p-1 numbers, each number about log2(p) bits. (legend: DTM - deterministic Turing machine, NTM - nondeterministic Turing machine) There exists a way (not necessarily fast/polynomial with DTM) that a lot of strings can be compressed into the mentioned triplets. There are \phi(p-1) different strings that can be compressed with these triplets. Exact number of course depends on factorization of p-1. Decompression is simple: take generator g and compute g, g^2, g^3, g^4, ... in Z_p* I conjecture that for every permutation on 1..N there exists a function that compresses the permutation into a "short" representation. It is possible that only an NTM, possibly with an infinite algorithm (e.g. a human), can do it in some "short finite time". 
Again, I could've/should've proven or disproven the conjecture, I just don't want to do it - yet ;-) Thanks OM
Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]
On 24 August 2006 03:06, Ondrej Mikle wrote: > Hello. > > We discussed with V. Klima about the "recent" bug in PGPdisk that > allowed extraction of key and data without the knowledge of passphrase. > The result is a *very*wild*hypothesis*. > > Cf. http://www.safehack.com/Advisory/pgp/PGPcrack.html > > Question 1: why haven't anybody noticed in three months? Why has not > there been a serious notice about it? Because it is completely incorrect. Utterly wrong. This was explained on this list just a couple of days ago, look for the thread "A security bug in PGP products?" in the list archives. > According to the paper, both "standard" .pgd and self-extracting SDA > (self-decrypting archives) are affected. Systematic backdoor maybe? No, the paper is wrong. They aren't affected, you can't break the encryption on them, and therefore there is no backdoor. > Possibilities: > 1) it is a hoax. Though with very low probability. The text seems to > include a lot of work and makes perfect sense (REPE CMPS, all the > assembly), i.e. we suppose it is highly improbable that somebody would > make such hoax. It is not a hoax. It is the work of an incompetent. Like many of those who invent perpetual motion machines, he genuinely believes that what he has done is correct, but it isn't. Unfortunately, but also very much like many of those who invent perpetual motion machines, when this is pointed out to him he assumes that everyone else is either stupid or malicious, rather than accept that his theory has a massive flaw which completely undermines it. > This can be either proven or disproven simply by > checking the Win program using hex editor/debugger (using an already > downloaded copy). I haven't had the time to check it yet (no Win). Actually, it can't, because the instructions he has given are not sufficient to follow. At the critical point, he says you must replace the bytes where the disk encryption key is stored. 
Unfortunately, he cannot tell you what to replace them with, unless you already happen to have a copy of the bytes representing that *exact* *same* disk encryption key stored *under* *a* *known* *passphrase*, and that is why the only example on his website that "works" is the one where you change the passphrase on a disk but don't re-encrypt it. He even admits that in all other cases you will "extract crap". Examine the instructions at http://www.safehack.com/Advisory/pgp/PGPcrack.html#Two_Ways_to_bypass_PGP_SDA_Authentication

"Two Ways to bypass PGP SDA Authentication and EXTRACT with success

After spending a lot of time debugging and analyzing PGP SDA, we came up with a conclusion that we can successfully extract the contents of PGP SDA in 2 ways.

1) Modifying the contents of the address 00890D70. (Screen Capture)
The modification should be done in:
0040598F |. E8 AC3D       CALL filename_s.00409740
At:
00409740 /$ 8B4424 0C     MOV EAX,DWORD PTR SS:[ESP+C]
At this point change the contents of 00890D70. After the bytes change, you will have to bypass authentication. After bypassing authentication you will be able to extract.

2) Modifying the contents of the address 00BAF670. (Screen Capture)
The Modification should be done in:
0040595F    FF15 90324100 CALL DWORD PTR DS:[413290]
At:
004019DA /$ FF7424 08     PUSH DWORD PTR SS:[ESP+8]
At this point change the contents of 00BAF670.
NOTE: At this point if you change the contents of 00BAF670, you won't have to bypass authentication, it will work like a charm, and it will grant auth/extract."

Notice the crucial phrases "At this point change the contents of 00890D70", and "At this point change the contents of 00BAF670". He gives you absolutely no information what it is that you need to change those bytes to. Well, I can tell you. You have to change them to be the value of the disk encryption key as encrypted by whatever passphrase you chose to enter. You cannot do this unless you already know the disk encryption key. 
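The two-level design Dave describes can be sketched in a few lines: a random disk key encrypts the data, and only a *wrapping* of that key under a passphrase-derived key changes when the passphrase does. This is a hedged illustration, not PGPdisk's actual implementation; the XOR "wrap" is a toy stand-in for a real key-wrap cipher, and all names and parameters are invented for the example.

```python
# Sketch of a passphrase-wrapped disk key scheme (illustrative only).
# The disk key never changes; a passphrase change merely re-wraps it.
import hashlib, os

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # Key-encryption key from the passphrase (PBKDF2; parameters are arbitrary here).
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)

def wrap(disk_key: bytes, kek: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(disk_key, kek))   # toy stand-in for a key-wrap cipher

unwrap = wrap  # XOR is its own inverse

salt = os.urandom(16)
disk_key = os.urandom(32)            # encrypts the disk; unchanged by passphrase changes

blob_old = wrap(disk_key, derive_kek(b"old passphrase", salt))
# "Changing the passphrase" only re-wraps the very same disk key:
blob_new = wrap(disk_key, derive_kek(b"new passphrase", salt))

assert unwrap(blob_old, derive_kek(b"old passphrase", salt)) == disk_key
assert unwrap(blob_new, derive_kek(b"new passphrase", salt)) == disk_key
assert blob_old != blob_new          # the wrapping differs; the disk key does not
```

This is why holding the old wrapped blob plus its passphrase still recovers the unchanged disk key after a passphrase "change", and why, without any valid passphrase, patching bytes in a debugger gets you nothing.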
In other words, if you already know the key to decrypt the disk, you can decrypt the disk. If you don't, however, you can't. Examine the writing a bit further down the page, where it says

"Accessing ANY PGP VIRTUAL Disk. (Need more testing and free time, Check Debugging Notes at the end)
At this point you can add users, change existing users' passphrase, re-encrypt the disk and do other stuff. But when you try to access the disk you will get Disk is not formatted. This is when you need to use your debugger."

Notice how he doesn't say what you need to *do* with the debugger, so let me explain what he has skipped over: Using only your debugger, you need to guess the decryption key for the disk. Think that's something you can do with a debugger?
Re: Hypothesis: PGP backdoor (was: A security bug in PGP products?)
On Thu, 24 Aug 2006, Ondrej Mikle wrote: > 2) AFAIK, Zimmerman is no longer in control of the company making PGP. > AFAIK the company (NAI) has been bought by another group couple of years > ago. The rescue of PGP from NAI's gross neglect and mismanagement of the product line was orchestrated by individuals involved in the original PGP, Inc. startup, and led by respected cryptographic engineer Jon Callas (also known for being the editor of RFC 2440) and Phil Dunkelberger (the original PGP, Inc. CEO). As part of their acquisition of the PGP product line, they hired (nearly?) the entire PGP programming team, including such familiar faces as Will Price and Hal Finney. http://www.pgp.com/company/management.html

As a former NAI employee who worked on the PGP products, I firmly believe the software is in far more capable hands now from a management standpoint. As a PGP Universal user, I'm delighted by the significant improvements in usability that the new management has allowed the engineering team to make. The myopia of NAI's executives toward the usability problems in PGP was one of the reasons I quit the company in frustration. Also, for what it's worth, Phil was ousted from NAI in 2000, prior to the discontinuation of NAI's commitment to the PGP product line, but he *is* involved with the current PGP Corporation, as a member of the technical advisory board. http://www.pgp.com/company/boards/tab.html

I also have no question, personally, that if there's a backdoor in PGP, neither Mr. Callas nor any of the PGP engineers I had the pleasure to work with know of it. Your theory is indeed wild, and though I don't mean to discourage vigilance in questioning these sorts of potential subversions of integrity in software as important as PGP, you might consider doing more research into the background of people against whom you choose to levy hypothetical accusations in public forums in the future. --Len. 
Hypothesis: PGP backdoor (was: A security bug in PGP products?)
Hello. We discussed with V. Klima about the "recent" bug in PGPdisk that allowed extraction of key and data without the knowledge of passphrase. The result is a *very*wild*hypothesis*. Cf. http://www.safehack.com/Advisory/pgp/PGPcrack.html Question 1: why hasn't anybody noticed in three months? Why has there not been a serious notice about it? According to the paper, both "standard" .pgd and self-extracting SDA (self-decrypting archives) are affected. Systematic backdoor maybe? Possibilities: 1) it is a hoax. Though with very low probability. The text seems to include a lot of work and makes perfect sense (REPE CMPS, all the assembly), i.e. we suppose it is highly improbable that somebody would make such a hoax. This can be either proven or disproven simply by checking the Win program using a hex editor/debugger (using an already downloaded copy). I haven't had the time to check it yet (no Win). 2) AFAIK, Zimmerman is no longer in control of the company making PGP. AFAIK the company (NAI) has been bought by another group a couple of years ago. www.pgp.org says: " 2002/03/08 - NAI drops PGP Desktop 2001/10/15 - NAI to sell PGP division " It may be therefore quite possible that NSA/CIA/FBI/etc. couldn't force Zimmerman to compromise his own product directly, so they have bought the company. The backdoor might have been introduced in the latest releases (e.g. 8.x, 9.x). 3) there was a lazy programmer, or a programmer-infiltrator from the ranks of intelligence services. What does one do when a cryptosystem seems unbreakable? He circumvents it. AFAIK the code has been checked many times in NAI, until some point in time. As you all probably know, there has been a lot of mischief around Zimmerman and PGP in the '90s. We don't think NSA/CIA/FBI/etc. would "just give up without a fight". You know, the "three-line Perl RSA implementations on T-shirts" and so on. 
Code of PGPdisk 9.x looks like this according to the paper: when the passphrase is changed, the key itself remains untouched, as if the encryption key has only been encrypted by a symmetric key generated e.g. by PBKDF2 from the passphrase. Conclusion: it seems that NSA/CIA/FBI/etc. haven't called a truce. Though, a very clever solution. Nevertheless, nothing we haven't already seen in 1st/2nd world war tactics. What do you think? Your input is welcome. OM P.S. sorry for any misspellings of names