Re: GPG's vulnerability to brute force

2014-05-17 Thread Peter Lebbing
On 17/05/14 01:12, Leo Gaspard wrote:
> Well... If the operation the bit just underwent was a bitflip (and,
> knowing the bruteforcing circuit, it's possible to know that), the bit
> was a '0'.

I admit this is beyond my knowledge, but maybe the following is rather
intuitive and not too incorrect.

"Flipping one bit" is not enough. You make no progress toward a
solution if you only keep flipping the same bit. At the least, you need
to decide which bit to flip. That decision is also information,
information that is not stored in the resultant bit array where you
flipped one bit.
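To make that intuition concrete, here is a small sketch of my own (not from the thread): toggling a fixed bit just oscillates between two values, so the search progress must live in the *schedule* of which bit to flip at each step, and naming one of n bit positions costs log2(n) bits of information per step.

```python
# Repeatedly flipping the *same* bit of a key just oscillates between two
# values -- no progress through the keyspace.
import math

key = 0b0000
history = [key]
for _ in range(6):
    key ^= 0b0001          # always flip bit 0
    history.append(key)

print(history)             # oscillates: [0, 1, 0, 1, 0, 1, 0]

# Naming which of the 128 bit positions to flip takes log2(128) = 7 bits
# of information per step, and that information lives outside the key.
print(math.log2(128))
```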

More in general, I agree with Rob that this is not a physics course, and
this is just a thought I had and wanted to share.

You can't object to scientific theories on the basis that you did not
study them properly. It might have a bit of a Socratic feel to it, but
it falls quite short of the real thing.

Physics and computation at this level are pretty unintuitive, I think.
Maybe my little attempt to introduce some intuition about information
content is grossly wrong, and maybe it's a folly to attempt intuition at
all.

HTH,

Peter.

-- 
I use the GNU Privacy Guard (GnuPG) in combination with Enigmail.
You can send me encrypted mail if you want some privacy.
My key is available at 

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: GPG's vulnerability to brute force [WAS: Re: GPG's vulnerability to quantum cryptography]

2014-05-17 Thread Peter Lebbing

(This mail originally got dropped by the list managing software because
I had accidentally misused a new webmail plugin. I'm resending it
with all original identifiers so it hopefully threads correctly. I'm
also completely ignoring section 3.6.6 of RFC 2822, but who cares? ;)

---

I suddenly realised something about Rob's Crypto FAQ[1]. You state that 
you need 64 bitflips per attempt, because otherwise your attacker can 
simply adapt his key to the order in which you try them. But to save on 
those 64 bitflips, you could also simply do the whole keyspace in 
order[2]. Your expected number of computations does rise from 2^127 to 
2^128, because obviously everybody will then use the key 2^128-1, which 
is the last one you try ;).


Going from 64 bitflips per trial to 1 saves you 6 bits of exponential 
complexity; the increase in the expected number of tries costs you 1 bit 
back, so you have saved a factor 2^5 = 32 ~= 10^1.5 on the numbers Rob 
gives. Quickly reading through the calculations, it seems we went from 
100 nuclear warheads to only 3, to scan the whole keyspace.


However, this whole thing completely falls apart. We were able to save 
63 bitflips per trial in exchange for a twofold increase in total work. 
In the math done in [1], this changes the feel of the numbers radically, 
because to us the difference between 3 and 100 is big (it isn't, when 
discussing exponential complexity). But that is because (at least) one 
very large source of bitflips is not accounted for: the number of 
bitflips one trial of a key actually takes. Leo put it at 10^5, Rob at 
10^3. If you save 63 bitflips on a total like that, it doesn't change 
the final numbers in the least. Pull out some hairs and you still have 
a beard: 10^3 - 63 ~= 10^3. Incidentally, we went from 100 nuclear 
warheads to 3 to 1000[3].


What I'm saying is: the explanation for taking 10^2 as the number 
of bitflips for a single try doesn't seem convincing. It makes it seem 
that you can actually save computation by linearly searching your 
keyspace.


Peter.

[1] http://sixdemonbag.org/cryptofaq.xhtml#entropy
[2] Actually, make that Gray code order, which achieves single-bitflip 
changes between successive values, unlike normal binary encoding.
[3] Or 10,000 depending on whether 2^128 is better approximated by 
10^38 or 10^39, when you're really nitpicking. Which you shouldn't do 
when discussing exponential complexity. Let's say that with exponential 
complexity, your fingers are too large to pick a nit.
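Footnote [2] is easy to check with a small sketch of my own: the reflected binary Gray code visits every n-bit value exactly once, with consecutive codes differing in exactly one bit, so a linear scan costs one bitflip per candidate key.

```python
# Reflected binary Gray code: the i-th code is i XOR (i >> 1).
def gray(i: int) -> int:
    return i ^ (i >> 1)

n = 4
codes = [gray(i) for i in range(2**n)]

assert len(set(codes)) == 2**n        # a permutation of 0 .. 2^n - 1
for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1 # exactly one bitflip between neighbours

print([format(c, "04b") for c in codes[:4]])   # ['0000', '0001', '0011', '0010']
```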








Re: GPG's vulnerability to brute force

2014-05-17 Thread Robert J. Hansen
> I admit this is beyond my knowledge, but maybe the following is rather
> intuitive and not too incorrect.

Another way of looking at it: RAM is normally implemented as a flipflop.
 (The EEs insist on calling them "bi-stable multivibrators," [1] but I
think that's just too kinky for a family-friendly mailing list.)  The
way a flipflop works, the contents are refreshed and/or changed with
each clock cycle.  Each and every clock, the former contents are
replaced with whatever the current state should be.  If a bit held 1
before and it holds 1 now, that still counts as a bit erasure for
thermodynamic purposes.
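The thermodynamic cost Rob refers to is the Landauer limit. A rough calculation of my own, in the spirit of the thread: each bit erasure dissipates at least k_B * T * ln(2) joules, so even at a single erasure per tried key, scanning a 128-bit keyspace at room temperature has a hard energy floor.

```python
# Landauer bound: minimum energy per erased bit is k_B * T * ln(2).
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # room temperature, K

e_bit = k_B * T * math.log(2)      # ~2.87e-21 J per erased bit
total = e_bit * 2**128             # one erasure per key, full 128-bit keyspace

print(f"{e_bit:.3g} J per bit, {total:.3g} J for the whole keyspace")
```

Multiply by the hundreds or thousands of bitflips a real trial takes and you get the warhead-scale figures discussed elsewhere in this thread.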

[1] No, I'm not kidding.  See, e.g.,
http://en.wikipedia.org/wiki/Bistable_multivibrator

> Physics and computation at this level are pretty unintuitive, I think.

*Very* unintuitive, yeah.  You flat-out can't trust your intuition: you
have to take refuge in math and physics.

To really understand computation at the limits of physics requires
general relativity (Riemann geometries, tensors, really high-end
calculus), quantum mechanics (matrices, Dirac brakets, eigenvalues,
probabilities), computational theory (discrete math, state transforms,
etc), statistical entropy, thermodynamic entropy, Shannon entropy, and
more.  It's hard.  I wasn't kidding about this field making me feel like
a dog sitting at a table with Ed Witten and David Deutsch.  Woof woof.

Niels Bohr is supposed to have said anyone who is not shocked by quantum
mechanics clearly has not understood it.  The same can be said about
computational limits.



Re: Gnupg-users Digest, Vol 128, Issue 24

2014-05-17 Thread Michael Anders
> 
> > Now where did you calculate that from?
> 
> $dS = \frac{\delta Q}{T}$
> 
> Second Law of Thermodynamics, which you just broke.  Have a nice day.
> 
The (cold) system where the calculation is done and the (hot) system to
which the result is transferred exchange only negligible energy and
entropy. No oceans to be boiled off.
Citing the second law of thermodynamics in this context is wrong.
Besides, just citing a physical law and selling that as reasoning is not
the way we argue in physics.

:-)




Re: GPG's vulnerability to brute force

2014-05-17 Thread Peter Lebbing

On 2014-05-17 15:28, Robert J. Hansen wrote:
> Another way of looking at it: RAM is normally implemented as a
> flipflop.


I think the register bank in a processor is still implemented as 
flipflops, and all computation ends up there (on a register machine)[1], 
so your statement is correct in that respect. A register bank is a RAM.


However, the word "normally" is not quite apt. What you normally call 
the RAM of your computer is DRAM, and DRAM is implemented by a charge on 
a capacitor. This achieves much higher densities on a chip than SRAM, 
but is also slower.


HTH,

Peter.

[1] Alternatively, in the registers between pipeline stages of the 
processor. If somebody knows about the latest CPU techniques and 
disagrees, by all means, enlighten me :). My knowledge pretty much ends 
at basic pipeline design, and is not up to speed with current CPU 
technology.






Re: Gnupg-users Digest, Vol 128, Issue 24

2014-05-17 Thread Robert J. Hansen
> The (cold) system where the calculation is done and the (hot) system the
> result is transferred only exchange negligible energy and entropy.

Go build this system, demonstrate you can break Landauer, and collect
your Nobel.  Seriously.



Re: GPG's vulnerability to brute force

2014-05-17 Thread Robert J. Hansen
> However, the word "normally" is not quite apt. What you normally call
> the RAM of your computer is DRAM, and DRAM is implemented by a charge on
> a capacitor. This achieves much higher densities on a chip than SRAM,
> but is also slower.

Point, but I think it's equivalent: whether it's a flipflop getting a
signal or a microcapacitor that's charging/discharging, in both cases
previous state is getting obliterated and the entropic cost accrues.  :)

Thank you for the correction, though!



Re: GPG's vulnerability to brute force

2014-05-17 Thread Peter Lebbing

On 2014-05-17 19:52, Robert J. Hansen wrote:
> Point, but I think it's equivalent: whether it's a flipflop getting a
> signal or a microcapacitor that's charging/discharging, in both cases
> previous state is getting obliterated and the entropic cost accrues. :)


Absolutely, no argument there. In fact, currently, we can't make 
transistors work without capacitance, and a flipflop is built of 
transistors, so the parallels go even further.


Peter.



