thanks for the cluestick hit. so we can't trade multiple sigs for length,
which means that, for public-benefit reasons, adding more visible signers at
the top does unavoidably increase the dataset size, because the key size has
to stay high.

there are no free lunches for public accountability.
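For a rough sense of the scale involved, here is a back-of-the-envelope sketch using the standard GNFS heuristic running time L_N[1/3, (64/9)^(1/3)] to estimate the symmetric-equivalent work factor of each RSA modulus size. The constants are asymptotic and the key sizes and signer counts are illustrative, not a precise security claim:

```python
import math

def gnfs_security_bits(modulus_bits):
    """Approximate symmetric-equivalent strength of an RSA modulus,
    from the GNFS heuristic L_N[1/3, (64/9)^(1/3)] (asymptotic estimate)."""
    ln_n = modulus_bits * math.log(2)          # ln(N) for an n-bit modulus
    c = (64 / 9) ** (1 / 3)
    ln_work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_work / math.log(2)               # convert ln(ops) to bits

for bits in (512, 1024, 2048):
    print(f"RSA-{bits}: ~{gnfs_security_bits(bits):.0f}-bit work factor")

# Breaking n independent keys multiplies the attacker's work by n,
# which adds only log2(n) bits -- e.g. four keys add just 2 bits.
four_short = gnfs_security_bits(512) + math.log2(4)
one_long = gnfs_security_bits(2048)
print(f"4 x RSA-512: ~{four_short:.0f} bits vs 1 x RSA-2048: ~{one_long:.0f} bits")
```

Under these estimates, four 512-bit keys together cost an attacker only about two bits more than a single 512-bit key (roughly 66 bits of work), while one 2048-bit key sits near 117 bits, which is exactly why multiplying short keys cannot buy back the strength lost by shrinking them.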


On Wed, Jan 15, 2014 at 10:16 AM, Paul Hoffman <paul.hoff...@vpnc.org> wrote:

> On Jan 14, 2014, at 3:04 PM, George Michaelson <g...@algebras.org> wrote:
>
> > If multiple independent entities sign, can't they elect to use shorter
> > algorithms?
> >
> > I know 'short can be spoofed' is out there, but since there are now n *
> > <512> instead of 1 * 2048, is it not theoretically possible that, at a
> > cost of more complexity, it can be demonstrated that as long as 1) the
> > sigs are all current and 2) all the sigs agree, then the risk of n
> > 512-bit signings is not necessarily worse than one 2048- or 4096-bit
> > signing, for the specific need we have: proof of correctness? (n is
> > unstated. 512 is a placeholder. I have no idea what the sweet spot of
> > key size and number of keys would be.)
> >
> > therefore, if this is true, trading complexity for keysize might not
> > increase the initial bootstrap zone transfer size that much.
> >
> > I am not a cryptographer and do not play one on TV
>
> I am not a cryptographer but I *do* play one on television, and that
> proposal is terrible. Breaking asymmetric keys gets drastically harder as
> the key size goes up (sub-exponentially in the key length, but far faster
> than linearly); that's why no one has publicly broken a 1024-bit RSA key,
> even though attackers have had much, much more time (and much faster CPUs)
> than for the earlier breaks.
>
> It would be many orders of magnitude easier to break four 512-bit keys
> than one 2048-bit key. The work would also parallelize better, so smaller
> systems could do it.
>
> --Paul Hoffman
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop