Ed replied to me,

> [EMAIL PROTECTED] wrote:
>
> > Ed says,
> >
> > > The solution is to use a multifold of links, arranged in time and space,
> > > such that rather than making the impossible assumption that "no part
> > > will fail at any time," we can design a system where up to M parts can
> > > fail at any time provided that not all M parts fail at the same time --
> > > where M can be the entire number of parts.
> >
> > This sounds like `proactive security`, as defined in several
> > cryptographic works. You may want to check it out at
> > http://www.hrl.il.ibm.com/proactive
>
> But you make the assumption that "most systems are secure most of the
> time," which I do not find necessary.

Different works (algorithms, protocols) achieve different thresholds in the
number of corrupt systems tolerated. Ideally, of course, we would like
solutions where the system is secure provided that even one part is secure -
but this may be hard or even impossible to achieve for some tasks. BTW, we do
have one result which works in this extreme case (one secure part is enough),
but under an additional assumption (I'm referring to the randomization works
I did with Ran Canetti and Chee-seng Chow around 1994-1995; see the site).
But for many harder tasks like signatures, secret sharing, clock sync... a
majority (or even more) is provably necessary. BTW, just increasing the
threshold of tolerated faults, say from a quarter to a third or to a half,
is often a very challenging task.
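[As a toy illustration of the threshold point above - this sketch is my own
addition, not part of the original exchange - here is a minimal Shamir-style
(t, n) secret sharing over a prime field in Python: any t of the n shares
recover the secret, so the scheme tolerates up to n - t unavailable parts.]

```python
# Toy (t, n) threshold secret sharing over GF(p): any t of n shares
# reconstruct the secret via Lagrange interpolation; fewer do not.
# Illustrative only -- not drawn from any of the works cited above.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for a demo field

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 42
```

[Raising the threshold t here is one line of arithmetic; doing the same for
an interactive protocol like signing or clock sync is, as noted above, far
harder.]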

Best, Amir
