> This had been discussed earlier.  Brian and I talked about it for a little
> while; he came up with the original idea.

Doh!  Curse my memory! :-)

> > I think the idea has definite merit.  If an error does occur, it's equally
> > likely to happen at any step along the way, statistically.  Errors are every
> > bit as likely to happen on the very first iteration as they are during the
> > 50% mark, or the 32.6% mark, or on the very last iteration.
>
> True, but if the system is malfunctioning then the errors should start
> early.

Even more reason why it makes sense.
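
To put rough numbers on it (purely illustrative -- this assumes an error lands
at a uniformly random point in the run, checks every 10%, and that the server
has something to compare each interim residue against, such as a concurrent
double-check):

# Back-of-the-envelope: with checks every 10%, an error at fraction f of the
# run is caught at the next checkpoint, so the expected detection point is
# the average of 0.1, 0.2, ..., 1.0.  Purely illustrative.
CHECKPOINTS = 10

expected_detect = sum((k + 1.0) / CHECKPOINTS for k in range(CHECKPOINTS)) / CHECKPOINTS
print("bad run caught on average %.0f%% of the way through" % (100 * expected_detect))
print("vs. 100%% of the way without interim checks -- saves ~%.0f%% of a bad run"
      % (100 * (1 - expected_detect)))

That comes out to about 45%, which is roughly the "save 50% of the checking
time" figure quoted below.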

> > Just for example, every 10% along the way, it'll send its current residue
> > to the Primenet server.
>
> I'm guessing that you mean a certain amount of the residue.  Sending in
> 10 2meg files for *each* exponent in the 20,000,000 range would get very
> unwieldy, and inconvenient for people and primenet.

Just a partial residue, like the one sent at the end of the test.  Even a
smaller one, like a 32-bit instead of a 64-bit residue, seems like it would
do the job splendidly.
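
Something like this is all I have in mind (a sketch on a toy exponent, nothing
like Prime95's real code; the low-32-bits residue and the report() hook are my
own stand-ins):

def ll_test_with_checkpoints(p, report):
    m = (1 << p) - 1                      # Mersenne number 2^p - 1
    s = 4
    steps = p - 2                         # standard Lucas-Lehmer iteration count
    chunk = max(1, (steps + 9) // 10)     # report roughly every 10% of the run
    for i in range(1, steps + 1):
        s = (s * s - 2) % m
        if i % chunk == 0:
            report(i, s & 0xFFFFFFFF)     # 32-bit partial residue (could be 64-bit)
    return s == 0                         # s == 0  <=>  2^p - 1 is prime

def report(iteration, residue32):
    # Stand-in for sending the interim residue to the Primenet server.
    print("iter %d: residue %08X" % (iteration, residue32))

print(ll_test_with_checkpoints(31, report))   # 2^31 - 1 is prime, so this prints True

A 32-bit residue is only four bytes, so even ten of them per exponent is
nothing like the 2-meg files mentioned above.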

> > I forget the numbers being tossed around,
> > but you'd only save 50% of (the error rate) of the
> > checking time.
>
> As I pointed out above, the error rate should increase with the square of
> the exponent (plus change).  This means that if 1% have errors at 7mil,
> 22% will have errors at 30mil.

Frightening to think so.  Are you sure the error rate increases?  Errors
seem like they'd show up more as a result of faulty hardware, to my
thinking.  I'd imagine that if a certain machine ran through about ten 10M
exponents error-free, it has a very high likelihood of running a single 20M
exponent error-free.
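
For what it's worth, here's the arithmetic I'm picturing, under the (possibly
wrong) model of a fixed error chance per unit of work, with work growing
roughly as the square of the exponent -- the same model behind the 1%-to-22%
figures above.  The 2% starting rate is made up purely for illustration:

# Toy model: a fixed chance of error per unit of LL work, where the work for
# an exponent p scales roughly as p^2 (about p iterations, each one ~p long).
# The 2% figure is hypothetical, purely for illustration.
q_10m = 0.02                                 # assumed error rate for one 10M test

work_10m = 1.0                               # call one 10M test 1 unit of work
work_20m = (20.0 / 10.0) ** 2                # ~4 units for a 20M test

def p_error(units):
    # Chance of at least one error over that much work.
    return 1.0 - (1.0 - q_10m) ** (units / work_10m)

print("error chance, one 10M test: %.1f%%" % (100 * p_error(work_10m)))
print("error chance, one 20M test: %.1f%%" % (100 * p_error(work_20m)))
print("ten clean 10M tests:        %.1f%% likely" % (100 * (1 - p_error(10 * work_10m))))

So the per-test error rate can climb with the exponent and the intuition can
still hold: ten clean 10M tests are 10 units of error-free work, more than the
roughly 4 units a single 20M test needs, so that machine looks like a good bet
either way.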

