Ian Clarke <I.Clarke at dynamicblue.com> wrote:
> Firstly, you have yet to convincingly justify your belief that my version
> won't work (see above), and secondly, it is an understatement to say that
> it will lead to an increase in the number of messages, it *will* lead to
> the slashdot effect.  Your protests against this that the slashdot effect
> won't happen because the DataRequests will be for small data redirecting
> to CHKs are rather weak - some poor sod running a Freenet node across his
> small ISDN line which happens to be the epi-centre of the next Starr
> Report is not going to survive regardless of how small the data which is
> being requested is!

The reason the slashdot effect won't occur is not that the data is assumed
to be small.  It is that the epicenter will not be serving data at all,
only control messages.

The first time a follow-through request comes in on a particular route, the
updated data will be sent out and cached at each upstream point (i.e.
closer to the client).  The next time, the epicenter simply replies, "you
already have the latest version" and the actual serving of data is handled
upstream.
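
To make that concrete, here is a rough sketch in Java of the epicenter's half
of the exchange.  None of these names (EpicenterSketch, Link, sendControl,
routesServed) exist in the real code; this is only an illustration of the
decision, not a proposed implementation:

    import java.util.HashSet;
    import java.util.Set;

    class EpicenterSketch {
        interface Link {                        // connection back towards the requestor
            void sendControl(String msg);       // tiny reply, no data follows
            void sendData(long version, byte[] data);
        }

        private long currentVersion = 1;        // version of the document held here
        private final Set<String> routesServed = new HashSet<String>();

        // Called whenever a follow-through request for our key arrives.
        void handleFollowThrough(String routeId, Link back) {
            if (routesServed.contains(routeId)) {
                // Data already went out along this route once and is cached at
                // every upstream hop, so from now on the epicenter only ever
                // answers with a one-line control message.
                back.sendControl("You already have the latest version");
            } else {
                // First follow-through on this route: send the updated data
                // exactly once; each node it passes through caches it on the
                // way back to the client.
                back.sendData(currentVersion, loadData());
                routesServed.add(routeId);
            }
        }

        private byte[] loadData() { return new byte[0]; }  // stand-in for the real data store
    }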

What is the load on the epicenter?
- sending the data once per immediate neighbor -- this is the same as for inserts
- sending a control message once per follow-through requestor
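
Just to put some entirely made-up numbers on that: with a 1 MB document, 20
immediate neighbors, a 100-byte control message, and 100,000 follow-through
requestors, the epicenter sends 20 x 1 MB = 20 MB of data plus 100,000 x
100 B = 10 MB of control traffic -- roughly 30 MB in total, against the
100 GB it would have to push if it served the data to every requestor itself.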

It is possible that a very large number of follow-through requestors could
still overwhelm the epicenter.  But to what effect?  The first few
follow-through requests will have already spread the updated data upstream.
When the upstream nodes are unable to contact the epicenter, they will say,
"oh well, can't find a newer version, I'll just send mine" -- which will be
the latest version anyway.  So the data will still be served correctly.  
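
Here is the matching sketch for an upstream node that already holds a cached
copy -- again with invented names, and only meant to show the fallback, not
how it would really be wired in:

    import java.io.IOException;

    // How I imagine an upstream node behaves once the epicenter stops answering.
    class UpstreamNodeSketch {
        interface Link { void sendData(long version, byte[] data); }
        interface Epicenter { long latestVersion() throws IOException; }  // cheap control round-trip

        private long cachedVersion = 7;          // cached from an earlier follow-through
        private byte[] cachedData = new byte[0];

        void handleRequest(Epicenter epicenter, Link back) {
            try {
                if (epicenter.latestVersion() <= cachedVersion) {
                    // The epicenter confirms we already hold the latest version,
                    // so we serve it ourselves and it never sees a data request.
                    back.sendData(cachedVersion, cachedData);
                } else {
                    forwardFollowThrough(back);  // pass the request on towards the epicenter
                }
            } catch (IOException epicenterUnreachable) {
                // "Oh well, can't find a newer version, I'll just send mine" --
                // and since the first few follow-throughs already pushed the
                // update this far out, "mine" is the latest version anyway.
                back.sendData(cachedVersion, cachedData);
            }
        }

        private void forwardFollowThrough(Link back) { /* omitted in this sketch */ }
    }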

> I agree that this LM factor might be useful, although I would still be
> concerned about it leading to a SlashDot effect - it may lead to a
> reduction in hits on the central server, but this is not enough, it must
> lead to the hits on any given server not increasing at all in proportion
> to the total number of requests (as I think is the effect of the current
> dynamic caching); anything less will only delay the /. effect, not
> prevent it.  To suggest that it is sufficient to merely delay the /.
> effect would be very short-sighted.

If the ultimate decision to perform follow-throughs is placed with nodes
rather than clients (removing the psychological element), and the LM factor
is used, the rate of hits on the epicenter will fall off exponentially.  With
the LM threshold set to 50%, each new follow-through will be permitted only
after a wait twice as long as the one before it.  Surely this
should be sufficient to prevent any overwhelming.  Even if it isn't, I
believe the data will still get through, as I said above.
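
For anyone who wants to see the doubling fall out of the 50% threshold, here
is the rule as I understand it, with made-up names (followThroughAllowed,
lmThreshold): a node permits a new follow-through only once the time since
the last one is at least the threshold fraction of the total time since the
data was last modified.

    class LmThrottleSketch {
        static boolean followThroughAllowed(long now, long lastModified,
                                            long lastFollowThrough, double lmThreshold) {
            long sinceModified = now - lastModified;
            long sinceChecked = now - lastFollowThrough;
            return sinceModified > 0
                && (double) sinceChecked / sinceModified >= lmThreshold;
        }

        public static void main(String[] args) {
            long lastModified = 0;
            long lastCheck = 1;          // first follow-through one time unit after the update
            for (int i = 0; i < 5; i++) {
                long now = lastCheck + 1;
                while (!followThroughAllowed(now, lastModified, lastCheck, 0.5)) {
                    now++;               // keep waiting until the rule lets the request through
                }
                System.out.println("follow-through allowed after waiting " + (now - lastCheck));
                lastCheck = now;
            }
        }
    }

Run it and the printed waits come out 1, 2, 4, 8, 16 -- each follow-through
waits twice as long as the last, so the hit rate on the epicenter halves with
every round.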

You might say falling over still sucks for the epicenter.  But this sort of
thing can happen under the current system as well -- suppose someone
misspells a key on slashdot.  Then the epicenter for the misspelling will
get hit by requests it can't fend off with data because there is no data
under that key.

theo

