On October 29, 2003 12:53 pm, Toad wrote:
> On Wed, Oct 29, 2003 at 07:57:56AM -0500, Ed Tomlinson wrote:
> > On October 28, 2003 11:53 pm, perspectivehuck wrote:
> > > So what should we do about this? I propose we should do two things.
> > > First make sure that it starts taking network resources into account.
> > > Meaning that in the NGrouting formula when calculating the predicted
> > > time, we need to consider that given finite bandwidth (as listed in the
> > > config file) initiating any new request will slow all the others
> > > currently processing down. This additional time should be added to the
> > > request time of the request we are making. It currently assumes that
> > > new requests never slow things down, so making more requests, which are
> > > likely to fail, looks more favorable than it really is.
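[A minimal sketch of the adjustment proposed above, in Java. All names here (ContentionEstimate, adjustedEstimateMs, and the exact penalty formula) are illustrative assumptions, not anything from the Freenet source: the idea is just that a new transfer consumes link time out of the configured bandwidth limit, and that cost should be added to the raw NGR prediction instead of being treated as free.]

```java
// Hypothetical sketch of a bandwidth-aware NGR time estimate.
// The class name, method name, and penalty formula are assumptions
// for illustration only, not Freenet's actual NGRouting code.
public final class ContentionEstimate {

    /**
     * Adds a contention penalty to a raw NGR time estimate.  A new
     * transfer of expectedBytes consumes expectedBytes / bandwidth
     * seconds of link time, during which every transfer already in
     * flight is slowed down; the current formula charges nothing
     * for this, so extra (likely-to-fail) requests look too cheap.
     */
    public static double adjustedEstimateMs(double rawEstimateMs,
                                            int activeTransfers,
                                            double expectedBytes,
                                            double bandwidthBytesPerSec) {
        if (activeTransfers == 0)
            return rawEstimateMs;  // nothing in flight to slow down
        // Wall-clock link time the new transfer will occupy.
        double penaltyMs = 1000.0 * expectedBytes / bandwidthBytesPerSec;
        return rawEstimateMs + penaltyMs;
    }
}
```

[With this, a request that looks cheap in isolation gets more expensive as the node's link fills up, which is the behaviour the proposal asks for.]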
> >
> > What sort of formula do you suggest for this?  I do think we need to
> > take this effect into account more than we do.  I wonder if the ngr
> > estimate is the right place though.  It estimates the time it will take
> > to get data.  Bandwidth limits are on outgoing data.  A request is small
> > and will not add much to outgoing data.  On the other hand, when we decide
> > to send data (trailers) to another node, this will slow down all the
> > other transmitting trailers.  This _is_ something we probably should be
> > taking into account.  It would probably be better, from an overall
> > perspective, to queue new trailers if we are using our bandwidth quota.
> > This way the current sends would not be slowed down.
>
> Anyone who implements this without first reading my sixty-eight posts on
> why it is a bad idea and explaining why I am wrong will be publicly
> hanged :).

Somehow I get the feeling that I make you a little nervous :-/

> The basic problem is that we do not generally control the speed 
> at which the data on a given trailer becomes available. Maybe it'd be

You have mentioned this...  I wonder though where the bottleneck really
is, given that many (maybe most) nodes are on DSL or cable connections
where incoming bandwidth is much higher than outgoing?

> worthwhile for ubernodes sending data primarily from store, but that's
> not a particularly worthwhile case to optimize IMHO. I don't see how we
> can do much more than we do now - reject requests if we are over some
> fraction of the bandwidth limit.

There is a difference in what I suggest here though.  I only suggest queueing
the trailer send _if_ we are already exceeding the bandwidth limit.  Hopefully
this does not prevent data for the queued message from being received and
buffered...

It would be simple enough to run a timer every 200 ms or so to check for
queued trailers and start them.  We would also want to track what the
queue delay is.  If it's too large we probably need to do more rejecting.
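[Something like the following, as a rough Java sketch. Everything here is hypothetical (TrailerQueue, BandwidthLimiter, the 200 ms interval and the delay threshold are just the numbers from this mail, not real Freenet code): trailers go out immediately unless we are already over the limit, a periodic task drains the queue when bandwidth frees up, and the observed queue delay is tracked so the node knows when to reject harder.]

```java
import java.util.ArrayDeque;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the trailer queue proposed above.
// All names are illustrative, not from the Freenet source.
final class TrailerQueue {
    private static final long CHECK_INTERVAL_MS = 200;       // from this mail
    private static final long MAX_ACCEPTABLE_DELAY_MS = 2000; // assumed threshold

    /** Assumed hook into the node's bandwidth accounting. */
    interface BandwidthLimiter { boolean overLimit(); }

    private static final class QueuedTrailer {
        final Runnable send;
        final long enqueuedAt = System.currentTimeMillis();
        QueuedTrailer(Runnable send) { this.send = send; }
    }

    private final ArrayDeque<QueuedTrailer> queue = new ArrayDeque<>();
    private final BandwidthLimiter limiter;
    private long lastObservedDelayMs = 0;

    TrailerQueue(BandwidthLimiter limiter) {
        this.limiter = limiter;
        // Timer fires every 200 ms to start queued trailers.
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "trailer-queue-timer");
            t.setDaemon(true);  // don't keep the JVM alive just for this
            return t;
        }).scheduleAtFixedRate(this::drain, CHECK_INTERVAL_MS,
                               CHECK_INTERVAL_MS, TimeUnit.MILLISECONDS);
    }

    /** Send immediately unless we are already over the bandwidth limit. */
    synchronized void submit(Runnable send) {
        if (!limiter.overLimit()) send.run();
        else queue.add(new QueuedTrailer(send));
    }

    private synchronized void drain() {
        while (!queue.isEmpty() && !limiter.overLimit()) {
            QueuedTrailer t = queue.poll();
            lastObservedDelayMs = System.currentTimeMillis() - t.enqueuedAt;
            t.send.run();
        }
    }

    /** If queue delay grows too large, the node should reject more requests. */
    synchronized boolean shouldRejectMore() {
        return lastObservedDelayMs > MAX_ACCEPTABLE_DELAY_MS;
    }
}
```

[The point of the design is that current sends are never slowed by a new trailer; the new trailer just waits, and when it does start it gets a bigger share of the link.]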

> > so the nodes waiting for this data get it faster.   The node waiting for
> > the queued request would not wait that much longer either - when we start
> > sending, we have bandwidth to do it faster...
> >
> > Comments?
> > Ed
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
