On Monday 03 December 2007 17:25, Michael Rogers wrote:
> Matthew Toseland wrote:
> > Well... yes and no. It was a bug at the LINK layer iirc. Remember the 
> > pathetically low payload percentages?
> 
> What you call the link layer is supposed to be congestion-controlled, right?

Yes, at the link layer. Congestion control presently applies only to data 
packets, and I accept that this sucks. We have plans for a new link 
layer/packet format, more like TCP, but they haven't been implemented yet.
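
By "more like TCP" I mean roughly an AIMD congestion window applied to all 
packets, not just data packets. A minimal sketch in Java; the class name and 
constants are mine for illustration, not the planned packet format:

    // Sketch only: AIMD (additive-increase, multiplicative-decrease)
    // window, as TCP does, covering every packet on the link.
    class AimdWindow {
        private double windowPackets = 1.0;         // packets allowed in flight
        private static final double MAX_WINDOW = 256.0; // illustrative cap

        // Additive increase: grow by ~1 packet per window's worth of acks.
        synchronized void onAck() {
            windowPackets = Math.min(MAX_WINDOW, windowPackets + 1.0 / windowPackets);
        }

        // Multiplicative decrease on a loss or timeout signal.
        synchronized void onLoss() {
            windowPackets = Math.max(1.0, windowPackets / 2.0);
        }

        synchronized boolean maySend(int packetsInFlight) {
            return packetsInFlight < (int) windowPackets;
        }
    }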
> 
> > I'm not talking about stop/start signals, nor am I talking about tokens in
> > the sense that you use the word.
> 
> Then why did you say your proposal was basically the same as token passing?
> 
> So just to be clear, you're talking about tokens that expire, backed up
> by pre-emptive rejection if too many tokens are spent at once?

Yes.
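
To make that concrete, here is a minimal sketch of the mechanism; the class 
name, window size and clock handling are illustrative assumptions, not actual 
Freenet code:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch only: tokens that expire after a window, backed up by
    // pre-emptive rejection if too many tokens are spent at once.
    class ExpiringTokenPool {
        private final Deque<Long> spent = new ArrayDeque<>(); // spend timestamps
        private final int maxTokens;      // tokens usable per window
        private final long windowMillis;  // tokens older than this have expired

        ExpiringTokenPool(int maxTokens, long windowMillis) {
            this.maxTokens = maxTokens;
            this.windowMillis = windowMillis;
        }

        // Returns true if the request may proceed; false = pre-emptive rejection.
        synchronized boolean trySpendToken() {
            long now = System.currentTimeMillis();
            // Expired tokens no longer count against the sender.
            while (!spent.isEmpty() && now - spent.peekFirst() > windowMillis) {
                spent.removeFirst();
            }
            if (spent.size() >= maxTokens) {
                return false; // too many tokens spent at once
            }
            spent.addLast(now);
            return true;
        }
    }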
> 
> Will the grounds for pre-emptive rejection be the same as they are now
> (bandwidth liability etc)? If so, how will tokens solve the current
> problem of too many requests being rejected?

I'm not sure that that is the main problem we have to deal with right now. My 
suspicion is that pre-emptive rejection is fine, but that we're not being sent 
enough requests in the first place. Output liability limiting *may* cause too 
many requests to be rejected, but I don't see any evidence of that right now. 
And anything we did about it would risk creating timeouts if we had a burst of 
unusually successful requests.
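
For reference, output liability limiting amounts to something like the 
following sketch; the names and the accounting are illustrative assumptions, 
not the real implementation:

    // Sketch only: accept a request just when the worst-case reply bytes
    // owed for all accepted requests can still be sent within the timeout
    // at our configured output rate.
    class OutputLiabilityLimiter {
        private long committedBytes;          // worst-case output owed so far
        private final long outputBytesPerSec; // configured output bandwidth
        private final long timeoutSec;        // time we have to deliver replies

        OutputLiabilityLimiter(long outputBytesPerSec, long timeoutSec) {
            this.outputBytesPerSec = outputBytesPerSec;
            this.timeoutSec = timeoutSec;
        }

        synchronized boolean tryAccept(long worstCaseReplyBytes) {
            // If every accepted request succeeded, could we still send all
            // the replies before they time out? If not, reject pre-emptively.
            if (committedBytes + worstCaseReplyBytes > outputBytesPerSec * timeoutSec) {
                return false;
            }
            committedBytes += worstCaseReplyBytes;
            return true;
        }

        synchronized void requestCompleted(long worstCaseReplyBytes) {
            committedBytes -= worstCaseReplyBytes;
        }
    }

The burst problem is visible in the sketch: a run of unusually successful 
requests keeps committedBytes near the cap, so loosening the limit to accept 
more requests trades directly against the risk of timeouts.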
> 
> Cheers,
> Michael