On Thursday 15 January 2009 21:57, svenerichoffmann at gmx.de wrote:
> There is still the question of why some transfers are so slow.

IMHO because of freak conditions - QoS, foreground CPU jobs; there are loads 
of possible explanations. A malicious attack is also possible, which is one 
reason we need to deal with this.

> Here's what I thought about that (nothing new).
> 
> 
> MY THEORY 
> 
> -> most traffic is routed through a node
> -> if the node does not serve out of store, up/down bandwidth usage will be
> exactly the same (in/out balance)
> -> because load management targets 100% up bandwidth usage, all the upload
> is used to the max
> -> what happens if the node now starts to serve a request (transfer) out of
> store?
> 
> 
> CONCLUSION:
> 
> this request served out of store is in a bad position because the upload is
> already in full use (100%) with routing traffic. Because the up limit is a hard
> limit, there is no room for a fast transfer.

I don't follow. Usually upload is the scarce resource, and we decide whether 
to accept a request without reference to whether we have the key locally.
> 
> If there is always more upload limit than download limit

There is pretty much always much more download limit than upload limit.

> all requests should pass quickly through the node -> a reserve exists
> 
> - load management does not target the full size of the upload, because the
>   lower down limit will trigger load management first
> 
> - inserts are incorporated into load management -> they use incoming bandwidth
> 
> - if my node serves every 10th request from its store
>   it MUST HAVE 10% more upload capacity than download capacity 

No, it must decide how many requests it can accept given the available 
resources.
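
To make that concrete, here is a minimal sketch of the kind of decision I mean 
(the class name, the 90-second horizon and the byte figures are made up for 
illustration; this is not the actual code): the node tracks the bytes it has 
already committed to send for accepted requests, and only accepts another 
request if that commitment still fits within the upload limit. Whether the key 
happens to be in the local store never enters into it.

    public class AcceptanceSketch {
        // Illustration only, not the real Freenet code. Simple "liability"
        // model: sum the bytes already promised for accepted-but-unfinished
        // requests and accept a new one only if it still fits within the
        // upload limit over a fixed time horizon. The local store is never
        // consulted here.
        static final double UPLOAD_LIMIT_BYTES_PER_SEC = 20 * 1024; // example value
        static final double TIME_HORIZON_SEC = 90;                  // made-up horizon
        static final int BLOCK_SIZE = 32 * 1024;                    // one data block

        private double committedBytes = 0; // promised to peers, not yet sent

        synchronized boolean shouldAccept() {
            double capacity = UPLOAD_LIMIT_BYTES_PER_SEC * TIME_HORIZON_SEC;
            return committedBytes + BLOCK_SIZE <= capacity;
        }

        synchronized void onAccepted()  { committedBytes += BLOCK_SIZE; }
        synchronized void onCompleted() { committedBytes -= BLOCK_SIZE; }
    }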
> 
> 
> 
> TESTED:
> 
> Currently the up limit seems to work as a hard limit and the down limit seems
> to work as a soft limit (my node went above it, but that triggered load management)

Yes, it is very hard to accurately limit downstream.
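
To illustrate why (made-up names and smoothing factor, not real code): we 
cannot refuse bytes that peers have already started sending, we can only stop 
starting new transfers, so short bursts overshoot the limit and only the 
average ends up near it.

    public class DownstreamSoftLimit {
        // Sketch of a "soft" downstream limit: bytes already on the wire
        // cannot be rejected; all we can do is stop starting new transfers
        // once the observed inbound rate gets too high.
        static final double DOWN_LIMIT_BYTES_PER_SEC = 16 * 1024; // example value
        private double smoothedRate = 0; // exponential moving average, bytes/sec

        synchronized void onBytesReceived(int bytes, double intervalSec) {
            double instant = bytes / intervalSec;
            smoothedRate = 0.9 * smoothedRate + 0.1 * instant;
        }

        synchronized boolean mayStartNewTransfer() {
            // Bursts push the real usage over the limit; this check only
            // brings the longer-term average back towards it.
            return smoothedRate < DOWN_LIMIT_BYTES_PER_SEC;
        }
    }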
> 
> 
> In the test I set my limits like this:
> 
> upload limit ->  20 kb/s  
> down limit ->  16 kb/s  

Most people have download limit way over the upload limit.
> 
> -> running transfers seemed to be in good proportion to the used upload bandwidth
> -> local requests worked (in relation to what my node can handle / the down
>    bandwidth limit)
>     down bandwidth load management also managed how many local requests
>     could be started, in relation to what the node is able to handle (local
>     rejects)
> -> it did not seem to oscillate; bandwidth usage ran quite linearly
> -> the down bandwidth limit seemed to be used to the max (usage over time)
> -> up bandwidth was used MORE than down bandwidth (usage over time), but not to
>    the max; there were still reserves there
>     it seems that the node was able to serve out of store quite fast using
>     the headroom generated by the difference between the up and down limits
> 
> 
> The down limit as a soft limit did quite well, because it does not add latency
> like a hard limit does. The node can still process short "waves" and go over the
> limit. But in the long run it came very close to the set limit.
> 
> How do those short waves happen?
> If only 10% of requests are successful, everything works well as long as only
> every 10th request succeeds. But in the real world we might get 3 successful
> requests in a row, and this generates a wave. So it is necessary that up
> bandwidth usage has headroom/reserve to process such waves. A 100% used upload
> cannot process these waves without latency.

Sure. Hence my proposed solution: for real-time requests, accept relatively 
few so that the latency doesn't get high. For bulk requests, accept loads to 
maximise throughput.
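
As a very rough sketch of that split (the in-flight limits are invented 
numbers, not a design): keep the number of in-flight realtime (fproxy) 
transfers small so their queueing delay stays low, and give bulk transfers a 
much larger allowance so they can soak up the remaining upload capacity.

    public class TwoClassAcceptance {
        // Sketch of the realtime/bulk split with made-up limits: a small
        // in-flight allowance for realtime requests keeps their queueing
        // delay low; a large one for bulk requests maximises throughput.
        static final int MAX_REALTIME_IN_FLIGHT = 4;
        static final int MAX_BULK_IN_FLIGHT = 64;

        private int realtimeInFlight = 0;
        private int bulkInFlight = 0;

        synchronized boolean accept(boolean realTime) {
            if (realTime) {
                if (realtimeInFlight >= MAX_REALTIME_IN_FLIGHT) return false;
                realtimeInFlight++;
            } else {
                if (bulkInFlight >= MAX_BULK_IN_FLIGHT) return false;
                bulkInFlight++;
            }
            return true;
        }

        synchronized void finished(boolean realTime) {
            if (realTime) realtimeInFlight--; else bulkInFlight--;
        }
    }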
> 
> 
> 
> 
> SOLUTIONS:
> 
> But we all know this. So the question is how to operate a good load management
> that achieves both high upload usage and small latencies.
> 
> 
> #1    Quick and Dirty - with the current load management
> 
> Take the down limit out of the user's reach and let the node set it by itself,
> based on the upload limit given in the config (user input)
> 
> for example like this:
> 
> upload limit ->  20 kb/s  (user input)
> down limit ->  16 kb/s  (auto-set by the node -> upload limit minus a 20% reserve
> for serving out of store = down limit at 80%)
> 
> Advantage: local requests are also limited -> in correlation to the capacity
> of the node
> 
> Hmm... maybe a sliding down limit based on psuccess or other indicators is an
> option? Some mathematical thoughts here? -> would result in optimized upload usage.
> But don't forget that 100% upload usage will not give good latency without QoS
> for requests.

I do not understand.
> 
> 
> 
> #2  Two upload limits - Soft and Hard
> 
> Upload limit hard (current)
>  -> stays the same -> bandwidth usage does not go beyond it
> 
> Upload limit soft (new)
>  -> auto-calculated by the node, or set to some value like 80% of the hard limit
>     (20% reserve for waves / serving out of store)
>  -> this limit is the value that upload load management targets
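
If I read #2 correctly, it would look roughly like this (the names and the 80% 
figure are taken from the example above; this is a sketch rather than an 
implementation): load management aims at the soft target, while the traffic 
shaper still enforces the hard ceiling.

    public class UploadLimits {
        // Sketch of the soft/hard split as described above, not real code.
        final double hardLimitBytesPerSec; // absolute ceiling (current behaviour)
        final double softLimitBytesPerSec; // what load management aims for

        UploadLimits(double hardLimitBytesPerSec) {
            this.hardLimitBytesPerSec = hardLimitBytesPerSec;
            // Leave ~20% headroom for store hits and short waves.
            this.softLimitBytesPerSec = 0.8 * hardLimitBytesPerSec;
        }

        boolean acceptMoreWork(double currentUploadBytesPerSec) {
            // New requests are only accepted while below the soft target...
            return currentUploadBytesPerSec < softLimitBytesPerSec;
        }

        boolean maySendPacketNow(double currentUploadBytesPerSec) {
            // ...but the hard limit is what the traffic shaper actually enforces.
            return currentUploadBytesPerSec < hardLimitBytesPerSec;
        }
    }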
> 
> 
> PS:
> I think it's not that problematic if you run some tests in the wild.
> The current Freenet users are surely hardcore and will survive some tests.
> It was good to see that there is still the will to try something new the hard way
> instead of taking small steps over months.
> 
> 
> 
> 
> 
> 
>     We have several conflicting goals here:
>     - Minimise request latency for fproxy.
>     - Maximise the probability of a request succeeding.
>     - Maximise throughput for large downloads.
>     - Use all available upstream bandwidth.
>     - Don't break routing by causing widespread backoff.