I know.  I started with one suggestion (which Dr. Anderson said he 
checked in almost immediately) and then we got off on the more 
complex/controversial one.

Skipping the next upload (or download) when the one before failed will
automatically reduce load at exactly the times the project needs it reduced.

[email protected] wrote:
> Sorry, I got a bit confused by some of the posts.  I wasn't quite certain
> where the thread had gotten to.
> 
> jm7
> 
> 
> 
> "Lynn W. Taylor" <[email protected]> wrote on 07/13/2009 11:46 AM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [boinc_dev] Optimizing uploads..... (AND downloads)
> 
> That was my original suggestion, with one small change.
> 
> Always try to upload the file with the earliest deadline, so that if a
> few uploads are occasionally going through, you get the most important
> one first.
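
A minimal sketch of that deadline-first pick, with hypothetical names and
types (this is not the actual client code):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Hypothetical record for a pending upload; not the real client structures.
    struct PendingUpload {
        std::string name;
        double report_deadline;  // Unix time by which the result must be reported
    };

    static bool earlier_deadline(const PendingUpload& a, const PendingUpload& b) {
        return a.report_deadline < b.report_deadline;
    }

    // Earliest deadline first: when only a few uploads are getting through,
    // the most urgent result goes out first.
    PendingUpload* next_upload(std::vector<PendingUpload>& pending) {
        if (pending.empty()) return 0;
        return &*std::min_element(pending.begin(), pending.end(), earlier_deadline);
    }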
> 
> -- Lynn
> 
> [email protected] wrote:
>> A couple of suggestions.
>>
>> 1)  When a backoff on the client ends, only start ONE upload.  (Not the
>> maximum allowed by the client).
>> 2)  If that upload completes, start the next ONE.
>> 3)  If all uploads complete with no problem, drop the backoff to 0.
>> 4)  If one does NOT upload, continue with the exponential backoff.
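
A minimal sketch of those four steps as a state machine; every name here is
made up for illustration:

    #include <algorithm>

    struct UploadScheduler {
        double backoff;                     // current backoff, in seconds
        UploadScheduler() : backoff(0) {}

        // Called when the backoff timer expires and after each upload ends.
        void on_transfer_slot(bool last_upload_ok, bool more_pending) {
            if (!last_upload_ok) {
                // 4) a failure continues the exponential backoff
                backoff = std::min(std::max(backoff * 2, 60.0), 4.0 * 3600);
                return;
            }
            if (more_pending) {
                start_one_upload();         // 1) and 2): exactly ONE at a time
            } else {
                backoff = 0;                // 3) everything uploaded cleanly
            }
        }
        void start_one_upload() { /* begin a single HTTP transfer */ }
    };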
>>
>> jm7
>>
>>
>>
>>
>> Martin <[email protected]> wrote on 07/12/2009 08:06 PM
>> To: BOINC Developers List <[email protected]>
>> Subject: Re: [boinc_dev] Optimizing uploads..... (AND downloads)
>>
>> OK... To summarise/clarify from offlist...
>>
>> (NB: Mb = megabits; MB = megabytes.)
>>
>>
>> The assumption is that the core problem strangling the s...@h
>> uploads/downloads is severe link bandwidth degradation caused by link
>> congestion. The cricket graphs are highly suggestive of this.
>>
>> For example, at the moment the s...@h link shows:
>>
>>
>> http://fragment1.berkeley.edu/newcricket/grapher.cgi?target=%2Frouter-interfaces%2Finr-250%2Fgigabitethernet2_3;ranges=m;view=Octets
> 
>>
>> Average bits in (for the day):
>> Cur: 65.82 Mbits/sec
>> Avg: 64.92 Mbits/sec
>> Max: 74.95 Mbits/sec
>>
>> Average bits out (for the day):
>> Cur: 18.14 Mbits/sec
>> Avg: 18.11 Mbits/sec
>> Max: 21.31 Mbits/sec
>> Last updated at Sun Jul 12 16:28:39 2009
>>
>> And from the graph, earlier download peaks are pegged at about 93Mbit/s.
>> Uploads peak at 20Mb/s.
>>
>> Note that this is for a period where s...@h appear to be short of available
>> WUs! (Or is this a supply throttle to test scenarios?)
>>
>>
>> (Note also that the "bits in"/"bits out" directions are confusingly
>> reversed with respect to the s...@h servers, due to that router's
>> reporting config.)
>>
>> The present link load is 70Mb/s and connections from my Boinc client are
>> fine and fast. E.g. just now:
>>
>> Mon 13 Jul 2009 00:38:55 BST   s...@home   Sending scheduler request: Requested by user.
>> Mon 13 Jul 2009 00:38:55 BST   s...@home   Requesting new tasks
>> Mon 13 Jul 2009 00:39:00 BST   s...@home   Scheduler request completed: got 8 new tasks
>> Mon 13 Jul 2009 00:39:02 BST   s...@home   Started download of 28au08ag.27740.18477.4.8.252
>> Mon 13 Jul 2009 00:39:02 BST   s...@home   Started download of 28au08ag.27740.18477.4.8.242
>> Mon 13 Jul 2009 00:39:07 BST   s...@home   Finished download of 28au08ag.27740.18477.4.8.252
>> Mon 13 Jul 2009 00:39:07 BST   s...@home   Finished download of 28au08ag.27740.18477.4.8.242
>> Mon 13 Jul 2009 00:39:07 BST   s...@home   Started download of 15se08aa.19921.11115.9.8.138
>> Mon 13 Jul 2009 00:39:07 BST   s...@home   Started download of 04dc08ae.32262.20522.4.8.62
>> Mon 13 Jul 2009 00:39:12 BST   s...@home   Finished download of 15se08aa.19921.11115.9.8.138
>> Mon 13 Jul 2009 00:39:12 BST   s...@home   Finished download of 04dc08ae.32262.20522.4.8.62
>> Mon 13 Jul 2009 00:39:12 BST   s...@home   Started download of 28au08ag.27740.18477.4.8.254
>> Mon 13 Jul 2009 00:39:12 BST   s...@home   Started download of 15se08aa.19921.11115.9.8.140
>> Mon 13 Jul 2009 00:39:18 BST   s...@home   Finished download of 28au08ag.27740.18477.4.8.254
>> Mon 13 Jul 2009 00:39:18 BST   s...@home   Finished download of 15se08aa.19921.11115.9.8.140
>> Mon 13 Jul 2009 00:39:18 BST   s...@home   Started download of 04dc08ae.32262.20522.4.8.95
>> Mon 13 Jul 2009 00:39:18 BST   s...@home   Started download of 04dc08ae.32262.20522.4.8.75
>> Mon 13 Jul 2009 00:39:23 BST   s...@home   Finished download of 04dc08ae.32262.20522.4.8.95
>> Mon 13 Jul 2009 00:39:23 BST   s...@home   Finished download of 04dc08ae.32262.20522.4.8.75
>>
>> Nice 'n' quick. That is actually pegged at the maximum that my own
>> traffic management permits (Linux "tc", but NOT the "policer").
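
For reference, the shaping side of that can be a single token-bucket qdisc;
the device name and the 4 Mbit/s cap below are made-up examples:

    # Shape (do not police) this box's uplink to 4 Mbit/s.
    tc qdisc add dev eth0 root tbf rate 4mbit burst 32kbit latency 400ms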
>>
>> Sooo... A s...@h link average of 70Mb/s looks good :-)
>>
>>
>>
>> To add the full list of proposed fixes (from myself and others):
>>
>> Martin wrote:
>> [...]
>>> Server-side code can later be added to dynamically adjust the parameters
>>> from learning about what happens on average.
>> For example, to control the link data rate from the server side, dynamically:
>>
>> Adjust the maximum number of simultaneous connections accepted;
>> Adjust transfer rate limits for sending out data;
>> Delay server responses and send NACK instead if still too many
>> simultaneous connections.
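
A sketch of the third item, an admission gate in front of the upload and
download handlers (all names hypothetical; the limit would be retuned from
the link statistics):

    #include <atomic>

    // Cap simultaneous transfers; excess clients get a NACK ("busy, retry
    // in N seconds") instead of being allowed to congest the link.
    class AdmissionGate {
        std::atomic<int> active;
        std::atomic<int> max_conns;   // adjustable at runtime
    public:
        explicit AdmissionGate(int limit) : active(0), max_conns(limit) {}
        void set_limit(int limit) { max_conns = limit; }

        bool try_enter() {
            if (active.fetch_add(1) >= max_conns) {
                active.fetch_sub(1);
                return false;         // caller sends the NACK response
            }
            return true;
        }
        void leave() { active.fetch_sub(1); }
    };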
>>
>>
>>> That's the nature of a DDoS attack. That shouldn't be happening with the
>>> present Boinc clients, and especially not with the exponential backoff.
>> Modify the existing client-side exponential backoff so that once a
>> backoff time has accumulated, that backoff is only reduced by a linear
>> decay or, better, a slow-start-style exponential decay.
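
As a sketch of what that decay could look like (hypothetical helper
functions, seconds throughout):

    // One success no longer zeroes the accumulated backoff; it drains a
    // fixed step at a time (linear) or halves (slow-start-style decay).
    double decay_linear(double backoff, double step) {
        return backoff > step ? backoff - step : 0;
    }
    double decay_exponential(double backoff, double floor) {
        return backoff / 2 < floor ? 0 : backoff / 2;
    }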
>>
>> Allow the servers to update the client-side exponential backoff
>> parameters depending on the project's current conditions. Send this in
>> the initial response?
>>
>> Always use ordered uploads: EDF (earliest deadline first) if the last
>> transfer completed successfully, RR (round robin) if the last transfer
>> failed (so that one rogue blocked result doesn't 'jam up the works').
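
A sketch of that selection rule, reusing the PendingUpload/next_upload
sketch from earlier in the thread (all hypothetical):

    #include <cstddef>
    #include <vector>

    // EDF while transfers succeed; rotate round-robin after a failure so
    // one stuck result can't jam the queue.
    PendingUpload* pick_next(std::vector<PendingUpload>& pending,
                             bool last_transfer_ok, size_t& rr_cursor) {
        if (pending.empty()) return 0;
        if (last_transfer_ok) return next_upload(pending);  // EDF
        rr_cursor = (rr_cursor + 1) % pending.size();       // RR
        return &pending[rr_cursor];
    }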
>>
>>
>>> Very many more than the number of uploads/downloads that can be
>>> accommodated down 90Mb/s of pipe to a modern world community of
>>> broadband users. *Just 20 'average' users* simultaneously downloading
>>> can easily saturate the downlink to a congested mess beyond what TCP
>>> can handle. Similarly so for 150 'average' users all trying to upload
>>> during the same one second.
>>>
>>> Those numbers are vastly smaller than 184320 and also very much
>>> smaller than the limits for Apache.
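
(Rough arithmetic behind those figures, assuming a typical broadband user of
the day pulls about 4.5 Mbit/s down and pushes roughly 0.13 Mbit/s up per
connection: 20 x 4.5 Mbit/s = 90 Mbit/s, the whole downlink; and 150 x 0.13
Mbit/s is about 20 Mbit/s, matching the observed upload peak.)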
>> [...]
>>> Hence:
>>>
>>>  >> As an experiment, can s...@h limit the max number of simultaneous
>>>  >> upload/download connections to see what happens to the data rates?
>>>  >>
>>>  >> I suggest a 'first try' *max simultaneous connections of 150* for
>>>  >> uploads *and 20 for downloads* Adjust as necessary to keep the link
>>>  >> at an average that is no more than *just 80 Mbit/s*
>>>
>>> It would be interesting to see what those numbers adjust to so as to hit
>>> the unsaturated 80 Mbit/s... And see by how much that improves the
>>> sustained transfer rate for the WUs.
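
If the upload handler and the data downloads run as separate Apache 2.2
(prefork) instances, the blunt form of that experiment is a per-instance
connection cap; MaxClients is a real 2.2 directive, but the two-instance
split is my assumption:

    # upload instance (file_upload_handler): at most 150 simultaneous connections
    MaxClients 150

    # download instance (data files): at most 20 simultaneous connections
    MaxClients 20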
>> Is that being tried at the moment?
>>
>> And for what numbers?
>>
>> Whatever has been done for the moment, at 70Mb/s the s...@h link is running
>> sweet and smooth and fast!
>>
>>
>> Regards,
>> Martin
>>
>> --
>> --------------------
>> Martin Lomas
>> m_boincdev ml1 co uk.ddSPAM.dd
>> --------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
