In my opinion, this is an exceptional tool for exceptional times.

In other words, even the most heavily loaded project runs fine most of 
the time without throttling -- you only need to throttle aggressively 
when the project servers are clobbered by some exceptional load.

-- Lynn

Martin wrote:
> Josef W. Segur wrote:
>> On 13 Jul 2009 at 1:06, Martin wrote:
>>
>>> I suggest a 'first try' *max simultaneous connections of 150* for
>>> uploads *and 20 for downloads*. Adjust as necessary to keep the link
>>> at an average of no more than *just 80 Mbit/s*.
>> 20 for downloads won't work. I'm on dial-up like Mikus (though I don't
>> know if he's doing s...@h). Maybe 5% of the 292343 active s...@h hosts, call
>> it 15000 or so, are communicating at ~50 kbps. On such a host the download
>> connection for a setiathome_enhanced workunit lasts for about a minute.
>> If each such host downloads on average one workunit per day your 20
>> connections are tied up continually. An Astropulse workunit takes 30
>> minutes or so to download...
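For concreteness, a quick back-of-envelope check of Josef's figures (in
Python; the per-download times are the rough numbers from his post, not
measured values):

    # Average download connections tied up by ~15000 dial-up hosts,
    # using the figures quoted above.
    hosts = 15000                 # ~5% of the 292343 active hosts
    downloads_per_day = 1         # one workunit per host per day
    secs_per_download = 60        # ~1 minute per setiathome_enhanced WU at ~50 kbps

    busy_secs = hosts * downloads_per_day * secs_per_download
    print(busy_secs / 86400.0)    # ~10 of the proposed 20 download slots, continuously

    # The same sum with ~30-minute Astropulse downloads needs far more than 20 slots.
    print(hosts * 1800 / 86400.0) # ~312 concurrent connections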
> 
> Good point.
> 
> The danger there is that the dial-up users may end up holding all the 
> connections: the fast connections clear quickly, while a dial-up user 
> grabs a connection and holds it for a 'long' time, as you describe...
> 
> This is where the server must dynamically adjust the connection limits.
> 
> Hence the point about the server (dynamically) adjusting the connection 
> limits to maintain roughly 80% maximum link utilisation for each of 
> uploads and downloads. Note also that the limits should keep being 
> relaxed until some proportion of the allowed connections sit available 
> but unused, to cover the case when the project isn't busy with transfers.
> 
> That should self-adjust to accommodate any real-world mix of users on 
> slow and fast connections.
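To make the dynamic adjustment concrete, here is a minimal sketch in
Python of the kind of control loop Martin describes. It assumes the
project can sample its own link utilisation and the number of
connections currently in use each interval; the thresholds and step
sizes are arbitrary placeholders, not tested values.

    # Sketch of the proposed feedback loop: run one instance for uploads
    # and one for downloads, every few seconds, feeding it the measured
    # link utilisation (0.0-1.0) and the connections currently in use.

    TARGET_UTILISATION = 0.80   # aim for ~80% of link capacity
    MIN_SPARE_FRACTION = 0.10   # keep some allowed connections unused
    MIN_LIMIT, MAX_LIMIT = 20, 2000

    def adjust_limit(current_limit, utilisation, in_use):
        """Return the max-connections limit for the next interval."""
        if utilisation > TARGET_UTILISATION:
            # Link is saturating: shed load whatever the connection count.
            new_limit = int(current_limit * 0.9)
        elif in_use >= current_limit * (1.0 - MIN_SPARE_FRACTION):
            # Link has headroom but the cap is the bottleneck: relax it.
            new_limit = int(current_limit * 1.1) + 1
        else:
            # Quiet period: leave the limit alone.
            new_limit = current_limit
        return max(MIN_LIMIT, min(MAX_LIMIT, new_limit))

The multiplicative steps mean the limit backs off quickly under
saturation and grows steadily while spare capacity remains, which is
roughly the self-adjusting behaviour described above.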
> 
> 
>> To get back to BOINC development, I think a fairly simple change would
>> help a project tune work delivery to available bandwidth. The Feeder
>> ensures there are 100 results available to send every 2 seconds. If that
>> 2 second sleep interval were changed to a variable, the amount of work
>> which is available to be sent could be adjusted to circumstances. Make it
>> adjustable by a script which senses when dropped connections exceed some
>> small minimum, perhaps. Or simply have project staff adjust it when
>> conditions change. Projects which never run into difficulties (if there
>> are any such) would keep the 2 second default.
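For illustration, a sketch (again Python, not existing BOINC code) of
the adjustment script Josef describes, assuming the Feeder's sleep
interval has been made configurable and the script can measure the
recent dropped-connection rate from the project's own server logs:

    # Lengthen the Feeder sleep interval when dropped connections climb,
    # and ease back toward the 2-second default when conditions recover.
    # The caller measures drop_rate (dropped / attempted connections)
    # however the project's logs allow.

    DEFAULT_INTERVAL = 2.0    # the current hard-coded Feeder sleep, seconds
    MAX_INTERVAL = 30.0
    DROP_THRESHOLD = 0.01     # "some small minimum" of dropped connections

    def next_interval(current, drop_rate):
        """Return the Feeder sleep interval for the next period."""
        if drop_rate > DROP_THRESHOLD:
            # Too many drops: make work available more slowly.
            return min(MAX_INTERVAL, current * 1.5)
        # Conditions are fine: ease back toward the default.
        return max(DEFAULT_INTERVAL, current * 0.8)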
> 
> I'd suggest making the bursts much smaller, say no more than 1 second's 
> worth of WUs per burst. Most routers can only buffer a little more than 
> a second of link data without loss.
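To put a number on "no more than 1 second bursts": a rough budget, in
Python, assuming a nominal 100 Mbit/s project link and a ~375 KB
workunit (both round figures, not project measurements):

    # How much data, and roughly how many WUs, fit in a 1-second burst
    # if routers along the path buffer only ~1 second of link data.
    link_bits_per_s = 100e6        # assumed project link speed
    burst_seconds = 1.0            # suggested ceiling
    wu_bytes = 375 * 1024          # assumed setiathome_enhanced WU size

    burst_budget_bytes = link_bits_per_s / 8 * burst_seconds
    print(burst_budget_bytes)                  # ~12.5 MB per burst
    print(int(burst_budget_bytes // wu_bytes)) # ~32 workunits per burst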
> 
> That scheme will work well for keeping downloads under a controlled 
> limit. You still need to do something about the upload scramble while 
> clearing the backlog after an outage.
> 
> 
> Regards,
> Martin
> 
> 