Lynn W. Taylor wrote:
> Comments in-line.
> 
> Martin wrote:
>> Lynn W. Taylor wrote:
>> [...]
>> information for Berkeley to do the calculations.
> 
> If we're optimizing SETI, sure.  If we're optimizing for other projects, 
> there are lots of variables that would affect the peak load.

s...@h is a good first test, with many (and understanding) participants.

Server-side code can later be added to dynamically adjust the parameters 
based on what is learned about average behaviour.
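
For example, a rough sketch (Python, with invented names and made-up 
thresholds, not existing BOINC code) of the sort of adjustment loop I 
mean:

    TARGET_MBIT = 80.0   # hold the link just under saturation

    def adjust_cap(current_cap, measured_mbit):
        """Nudge the max-connections cap toward the target throughput."""
        if measured_mbit > TARGET_MBIT:
            # Link running hot: shed some simultaneous connections.
            return max(1, int(current_cap * 0.9))
        # Headroom available: admit one more connection.
        return current_cap + 1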

Agreed; the first step is to prove what the problem actually is and 
which solutions work.


>> [...]
[...]
>> If the requests for new connections exceed those available, then they 
>> get a NACK or even just nothing.
> 
> Martin, having to generate the NACK is part of the problem.
> 
> Imagine a gigabit connection where 80% of the packets are TCP SYN requests.
> 
> Everyone who is arguing that we can use the BOINC servers to tell the 
> clients to slow down does not have this picture in their heads.

That's the nature of a DDoS attack. It shouldn't be happening with the 
present BOINC clients, and especially not with the exponential backoff.
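
For reference, the flavour of backoff I mean is roughly this (a generic 
Python sketch, not the actual BOINC client code; the one-minute floor 
and four-hour ceiling are illustrative values only):

    import random

    MIN_DELAY = 60          # illustrative floor: 1 minute
    MAX_DELAY = 4 * 3600    # illustrative ceiling: 4 hours

    def retry_delay(n_failures):
        """Exponential backoff with random jitter after n failures."""
        delay = min(MAX_DELAY, MIN_DELAY * 2 ** n_failures)
        # Randomise so clients that failed together don't retry together.
        return random.uniform(MIN_DELAY, delay)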

Some numbers:

Assuming 90 Mbit/s and 64-byte packets, you can squeeze an absolute 
maximum of 184320 packets a second down that link.
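
The arithmetic, for anyone checking (that figure assumes 'binary' 
megabits, 1 Mbit = 2^20 bits):

    link_bits_per_s = 90 * 2**20           # 90 Mbit/s, binary megabits
    packet_bits = 64 * 8                   # 64-byte packets
    print(link_bits_per_s // packet_bits)  # -> 184320 packets/s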

I can't believe that ALL of the s...@h clients are sending SYN packets 
during the same one second, repeatedly every second.

In any case, with that load the servers would be the bottleneck, rather 
than showing the symptoms of a saturated link as we see now.


> It is an extreme picture, but if we can design for the most extreme 
> situation possible, then everything else is easier, and everything else 
> gets simpler.

Indeed.


[...]
>> ... I don't think that the link is being brought down merely by a 
>> flood of tcp requests for new connections.
> 
> How many simultaneous connections can a single server reasonably handle? 
>  Any idea??

Very many more than the number of uploads/downloads that can be 
accommodated down a 90 Mb/s pipe serving a worldwide community of 
broadband users. *Just 20 'average' users* simultaneously downloading 
can easily saturate the downlink into a congested mess beyond what TCP 
can handle. Similarly for 150 'average' users all trying to upload 
during the same one second.

Those numbers are vastly smaller than 184320, and also very much 
smaller than the connection limits for Apache.


>> So for your analogy, you have the basin with its drain wide open 
>> (100Mb/s) and you are dumping a jugful (a batch of WUs) of water at a 
>> time into the basin in one big splosh (1000Mb/s). You need to ensure a 
>> long enough pause and a small enough jug to avoid the basin 
>> overflowing (and losing water/data onto the floor).
> 
> I don't care about the lost water, I just want dry feet.

Lost water means that you get your feet very wet, as you are sent on a 
fool's errand to fetch exponentially more jugs to replace all the ones 
that didn't make it through. Almost like a scene out of The Sorcerer's 
Apprentice.

http://www.youtube.com/watch?v=P_9PG_oNrWM

> We have 180,000 machines out there.
> 
> Assume that they're all on 1 megabit connections for a moment, and it 
> seems intuitively obvious that the biggest possible "big splosh" is not 
> 1000Mb/s, but 180,000Mb/s.
> 
> I want to pass a law that says no more than 1000 taps can be open at once.

The '1000' is still far, far too great. The 'pinch' point is just an 
observed 90 Mb/s, and I expect the 'average' modern internet user is 
nearer to 10 Mb/s download and 1 Mb/s upload...
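
Putting those assumed per-user averages against the 90 Mb/s pinch point 
shows why 20 and 150 are the right order of magnitude:

    LINK_MBIT = 90

    # Assumed 'average' broadband user, per the estimates above.
    user_down_mbit = 10
    user_up_mbit = 1

    print(20 * user_down_mbit)   # 200 Mb/s demanded by 20 downloaders > 90
    print(150 * user_up_mbit)    # 150 Mb/s offered by 150 uploaders  > 90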

For more on congestion extremes, see:
http://en.wikipedia.org/wiki/Congestive_collapse


(Mb = megabit)


Hence:

>> As an experiment, can s...@h limit the max number of simultaneous
>> upload/download connections to see what happens to the data rates?
>>
>> I suggest a 'first try' *max simultaneous connections of 150* for
>> uploads *and 20 for downloads*. Adjust as necessary to keep the link
>> at an average that is no more than *just 80 Mbit/s*.

It would be interesting to see what those numbers settle at in order to 
hold the link at an unsaturated 80 Mbit/s, and to see by how much that 
improves the sustained transfer rate for the WUs.
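
The gate I have in mind is nothing more exotic than a counting 
semaphore in front of the file handlers. A Python sketch (refuse() and 
serve() are hypothetical stand-ins for the real upload machinery):

    import threading

    # Separate caps for the two directions, per the suggested first try.
    upload_slots = threading.BoundedSemaphore(150)
    download_slots = threading.BoundedSemaphore(20)

    def handle_upload(request):
        # Refuse at once if all slots are busy, rather than queueing,
        # so the client backs off instead of holding the link open.
        if not upload_slots.acquire(blocking=False):
            return refuse(request)   # hypothetical refusal helper
        try:
            return serve(request)    # hypothetical transfer helper
        finally:
            upload_slots.release()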

Regards,
Martin


-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------