On Fri, 2010-11-26 at 15:59 -0700, David Barrett wrote:
> Interesting article:
> 
> http://blog.benstrong.com/2010/11/google-and-microsoft-cheat-on-slow.html
> 
> I know a lot of people on this list are interested in this topic.  But 
> I'm curious: if all sites were to start adopting *ahem* "alternative" 
> congestion strategies like this, what would the real-world 
> ramifications be?  Indeed, it seems reasonable to assume that before 
> long it'll be a standard Apache option to do what Google does.
> 
> Is this the end of the gentleman's internet?  Should ISPs detect and 
> block/throttle this behavior -- essentially punishing (or overriding) 
> this type of behavior to re-establish normalcy?
> 
> -david


FWIW, I'm in favor of anything that reduces the number of roundtrips. 
While I see the rationale behind TCP's slow-start algorithm as applied 
to HTTP traffic, the initial window size (and packet size) currently 
in use is ridiculously small.

Google is right that the initial window (IW) should be increased to 
at least 10 segments; I think I'd have gone for 32, actually. Someone 
should write an RFC to change it.
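
To put numbers on it, here's a quick back-of-the-envelope in Python. 
It's an idealized model of slow start (one window sent per RTT, window 
doubling every round, no losses, no delayed-ACK effects, receiver 
window never the bottleneck), and the payload sizes and 1460-byte MSS 
are assumptions I picked for illustration, not measurements:

    import math

    def rtts_to_deliver(payload_bytes, initial_window, mss=1460):
        """Idealized slow start: send `window` segments per round
        trip, doubling the window each RTT (no loss, no delayed
        ACKs, receiver window never the bottleneck)."""
        segments = math.ceil(payload_bytes / mss)
        window, rtts = initial_window, 0
        while segments > 0:
            segments -= window
            window *= 2
            rtts += 1
        return rtts

    # Hypothetical response sizes: a ~15 KB reply and a ~60 KB page.
    for payload in (15_000, 60_000):
        for iw in (3, 10, 32):
            print(f"{payload:6d} bytes, IW={iw:2d}: "
                  f"{rtts_to_deliver(payload, iw)} round trip(s)")

Under this model a 15 KB response takes 3 round trips at IW=3, 2 at 
IW=10, and 1 at IW=32; a 60 KB page takes 4, 3, and 2 respectively. 
That's the whole argument in one loop.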

This reflects the current reality that network latency has become by 
far the limiting factor on speed: for most transfers it is no longer 
the size of the data being transmitted that limits speed, but the 
number of round-trip delays.
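
A crude first-order model makes the point (the link speed, RTT, and 
page size below are made-up illustrative numbers, not measurements): 
total transfer time is roughly the round-trip delays plus the time to 
push the bytes onto the wire, and on a long path the round trips 
dominate.

    def transfer_time_ms(payload_bytes, rtts, rtt_ms, bandwidth_mbps):
        """Crude model: total time = round-trip delays plus the
        serialization time to push the bytes onto the wire."""
        serialization_ms = payload_bytes * 8 / (bandwidth_mbps * 1000)
        return rtts * rtt_ms + serialization_ms

    # Illustrative: a 60 KB page over an 8 Mbit/s link, 100 ms RTT.
    # Serialization is 60 ms; four slow-start round trips cost 400 ms.
    print(transfer_time_ms(60_000, rtts=4, rtt_ms=100, bandwidth_mbps=8))  # 460.0
    print(transfer_time_ms(60_000, rtts=2, rtt_ms=100, bandwidth_mbps=8))  # 260.0

Cutting two round trips saves 200 ms here; doubling the bandwidth 
would save only 30.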

Skipping the slow-start algorithm altogether, as MS is doing, is 
dubious at best, though.

                                Bear




