Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread Sam Stickland


Kevin Day wrote:
Yeah, I guess the point I was trying to make is that once you throw 
SACK into the equation you lose the assumption that if you drop TCP 
packets, TCP slows down. Before New Reno, fast retransmit, and SACK 
this was true and very easy to model. Now you can drop a considerable 
number of packets and TCP doesn't slow down very much, if at all. If 
you're worried about data that your clients are downloading, you're 
either throwing away data from the server (which has wasted bandwidth 
getting all the way to you) or throwing away your clients' ACKs. Lost 
ACKs do almost nothing to slow down TCP unless you've thrown them 
*all* away.
If this is true, surely it would mean that drop models such as WRED/RED 
are becoming useless?
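
(For reference: RED and WRED don't drop at a fixed rate; the drop 
probability ramps up with the average queue depth. A minimal sketch of 
the classic RED drop curve, with illustrative thresholds:)

  # Classic RED drop curve (sketch): probability ramps linearly from 0 to
  # max_p as the average queue depth grows between min_th and max_th.
  def red_drop_prob(avg_queue, min_th=5, max_th=15, max_p=0.1):
      if avg_queue < min_th:
          return 0.0            # below the low threshold: never drop
      if avg_queue >= max_th:
          return 1.0            # above the high threshold: drop everything
      return max_p * (avg_queue - min_th) / (max_th - min_th)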


Sam


Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread Iljitsch van Beijnum


On 5 apr 2008, at 12:34, Kevin Day wrote:

As long as you didn't drop more packets than SACK could handle  
(generally 2 packets in flight), dropping packets is pretty  
ineffective at causing TCP to slow down.


It shouldn't be. TCP hovers around the maximum bandwidth that a path  
will allow (if the underlying buffers are large enough). It increases  
its congestion window in congestion avoidance until a packet is  
dropped; the congestion window then shrinks, but it also starts  
growing again.


If you read "The macroscopic behavior of the TCP congestion avoidance  
algorithm" by Mathis et al., you'll see that TCP performance conforms to:


bandwidth = (MSS / RTT) * (C / sqrt(p))

Where MSS is the maximum segment size, RTT the round-trip time, C a  
constant close to 1, and p the packet loss probability.
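
A quick back-of-the-envelope check of the formula (a sketch; MSS in 
bytes, RTT in seconds, C taken as 1):

  from math import sqrt

  def mathis_bw_bps(mss_bytes, rtt_s, p, c=1.0):
      # bandwidth = MSS / RTT * C / sqrt(p), converted to bits per second
      return (mss_bytes * 8 / rtt_s) * (c / sqrt(p))

  # e.g. 1460-byte MSS, 2 ms RTT, 0.005% loss: a ceiling of roughly 826 Mbps
  print(mathis_bw_bps(1460, 0.002, 0.00005) / 1e6)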


Since overshooting the congestion window causes congestion (= packet  
loss), you end up at some equilibrium of bandwidth and packet loss;  
or, for a given link, an equilibrium of number of flows, bandwidth,  
and packet loss.


I'm sure this behavior isn't any different in the presence of SACK.

However, the caveat is that the congestion window never shrinks  
below two maximum segment sizes. If packet loss is such that you  
reach that size, then more packet loss will not slow down sessions.  
Note that for short RTTs you can still move a fair amount of data in  
this state, but any lost packet means a retransmission timeout, which  
stalls the session.


You've also got fast retransmit, New Reno, BIC/CUBIC, as well as  
host parameter caching to limit the effect of packet loss on  
recovery time.


The really interesting one is TCP Vegas, which doesn't need packet  
loss to slow down. But Vegas is a bit less aggressive than Reno (which  
is what's widely deployed) or New Reno (which is also deployed, but  
not as widely). This is a disincentive for users to deploy it, but it  
would be good for service providers. An additional benefit is that you  
don't need to keep huge numbers of buffers in your routers and  
switches, because Vegas flows tend not to overshoot the maximum  
available bandwidth of the path.
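
For the curious, a minimal sketch of the Vegas idea (simplified from 
Brakmo and Peterson's paper; cwnd in packets, RTTs in seconds, and the 
alpha/beta thresholds here are illustrative):

  def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
      # Estimate how many packets this flow has queued inside the network:
      # expected rate uses the minimum (uncongested) RTT, actual the measured one.
      expected = cwnd / base_rtt
      actual = cwnd / rtt
      queued = (expected - actual) * base_rtt
      if queued < alpha:
          cwnd += 1    # spare capacity on the path: grow
      elif queued > beta:
          cwnd -= 1    # queue building: back off before any packet is lost
      return cwnd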


Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread Kevin Day



On Apr 7, 2008, at 7:17 AM, Iljitsch van Beijnum wrote:



On 5 apr 2008, at 12:34, Kevin Day wrote:

As long as you didn't drop more packets than SACK could handle  
(generally 2 packets in flight), dropping packets is pretty  
ineffective at causing TCP to slow down.


It shouldn't be. TCP hovers around the maximum bandwidth that a path  
will allow (if the underlying buffers are large enough). It  
increases its congestion window in congestion avoidance until a  
packet is dropped; the congestion window then shrinks, but it also  
starts growing again.


I'm sure this behavior isn't any different in the presence of SACK.



At least in FreeBSD, packet loss handled by SACK recovery changes the  
congestion window behavior. During SACK recovery, the congestion  
window is clamped down to allow no more than 2 additional segments in  
flight, but that only lasts until the recovery is complete, after  
which the window quickly recovers. (That's significantly glossing over  
a lot of details that probably only matter to those who already know  
them - don't shoot me for that not being 100% accurate :) )


I don't believe that Linux or Windows are quite as aggressive with  
SACK recovery, but I'm less familiar with those stacks.


As a quick example: two FreeBSD 7.0 boxes attached directly over  
GigE, with New Reno, fast retransmit/recovery, and 256 KB window  
sizes, and an intermediary router simulating packet loss. A single  
HTTP TCP session going from server to client.
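
(For anyone reproducing something similar: one common way to inject 
random loss on a FreeBSD intermediary is dummynet; a sketch, with the 
interface name and loss ratio illustrative:)

  # sketch: random loss on the forwarding path via ipfw/dummynet
  ipfw add 100 pipe 1 ip from any to any out via em0
  ipfw pipe 1 config plr 0.0001   # plr = packet loss ratio, here 0.01%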


SACK enabled, 0% packet loss: 780Mbps
SACK disabled, 0% packet loss: 780Mbps

SACK enabled, 0.005% packet loss: 734Mbps
SACK disabled, 0.005% packet loss: 144Mbps (19.6% of the speed with  
SACK enabled)


SACK enabled, 0.01% packet loss: 664Mbps
SACK disabled, 0.01% packet loss: 88Mbps (13.3%)

However, this falls apart pretty fast when the packet loss is high  
enough that SACK doesn't spend enough time outside the recovery phase.  
It's still much better than without SACK though:


SACK enabled, 0.1% packet loss: 48Mbps
SACK disabled, 0.1% packet loss: 36Mbps (75%)


However, the caveat is that the congestion window never shrinks  
below two maximum segment sizes. If packet loss is such that you  
reach that size, then more packet loss will not slow down sessions.  
Note that for short RTTs you can still move a fair amount of data in  
this state, but any lost packet means a retransmission timeout,  
which stalls the session.




True, a longer RTT changes this effect. Same test, but instead of back- 
to-back GigE, this is going over a real-world transatlantic link:


SACK enabled, 0% packet loss: 2.22Mbps
SACK disabled, 0% packet loss: 2.23Mbps

SACK enabled, 0.005% packet loss: 2.03Mbps
SACK disabled, 0.005% packet loss: 1.95Mbps (96%)

SACK enabled, 0.01% packet loss: 2.01Mbps
SACK disabled, 0.01% packet loss: 1.94Mbps (96%)

SACK enabled, 0.1% packet loss: 1.93Mbps
SACK disabled, 0.1% packet loss: 0.85Mbps (44%)


(No, this wasn't a scientifically valid test, but it's the best I can  
do for an early Monday morning.)



You've also got fast retransmit, New Reno, BIC/CUBIC, as well as  
host parameter caching to limit the effect of packet loss on  
recovery time.


The really interesting one is TCP Vegas, which doesn't need packet  
loss to slow down. But Vegas is a bit less aggressive than Reno  
(which is what's widely deployed) or New Reno (which is also  
deployed, but not as widely). This is a disincentive for users to  
deploy it, but it would be good for service providers. An additional  
benefit is that you don't need to keep huge numbers of buffers in  
your routers and switches, because Vegas flows tend not to overshoot  
the maximum available bandwidth of the path.


It would be very nice if more network-friendly protocols were in use,  
but with download optimizers for Windows that crank the TCP window  
sizes way up, the general move to solving latency by opening more  
sockets, and P2P doing whatever it can to evade ISP detection - it's  
probably a bit late.


-- Kevin



Superfast internet may replace world wide web

2008-04-07 Thread Glen Kent

says the solemn headline of the Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml

Also related to this one, here:

Web could collapse as video demand soars
http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/07/nweb107.xml

.. and we in Nanog are still discussing IPv6! ;-)


Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread charles

Thanks for sharing your test results Kevin. Most interesting. I am setting up a 
small test lab of a couple of Linux boxes myself to learn more about the various 
traffic shaping and TCP stack options. 

I'll post my results here. I am primarily interested in local wireless network 
performance optimization vs. long-haul connections, at least for now.

It would be interesting to see how the numbers change with a Windows or Linux 
box on one end and BSD on the other. 

Also, how did you simulate packet loss? 

 
Sent via BlackBerry from T-Mobile



Re: Superfast internet may replace world wide web

2008-04-07 Thread Lucy Lynch


On Mon, 7 Apr 2008, Bill Woodcock wrote:



 On Mon, 7 Apr 2008, Glen Kent wrote:
says the solemn headline of Telegraph.
.. and we in Nanog are still discussing IPv6! ;-)

It's because we don't have a hadron demolition derby to power our American
interwebs:

   The power of the grid will be unlocked this summer with the switching
on of the Large Hadron Collider (LHC).



http://xkcd.com/401/


   -Bill



Re: Superfast internet may replace world wide web

2008-04-07 Thread Bill Woodcock

  On Mon, 7 Apr 2008, Glen Kent wrote:
 says the solemn headline of Telegraph.
 .. and we in Nanog are still discussing IPv6! ;-)

It's because we don't have a hadron demolition derby to power our American 
interwebs:

The power of the grid will be unlocked this summer with the switching 
 on of the Large Hadron Collider (LHC).


-Bill



Re: Superfast internet may replace world wide web

2008-04-07 Thread Jeroen Massar

Glen Kent wrote:

says the solemn headline of Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml


It is always good to see that journalists don't know that networks are 
also used for purposes other than their daily dose of nonsense (also 
called the Internet, or the World Wide Web for the web-only portion, etc.)



Also related to this one, here:

Web could collapse as video demand soars
http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/07/nweb107.xml

.. and we in Nanog are still discussing IPv6! ;-)


The CERN LHC (Large Hadron Collider) is a nice toy. They had an Open Day 
(http://lhc2008.web.cern.ch/LHC2008/index-E.html) yesterday, which was 
really impressive. No more Open Days in the tunnel are planned for the 
next couple of years, thus for everybody who missed it:


  http://gallery.unfix.org/2008/2008-04-06-cern-lhc/

Greets,
 Jeroen





Re: Superfast internet may replace world wide web

2008-04-07 Thread Thomas Kernen




Bill Woodcock wrote:

  On Mon, 7 Apr 2008, Glen Kent wrote:
 says the solemn headline of Telegraph.
 .. and we in Nanog are still discussing IPv6! ;-)

It's because we don't have a hadron demolition derby to power our American 
interwebs:


The power of the grid will be unlocked this summer with the switching 
 on of the Large Hadron Collider (LHC).


And those of us that live next to the LHC wonder if we will be sucked 
into a {vortex|wormhole}.


Thomas



Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread Lucy Lynch


On Mon, 7 Apr 2008, [EMAIL PROTECTED] wrote:



Thanks for sharing your test results Kevin. Most interesting. I am 
setting up a small test lab of a couple of Linux boxes myself to learn more 
about the various traffic shaping and TCP stack options.


I'll post my results here. I am primarily interested in local wireless 
network performance optimization vs. long-haul connections, at least for now.


Anyone out there attend this event?

The Future of TCP: Train-wreck or Evolution?
http://yuba.stanford.edu/trainwreck/agenda.html

how did the demos go?

- Lucy


It would be interesting to see how the numbers change with a Windows or 
Linux box on one end and BSD on the other.


Also, how did you simulate packet loss?


Sent via BlackBerry from T-Mobile



Re: Superfast internet may replace world wide web

2008-04-07 Thread Steven M. Bellovin

On Mon, 7 Apr 2008 08:24:54 -0700 (PDT)
Lucy Lynch [EMAIL PROTECTED] wrote:

 
 On Mon, 7 Apr 2008, Bill Woodcock wrote:
 
 
   On Mon, 7 Apr 2008, Glen Kent wrote:
  says the solemn headline of Telegraph.
  .. and we in Nanog are still discussing IPv6! ;-)
 
  It's because we don't have a hadron demolition derby to power our
  American interwebs:
 
 The power of the grid will be unlocked this summer with the
  switching on of the Large Hadron Collider (LHC).
 
 
 http://xkcd.com/401/
 
Also http://ars.userfriendly.org/cartoons/?id=20080330 and
http://ars.userfriendly.org/cartoons/?id=20080406


--Steve Bellovin, http://www.cs.columbia.edu/~smb


Any tool or theoretical method for detecting the number of computers behind a NAT box?

2008-04-07 Thread Joe Shen

hi,

   Sharing internet access bandwidth between multiple
computers is common today. 

   Usually, the bandwidth sharer buys a little router
with a NAT/PAT function. After connecting that box to an
ADSL/LAN access link, multiple computers can share a
single access link.

   I have heard that some companies provide products for
detecting the number of computers behind a NAT/PAT box. 

   Is there any paper or document on how such products
work? Where could I find them?


  Joe




Re: Superfast internet may replace world wide web

2008-04-07 Thread Jon Lewis


On Mon, 7 Apr 2008, Glen Kent wrote:


says the solemn headline of Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml


So, the internet was created in Switzerland at CERN's particle physics 
center?  Can someone look up Al Gore's passport history and tell us when 
he spent time there?


:)

--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_


Re: Superfast internet may replace world wide web

2008-04-07 Thread Marshall Eubanks



On Apr 7, 2008, at 11:36 AM, Thomas Kernen wrote:





Bill Woodcock wrote:

 On Mon, 7 Apr 2008, Glen Kent wrote:
says the solemn headline of Telegraph.
.. and we in Nanog are still discussing IPv6! ;-)
It's because we don't have a hadron demolition derby to power our  
American interwebs:
   The power of the grid will be unlocked this summer with the  
switching  on of the Large Hadron Collider (LHC).


And those of us that live next to the LHC wonder if we will be  
sucked into a {vortex|wormhole}.


If you are, it won't matter if you live near it or not.

Regards
Marshall




Thomas





Re: Superfast internet may replace world wide web

2008-04-07 Thread Valdis . Kletnieks
On Mon, 07 Apr 2008 17:36:09 +0200, Thomas Kernen said:

 And those of us that live next to the LHC wonder if we will be sucked 
 into a {vortex|wormhole}.

You mean like this?

http://ars.userfriendly.org/cartoons/?id=20080406&mode=classic 




RE: Superfast internet may replace world wide web

2008-04-07 Thread michael.dillon

 Subject: Superfast internet may replace world wide web
 says the solemn headline of Telegraph.

Hasn't your mummy told you not to believe everything that
you read in the papers? Especially when it involves technology!

In any case, there is no new Internet here, just an
engineered P2P network (or call it a CDN if you will) that
is intended to distribute 15 million gigs per year of data
to scientists who crunch that data on virtual supercomputer
clusters known as the Grid. They do all of this on the Internet
today, except for big data transfers, for which most countries
have built special academic IP networks.

The Grid is rather like Amazon's EC2 and this CERN project is
rather like Amazon's S3. 

Yes, I agree with the Telegraph that P2P and cloud computing,
Amazon style, are indeed the wave of the future, but they won't
replace the web or the Internet. They are just another theme
being added to the Internet recipe. It's just like Heston
Blumenthal's cuisine (http://en.wikipedia.org/wiki/Heston_Blumenthal);
it's still food, it's still served in restaurants and it still
counts towards his three Michelin stars.

Still, I don't expect bacon and eggs ice cream to come to 
Baskin Robbins anytime soon.

--Michael Dillon


Train wreck (was Does TCP Need an Overhaul?)

2008-04-07 Thread Fred Baker




On Apr 7, 2008, at 8:36 AM, Lucy Lynch wrote:

Anyone out there attend this event?

The Future of TCP: Train-wreck or Evolution?
http://yuba.stanford.edu/trainwreck/agenda.html

how did the demos go?


The researchers demonstrated four things that made sense to me:

(1) TCP is not the right transport for carrying video data if what  
you want is real-time delivery. Carrying stored video (YouTube-style)  
is fine, but if you're trying to watch TV, you really should be using  
some other transport such as RTP or DCCP. Same comment holds for  
sensor traffic, but the astronomers who carry radiotelescope data  
halfway around the world weren't present.


(2) TCP is probably not the right protocol for carrying transaction  
traffic within a data center. One speculates that SCTP (which has a  
concept of a "stream" of TCP-like transactions that can be handled  
out of order and allows for congestion management both within and  
among transactions) might be a better protocol, and in any event,  
when thousands of transactions back up in a gigabit Ethernet chip's  
queue on a host, the host should start noticing that they are  
experiencing congestion.


(3) 802.11 networks experience not only the traditional congestion  
experienced in wired networks, but channel access congestion (true of  
shared media in general) and radio interference. In such networks, it  
may be useful to think about congestion as happening in a region as  
opposed to at a bottleneck.


(4) When it is pointed out that instead of complaining about TCP in  
cases where it is the wrong protocol it may be more useful to use the  
transport designed for the purpose, researchers who presumably are  
expert on matters in the transport layer respond in complete surprise.



Re: Any tool or theoretical method for detecting the number of computers behind a NAT box?

2008-04-07 Thread Steven M. Bellovin

On Mon, 7 Apr 2008 23:51:55 +0800 (CST)
Joe Shen [EMAIL PROTECTED] wrote:

 
 hi,
 
   Sharing internet access bandwidth between multiple
 computers is common today. 
 
   Usually, the bandwidth sharer buys a little router
 with a NAT/PAT function. After connecting that box to an
 ADSL/LAN access link, multiple computers can share a
 single access link.
 
   I have heard that some companies provide products for
 detecting the number of computers behind a NAT/PAT box. 
 
   Is there any paper or document on how such products
 work? Where could I find them?
 
There was a paper of mine from a few years ago:
http://www.cs.columbia.edu/~smb/papers/fnat.pdf
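
(That paper counts hosts behind a NAT by chaining the IPid header 
fields that many stacks increment sequentially. A toy sketch of the 
chaining idea, much simplified from the paper; real traffic needs care 
with wraparound, reordering, and OSes that don't use sequential IPids:)

  def count_hosts(ipids, max_gap=50):
      # Assign each observed IPid to the chain it most plausibly continues;
      # each resulting chain approximates one host behind the NAT box.
      chains = []                      # last IPid seen per suspected host
      for ipid in ipids:
          best = None
          for i, last in enumerate(chains):
              gap = (ipid - last) % 65536
              if 0 < gap <= max_gap and (best is None or gap < best[1]):
                  best = (i, gap)
          if best is None:
              chains.append(ipid)      # no plausible predecessor: a new host
          else:
              chains[best[0]] = ipid
      return len(chains)

  print(count_hosts([1000, 5000, 1001, 5003, 1002, 9000, 5004]))  # -> 3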


--Steve Bellovin, http://www.cs.columbia.edu/~smb


Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-07 Thread Iljitsch van Beijnum


On 7 apr 2008, at 16:20, Kevin Day wrote:

As a quick example: two FreeBSD 7.0 boxes attached directly over  
GigE, with New Reno, fast retransmit/recovery, and 256 KB window  
sizes, and an intermediary router simulating packet loss. A single  
HTTP TCP session going from server to client.


OK, assuming a 1460-byte MSS, that leaves the RTT as the unknown.


SACK enabled, 0% packet loss: 780Mbps
SACK disabled, 0% packet loss: 780Mbps


Is that all? Try with jumbo frames.


SACK enabled, 0.005% packet loss: 734Mbps
SACK disabled, 0.005% packet loss: 144Mbps (19.6% of the speed with  
SACK enabled)


144 Mbps at a 0.005% packet loss probability would imply an RTT of  
roughly 11 ms, so obviously something isn't right with that case.


734 would be an RTT of around 2 ms, which sounds fairly reasonable.
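
(The same formula solved for RTT makes the sanity check explicit; a 
sketch with C taken as 1:)

  from math import sqrt

  def implied_rtt_s(mss_bytes, bw_bps, p, c=1.0):
      # RTT = MSS * C / (bandwidth * sqrt(p))
      return (mss_bytes * 8 * c) / (bw_bps * sqrt(p))

  print(implied_rtt_s(1460, 734e6, 0.00005) * 1e3)  # ~2.3 ms: plausible for direct GigE
  print(implied_rtt_s(1460, 144e6, 0.00005) * 1e3)  # ~11.5 ms: too long for that path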

I'd be interested to see what's really going on here; I suspect that  
the packet loss isn't sufficiently random, so multiple segments are  
lost from a single window. Or maybe disabling SACK also disables fast  
retransmit? I'll be happy to look at a tcpdump for the 144 Mbps case.


It would be very nice if more network-friendly protocols were in  
use, but with download optimizers for Windows that crank the TCP  
window sizes way up, the general move to solving latency by opening  
more sockets, and P2P doing whatever it can to evade ISP detection -  
it's probably a bit late.


Don't forget that the user is only partially in control: the data also  
has to come from somewhere, and service operators have little incentive  
to break the network. And users would probably actually like it if their  
p2p was less aggressive; that way you can keep it running while you do  
other stuff without jumping through traffic-limiting hoops.


Re: Train wreck (was Does TCP Need an Overhaul?)

2008-04-07 Thread Iljitsch van Beijnum


On 7 apr 2008, at 18:18, Fred Baker wrote:

(4) When it is pointed out that instead of complaining about TCP in  
cases where it is the wrong protocol it may be more useful to use  
the transport designed for the purpose, researchers who presumably  
are expert on matters in the transport layer respond in complete  
surprise.


There is of course the issue of migrating from one transport to  
another with NATs and firewalls thrown in for good measure, which is  
worse than migrating to IPv6 in some ways and only significantly  
better in one (no need to upgrade routers).




Re: Superfast internet may replace world wide web

2008-04-07 Thread Hank Nussbacher


On Mon, 7 Apr 2008, Bill Woodcock wrote:



 On Mon, 7 Apr 2008, Glen Kent wrote:
says the solemn headline of Telegraph.
.. and we in Nanog are still discussing IPv6! ;-)

It's because we don't have a hadron demolition derby to power our American
interwebs:

   The power of the grid will be unlocked this summer with the switching
on of the Large Hadron Collider (LHC).


I doubt it:
http://www.geant2.net/server/show/nav.00d00h001003
The structure of the LHC Computing Grid (LCG) is to distribute the data 
first to 12 Tier 1 sites, each connected to Tier 0 (CERN) by a dedicated 
wavelength-switched path of 10 Gbps. These paths are provided by the new 
hybrid (IP routed / wavelength switched) structure of GÉANT2. Corresponding 
dark-fibre lightpaths will be provided by each of the European NRENs 
involved.


So within Europe, much of the LHC data will move via paths that are not 
even part of the Internet/GÉANT2 infrastructure:

http://www.geant2.net/upload/pdf/PUB-07-179_GN2_Topology_Jan_08_final.pdf
Just look for the black links which are all dark fiber out of Switzerland.

Also, the LHC will generate 15 petabytes per annum - not gigabytes or 
terabytes, as some media have stated.


-Hank


Re: Superfast internet may replace world wide web

2008-04-07 Thread Patrick W. Gilmore


On Apr 7, 2008, at 11:39 AM, Steven M. Bellovin wrote:


On Mon, 7 Apr 2008 08:24:54 -0700 (PDT)
Lucy Lynch [EMAIL PROTECTED] wrote:



http://xkcd.com/401/


Also http://ars.userfriendly.org/cartoons/?id=20080330 and
http://ars.userfriendly.org/cartoons/?id=20080406


I love those!

I also love the top story here, especially the last sentence:

  http://bobpark.physics.umd.edu/WN08/wn040408.html

--
TTFN,
patrick



Re: Superfast internet may replace world wide web

2008-04-07 Thread Kevin Oberman
 Date: Mon, 7 Apr 2008 20:21:26 +0530
 From: Glen Kent [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]
 
 
 says the solemn headline of Telegraph.
 
 http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml
 
 Also related to this one, here:
 
 Web could collapse as video demand soars
 http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/07/nweb107.xml
 
 .. and we in Nanog are still discussing IPv6! ;-)
 

Sigh. Never let a clueless writer put up a story as technically complex
as this. He clearly does not know the difference between the web (which
WAS invented at CERN) and the Internet (which was not). His confusion on
this and other details leads to a story which has little or nothing to
say.

1. The grid was NOT invented at CERN, although CERN/LHC people were
involved. 

2. Aside from being a huge physics experiment, it is also a huge
network experiment. We will be carrying many gigabits of data from CERN
to Fermilab and Brookhaven, as well as from those facilities to physics
researchers all over the world. By 2011 we may be seeing close to 100
Gbps 24/7 for months at a time. And that is just data from CERN to the
US. They will be sending data to many other countries. (OK, there are
some short pauses for calibration.)

3. This will all be over the Internet, though much will utilize
dedicated lines purchased/leased just for this. But it's still TCP/IP
and UDP (mostly the former) and mostly using traditional P2P techniques
to get adequate performance over links with RTTs in excess of 200 ms.

It is true that the problems faced by CERN are similar to those faced by
CDNs streaming video, but it is different in that this data is NOT
streamed. You can't take the chance that the packet with the Higgs Boson
waving hello is dropped.

Since almost all of the traffic is passing over dedicated links, congestion
due to aggregation, the big issue with streaming video, is simply not an
issue. We want to move as much data in a single stream as we can
convince TCP to allow.

So while the things learned from the LHC network experiment may well help
improve the Internet and help with things like video distribution, the
grid is NOT going to replace the web, let alone the Internet.
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: [EMAIL PROTECTED]   Phone: +1 510 486-8634
Key fingerprint:059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751




Re: Superfast internet may replace world wide web

2008-04-07 Thread Valdis . Kletnieks
On Mon, 07 Apr 2008 20:21:26 +0530, Glen Kent said:
 
 says the solemn headline of Telegraph.
 
 http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml

So you get higher bandwidth (physical pipe allowing) by downloading from a
grid of systems.

Sounds suspiciously like somebody has re-invented BitTorrent?

(Sorry, am in a cynical mood today.. ;)




Re: Superfast internet may replace world wide web

2008-04-07 Thread Fred Baker


That and someone can't tell the difference between a network and an  
application that runs in a network.


On Apr 7, 2008, at 10:38 AM, [EMAIL PROTECTED] wrote:

On Mon, 07 Apr 2008 20:21:26 +0530, Glen Kent said:


says the solemn headline of Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml


So you get higher bandwidth (physical pipe allowing) by downloading  
from a grid of systems.

Sounds suspiciously like somebody has re-invented BitTorrent?

(Sorry, am in a cynical mood today.. ;)




Re: Superfast internet may replace world wide web

2008-04-07 Thread Marshall Eubanks



On Apr 7, 2008, at 1:00 PM, Kevin Oberman wrote:


Date: Mon, 7 Apr 2008 20:21:26 +0530
From: Glen Kent [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]


says the solemn headline of Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml

Also related to this one, here:

Web could collapse as video demand soars
http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/07/nweb107.xml

.. and we in Nanog are still discussing IPv6! ;-)



Sigh. Never let a clueless writer put up a story as technically  
complex as this. He clearly does not know the difference between the  
web (which WAS invented at CERN) and the Internet (which was not). His  
confusion on this and other details leads to a story which has little  
or nothing to say.

1. The grid was NOT invented at CERN, although CERN/LHC people were
involved.

2. Aside from being a huge physics experiment, it is also a huge  
network experiment. We will be carrying many gigabits of data from  
CERN to Fermilab and Brookhaven, as well as from those facilities to  
physics researchers all over the world. By 2011 we may be seeing close  
to 100 Gbps 24/7 for months at a time. And that is just data from CERN  
to the US. They will be sending data to many other countries. (OK,  
there are some short pauses for calibration.)

3. This will all be over the Internet, though much will utilize  
dedicated lines purchased/leased just for this. But it's still TCP/IP  
and UDP (mostly the former) and mostly using traditional P2P  
techniques to get adequate performance over links with RTTs in excess  
of 200 ms.

It is true that the problems faced by CERN are similar to those faced  
by CDNs streaming video, but it is different in that this data is NOT  
streamed. You can't take the chance that the packet with the Higgs  
Boson waving hello is dropped.


I would actually disagree with that, _IF_ your SNR is limited by your  
bit rate.

In VLBI (where the SNR _IS_ limited by the bit rate) it is more  
efficient to send more (new) data than to repeat old data that gets  
lost.

Having talked to particle physicists here who feel that they are in  
the same regime, I would be curious as to whether or not CERN has done  
the math on this, and with what result.

Regards
Marshall





Since almost all of the traffic is passing over dedicated links,  
congestion due to aggregation, the big issue with streaming video, is  
simply not an issue. We want to move as much data in a single stream  
as we can convince TCP to allow.

So while the things learned from the LHC network experiment may well  
help improve the Internet and help with things like video  
distribution, the grid is NOT going to replace the web, let alone the  
Internet.
--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: [EMAIL PROTECTED]   Phone: +1 510 486-8634
Key fingerprint:059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751




Bandwidth issues in the Sprint network

2008-04-07 Thread Brian Raaen
I am currently having problems getting upload bandwidth on a Sprint circuit. I 
am using a full OC3 circuit.  I am doing fine downloading data, but uploading 
I can only get about 5 Mbps with FTP or a speed test.  I have tested 
against multiple networks and this has stayed the same.  Monitoring Cacti 
graphs and the router, I do see about 30 Mbps total traffic outbound, but 
individual (per-flow/per-IP?) tests always seem limited.  I would like to know if 
anyone else sees anything similar, or where I can get help.  The assistance I 
have gotten from Sprint up to this point is that they find no problems.  Due 
to the consistency of 5 Mbps I suspect rate limiting, but wanted to know 
if I was overlooking something else.

-- 
Brian Raaen
Network Engineer
[EMAIL PROTECTED]




Re: Bandwidth issues in the Sprint network

2008-04-07 Thread Valdis . Kletnieks
On Mon, 07 Apr 2008 15:06:21 EDT, Brian Raaen said:
 have gotten from Sprint up to this point is that they find no problems.  Due
 to the consistency of 5Mbps I am suspecting rate limiting, but wanted to know
 if I was overlooking something else.

TCP window size tuning?  I'd look there first...
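
(A flat per-flow ceiling like that is the classic signature of an 
untuned window: TCP throughput tops out near window/RTT. A sketch of 
the arithmetic, assuming an illustrative default 64 KB window with no 
window scaling:)

  window_bits = 64 * 1024 * 8        # 64 KB receive window, in bits
  for rtt_ms in (20, 50, 100):
      mbps = window_bits / (rtt_ms / 1000) / 1e6
      print(rtt_ms, "ms RTT ->", round(mbps, 1), "Mbps")
  # 100 ms RTT -> ~5.2 Mbps, right around the ceiling described above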





Perspectives: Diseconomies of Scale [was: Re: (latency) cooling door ...

2008-04-07 Thread Frank Coluccio

I offer this as an afterthought, without any expectation of replies, to last
week's latency discussion that focused on centralized vs. distributed, or one vs.
many, data centers. While IMO the authors do not properly account for the
costs associated with network bandwidth and node provisioning, I thought it was
nonetheless interesting due to the other factors they highlighted. There's
nothing particularly conclusive here, merely some additional viewpoints on the
subject that were not covered here earlier. Enjoy!

Diseconomies of Scale - by Ken Church & James Hamilton
April 6, 2008

http://perspectives.mvdirona.com/2008/04/06/DiseconomiesOfScale.aspx

Frank  

---


Re: Bandwidth issues in the Sprint network

2008-04-07 Thread Scott Weeks



--- [EMAIL PROTECTED] wrote:

I am currently having problems getting upload bandwidth on a Sprint circuit. I 
am using a full OC3 circuit.  I am doing fine downloading data, but uploading 
I can only get about 5 Mbps with FTP or a speed test.  I have tested 
against multiple networks and this has stayed the same.  Monitoring Cacti 
graphs and the router, I do see about 30 Mbps total traffic outbound, but 
individual (per-flow/per-IP?) tests always seem limited.  I would like to know if 
anyone else sees anything similar, or where I can get help.  The assistance I 
have gotten from Sprint up to this point is that they find no problems.  Due 
to the consistency of 5 Mbps I suspect rate limiting, but wanted to know 
if I was overlooking something else.



I would not use one FTP session to test bandwidth.  The rate limiting may be in 
the FTP software or another area of the computer.  Likewise, Speedtest servers 
(in my opinion) are more marketing tools than testing tools.  Try several 
similarly configured (but separate) FTP servers simultaneously.  If you 
see it go up by a factor of three, you've found the issue.

I have had to push my four OC-12s to Sprint to the max at times and get full 
BW.  That's in Hawaii, but I imagine it's the same in other areas.

scott


RE: Bandwidth issues in the Sprint network

2008-04-07 Thread Robert D. Scott

See if you can find another connector who can help with testing using iperf. Also,
make sure any systems used for testing have tuned IP stacks. That info is also
linked from the iperf web page. 

http://dast.nlanr.net/Projects/Iperf/

http://www.psc.edu/networking/projects/tcptune/

Robert D. Scott [EMAIL PROTECTED]
Senior Network Engineer 352-273-0113 Phone
CNS - Network Services  352-392-2061 CNS Receptionist
University of Florida   352-392-9440 FAX
Florida Lambda Rail 352-294-3571 FLR NOC
Gainesville, FL  32611

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Scott Weeks
Sent: Monday, April 07, 2008 5:24 PM
To: nanog@merit.edu
Subject: Re: Bandwidth issues in the Sprint network




--- [EMAIL PROTECTED] wrote:

I am currently having problems getting upload bandwidth on a Sprint circuit. I
am using a full OC3 circuit.  I am doing fine downloading data, but
uploading I can only get about 5 Mbps with FTP or a speed test.  I have
tested against multiple networks and this has stayed the same.  Monitoring
Cacti graphs and the router, I do see about 30 Mbps total traffic outbound,
but individual (per-flow/per-IP?) tests always seem limited.  I would like to know
if anyone else sees anything similar, or where I can get help.  The
assistance I have gotten from Sprint up to this point is that they find no
problems.  Due to the consistency of 5 Mbps I suspect rate limiting,
but wanted to know if I was overlooking something else.



I would not use one FTP session to test bandwidth.  The rate limiting may be
in the FTP software or another area of the computer.  Likewise, Speedtest
servers (in my opinion) are more marketing tools than testing tools.  Try
several similarly configured (but separate) FTP servers
simultaneously.  If you see it go up by a factor of three, you've found the
issue.

I have had to push my four OC-12s to Sprint to the max at times and get full
BW.  That's in Hawaii, but I imagine it's the same in other areas.

scott




Re: Bandwidth issues in the Sprint network

2008-04-07 Thread Scott Weeks


--- [EMAIL PROTECTED] wrote: ---
I would like to second the recommendation and go one further.  Internet2 
has released a performance toolkit that is run from CD.  I would like to 
--
Robert D. Scott wrote:
 See if you can find another connector who can help with testing using iperf. Also,
---


The thing to note about most tools like these is that you need a box on both 
sides of the circuit running the same software.  One could be 'out there' on 
the internet, but the further 'out there' your other box is, the less valid 
your test is.

scott



Iperf 2.0.4 Released

2008-04-07 Thread Kevin Oberman
On the subject of iperf, I just received this today:

Iperf 2.0.4 addresses one major and several minor issues with Iperf.

The bugs fixed were:

  * Iperf should no longer consume gratuitous CPU under Linux
  * The help messages have been expanded to include previously undocumented options
  * The stats report was missing a header, which has been replaced

New in Iperf 2.0.4:

 * Under Linux you can select the TCP congestion algorithm by using the -Z 
(--linux-congestion) flag
 * Iperf has a minimal man page!
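
A minimal run exercising the new flag might look like this (host name
illustrative; -Z requires a kernel with the named congestion control
module available):

  iperf -s                           # on the receiving end
  iperf -c testhost -Z cubic -t 30   # on the sender, selecting CUBIC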

Thanks to Stephen Hemminger and Claus Klein for their patches.

This is intended to be the last release in the 2.0 train.  Development efforts
going forward will concentrate on the 2.1 series of releases.  If significant
bugs are found there will be a 2.0.5 release, but hopefully we won't need to
do that.

You can download it at:

https://sourceforge.net/project/showfiles.php?group_id=128336
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: [EMAIL PROTECTED]   Phone: +1 510 486-8634
Key fingerprint:059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751

