Re: trans-Atlantic latency?

2007-06-29 Thread Peter Dambier


Neal R wrote:


  I have a customer with IP transport from Sprint and McLeod and fiber
connectivity to Sprint in the Chicago area. The person making the
decisions is not a routing guy but is very sharp overall. He is
currently examining the latency on trans-Atlantic links and has fixed on
the idea that he needs 40ms or less to London through whatever carrier
he picks. He has spoken to someone at Cogent about a point-to-point link.


What is a reasonable latency to see on a link of that distance? I
get the impression he is shopping for something that involves dilithium
crystal powered negative latency inducers, wormhole technology, or an
ethernet-to-tachyon bridge, but it's been a long time (9/14/2001, to be
exact) since I've had a trans-Atlantic circuit under my care and things
were different back then.


  Anyone care to enlighten me on what these guys can reasonably
expect on such a link? My best guess is he'd like service from Colt
based on the type of customer he is trying to reach, but it's a big
muddle and I don't get to talk to all of the players ...


I remember VoIPing over the pond, from Frankfurt, Germany to New York.

We had to tweak Asterisk to even accept the SIP. Measured time was between
80 and 90 msec; the perceived delay was higher. The "Roger, over and out"
crowd, with their interstellar ham-radio experience, could cope with it,
but to a normal citizen it was unusable.

(dsl 1000 customer, close to Frankfurt)

 1  krzach.peter-dambier.de (192.168.48.2)  2.918 ms   3.599 ms   3.926 ms
 2  * * *
 3  217.0.78.58  85.268 ms   85.301 ms   102.059 ms
 4  f-ea1.F.DE.net.DTAG.DE (62.154.18.22)  102.092 ms   110.057 ms   126.310 ms
 5  p2-0.core01.fra01.atlas.cogentco.com (212.20.159.38)  126.344 ms * *
 6  * * *
 7  p3-0.core01.ams03.atlas.cogentco.com (130.117.0.145)  132.262 ms   139.333 ms   147.174 ms
 8  p12-0.core01.lon01.atlas.cogentco.com (130.117.0.198)  76.436 ms   76.444 ms   84.374 ms
 9  t1-4.mpd02.lon01.atlas.cogentco.com (130.117.1.74)  99.840 ms   99.873 ms   107.508 ms
10  t3-2.mpd01.bos01.atlas.cogentco.com (130.117.0.185)  209.678 ms   217.428 ms   225.601 ms
11  t2-4.mpd01.ord01.atlas.cogentco.com (154.54.6.22)  233.514 ms * *
12  vl3491.mpd01.ord03.atlas.cogentco.com (154.54.6.210)  243.741 ms * *
13  * * *
14  ge-1-3-0x24.aa1.mich.net (198.108.23.241)  165.776 ms   174.752 ms   193.770 ms
15  www.merit.edu (198.108.1.92) (H!)  193.812 ms  (H!)  201.863 ms  (H!)  209.704 ms

(colo in Amsterdam)

 1  205.189.71.253 (205.189.71.253)  0.227 ms  0.257 ms  0.227 ms
 2  ge-5-2-234.ipcolo1.Amsterdam1.Level3.net (212.72.46.165)  0.985 ms  0.811 ms  0.856 ms
 3  ae-32-54.ebr2.Amsterdam1.Level3.net (4.68.120.126)  4.235 ms  6.575 ms  1.360 ms
 4  ae-2.ebr2.London1.Level3.net (4.69.132.133)  19.097 ms  12.816 ms  18.220 ms
 5  ae-4.ebr1.NewYork1.Level3.net (4.69.132.109)  78.197 ms  78.769 ms  87.062 ms
 6  ae-71-71.csw2.NewYork1.Level3.net (4.69.134.70)  78.068 ms  79.058 ms  89.192 ms
 7  ae-22-79.car2.NewYork1.Level3.net (4.68.16.68)  142.665 ms  135.007 ms  214.243 ms
 8  te-7-4-71.nycmny2wch010.wcg.Level3.net (4.68.110.22)  75.824 ms  75.695 ms  76.566 ms
 9  64.200.249.153 (64.200.249.153)  282.356 ms  138.384 ms  243.104 ms
10  * * *
11  * * *
12  * * *
13  * * *
14  www.merit.edu (198.108.1.92)  112.906 ms !C  110.515 ms !C  113.418 ms !C

Try SWITCH (Switzerland); they are testing warp tunnels - but not in
production yet :)


Cheers
Peter and Karin

--
Peter and Karin Dambier
Cesidian Root - Radice Cesidiana
Rimbacher Strasse 16
D-69509 Moerlenbach-Bonsweiher
+49(6209)795-816 (Telekom)
+49(6252)750-308 (VoIP: sipgate.de)
mail: [EMAIL PROTECTED]
mail: [EMAIL PROTECTED]
http://iason.site.voila.fr/
https://sourceforge.net/projects/iason/
http://www.cesidianroot.com/



Re: trans-Atlantic latency?

2007-06-29 Thread Leigh Porter



I used to get about 60ms from router to router in TAT12/13 (I think) 
from London Telehouse to NY Telehouse.





Security Admin (NetSec) wrote:

Sprint has probably the lowest latency in the industry; I use them for a Los
Angeles - London IPSec VPN.  Typical latency is around 140-150 ms RTT (70-75 ms
one-way).

40 ms RTT is not possible in this reality, unless the speed of light is
increased or one transmits through subspace (see Star Trek).
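
A quick back-of-the-envelope check (a sketch, using assumed great-circle
distances; real cable routes are longer and add equipment delay): light in
fiber propagates at roughly 2/3 of c, about 200 km per millisecond.

    # Theoretical propagation floor for trans-Atlantic RTT.
    # Assumptions: ~200 km/ms in fiber (refractive index ~1.5) and
    # great-circle distances; real paths are longer still.
    SPEED_IN_FIBER_KM_PER_MS = 200.0

    def min_rtt_ms(distance_km):
        """Round-trip propagation floor for a fiber path of this length."""
        return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

    for pair, km in [("LA-London", 8750), ("NYC-London", 5570)]:
        print("%-11s >= %.0f ms RTT" % (pair, min_rtt_ms(km)))
    # LA-London   >= 88 ms RTT  (so 140-150 ms measured is plausible)
    # NYC-London  >= 56 ms RTT  (so a 40 ms RTT target is below physics)

Even the shortest great-circle path puts 40 ms RTT out of reach from anywhere
in North America.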


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Neal R
Sent: Thursday, June 28, 2007 4:21 PM
To: nanog@merit.edu
Subject: trans-Atlantic latency?



  I have a customer with IP transport from Sprint and McLeod and fiber
connectivity to Sprint in the Chicago area. The person making the
decisions is not a routing guy but is very sharp overall. He is
currently examining the latency on trans-Atlantic links and has fixed on
the idea that he needs 40ms or less to London through whatever carrier
he picks. He has spoken to someone at Cogent about a point-to-point link.


What is a reasonable latency to see on a link of that distance? I
get the impression he is shopping for something that involves dilithium
crystal powered negative latency inducers, wormhole technology, or an
ethernet-to-tachyon bridge, but it's been a long time (9/14/2001, to be
exact) since I've had a trans-Atlantic circuit under my care and things
were different back then.


  Anyone care to enlighten me on what these guys can reasonably
expect on such a link? My best guess is he'd like service from Colt
based on the type of customer he is trying to reach, but it's a big
muddle and I don't get to talk to all of the players ...

--
This mail was scanned by BitDefender
For more information please visit http://www.bitdefender.com



  


Re: trans-Atlantic latency?

2007-06-29 Thread Andy Ashley




Peter Dambier wrote:


Neal R wrote:


  I have a customer with IP transport from Sprint and McLeod and fiber
connectivity to Sprint in the Chicago area. The person making the
decisions is not a routing guy but is very sharp overall. He is
currently examining the latency on trans-Atlantic links and has fixed on
the idea that he needs 40ms or less to London through whatever carrier
he picks. He has spoken to someone at Cogent about a point-to-point
link.



What is a reasonable latency to see on a link of that distance? I
get the impression he is shopping for something that involves dilithium
crystal powered negative latency inducers, wormhole technology, or an
ethernet-to-tachyon bridge, but it's been a long time (9/14/2001, to be
exact) since I've had a trans-Atlantic circuit under my care and things
were different back then.


  Anyone care to enlighten me on what these guys can reasonably
expect on such a link? My best guess is he'd like service from Colt
based on the type of customer he is trying to reach, but it's a big
muddle and I don't get to talk to all of the players ...


I remember VoIPing over the pond, from Frankfurt, Germany to New York.

We had to tweak Asterisk to even accept the SIP. Measured time was between
80 and 90 msec; the perceived delay was higher. The "Roger, over and out"
crowd, with their interstellar ham-radio experience, could cope with it,
but to a normal citizen it was unusable.

(dsl 1000 customer, close to Frankfurt)

 1  krzach.peter-dambier.de (192.168.48.2)  2.918 ms   3.599 ms   3.926 ms
 2  * * *
 3  217.0.78.58  85.268 ms   85.301 ms   102.059 ms
 4  f-ea1.F.DE.net.DTAG.DE (62.154.18.22)  102.092 ms   110.057 ms   126.310 ms
 5  p2-0.core01.fra01.atlas.cogentco.com (212.20.159.38)  126.344 ms * *
 6  * * *
 7  p3-0.core01.ams03.atlas.cogentco.com (130.117.0.145)  132.262 ms   139.333 ms   147.174 ms
 8  p12-0.core01.lon01.atlas.cogentco.com (130.117.0.198)  76.436 ms   76.444 ms   84.374 ms
 9  t1-4.mpd02.lon01.atlas.cogentco.com (130.117.1.74)  99.840 ms   99.873 ms   107.508 ms
10  t3-2.mpd01.bos01.atlas.cogentco.com (130.117.0.185)  209.678 ms   217.428 ms   225.601 ms
11  t2-4.mpd01.ord01.atlas.cogentco.com (154.54.6.22)  233.514 ms * *
12  vl3491.mpd01.ord03.atlas.cogentco.com (154.54.6.210)  243.741 ms * *
13  * * *
14  ge-1-3-0x24.aa1.mich.net (198.108.23.241)  165.776 ms   174.752 ms   193.770 ms
15  www.merit.edu (198.108.1.92) (H!)  193.812 ms  (H!)  201.863 ms  (H!)  209.704 ms


(colo in Amsterdam)

 1  205.189.71.253 (205.189.71.253)  0.227 ms  0.257 ms  0.227 ms
 2  ge-5-2-234.ipcolo1.Amsterdam1.Level3.net (212.72.46.165)  0.985 ms  0.811 ms  0.856 ms
 3  ae-32-54.ebr2.Amsterdam1.Level3.net (4.68.120.126)  4.235 ms  6.575 ms  1.360 ms
 4  ae-2.ebr2.London1.Level3.net (4.69.132.133)  19.097 ms  12.816 ms  18.220 ms
 5  ae-4.ebr1.NewYork1.Level3.net (4.69.132.109)  78.197 ms  78.769 ms  87.062 ms
 6  ae-71-71.csw2.NewYork1.Level3.net (4.69.134.70)  78.068 ms  79.058 ms  89.192 ms
 7  ae-22-79.car2.NewYork1.Level3.net (4.68.16.68)  142.665 ms  135.007 ms  214.243 ms
 8  te-7-4-71.nycmny2wch010.wcg.Level3.net (4.68.110.22)  75.824 ms  75.695 ms  76.566 ms
 9  64.200.249.153 (64.200.249.153)  282.356 ms  138.384 ms  243.104 ms
10  * * *
11  * * *
12  * * *
13  * * *
14  www.merit.edu (198.108.1.92)  112.906 ms !C  110.515 ms !C  113.418 ms !C


Try SWITCH (Switzerland); they are testing warp tunnels - but not in
production yet :)



Cheers
Peter and Karin


Hi,

Over Level 3 transit from their London 2 gateway to the New York, 111 
8th St. gateway:


(0.0.0.0) (tos=0x0 psize=64 bitpattern=0x00)            Fri Jun 29 10:56:25 2007

Keys:  Help   Display mode   Restart statistics   Order of fields   quit

                                       Packets               Pings
 Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. v5-csw01.ln1.qubenet.net          0.0%   188    1.4   6.2   0.7 197.4  24.7
 2. bdr01.ln1.qubenet.net             0.0%   188    1.3   5.0   0.6 214.4  26.4
 3. ipcolo2.london2.level3.net        0.0%   188    1.3   1.4   0.8   2.4   0.3
 4. ae-0-52.bbr2.London2.Level3.net   0.0%   188    1.4   2.8   1.0  52.2   5.9
 5. ae-0-0.bbr2.NewYork1.Level3.net   0.0%   187   67.4  69.1  66.4 181.3  12.0
    as-0-0.bbr1.NewYork1.Level3.net
 6. ae-31-89.car1.NewYork1.Level3.net 0.0%   187   67.5  69.3  66.7 227.1  13.9

 

Re: trans-Atlantic latency?

2007-06-29 Thread Jim Segrave

On Thu 28 Jun 2007 (18:20 -0500), Neal R wrote:
 
 
   I have a customer with IP transport from Sprint and McLeod and fiber
 connectivity to Sprint in the Chicago area. The person making the
 decisions is not a routing guy but is very sharp overall. He is
 currently examining the latency on trans-Atlantic links and has fixed on
 the idea that he needs 40ms or less to London through whatever carrier
 he picks. He has spoken to someone at Cogent about a point-to-point link.
 
 
 What is a reasonable latency to see on a link of that distance? I
 get the impression he is shopping for something that involves dilithium
 crystal powered negative latency inducers, wormhole technology, or an
 ethernet-to-tachyon bridge, but it's been a long time (9/14/2001, to be
 exact) since I've had a trans-Atlantic circuit under my care and things
 were different back then.
 
 
   Anyone care to enlighten me on what these guys can reasonably
 expect on such a link? My best guess is he'd like service from Colt
 based on the type of customer he is trying to reach, but it's a big
 muddle and I don't get to talk to all of the players ...


He'll need a Black & Decker drill with a hammer attachment and an
absolutely prodigious stone-cutting bit, a convenient wormhole, or a
waiver on the laws of physics.

-- 
Jim Segrave   [EMAIL PROTECTED]


RE: trans-Atlantic latency?

2007-06-29 Thread Brian Knoll (TTNET)

A reasonable latency to expect between Chicago and London would be 92ms
RTT.
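
For reference, with an assumed great-circle distance: Chicago-London is
roughly 6,360 km, and at ~200 km/ms in fiber that gives a floor of about
6,360 x 2 / 200 ~= 64 ms RTT. A real path (typically routed via New York)
will sit above that, so 92 ms is a sensible planning number and the 40 ms
target is unachievable.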

Brian Knoll


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Neal R
Sent: Thursday, June 28, 2007 6:21 PM
To: nanog@merit.edu
Subject: trans-Atlantic latency?



  I have a customer with IP transport from Sprint and McLeod and fiber
connectivity to Sprint in the Chicago area. The person making the
decisions is not a routing guy but is very sharp overall. He is
currently examining the latency on trans-Atlantic links and has fixed on
the idea that he needs 40ms or less to London through whatever carrier
he picks. He has spoken to someone at Cogent about a point-to-point
link.


What is a reasonable latency to see on a link of that distance? I
get the impression he is shopping for something that involves dilithium
crystal powered negative latency inducers, wormhole technology, or an
ethernet-to-tachyon bridge, but it's been a long time (9/14/2001, to be
exact) since I've had a trans-Atlantic circuit under my care and things
were different back then.


  Anyone care to enlighten me on what these guys can reasonably
expect on such a link? My best guess is he'd like service from Colt
based on the type of customer he is trying to reach, but it's a big
muddle and I don't get to talk to all of the players ...


ICANN registrar supporting v6 glue?

2007-06-29 Thread Barrett Lyon


Apparently GoDaddy does not support v6 glue for their customers; who
does?  I don't think requiring dual-stack v6 users to perform v4 queries
to find AAAA records is all that great.
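
If you want to see what a registry actually publishes, one way is to ask a
TLD server directly and look for AAAA records in the additional section.
A minimal sketch, assuming the dnspython library; "example.com" and the
a.gtld-servers.net address are placeholders for your own zone and its
registry server:

    # Sketch: does the .com registry return IPv6 (AAAA) glue for a zone?
    # Assumes dnspython is installed; substitute your own domain.
    import dns.message
    import dns.query
    import dns.rdatatype

    GTLD_SERVER = "192.5.6.30"  # a.gtld-servers.net (queried over IPv4)

    query = dns.message.make_query("example.com.", dns.rdatatype.NS)
    response = dns.query.udp(query, GTLD_SERVER, timeout=5)

    aaaa_glue = [rrset for rrset in response.additional
                 if rrset.rdtype == dns.rdatatype.AAAA]
    for rrset in aaaa_glue:
        print(rrset)
    if not aaaa_glue:
        print("no AAAA glue in the additional section")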


Any input would be helpful,

-Barrett





Re: The Choice: IPv4 Exhaustion or Transition to IPv6

2007-06-29 Thread David Conrad


Christian,

On Jun 29, 2007, at 9:37 AM, Christian Kuhtz wrote:
Until there's a practical solution for multihoming, this whole  
discussion is pretty pointless


The fact that a practical multihoming solution for IPv6 does not  
exist doesn't mean that the IPv4 free pool will not be exhausted.


Rgds,
-drc






Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Barrett Lyon



One note here is that even though you can get glue into com/net/org
using this method, there is no IPv6 glue for the root yet, as such even
if you manage to get the IPv6 glue in, it won't accomplish much (except
sending all IPv6 capable resolvers over IPv6 transport :) as all
resolvers will still require IPv4 to reach the root.


Unless I did this query wrong, you are absolutely right:

;.  IN  NS
A.ROOT-SERVERS.NET. 360 IN  A   198.41.0.4
B.ROOT-SERVERS.NET. 360 IN  A   192.228.79.201
C.ROOT-SERVERS.NET. 360 IN  A   192.33.4.12
D.ROOT-SERVERS.NET. 360 IN  A   128.8.10.90
E.ROOT-SERVERS.NET. 360 IN  A   192.203.230.10
F.ROOT-SERVERS.NET. 360 IN  A   192.5.5.241
G.ROOT-SERVERS.NET. 360 IN  A   192.112.36.4
H.ROOT-SERVERS.NET. 360 IN  A   128.63.2.53
I.ROOT-SERVERS.NET. 360 IN  A   192.36.148.17
J.ROOT-SERVERS.NET. 360 IN  A   192.58.128.30
K.ROOT-SERVERS.NET. 360 IN  A   193.0.14.129
L.ROOT-SERVERS.NET. 360 IN  A   198.32.64.12
M.ROOT-SERVERS.NET. 360 IN  A   202.12.27.33


I don't see any v6 glue there...  Rather than having conversations  
about transition to IPv6, maybe we should be sure it works natively  
first?  It's rather ironic to think that for v6 DNS to work an  
incumbent legacy protocol is still required.  The gTLDs appear to
have somewhat better v6 service than the root:


A.GTLD-SERVERS.NET. 172800  IN  AAAA  2001:503:a83e::2:30
B.GTLD-SERVERS.NET. 172800  IN  AAAA  2001:503:231d::2:30

I'm pretty disappointed now,

-Barrett


Re: Thoughts on best practice for naming router infrastructure in DNS

2007-06-29 Thread Pete Ehlke

On Fri Jun 29, 2007 at 16:35:09 +0100, Neil J. McRae wrote:

I remember in the past an excellent system using Sesame Street characters'
names.

http://www.faqs.org/rfcs/rfc2100.html


Re: v6 multihoming (Re: The Choice: IPv4 Exhaustion or Transition to IPv6)

2007-06-29 Thread Stephen Wilcox

Hi Nicolas,
 you will never make 2 Gbps of traffic go down one STM-4 or even 3x STM-4!

But you are asking me about load balancing amongst 3 upstreams...

Deaggregation of your prefix is an ugly way to do TE. If you buy from carriers 
that support BGP communities there are much nicer ways to manage this. I've 
never deaggregated and I have had and do have individual prefixes that generate 
more traffic than any single GE link.

Steve

On Fri, Jun 29, 2007 at 12:11:58PM -0300, Nicolás Antoniello wrote:
 Hi Stephen,
 
 Suppose you have STM-4 links, ok?
 And you have 2G of traffic from your 10 ADSL customers, ok?
 And those STM-4 go to 3 different carriers in the USA.
 Then, how do you advertise only one IPv6 prefix to all and make the 2G go
 through one STM-4?
 
 
 On Fri, 29 Jun 2007, Stephen Wilcox wrote:
 
 steve. 
 steve. Hi Christian,
 steve.  I am not seeing how v4 exhaustion, transition to v6, multihoming in
 v6 and destruction of the DFZ are correlated.
 steve. 
 steve. If you took everything on v4 today and migrated it to v6 tomorrow the
 routing table would not grow - actually by my calculation it should shrink
 (every ASN would only need one prefix to cover its current and anticipated
 growth). So we'll see 220,000 routes reduce to 25,000.
 steve. 
 steve. The technology we have now is not driving multihoming directly and v4 
 vs v6 is not a factor there.
 steve. 
 steve. So in what way is v6 destroying DFZ?
 steve. 
 steve. Steve
 steve. 
 steve. On Fri, Jun 29, 2007 at 02:13:50PM +, Christian Kuhtz wrote:
 steve.  
 steve.  Amazink!  Some things on NANOG _never_ change.  Trawling for trolls 
 I must be.
 steve.  
 steve.  If you want to emulate IPv4 and destroy the DFZ, yes, this is 
 trivial.  And you should go ahead and plan that migration.
 steve.  
 steve.  As you well know, one of the core assumptions of IPv6 is that the
 DFZ policy stays intact, ostensibly to solve a very specific scaling problem.
 steve.  
 steve.  So, go ahead and continue talking about migration while ignoring 
 the very policies within which that is permitted to take place and don't let 
 me interrupt that ranting.
 steve.  
 steve.  Best Regards,
 steve.  Christian 
 steve.  
 steve.  --
 steve.  Sent from my BlackBerry.  
 steve.  
 steve.  -Original Message-
 steve.  From: Stephen Wilcox [EMAIL PROTECTED]
 steve.  
 steve.  Date: Fri, 29 Jun 2007 14:55:06 
 steve.  To:Christian Kuhtz [EMAIL PROTECTED]
 steve.  Cc:Andy Davidson [EMAIL PROTECTED], [EMAIL PROTECTED],   
 Donald Stahl [EMAIL PROTECTED], [EMAIL PROTECTED]
 steve.  Subject: Re: The Choice: IPv4 Exhaustion or Transition to IPv6
 steve.  
 steve.  
 steve.  multihoming is simple, you get an address block and route it to 
 your upstreams.
 steve.  
 steve.  the policy surrounding that is another debate, possibly for another 
 group
 steve.  
 steve.  this thread is discussing how v4 to v6 migration can operate on a 
 network level
 steve.  
 steve.  Steve
 steve.  
 steve.  On Fri, Jun 29, 2007 at 01:37:23PM +, Christian Kuhtz wrote:
 steve.   Until there's a practical solution for multihoming, this whole 
 discussion is pretty pointless.
 steve.   
 steve.   --
 steve.   Sent from my BlackBerry.  
 steve.   
 steve.   -Original Message-
 steve.   From: Andy Davidson [EMAIL PROTECTED]
 steve.   
 steve.   Date: Fri, 29 Jun 2007 14:27:33 
 steve.   To:Donald Stahl [EMAIL PROTECTED]
 steve.   Cc:[EMAIL PROTECTED]
 steve.   Subject: Re: The Choice: IPv4 Exhaustion or Transition to IPv6
 steve.   
 steve.   
 steve.   
 steve.   
 steve.   On 29 Jun 2007, at 14:24, Donald Stahl wrote:
 steve.   
 steve.That's the thing .. google's crawlers and search app runs at 
 layer  
 steve.7, v6 is an addressing system that runs at layer 3.  If we'd 
 (the  
 steve.community) got everything right with v6, it wouldn't matter to 
  
 steve.Google's applications whether the content came from a site 
 hosted  
 steve.on a v4 address, or a v6 address, or even both.
 steve.If Google does not have v6 connectivity then how are they going 
 to  
 steve.crawl those v6 sites?
 steve.   
 steve.   I think we're debating from very similar positions...
 steve.   
 steve.   v6 isn't the ideal scenario of '96 extra bits for free', because 
 if  
 steve.   life was so simple, we wouldn't need to ask this question.
 steve.   
 steve.   Andy
 steve.   
 steve. 


Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread JAKO Andras

 ;.  IN  NS
 A.ROOT-SERVERS.NET. 360 IN  A   198.41.0.4
 B.ROOT-SERVERS.NET. 360 IN  A   192.228.79.201
 C.ROOT-SERVERS.NET. 360 IN  A   192.33.4.12
 D.ROOT-SERVERS.NET. 360 IN  A   128.8.10.90
 E.ROOT-SERVERS.NET. 360 IN  A   192.203.230.10
 F.ROOT-SERVERS.NET. 360 IN  A   192.5.5.241
 G.ROOT-SERVERS.NET. 360 IN  A   192.112.36.4
 H.ROOT-SERVERS.NET. 360 IN  A   128.63.2.53
 I.ROOT-SERVERS.NET. 360 IN  A   192.36.148.17
 J.ROOT-SERVERS.NET. 360 IN  A   192.58.128.30
 K.ROOT-SERVERS.NET. 360 IN  A   193.0.14.129
 L.ROOT-SERVERS.NET. 360 IN  A   198.32.64.12
 M.ROOT-SERVERS.NET. 360 IN  A   202.12.27.33
 
 
 I don't see any v6 glue there...  Rather than having conversations about
 transition to IPv6, maybe we should be sure it works natively first?  It's
 rather ironic to think that for v6 DNS to work an incumbent legacy protocol is
 still required.

At least something is happening:

http://www.icann.org/committees/security/sac016.htm
http://www.icann.org/committees/security/sac017.htm

Regards,
Andras


Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Edward Lewis




I'm pretty disappointed now,


Searching the ICANN web site I found this:

http://www.icann.org/committees/security/sac018.pdf

Does anyone know what's been happening in the wake of that document?
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis+1-571-434-5468
NeuStar

Think glocally.  Act confused.


IPv6 DNS

2007-06-29 Thread David Barak


--- Barrett Lyon [EMAIL PROTECTED] wrote:

 I don't see any v6 glue there...  Rather than having conversations
 about transition to IPv6, maybe we should be sure it works natively
 first?  It's rather ironic to think that for v6 DNS to work an
 incumbent legacy protocol is still required.

Consider that Windows XP (and Server 2k3) will not,
under any circumstance, send a DNS request over IPv6,
and yet they were widely considered "IPv6 compliant".

Consider also how long it took to get a working way of
telling autoconfigured hosts about which DNS servers
to use (without manually entering 128-bit addresses).

To me, the above show that the bulk of the actual
deployments were in dual-stack or tunnel environments,
and greenfield implementations were few and far
between.  There's a surprising amount of unexplored
"here be dragons" territory in IPv6, given how long
some very smart people have been working on it.

-David Barak

David Barak
Need Geek Rock?  Try The Franchise: 
http://www.listentothefranchise.com


   



Re: v6 multihoming (Re: The Choice: IPv4 Exhaustion or Transition to IPv6)

2007-06-29 Thread Joel Jaeggli

Nicolás Antoniello wrote:
 Hi Steve,
 
 Sure... I never mentioned 3 STM-4s... the example said 3 carriers.

 OK, you may do it with communities, but if you advertise all in just one
 prefix, even with communities, I find it very difficult to control the
 traffic when it passes through 2 or more ASes (it may be quite easy for the
 peer AS, but what about the other ASes)?

AS path prepend?

It's a gross knob. But it's not like there's no precedent for its use.

joelja

 Nicolas.
 
 
 On Fri, 29 Jun 2007, Stephen Wilcox wrote:
 
 steve. Hi Nicolas,
 steve.  you will never make 2 Gbps of traffic go down one STM-4 or even 3x STM-4!
 steve. 
 steve. But you are asking me about load balancing amongst 3 upstreams...
 steve. 
 steve. Deaggregation of your prefix is an ugly way to do TE. If you buy 
 steve. from carriers that support BGP communities there are much nicer 
 steve. ways to manage this. I've never deaggregated and I have had and do 
 steve. have individual prefixes that generate more traffic than any 
 steve. single GE link.
 steve. 
 steve. Steve
 steve. 
 steve. On Fri, Jun 29, 2007 at 12:11:58PM -0300, Nicolás Antoniello wrote:
 steve.  Hi Stephen,
 steve.  
 steve.  Suppose you have STM-4 links, ok?
 steve.  And you have 2G of traffic from your 10 ADSL customers, ok?
 steve.  And those STM-4 go to 3 different carriers in the USA.
 steve.  Then, how do you advertise only one IPv6 prefix to all and make the 2G go
 steve.  through one STM-4?
 steve.  
 steve.  
 steve.  On Fri, 29 Jun 2007, Stephen Wilcox wrote:
 steve.  
 steve.  steve. 
 steve.  steve. Hi Christian,
 steve.  steve.  I am not seeing how v4 exhaustion, transition to v6,
 multihoming in v6 and destruction of the DFZ are correlated.
 steve.  steve. 
 steve.  steve. If you took everything on v4 today and migrated it to v6
 tomorrow the routing table would not grow - actually by my calculation it
 should shrink (every ASN would only need one prefix to cover its current and
 anticipated growth). So we'll see 220,000 routes reduce to 25,000.
 steve.  steve. 
 steve.  steve. The technology we have now is not driving multihoming 
 directly and v4 vs v6 is not a factor there.
 steve.  steve. 
 steve.  steve. So in what way is v6 destroying DFZ?
 steve.  steve. 
 steve.  steve. Steve
 steve.  steve. 
 steve.  steve. On Fri, Jun 29, 2007 at 02:13:50PM +, Christian Kuhtz 
 wrote:
 steve.  steve.  
 steve.  steve.  Amazink!  Some things on NANOG _never_ change.  Trawling 
 for trolls I must be.
 steve.  steve.  
 steve.  steve.  If you want to emulate IPv4 and destroy the DFZ, yes, 
 this is trivial.  And you should go ahead and plan that migration.
 steve.  steve.  
 steve.  steve.  As you well know, one of the core assumptions of IPv6 is
 that the DFZ policy stays intact, ostensibly to solve a very specific scaling
 problem.
 steve.  steve.  
 steve.  steve.  So, go ahead and continue talking about migration while 
 ignoring the very policies within which that is permitted to take place and 
 don't let me interrupt that ranting.
 steve.  steve.  
 steve.  steve.  Best Regards,
 steve.  steve.  Christian 
 steve.  steve.  
 steve.  steve.  --
 steve.  steve.  Sent from my BlackBerry.  
 steve.  steve.  
 steve.  steve.  -Original Message-
 steve.  steve.  From: Stephen Wilcox [EMAIL PROTECTED]
 steve.  steve.  
 steve.  steve.  Date: Fri, 29 Jun 2007 14:55:06 
 steve.  steve.  To:Christian Kuhtz [EMAIL PROTECTED]
 steve.  steve.  Cc:Andy Davidson [EMAIL PROTECTED], [EMAIL PROTECTED],  
  Donald Stahl [EMAIL PROTECTED], [EMAIL PROTECTED]
 steve.  steve.  Subject: Re: The Choice: IPv4 Exhaustion or Transition to 
 IPv6
 steve.  steve.  
 steve.  steve.  
 steve.  steve.  multihoming is simple, you get an address block and route 
 it to your upstreams.
 steve.  steve.  
 steve.  steve.  the policy surrounding that is another debate, possibly 
 for another group
 steve.  steve.  
 steve.  steve.  this thread is discussing how v4 to v6 migration can 
 operate on a network level
 steve.  steve.  
 steve.  steve.  Steve
 steve.  steve.  
 steve.  steve.  On Fri, Jun 29, 2007 at 01:37:23PM +, Christian Kuhtz 
 wrote:
 steve.  steve.   Until there's a practical solution for multihoming, 
 this whole discussion is pretty pointless.
 steve.  steve.   
 steve.  steve.   --
 steve.  steve.   Sent from my BlackBerry.  
 steve.  steve.   
 steve.  steve.   -Original Message-
 steve.  steve.   From: Andy Davidson [EMAIL PROTECTED]
 steve.  steve.   
 steve.  steve.   Date: Fri, 29 Jun 2007 14:27:33 
 steve.  steve.   To:Donald Stahl [EMAIL PROTECTED]
 steve.  steve.   Cc:[EMAIL PROTECTED]
 steve.  steve.   Subject: Re: The Choice: IPv4 Exhaustion or Transition 
 to IPv6
 steve.  steve.   
 steve.  steve.   
 steve.  steve.   
 steve.  steve.   
 steve.  steve.   On 29 Jun 2007, at 14:24, Donald Stahl wrote:
 steve.  steve.   
 steve.  steve.That's the thing .. google's crawlers and search app runs at layer
 steve.  steve.7, v6 is an addressing system that runs at layer 3.

Re: v6 multihoming (Re: The Choice: IPv4 Exhaustion or Transition to IPv6)

2007-06-29 Thread Nicolás Antoniello
Hi Joel,

Using AS path prepend when you advertise just one prefix does not solve
the problem... in this case it actually makes it worse, because you may find
all your traffic coming from only one of your uplinks.

Nicolas.


On Fri, 29 Jun 2007, Joel Jaeggli wrote:

joelja Nicolás Antoniello wrote:
joelja  Hi Steve,
joelja  
joelja  Sure... I never mentioned 3 STM-4s... the example said 3 carriers.
joelja  
joelja  OK, you may do it with communities, but if you advertise all in just one
joelja  prefix, even with communities, I find it very difficult to control the
joelja  traffic when it passes through 2 or more ASes (it may be quite easy for the
joelja  peer AS, but what about the other ASes)?
joelja 
joelja AS path prepend?
joelja 
joelja It's a gross knob. But it's not like there's no precedent for its use.
joelja 
joelja joelja
joelja 
joelja  Nicolas.
joelja  
joelja  
joelja  On Fri, 29 Jun 2007, Stephen Wilcox wrote:
joelja  
joelja  steve. Hi Nicolas,
joelja  steve.  you will never make 2GB of traffic go down one STM4 or even 
3x STM4! 
joelja  steve. 
joelja  steve. But you are asking me about load balancing amongst 3 
upstreams...
joelja  steve. 
joelja  steve. Deaggregation of your prefix is an ugly way to do TE. If you 
buy 
joelja  steve. from carriers that support BGP communities there are much 
nicer 
joelja  steve. ways to manage this. I've never deaggregated and I have had 
and do 
joelja  steve. have individual prefixes that generate more traffic than any 
joelja  steve. single GE link.
joelja  steve. 
joelja  steve. Steve
joelja  steve. 
joelja  steve. On Fri, Jun 29, 2007 at 12:11:58PM -0300, Nicolás Antoniello 
wrote:
joelja  steve.  Hi Stephen,
joelja  steve.  
joelja  steve.  Suppose you have STM-4 links, ok?
joelja  steve.  And you have 2G of traffic from your 10 ADSL customers, ok?
joelja  steve.  And those STM-4 go to 3 different carriers in the USA.
joelja  steve.  Then, how do you advertise only one IPv6 prefix to all and
make the 2G go
joelja  steve.  through one STM-4?
joelja  steve.  
joelja  steve.  
joelja  steve.  On Fri, 29 Jun 2007, Stephen Wilcox wrote:
joelja  steve.  
joelja  steve.  steve. 
joelja  steve.  steve. Hi Christian,
joelja  steve.  steve.  I am not seeing how v4 exhaustion, transition to
v6, multihoming in v6 and destruction of the DFZ are correlated.
joelja  steve.  steve. 
joelja  steve.  steve. If you took everything on v4 today and migrated it
to v6 tomorrow the routing table would not grow - actually by my calculation it
should shrink (every ASN would only need one prefix to cover its current and
anticipated growth). So we'll see 220,000 routes reduce to 25,000.
joelja  steve.  steve. 
joelja  steve.  steve. The technology we have now is not driving 
multihoming directly and v4 vs v6 is not a factor there.
joelja  steve.  steve. 
joelja  steve.  steve. So in what way is v6 destroying DFZ?
joelja  steve.  steve. 
joelja  steve.  steve. Steve
joelja  steve.  steve. 
joelja  steve.  steve. On Fri, Jun 29, 2007 at 02:13:50PM +, Christian 
Kuhtz wrote:
joelja  steve.  steve.  
joelja  steve.  steve.  Amazink!  Some things on NANOG _never_ change.  
Trawling for trolls I must be.
joelja  steve.  steve.  
joelja  steve.  steve.  If you want to emulate IPv4 and destroy the DFZ, 
yes, this is trivial.  And you should go ahead and plan that migration.
joelja  steve.  steve.  
joelja  steve.  steve.  As you well know, one of the core assumptions of
IPv6 is that the DFZ policy stays intact, ostensibly to solve a very specific
scaling problem.
joelja  steve.  steve.  
joelja  steve.  steve.  So, go ahead and continue talking about migration 
while ignoring the very policies within which that is permitted to take place 
and don't let me interrupt that ranting.
joelja  steve.  steve.  
joelja  steve.  steve.  Best Regards,
joelja  steve.  steve.  Christian 
joelja  steve.  steve.  
joelja  steve.  steve.  --
joelja  steve.  steve.  Sent from my BlackBerry.  
joelja  steve.  steve.  
joelja  steve.  steve.  -Original Message-
joelja  steve.  steve.  From: Stephen Wilcox [EMAIL PROTECTED]
joelja  steve.  steve.  
joelja  steve.  steve.  Date: Fri, 29 Jun 2007 14:55:06 
joelja  steve.  steve.  To:Christian Kuhtz [EMAIL PROTECTED]
joelja  steve.  steve.  Cc:Andy Davidson [EMAIL PROTECTED], [EMAIL 
PROTECTED],   Donald Stahl [EMAIL PROTECTED], [EMAIL PROTECTED]
joelja  steve.  steve.  Subject: Re: The Choice: IPv4 Exhaustion or 
Transition to IPv6
joelja  steve.  steve.  
joelja  steve.  steve.  
joelja  steve.  steve.  multihoming is simple, you get an address block 
and route it to your upstreams.
joelja  steve.  steve.  
joelja  steve.  steve.  the policy surrounding that is another debate, 
possibly for another group
joelja  steve.  steve.  
joelja  steve.  steve.  this thread is discussing how v4 to v6 migration 
can operate on a network level
joelja  steve.  steve.  
joelja  steve.  steve.  Steve
joelja  steve.  steve.  
joelja  steve.  steve.  On Fri, Jun 29, 

Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Chris L. Morrow



On Fri, 29 Jun 2007, Barrett Lyon wrote:

  If you deploy dual-stack, it is much easier to keep doing the DNS queries
  using IPv4 transport, and there is not any practical advantage in doing so
  with IPv6 transport.

 Thanks Jordi, not to sound too brash, but I'm already doing so.  I am
 trying not to deploy a hacked v6 service which requires an incumbent
 legacy protocol to work.

  Of course, it is nice to have IPv6 support in as many DNS infrastructure
  pieces as possible, and a good signal to the market. Many TLDs already do,
  and the root servers are moving also in that direction. Hopefully then the
  rest of the folks involved in DNS move on.

 I would like to support v6 so a native v6-only user can still
 communicate with my network, DNS and all; apparently in practice that
 is not easy to do, which is somewhat ironic given all of the v6 push
 lately.  It also seems like the roots are not even fully supporting
 this properly?

there are providers that have (in the US even if that matters) ipv6
connected auth servers, that could even help. I can't seem to make one of
them want to be a registrar too :( but... maybe Ultra/Neustar could do
that for you?


Re: Thoughts on best practice for naming router infrastructure in DNS

2007-06-29 Thread Cat Okita


On Fri, 29 Jun 2007, Chris L. Morrow wrote:

perhaps a decent other question is: Do I want to let the whole world know
that router X with interfaces of type Y/Z/Q is located in 1-wilshire.

I suppose on the one hand it's helpful to know that Network-A has a device
with the right sorts of interfaces/capacity in a location I care about,
but it's also nice to know that for other reasons :( so naming things
about mythic beasts or cheese or movies is not so bad, provided your
NOC/OPS folks have a key that shows: optimus-prime.my.network.net ==
1-wilshire, m160, t1-customers-only.


At 3am, I'd rather have the NOC/OPS folk be able to figure things out
from the name directly, than have to have access to the magic key that
lets them know what the weird name translates into.

Ditto if I'm nowhere near my translation key, having a life, for example.

At any rate - it's possible to have informative names without going into
too much detail - knowing where the device is, what it does (border, core,
switch), and what it's connecting (customerA) is pretty darn'd useful
all around, and avoids getting into the device type and interface specifics.

cheers!
==
A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet.  This is the defining metaphor of my life right now.


Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Edward Lewis


At 9:23 -0700 6/29/07, Barrett Lyon wrote:


I would like to support v6 so a native v6-only user can still communicate
with my network, DNS and all; apparently in practice that is not easy to
do, which is somewhat ironic given all of the v6 push lately.  It also
seems like the roots are not even fully supporting this properly?


Given that the ARIN BoT has published a call to move to IPv6:
 http://www.arin.net/media/releases/070521-v6-resolution.pdf
and that LACNIC and .MX have made these statements:
 http://lacnic.net/en/anuncios/2007_agotamiento_ipv4.html
 http://www.nic.mx/es/Noticias_2?NEWS=220
and ICANN has been studying the issue:
 http://www.icann.org/committees/security/sac018.pdf

What possibly can be done to get the root zone really available on
IPv6? http://www.root-servers.org/ lists a few root servers as having
IPv6 addresses, so "really" means having


for i in a b c d e f g h i j k l m; do dig $i.root-servers.net AAAA
+norec; done


return at least one AAAA in the answer section.

What's the hold up?  What's getting worked on?  Is there a 
dns-root-on-ipv6-deployment task force anywhere?  Is there someone 
that can give an authoritative update on where we are on the road to 
being able to accomplish what is requested above?  Part of my 
reaction is to the quip "given all of the v6 push lately" juxtaposed
with NANOG 40 that barely mentioned IPv6 in the agenda.


If we can't get one application (DNS) to do IPv6, how can we expect
the ISPs to just up and deploy it?  I would suspect that getting the 
roots - or just some of them - to legitimize their IPv6 presence 
would be easier than getting ISPs rolling.

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis+1-571-434-5468
NeuStar

Think glocally.  Act confused.


Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Iljitsch van Beijnum


On 29-jun-2007, at 19:06, Edward Lewis wrote:


I'm pretty disappointed now,



Searching the ICANN web site I found this:



http://www.icann.org/committees/security/sac018.pdf



Does anyone know what's been happening in the wake of that document?


Well:

Additional study and testing is encouraged to continue to assess the
impact of including AAAA records in the DNS priming response.


Apparently, this can't be studied enough. This is what I wrote in my  
book two years ago:


Since mid-2004, TLD registries may have IPv6 addresses included in
the root zone as glue records, and some TLDs allow end users to
register IPv6 nameserver addresses for their domains. Many of the
root name servers are already reachable over IPv6 (see
http://www.root-servers.org/). ICANN and the root server operators
are proceeding very cautiously, but addition of IPv6 glue records
to the root zone is expected in the not too distant future.

At this rate, we'll be fresh out of IPv4 space before anything
happens. More study is a waste of time; we all know that all
implementations from this century can handle it, but a small
percentage of all sites is going to have trouble anyway because they
have protocol-breaking equipment installed. ICANN should bite the
bullet and announce a date for this so we can start beating the
firewall admins into submission.
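
One way to watch for that change (a sketch, assuming the dnspython library;
198.41.0.4 is a.root-servers.net) is to send a priming query and see whether
any AAAA records come back in the additional section:

    # Sketch: does a root server include AAAA glue in its priming response?
    # Assumes dnspython is installed.
    import dns.message
    import dns.query
    import dns.rdatatype

    priming = dns.message.make_query(".", dns.rdatatype.NS)
    response = dns.query.udp(priming, "198.41.0.4", timeout=5)

    has_aaaa = any(rrset.rdtype == dns.rdatatype.AAAA
                   for rrset in response.additional)
    print("AAAA glue in priming response:", has_aaaa)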


Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Barrett Lyon



there are providers that have (in the US even if that matters) ipv6
connected auth servers, that could even help. I can't seem to make one of
them want to be a registrar too :( but... maybe Ultra/Neustar could do
that for you?


Neustar/Ultra's .org gTLD registration services apparently do not
support v6; however, .net and .com do.  Yet .org does provide a
v6-reachable nameserver:


b0.org.afilias-nst.org. 86400   IN  AAAA  2001:500:c::1

-B




Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread JORDI PALET MARTINEZ

My view is that deploying only IPv6 in the LANs is the wrong approach in the
short term, unless you're sure that all your applications are ready, or you
have translation tools (that often are ugly), and you're disconnected from
the rest of the IPv4 Internet.

I'm deploying large (5000 sites) IPv6 networks for customers, and we also
decided that at a given point, if your traffic is IPv6 dominant, it may be
sensible to consider deploying IPv6-only in the access and core network. I
just explained it yesterday in another mailing list:

The trick is to keep dual stack in the LANs (even if the LANs use net10 and
NAT), so the old applications that are still only available with IPv4
keep running. In order to do that, you need an automatic tunneling protocol.
For example, softwires, and in fact this is the reason we needed it.
Softwires is basically L2TP, so you can guess we are talking simply about
VPNs on demand.

In order to keep most of the traffic as IPv6 within the network, the access
to the rest of the Internet, for example for http, is proxied by boxes (that
also do caching functions, as in many networks is done to proxy
IPv4-to-IPv4), but in our case IPv6-to-IPv4.

What I will never do at this stage and probably for many years, is to drop
IPv4 from the LANs, unless I have a closed network and don't want to talk
with other parties across Internet, and I'm sure all my applications already
support IPv6.

This has been presented several times in different fora, such as RIR meetings.
And yes ... I'm already working on an ID to explain all the details a bit
more.

Regards,
Jordi




 De: Barrett Lyon [EMAIL PROTECTED]
 Responder a: [EMAIL PROTECTED]
 Fecha: Fri, 29 Jun 2007 09:23:59 -0700
 Para: [EMAIL PROTECTED]
 CC: nanog@merit.edu
 Asunto: Re: ICANN registrar supporting v6 glue?
 
  If you deploy dual-stack, it is much easier to keep doing the DNS queries
  using IPv4 transport, and there is not any practical advantage in doing so
  with IPv6 transport.
 
 Thanks Jordi, not to sound too brash, but I'm already doing so.  I am
 trying not to deploy a hacked v6 service which requires an incumbent
 legacy protocol to work.
 
  Of course, it is nice to have IPv6 support in as many DNS infrastructure
  pieces as possible, and a good signal to the market. Many TLDs already do,
  and the root servers are moving also in that direction. Hopefully then the
  rest of the folks involved in DNS move on.
 
  I would like to support v6 so a native v6-only user can still
  communicate with my network, DNS and all; apparently in practice that
  is not easy to do, which is somewhat ironic given all of the v6 push
  lately.  It also seems like the roots are not even fully supporting
  this properly?
 
 
 -Barrett
 
 




**
The IPv6 Portal: http://www.ipv6tf.org

Bye 6Bone. Hi, IPv6 !
http://www.ipv6day.org






Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Perry Lorier




One note here is that even though you can get glue into com/net/org
using this method, there is no IPv6 glue for the root yet, as such even
if you manage to get the IPv6 glue in, it won't accomplish much (except
sending all IPv6 capable resolvers over IPv6 transport :) as all
resolvers will still require IPv4 to reach the root. One can of course
create their own root hint zone and force bind, or other dns server, to
not fetch the hints from the real root, but that doesn't help for the
rest of the planet. (Root alternatives like ORSN could fix that up, but
apparently their main German box that was doing IPv6 is out of the air.)
  


Having AAAA glue in gTLD/ccTLDs will help resolvers that first query
for AAAA glue before A glue for nameservers.  If you don't have AAAA
glue then it's going to be an extra RTT to look up the A record for your
nameservers, which makes your webpages slower to load.  And everyone
wants their webpages to load faster.


The fact that the root name servers don't supply AAAA glue for
gTLDs/ccTLDs is a minor annoyance; people should in general only go to
the root name servers once a day per gTLD/ccTLD.  There are 267 TLDs
and you're unlikely to talk to them all in a given day, but almost every
request your name server makes is going to start with a query to a gTLD
or ccTLD server.




Re: ICANN registrar supporting v6 glue?

2007-06-29 Thread Chris L. Morrow



On Fri, 29 Jun 2007, Barrett Lyon wrote:

  there are providers that have (in the US even if that matters) ipv6
  connected auth servers, that could even help. I can't seem to make
  one of
  them want to be a registrar too :( but... maybe Ultra/Neustar could do
  that for you?

 Neustar/Ultra's .org gTLD registration services apparently do not
 support v6; however, .net and .com do.  Yet .org does provide a
 v6-reachable nameserver:

 b0.org.afilias-nst.org. 86400   IN  AAAA  2001:500:c::1

bummer :(