Are there any common locations in Alaska where multiple local ISPs
exchange traffic, either transit or peering? Or is Seattle the closest
exchange point for Alaska ISPs?
On Wed, 3 Mar 2010, Sean Donelan wrote:
Are there any common locations in Alaska where multiple local ISPs exchange
traffic, either transit or peering? Or is Seattle the closest exchange point
for Alaska ISPs?
peeringdb.com lists only the SIX (Seattle Internet Exchange) and PAIX Seattle.
Antonio Querubin
On Mar 3, 2010, at 3:13 PM, Sean Donelan wrote:
Are there any common locations in Alaska where multiple local ISPs exchange
traffic, either transit or peering? Or is Seattle the closest exchange point
for Alaska ISPs?
PCH doesn't know of any. If any exist, we'd very much like to hear
Hello All ,
On Wed, 3 Mar 2010, Bill Woodcock wrote:
On Mar 3, 2010, at 3:13 PM, Sean Donelan wrote:
Are there any common locations in Alaska where multiple local ISPs exchange
traffic, either transit or peering? Or is Seattle the closest exchange point
for Alaska ISPs?
PCH
The Euro-IX ASN database now has more than 5,100 entries, of which
almost 3,000 are unique ASNs. In an effort to make it a little easier for
those peering or looking to peer at European IXPs to keep up with the latest
IXP participant additions, we have created a page that lists the latest entries
What's the BCP for BGP timers at exchange points?
I imagine if everyone did something low like 5-15 rather than the default
60-180, the CPU usage increase could be significant given a high number of peers.
Keeping in mind that bgp fast-external-failover is of no use at an
exchange since the fabric is
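The per-neighbor timer tuning being discussed might look like this (a sketch in Cisco IOS syntax; the ASNs, address, and 15/45 values are illustrative placeholders, not a recommendation, and per the thread should be coordinated with the peer first):

```text
! Hedged sketch: lower keepalive/holdtime for one peer only.
router bgp 64500
 neighbor 192.0.2.1 remote-as 64501
 neighbor 192.0.2.1 timers 15 45   ! keepalive 15s, holdtime 45s (default 60/180)
```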
Hi Chris,
.-- My secret spy satellite informs me that at Mon, 25 May 2009, Chris Caputo
wrote:
Would going below 60-180 without first discussing it with your peers tend
to piss them off?
60-180 is fairly conservative. 60-180 is the Cisco default I believe, however
Juniper's defaults are
Subject: Re: IXP BGP timers (was: Multi-homed clients and BGP timers)
Hi Chris,
.-- My secret spy satellite informs me that at Mon, 25 May 2009, Chris Caputo
wrote:
Would going below 60-180 without first discussing it with your peers tend
to piss them off?
60-180 is fairly conservative
, not just stephen, say virtual wire was how
they'd do an IXP today if they had to start from scratch. i know that
for many here, starting from scratch isn't a reachable worldview, and so
i've tagged most of the defenses of shared subnets with that caveat. the
question i was answering was from
Leo Bicknell wrote:
In a message written on Fri, Apr 24, 2009 at 01:48:28AM +, Paul Vixie wrote:
i think i saw several folks, not just stephen, say virtual wire was how
they'd do an IXP today if they had to start from scratch. i know that
for many here, starting from scratch isn't
It's the technological equivalent of bringing everyone into a
conference room and then having them use their cell phones to call
each other and talk across the table. Why are you all in the same
room if you don't want a shared medium?
Probably the wrong people to ask (cf. IRC @ NANOG
But routers don't have bo.:)
--- original message ---
From: Brandon Butterworth bran...@rd.bbc.co.uk
Subject: Re: IXP
Date: 24th April 2009
Time: 8:16:00 am
It's the technological equivalent of bringing everyone into a
conference room and then having them use their cell phones to call
each
effectively in a variety of ways - and knowing which features to
avoid is just as important as knowing which features to expose. Not
every knob that can be turned, should be turned.
The challenge to a developer of the software infrastructure of a
modern IXP is to take what we learned about the ease of use
In a message written on Fri, Apr 24, 2009 at 05:06:15PM +, Stephen Stuart
wrote:
Your argument, and Leo's, is fundamentally the complacency argument
that I pointed out earlier. You're content with how things are,
despite the failure modes, and despite inefficiencies that the IXP
operator
On 24/04/2009 18:46, Leo Bicknell wrote:
I have looked at the failure modes and the cost of fixing them and
decided that it is cheaper and easier to deal with the failure modes
than it is to deal with the fix.
Leo, your position is: worse is better. I happen to agree with this
sentiment for
On Fri, Apr 24, 2009 at 12:46 PM, Leo Bicknell bickn...@ufp.org wrote:
Quite frankly, I think the failure modes have been grossly overblown.
The number of incidents of shared network badness that have caused
problems are actually few and far between. I can't attribute any
down-time to
In a message written on Fri, Apr 24, 2009 at 04:22:49PM -0500, Paul Wall wrote:
On the twelfth day of Christmas, NYIIX gave to me,
Twelve peers in half-duplex,
Eleven OSPF hellos,
Ten proxy ARPs,
Nine CDP neighbors,
Eight defaulting peers,
was how
they'd do an IXP today if they had to start from scratch. i know that
for many here, starting from scratch isn't a reachable worldview, and so
i've tagged most of the defenses of shared subnets with that caveat. the
question i was answering was from someone starting from scratch, and when
In a message written on Fri, Apr 24, 2009 at 01:48:28AM +, Paul Vixie wrote:
i think i saw several folks, not just stephen, say virtual wire was how
they'd do an IXP today if they had to start from scratch. i know that
for many here, starting from scratch isn't a reachable worldview
On Thu, Apr 23, 2009, Leo Bicknell wrote:
It's the technological equivalent of bringing everyone into a
conference room and then having them use their cell phones to call
each other and talk across the table. Why are you all in the same
room if you don't want a shared medium?
Because you
turned the single stream concept of multicast on its head,
creating essentially a unicast stream for each multicast PVC client.
-Original Message-
From: Lamar Owen [mailto:lo...@pari.edu]
Sent: Tuesday, April 21, 2009 1:21 PM
To: nanog@nanog.org
Subject: Re: IXP
On Monday 20 April 2009
On Wed, Apr 22, 2009, Holmes,David A wrote:
But I recollect that FORE ATM equipment using LAN Emulation (LANE) used
a broadcast and unknown server (BUS) to establish a point-to-point ATM
PVC for each broadcast and multicast receiver on a LAN segment. As well
as being inherently unscalable (I
On Monday 20 April 2009 18:57:01 Niels Bakker wrote:
Ethernet has no administrative boundaries that can be delineated.
Spanning one broadcast domain across multiple operators is therefore
a recipe for disaster.
Isn't this the problem that NBMA networks like ATM were built for?
Cheap,
A solution I put in place at UUnet circa 1997 was to take a set of /32
routes representing major destinations, e.g. ISP web sites, content
sites, universities, about 20 of them, and temporarily place a /32
static route to each participant at the public exchange and traceroute
to the
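The measurement trick described above might be sketched like this today (Linux iproute2 commands; the addresses are documentation-range placeholders, and the original was done on 1997-era routers, not Linux):

```text
# Point a host route for a well-known destination at one IXP participant,
# traceroute through it to see the path that participant offers, then
# remove the route and repeat for the next participant.
ip route add 203.0.113.5/32 via 192.0.2.10   # next-hop = one participant
traceroute -n 203.0.113.5
ip route del 203.0.113.5/32 via 192.0.2.10
```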
So here is an idea that I hope someone shoots down.
We've been talking about pseudo-wires, and the high level of expertise a
shared-fabric IXP needs to diagnose weird switch oddities, etc.
As far as I can tell, the principal reason to use a shared fabric is to allow
multiple connections
Hello Deepak:
-Original Message-
So here is an idea that I hope someone shoots down.
We've been talking about pseudo-wires, and the high level of expertise a
shared-fabric IXP needs
to diagnose weird switch oddities, etc.
As far as I can tell, the principal reason to use a shared
* dee...@ai.net (Deepak Jain) [Mon 20 Apr 2009, 23:25 CEST]:
So here is an idea that I hope someone shoots down.
We've been talking about pseudo-wires, and the high level of expertise a
shared-fabric IXP needs to diagnose weird switch oddities, etc.
[..]
What if everyone who participated
for any IXP.
Well, as long as it simply drops packets and doesn't shut the port or
some other fascist enforcement. We've had AMSIX complain that our Cisco
12k with E5 linecard was spitting out a few tens of packets per day during
two months with random source mac addresses. Started suddenly
Subject: Re: IXP
Best solution I ever saw to an 'unintended' third-party
peering was devised by a pretty brilliant guy (who can
pipe up if he's listening). When he discovered traffic
loads coming from non-peers he'd drop in an ACL that
blocked everything except ICMP - then tell the NOC
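The ACL trick described above might look like this (a sketch in Cisco IOS syntax; the prefix stands in for the non-peer's address space — leaving ICMP through means their pings still succeed while the leaked traffic dies, which is what sends them to the NOC):

```text
! Hypothetical sketch of the ICMP-only block for an unintended peer.
ip access-list extended THIRD-PARTY-BLOCK
 permit icmp 198.51.100.0 0.0.0.255 any
 deny   ip   198.51.100.0 0.0.0.255 any
```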
important stability / security enforcement mechanism for any IXP.
Well, as long as it simply drops packets and doesn't shut the port or
some other fascist enforcement. We've had AMSIX complain that our
Cisco 12k with E5 linecard was spitting out a few tens of packets per
day during two
On 19/04/2009 08:31, Mikael Abrahamsson wrote:
Well, as long as it simply drops packets and doesn't shut the port or
some other fascist enforcement. We've had AMSIX complain that our
Cisco 12k with E5 linecard was spitting out a few tens of packets per
day during two months with random source
, ever. This is probably the single most
important stability / security enforcement mechanism for any IXP.
Well, as long as it simply drops packets and doesn't shut the port or
some other fascist enforcement. We've had AMSIX complain that our
Cisco 12k with E5 linecard was spitting out a few
On 19.04.2009 01:38 Randy Bush wrote
just curious. has anyone tried arista for smallish exchanges, before
jumping off the cliff into debugging extreme, foundry, ...
last time I looked at them, their products lacked port security or
anything similar.
whoops!
IIRC it's on the roadmap for their next generation of switches.
bummer, as performance and per-port cost are certainly tasty.
AFAIK the low latency is due to the fact that Arista boxes do
cut-through switching.
no shock there
Pricewise they are very attractive. And Arista EOS actually is more or
...
jy
On Apr 18, 2009, at 11:35 AM, Nick Hilliard wrote:
On 18/04/2009 01:08, Paul Vixie wrote:
i've spent more than several late nights and long weekends dealing
with
the problems of shared multiaccess IXP networks. broadcast storms,
poisoned ARP, pointing default, unintended third party BGP
From: Paul Vixie vi...@isc.org
Date: Sat, 18 Apr 2009 00:08:04 +
...
i should answer something said earlier: yes there's only 12 bits of tag and
yes 2**12 is 4096. in the sparsest and most wasteful allocation scheme,
tags would be assigned 6:6 so there'd be a max of 64 peers.
i meant
...@nipper.de, Paul Vixie vi...@isc.org, na...@merit.edu
Subject: Re: IXP
Date: Sat, 18 Apr 2009 05:30:41 +
From: Stephen Stuart stu...@tech.org
Not sure how switches handle HOL blocking with QinQ traffic across trunks,
but hey...
what's the fun of running an IXP
- kris foster kris.fos...@gmail.com wrote:
painfully, with multiple circuits into the IX :) I'm not advocating
Paul's suggestion at all here
Kris
Totally agree with you Kris.
For the IX scenario (or at least looking in a Public way) it seems Another
Terrible Mistake to me.
IMHO,
On Sat, Apr 18, 2009 at 05:30:41AM +, Stephen Stuart wrote:
Not sure how switches handle HOL blocking with QinQ traffic across trunks,
but hey...
what's the fun of running an IXP without testing some limits?
Indeed. Those with longer memories will remember that I used to
regularly
On 18/04/2009 01:08, Paul Vixie wrote:
i've spent more than several late nights and long weekends dealing with
the problems of shared multiaccess IXP networks. broadcast storms,
poisoned ARP, pointing default, unintended third party BGP, unintended
spanning tree, semitranslucent loops
that complexity in. the
choice of per-peering VLANs represents a minimal response to the problems
of shared IXP fabrics, with maximal impedance matching to the PNI's that
inevitably follow successful shared-port peerings.
capabilities that support this stuff... it just
means as the IXP fabric grows it has to become router-based.
Hey, I have an idea: you could take this plan and build a tunnel-based or
even a native IP access IXP platform like this, extend it to multiple
locations and then buy transit from a bunch
that complexity in. the
choice of per-peering VLANs represents a minimal response to the problems
of shared IXP fabrics, with maximal impedance matching to the PNI's that
inevitably follow successful shared-port peerings.
complexity invites failure - failure in unusual and unexpected
ways
On Sat, 18 Apr 2009 16:58:24 +
bmann...@vacation.karoshi.com wrote:
i make the claim that simple, clean design and execution is
best. even the security goofs will agree.
Even? *Especially* -- or they're not competent at doing security.
But I hadn't even thought about DELNIs in
On 17/04/2009 15:11, Sharlon R. Carty wrote:
I would like to know what the best practices are for an internet exchange. I
have some concerns about the following:
Can the IXP members use RFC 1918 IP addresses for their peering?
Can the IXP members use private autonomous system numbers for their peering
security (the baseline
complexity). PE/BRAS systems suffer from a subset of IXP issues with a
few of their own. It amazes me how much security has been pushed from
the PE out into switches and DSLAMs. Enough so that I've found many
vendors that break IPv6 because of their security features. 1Q
their business, though - an IXP that operates a distributed metro-area
fabric has more concerns about reliability and cost-efficient use of
resources than an IXP that operates a single switch. If
requirements were such that I needed to buy and *use* a partial mesh
topology for a distributed IXP
Stuart stu...@tech.org
Date: Sat, 18 Apr 2009 18:05:03
To: bmann...@vacation.karoshi.com
Cc: na...@merit.edu
Subject: Re: IXP
I'll get off my soap-box now and let you resume your observations that
complexity as a goal in and of itself is the olny path
I have been looking at AMS-IX and LINX, and even some African internet
exchanges, as examples. But seeing how large they are (AMS-IX, LINX) and
that we are in the startup phase, I would rather have some tips/examples
from anyone who has been doing IXP work for quite a while.
So far all the responses have
On 18.04.2009 21:51 Sharlon R. Carty wrote
I have been looking at AMS-IX and LINX, and even some African internet
exchanges, as examples. But seeing how large they are (AMS-IX, LINX) and
that we are in the startup phase, I would rather have some tips/examples
from anyone who has been doing IXP work
Date: Sat, 18 Apr 2009 13:17:11 -0400
From: Steven M. Bellovin s...@cs.columbia.edu
On Sat, 18 Apr 2009 16:58:24 +
bmann...@vacation.karoshi.com wrote:
i make the claim that simple, clean design and execution is
best. even the security goofs will agree.
Even? *Especially*
On Sat, Apr 18, 2009 at 09:12:24PM +, Paul Vixie wrote:
Date: Sat, 18 Apr 2009 13:17:11 -0400
From: Steven M. Bellovin s...@cs.columbia.edu
On Sat, 18 Apr 2009 16:58:24 +
bmann...@vacation.karoshi.com wrote:
i make the claim that simple, clean design and execution is
Paul Vixie wrote:
if we maximize for simplicity we get a DELNI. oops that's not fast
enough we need a switch not a hub and it has to go 10Gbit/sec/port.
looks like we traded away some simplicity in order to reach our goals.
Agreed.
Security + Efficiency = base complexity
1Q has great
Stephen, that's a straw-man argument. Nobody's arguing against
VLANs. Paul's argument was that VLANs rendered shared subnets
obsolete, and everybody else has been rebutting that. Not saying that
VLANs shouldn't be used.
I believe shared VLANs for IXP interconnect are obsolete. Whether
- ruthless and utterly fascist enforcement of one mac address per
port, using either L2 ACLs or else mac address counting, with no
exceptions for any reason, ever. This is probably the single most
important stability / security enforcement mechanism for any IXP.
You should also take a look
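The one-MAC-per-port enforcement described above might be sketched like this (Cisco IOS port-security syntax; the interface name is a placeholder, and "restrict" drops violating frames and logs, rather than shutting the port — the distinction debated elsewhere in the thread):

```text
! Hedged sketch: hard limit of one learned MAC address per member port.
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation restrict
```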
Thanks for talking about your PNIs. Let's see:
Permit Next Increase
Private Network Interface
Private Network Interconnection
Primary Network Interface
and it goes on and on . . .
On 19.04.2009 01:08 Randy Bush wrote
just curious. has anyone tried arista for smallish exchanges, before
jumping off the cliff into debugging extreme, foundry, ...
last time I looked at them, their products lacked port security or anything
similar. IIRC it's on the roadmap for their next
On Apr 19, 2009, at 5:12 AM, Paul Vixie wrote:
many colo facilities now use one customer per vlan due to this
concern?
Haven't most major vendors for years offered features in their
switches which mitigate ARP-spoofing, provide per-port layer-2
isolation on a sub-VLAN basis, as well as
multiaccess IXP networks. broadcast storms, poisoned ARP, pointing
default, unintended third party BGP, unintended spanning tree,
semitranslucent loops, unauthorized IXP LAN extension...
all to watch the largest flows move off to PNI as soon as somebody's
port was getting full.
...@merit.edu
Sent: Sat Apr 18 20:45:48 2009
Subject: Re: IXP
Best solution I ever saw to an 'unintended' third-party
peering was devised by a pretty brilliant guy (who can
pipe up if he's listening). When he discovered traffic
loads coming from non-peers he'd drop in an ACL that
blocked everything except
On Fri, Apr 17, 2009 at 10:11:30AM -0400, Sharlon R. Carty wrote:
Hello NANOG,
I would like to know what the best practices are for an internet exchange. I
have some concerns about the following:
Can the IXP members use RFC 1918 IP addresses for their peering?
Can the IXP members use private
Hello NANOG,
I would like to know what the best practices are for an internet exchange. I
have some concerns about the following:
Can the IXP members use RFC 1918 IP addresses for their peering?
Can the IXP members use private autonomous system numbers for their peering?
Maybe the answer is obvious
I would like to know what the best practices are for an internet exchange.
I have some concerns about the following: Can the IXP members use RFC 1918
IP addresses for their peering?
No. Those IP addresses will at least appear on traceroutes;
also, it might not be such a good idea
On Fri, 17 Apr 2009, Paul Vixie wrote:
with the advent of vlan tags, the whole idea of CSMA for IXP networks is
passe.
just put each pair of peers into their own private tagged vlan.
Uh, I'm not sure whether you're being sarcastic or not.
-Bill
On 17.04.2009 20:52 Paul Vixie wrote
with the advent of vlan tags, the whole idea of CSMA for IXP networks is
passe.
just put each pair of peers into their own private tagged vlan and let one of
them allocate a V4 /30 and a V6 /64 for it. as a bonus, this prevents third
party BGP (which
On Apr 17, 2009, at 12:00 PM, Arnold Nipper wrote:
On 17.04.2009 20:52 Paul Vixie wrote
with the advent of vlan tags, the whole idea of CSMA for IXP
networks is passe.
just put each pair of peers into their own private tagged vlan and
let one of
them allocate a V4 /30 and a V6 /64
On 17.04.2009 21:04 kris foster wrote
On Apr 17, 2009, at 12:00 PM, Arnold Nipper wrote:
On 17.04.2009 20:52 Paul Vixie wrote
with the advent of vlan tags, the whole idea of CSMA for IXP
networks is passe.
just put each pair of peers into their own private tagged vlan and
let one
Sorry, hit send a little early, by accident.
On Apr 17, 2009, at 11:52 AM, Paul Vixie wrote:
with the advent of vlan tags, the whole idea of CSMA for IXP
networks is passe.
just put each pair of peers into their own private tagged vlan.
I'm not sure whether you're being sarcastic
On Fri, 17 Apr 2009, Arnold Nipper wrote:
Large IXPs have 300 customers. You would need up to 45k vlan tags,
wouldn't you?
... and exchanging multicast would be... err.. suboptimal.
--
Mikael Abrahamsson    email: swm...@swm.pp.se
On Apr 17, 2009, at 12:05 PM, Arnold Nipper wrote:
On 17.04.2009 21:04 kris foster wrote
On Apr 17, 2009, at 12:00 PM, Arnold Nipper wrote:
On 17.04.2009 20:52 Paul Vixie wrote
with the advent of vlan tags, the whole idea of CSMA for IXP
networks is passe.
just put each pair of peers
the vlan tagging idea is a virtualization of the PNI construct.
why use an IX when running 10's/100's/1000's of private network
interconnects will do?
granted, if out of the 120 ASN's at an IX, 100 are exchanging on
average - 80KBs - then it's likely safe to dump them all into a single
physical
Large IXPs have 300 customers. You would need up to 45k vlan tags,
wouldn't you?
the 300-peer IXP's i've been associated with weren't quite full mesh
in terms of who actually wanted to peer with whom, so, no.
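Arnold's 45k figure above is just the full-mesh pair count, and Vixie's point is that real IXPs are far from full mesh. A quick sanity check of the arithmetic (and of why the count dwarfs the 4094 usable 802.1Q tags):

```python
# Per-peering VLANs needed if every pair of an IXP's members peered
# over its own tagged VLAN (full mesh): n choose 2.
def vlans_needed(members: int) -> int:
    return members * (members - 1) // 2

print(vlans_needed(300))  # 44850 -- the "45k" figure in the thread
```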
On Fri, Apr 17, 2009 at 09:00:53PM +0200, Arnold Nipper wrote:
Large IXPs have 300 customers. You would need up to 45k vlan tags,
wouldn't you?
Not only that, but when faced with the requirement of making the vlan
IDs match on both sides of the exchange, most members running layer 3
switches
might be low for individual source ASNs. On the other hand, if the IXP
doesn't use IGMP/MLD snooping capable switches, then I suppose it doesn't
matter.
Antonio Querubin
whois: AQ7-ARIN
the traffic
might be low for individual source ASNs. On the other hand, if the IXP
doesn't use IGMP/MLD snooping capable switches, then I suppose it doesn't
matter.
Didn't we go through all this with ATM VC's at the AADS NAP, etc?
... JG
--
Joe Greco - sol.net Network Services - Milwaukee
On 17.04.2009 23:06 Paul Vixie wrote
Large IXPs have 300 customers. You would need up to 45k vlan tags,
wouldn't you?
the 300-peer IXP's i've been associated with weren't quite full mesh
in terms of who actually wanted to peer with whom, so, no.
Much depends on your definition of quite
of 1Q
tags in an IXP context?
Why? You only need 1 ;-)
Arnold
--
Arnold Nipper / nIPper consulting, Sandhausen, Germany
email: arn...@nipper.de phone: +49 6224 9259 299
mobile: +49 172 2650958 fax: +49 6224 9259 333
exchange if
there's a significant number of multicast peers even though the traffic
might be low for individual source ASNs. On the other hand, if the IXP
doesn't use IGMP/MLD snooping capable switches, then I suppose it doesn't
matter.
Didn't we go through all this with ATM VC's
with the advent of vlan tags, the whole idea of CSMA for IXP networks
is passe. just put each pair of peers into their own private tagged
vlan and let one of them allocate a V4 /30 and a V6 /64 for it. as a
bonus, this prevents third party BGP (which nobody really liked which
sometimes got
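The per-peering-VLAN scheme described above might be sketched like this for one side of one peering (Cisco IOS subinterface syntax; the VLAN ID and the 192.0.2.0/30 + 2001:db8::/64 allocations are documentation-range placeholders chosen by one of the two peers, as the proposal suggests):

```text
! Hedged sketch: one tagged subinterface per peer pair.
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 ip address 192.0.2.1 255.255.255.252
 ipv6 address 2001:db8::1/64
```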
Arnold Nipper wrote:
On 17.04.2009 20:52 Paul Vixie wrote
Large IXPs have 300 customers. You would need up to 45k vlan tags,
wouldn't you?
Not agreeing or disagreeing with this as a concept, but I'd imagine that
since a number of vendors support arbitrary vlan rewrite on ports
be pretty trivial.. Especially QinQ management
for VLANID
uniqueness.
Not sure how switches handle HOL blocking with QinQ traffic across trunks, but
hey...
what's the fun of running an IXP without testing some limits?
Deepak Jain
AiNET
be assigned by increment, but it's still nowhere near enough for 300+
peers. however, well before 300 peers, there'd be enough staff and
enough money to use something other than a switch in the middle, so
that the tagspace would be per-port rather than global to the IXP.
Q in Q is not how
Not sure how switches handle HOL blocking with QinQ traffic across trunks,
but hey...
what's the fun of running an IXP without testing some limits?
Indeed. Those with longer memories will remember that I used to
regularly apologize at NANOG meetings for the DEC Gigaswitch/FDDI
head-of-line
Elmar K. Bins wrote:
I am not an IXP operator, but I know of no exchange (public or
private, big or closet-style) that uses private ASNs or RFC1918
space.
I know of at least two IXPs where RFC 1918 space is used on the IXP
Subnet. I know a fair