Re: IPV4 as a Commodity for Profit

2008-02-28 Thread Stephen Sprunk


Thus spake "Owen DeLong" <[EMAIL PROTECTED]>

On Feb 24, 2008, at 12:45 PM, Stephen Sprunk wrote:
The wording of the question and response referred only to "ARIN
members". That does not include most orgs with _only_ legacy
allocations, but it would include orgs with both legacy and non-legacy
allocations.  Presumably, if an org had both types, both would have been
included, but that wasn't explicitly stated since it wasn't relevant to
the questions I was asking at the time.


Not necessarily.  Orgs which are end-users and not LIR/ISP subscriber 
members may have resources from ARIN without being members.


82% (by number) of all direct assignments are legacy*, and that includes all 
of the class A blocks.


While I haven't requested the data to back it up, I find it fairly obvious 
that non-legacy direct assignments would be smaller on average and thus 
constitute far less than 18% (by size) of all assignments -- and a trivial 
amount of space overall compared to allocations to LIRs/ISPs.


S

* Same source.

Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723             "God is an inveterate gambler, and He throws the
K5SSS                   dice at every possible opportunity." --Stephen Hawking



Re: IPV4 as a Commodity for Profit

2008-02-24 Thread Stephen Sprunk


Thus spake "Tom Vest" <[EMAIL PROTECTED]>

On Feb 23, 2008, at 1:54 PM, Stephen Sprunk wrote:
Rechecking my own post to PPML, 73 Xtra Large orgs held 79.28% of  ARIN's 
address space as of May 07; my apology for a faulty memory,  but it's not 
off by enough to invalidate the point.


The statistics came from ARIN Member Services in response to an email
inquiry.  I don't believe they publish such things anywhere (other than
what's in WHOIS), but you can verify yourself if you wish; they were
quite willing to give me any stats I asked for if they had the necessary
data available.


Thanks for the information Stephen.
In order to be perfectly clear on how to interpret this, it would be  good 
to know whether this sum includes the pre-ARIN delegations, or  just 
reflects what has happened since ARIN was established.


The wording of the question and response referred only to "ARIN members". 
That does not include most orgs with _only_ legacy allocations, but it would 
include orgs with both legacy and non-legacy allocations.  Presumably, if an 
org had both types, both would have been included, but that wasn't 
explicitly stated since it wasn't relevant to the questions I was asking at 
the time.


If you are interested in who those 73 Xtra Large orgs are, you can try 
asking ARIN.  If that level of detail is covered by NDA, you can get a close 
approximation by mining WHOIS or BGP.


S




Re: IPV4 as a Commodity for Profit

2008-02-22 Thread Stephen Sprunk


Thus spake "Tom Vest" <[EMAIL PROTECTED]>

I agree, to a point.  My prediction is that when the handful of
mega-ISPs are unable to get the massive quantities of IPv4  addresses
they need (a few dozen account for 90% of all
consumption in the ARIN region)...


I keep reading assertions like this. Is there any public,  authoritative
evidence to support this claim?


Rechecking my own post to PPML, 73 Xtra Large orgs held 79.28% of ARIN's 
address space as of May 07; my apology for a faulty memory, but it's not off 
by enough to invalidate the point.


The statistics came from ARIN Member Services in response to an email
inquiry.  I don't believe they publish such things anywhere (other than
what's in WHOIS), but you can verify yourself if you wish; they were
quite willing to give me any stats I asked for if they had the necessary
data available.


If there is, is this 90% figure a new development, or rather the  product
of changes in ownership (e.g., MCI-VZ-UU, SBC-ATT, etc.),  changes in
behavior (a run on the bank), some combination of the two,  or something
else altogether?


Most of the orgs in the Xtra Large class were already there before the
mega-mergers started; after all, you only need >/14 to be Xtra Large.  Given
how most tend to operate in silos, they might still be separate orgs as far
as ARIN is concerned...

S




Re: IPV4 as a Commodity for Profit

2008-02-21 Thread Stephen Sprunk


Thus spake "Adrian Chadd" <[EMAIL PROTECTED]>

As I ranted on #nanog last night: the v6 transition will happen when it
costs more to buy/maintain a v4 infrastructure (IP trading, quadruple
NAT, support overheads, v6 tunnel brokers, etc.) than it does to migrate
infrastructure to v6.

If people were sane (!), they'd have a method right now for an
enterprise to migrate 100% native IPv6 and interconnect to the v4
network via translation devices. None of this dual stack crap. It makes
the heads of IT security and technical managers spin.


I agree, to a point.  My prediction is that when the handful of mega-ISPs 
are unable to get the massive quantities of IPv4 addresses they need (a few 
dozen account for 90% of all consumption in the ARIN region), they'll 
gradually start converting consumer POPs to 10/8 and reusing the freed 
blocks for new commercial customers.  ISPs without consumer customers to 
cannibalize addresses from, e.g. hosting shops, will be the main folks 
needing to buy space on the market.


Unfortunately, it's just not possible today for most edge networks to go 
v6-only and get to the v4 Internet via NAT-PT.  WinXP can't do DNS over v6, 
and earlier versions (which are still in widespread use) can't do v6 at all. 
The vast majority of home routers/modems can't do v6 either.  They'll need 
NAT-PT eventually so all of those users stuck on v4 can get to new v6-only 
sites when they appear.  Some may offer native v6 as well for people who 
don't like ISP NAT, but the main complainers will be the heavy P2P users 
they don't want in the first place, so where's the motivation?


Enterprises are a different story entirely; most are already on RFC1918 (or 
unadvertised class B space) behind their own NAT, and adding PT 
functionality to it is a simple software update that gives them access to 
external v6-only sites without touching any of their hosts.  Once all their 
hosts can support it, perhaps in 5-10 years, they'll do a flash cut to v6 on 
the internal side and reconfigure their PT to reach external v4-only sites.


Dual-stack is necessary in the ISP core, definitely, but it's unrealistic at 
the edge.  Most of us living out there went through the hell of running 
multiple L3 protocols in the 80s and 90s and have no desire to return to it; 
there's just no ROI for doing it that way vs a simple NAT-PT box.



(ObRant: Want v6 to take off? Just give everyone who has a v4
allocation a v6 allocation already. There's enough space to make
that happen.


I'm philosophically opposed to giving people something they haven't asked 
for.  It's not like it's tough to get IPv6 space; ARIN's rejection rate is 
something like 2% once you remove the folks that applied for the wrong type.


Also, a response from the ARIN Pres/BoT on a similar topic was that it's not 
ARIN's job to push IPv6 on people, merely to educate them and serve any 
resulting requests.  Giving an IPv6 block to everyone who has an IPv4 block 
definitely goes against that philosophy.



Oh wait, that reduces RIR revenues..)


Not at all; at least under the current fee schedule, revenues won't go down 
until total consumption of IPv4 space is well into a decline, which isn't 
going to happen for a long time.  If that happens by 2020, I'll be 
pleasantly surprised.


S




Re: v6 subnet size for DSL & leased line customers

2008-01-03 Thread Stephen Sprunk


Thus spake "Simon Lyall" <[EMAIL PROTECTED]>

On Wed, 2 Jan 2008, Deepak Jain wrote:

Is there anything inherently harmful with suggesting that filtering at
RIR boundaries should be expected, but those that accept somewhat
more lenient boundaries are nice guys??? When the nice guys run
out of resources, they can filter at RIR boundaries and say they are
doing so as a security upgrade :_).


So how would this work for large companies?

In theory, multinationals like Morgan Stanley, Wal-Mart or HSBC should
only get at most a /48 from each RIR.


In what theory?  They'd get at _minimum_ a /48 from each RIR that has 
approved PIv6.  If they needed more, they merely have to fill out the 
appropriate paperwork showing justification.  If they operate in regions 
where the RIR hasn't approved PIv6, they'd route around the failure there 
and use space assigned by other RIRs.  (Not saying I approve of that, but 
it's reality.)


Currently ARIN is approving all requests for more than a /48, since
there is no definition of what "justify" means in that context.  eBay,
which is hardly the size of the companies you listed, got a /41.  That
obviously needs fixing, but the problem is the opposite of the one you
seem to be theorizing.



How should they handle regional offices, especially multihomed ones?


Announce their prefix from all locations, with more-specifics for TE 
purposes.  Presumably their upstreams would carry the more-specifics since 
they're being paid to, but folks further away would filter them and only see 
the covering aggregate, which is good enough.
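To make that filtering behavior concrete, here is a minimal Python sketch. The prefixes and the /48 cutoff are illustrative assumptions, not any RIR's actual policy:

```python
import ipaddress

FILTER_BOUNDARY = 48  # assumed strictest accepted prefix length for PIv6


def distant_view(announcements, max_len=FILTER_BOUNDARY):
    """Return only the prefixes a strict distant network would accept.

    Upstreams being paid to carry the more-specifics would see everything;
    everyone else sees just the covering aggregate.
    """
    return [p for p in announcements
            if ipaddress.IPv6Network(p).prefixlen <= max_len]


# The multinational announces its aggregate everywhere, plus per-office
# more-specifics for traffic engineering (documentation prefixes):
anns = ["2001:db8::/48", "2001:db8:0:10::/60", "2001:db8:0:20::/60"]
print(distant_view(anns))  # only the /48 aggregate survives the filter
```

Traffic from far away follows the aggregate toward the nearest announcing site; the more-specifics only steer traffic within the radius where they are accepted.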


(Note this assumes they have an internal network; if they didn't, each 
disconnected part would be a "site" and qualify for a /48 on its own. 
That's a suboptimal solution, though, for reasons too numerous to list.)


S




Re: v6 subnet size for DSL & leased line customers

2007-12-25 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

In places where you need tighter control over the usage of various
gateways on a common L2 segment, VRRP probably makes more
sense.  However, as things currently stand, that means static routing
configuration on the host since for reasons passing understanding,
DHCP6 specifically won't do gateway assignment.


For those of us with lots of IPv4 customers dependent on DHCP, it
would be good to know more detail about this point. What is the
problem, and are there plans to do anything about it in DHCPv6?


For most hosts, there is no need for anything like VRRP or getting a default 
gateway via DHCP in v6 because all hosts are required to implement RA/RS. 
The vast majority of hosts either have one default gateway or two-plus 
equivalent ones, so that works fine.  For hosts with multiple gateways that 
are unequal, VRRP+DHCP doesn't solve the problem any better than RA/RS; you 
have to fiddle with the hosts' routing tables to get things set up right.
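For the curious, the Router Advertisement that hosts rely on instead of a DHCP router option is a small fixed-format message. A sketch of its 16-byte fixed header (RFC 4861 layout; field values here are illustrative), built with nothing but the Python standard library:

```python
import struct


def build_router_advertisement(cur_hop_limit=64, router_lifetime=1800,
                               m_flag=False, o_flag=False):
    """Build the fixed portion of an ICMPv6 Router Advertisement (RFC 4861).

    A host that hears this from a router's link-local address installs that
    address as a default gateway for router_lifetime seconds; DHCPv6 itself
    carries no equivalent router option.
    """
    icmp_type = 134          # Router Advertisement
    icmp_code = 0
    checksum = 0             # normally computed over the IPv6 pseudo-header
    flags = (0x80 if m_flag else 0) | (0x40 if o_flag else 0)  # M and O bits
    reachable_time = 0       # 0 = unspecified
    retrans_timer = 0        # 0 = unspecified
    return struct.pack("!BBHBBHII", icmp_type, icmp_code, checksum,
                       cur_hop_limit, flags, router_lifetime,
                       reachable_time, retrans_timer)


ra = build_router_advertisement()
print(len(ra), ra[0])  # 16-byte fixed header, ICMPv6 type 134
```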


S






Re: IEEE 40GE & 100GE

2007-12-13 Thread Stephen Sprunk


Thus spake "Chris Cole" <[EMAIL PROTECTED]>

The 40km/10km cost ratio is between 1.6x and 2x, depending on
the source.

The 10km/4km cost ratio is between 1.15x and 1.3x, again
depending on the source.


If those numbers translate into prices (not costs), then I'd prefer to
see 40km and 4km optics, with no 10km optics.  The important point is
that the 40km optics need to be able to handle 4.1km links with no
attenuators, preferably without any human tuning at all.  You only pay
the extra capital cost once (if there even is any, given the higher
volume of fewer parts), but you pay labor and sparing over and over.


S






Re: 240/4

2007-10-18 Thread Stephen Sprunk


Thus spake "Pekka Savola" <[EMAIL PROTECTED]>
The operators who want to do something private with this space don't need 
the IETF or IANA approval to do so.  So they should just go
ahead and do it.  If they can manage to get it to work, and live to tell 
about it, maybe we can consider that sufficient proof that we can start 
thinking about reclassification.


There are, fortunately, a number of vendors that don't like to go against 
existing RFCs.  We're one of them.  Regardless of customer demand, I will 
block any attempt inside our development group to allow 240/4 until the IETF 
reclassifies it from experimental to unicast address space.  Note that doing 
that would _not_ automatically imply that the IETF would direct IANA to 
delegate that space to the RIRs; the IETF could direct IANA to mark one /8 
as private and the rest reserved.  Releasing the rest to the RIRs shouldn't 
be done until it is observed that a non-trivial number of hosts on the 
public network support it -- if that ever happens.
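A sketch of the sanity check a conservative stack performs today, with a hypothetical knob of the kind a vendor might add only after reclassification (Python; the flag name is invented for illustration):

```python
import ipaddress

EXPERIMENTAL = ipaddress.IPv4Network("240.0.0.0/4")  # "class E", still experimental


def accept_as_unicast(addr, allow_240=False):
    """Mirror a conservative stack's address sanity check.

    allow_240 models a hypothetical configuration knob that would stay off
    until the IETF reclassifies 240/4 as unicast space.
    """
    a = ipaddress.IPv4Address(addr)
    if a in EXPERIMENTAL and not allow_240:
        return False  # today: refuse the former class E block outright
    return not (a.is_multicast or a.is_loopback or a.is_unspecified)


print(accept_as_unicast("240.0.0.1"))              # rejected today
print(accept_as_unicast("240.0.0.1", True))        # accepted only with the knob
print(accept_as_unicast("203.0.113.5"))            # ordinary unicast, accepted
```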


I can see cases for using 240/4 on private networks where one has more 
control over patches getting deployed (or is using OSes one can patch 
themselves or bully vendors to patch), but that's all that's worth 
discussing now.  Short of someone from Microsoft indicating they'd post a 
patch on Windows Update for Vista, XP, and possibly earlier systems, any 
discussion of _when_ these addresses _might_ be usable on a public network 
is a waste of bits.


S




Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-04 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 2-okt-2007, at 15:56, Stephen Sprunk wrote:

Second, the ALGs will have to be (re)written anyways to deal
with IPv6 stateful firewalls, whether or not NAT-PT happens.


That's one solution. I like the hole punching better because it's  more 
general purpose and better adheres to the principle of least 
astonishment.


ALGs are just automated hole-punching.


That's the purpose of an ALG.  Requiring users to modify their
home router config or put in a change request with their IT
department for a firewall exception is a non-starter if you want
your app to be accepted.


Hence UPnP and NAT-PMP plus about half a dozen protocols
the IETF is working on.


UPnP is moderately successful in the consumer space; it still doesn't
work very well today, and it won't work at all in a few years when ISPs
are forced to put consumers behind their own NAT boxes because they
can't get any more v4 addresses.


None of those protocols are being seriously considered by business folks.

ALGs are here to stay.  If the NAT/FW box can recognize a SIP call, or an 
active FTP transfer, or whatever and open the pinhole on its own, why is 
that a bad thing?  Since it's the NAT/FW box that's breaking things, it's 
the NAT/FW box's responsibility to minimize that breakage -- not rely on 
hosts to tell it when a pinhole needs to be opened.
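As a toy illustration of what such an ALG does, here is the classic FTP PORT case in Python. This sketch only parses the control channel and reports the pinhole; a real ALG also rewrites the payload when NAT is involved:

```python
import re


def ftp_alg_pinhole(payload):
    """Spot a PORT command on an FTP control channel and return the
    (ip, port) pinhole the firewall should open for the data connection.

    PORT h1,h2,h3,h4,p1,p2 encodes an IPv4 address and a 16-bit port.
    Returns None for any other control-channel traffic.
    """
    m = re.match(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", payload.strip())
    if not m:
        return None
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return (f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2)


print(ftp_alg_pinhole("PORT 192,0,2,10,7,138\r\n"))  # ('192.0.2.10', 1930)
print(ftp_alg_pinhole("USER anonymous\r\n"))          # None -- nothing to open
```

The host never asks for anything; the middlebox watches traffic it already breaks and opens exactly the hole the application is about to need.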


Huh? They both do, that's the point. (Although the former doesn't   work 
for everything and the latter removes the "IPv6-only" status   from the 
host if not from the network it connects to.)



The former only handles outbound TCP traffic, which works
through pure NAT boxes as it is.


BitTorrent is TCP, but it sure doesn't like NAT because it gets in  the 
way of incoming sessions.


Of course.  It doesn't help that many ISPs are filtering inbound SYN packets 
specifically to block (or at least severely degrade the performance of) P2P 
apps.


The latter "solution" ignores the problem space by telling people  to not 
be v4-only anymore.


Decoding IPv4 packets on a host is trivial, they already have all
the necessary code on board. It's building an IPv4 network that's
a burden.


Today, at least, it's less of a burden to build a NATed v4 network than it 
is to try to get v6 working end-to-end (with or without NAT).



There is a difference between the networks and the hosts.
Upgrading networks to dual stack isn't that hard, because it's
built of only a limited number of different devices.


*giggle*  You mean like the 90% of hosts that will be running Vista 
(which has v6 enabled by default) within a couple years?  Or the  other 
10% of hosts that have had v6 enabled for years?


The problem isn't the hosts.  It isn't even really the core  network. 
It's all the middleboxes between the two that are v4-only  and come from 
dozens of different clue-impaired vendors.


You forget that the majority of applications need to be changed to  work 
over IPv6.


The majority of bits moved are via apps that support v6.

One of the benefits of NAT-PT is all those legacy v4-only apps can stay 
exactly how they are (at least until the next regular upgrade, if any) and 
talk to v6 servers, or to other v4 servers across a v6-only network.



On 2-okt-2007, at 16:10, Stephen Sprunk wrote:


You just open up a hole in the firewall where appropriate.



You obviously have no experience working in security.


Who wants those headaches?

You can't trust the OS (Microsoft?  hah!), you can't trust the 
application (malware), and you sure as heck can't trust the user 
(industrial espionage and/or social engineering).  The only way  that 
address-embedding protocols can work through a firewall,  whether it's 
doing NAT or not, is to use an ALG.


You assume a model where some trusted party is in charge of a firewall
that separates an untrustworthy outside and an untrustworthy inside.
This isn't exactly the trust model for most consumer networks.


Yes, it is.  Or at least it should be.  There is no "trusted" side of a 
firewall these days.  Even a decade ago it was recognized that the majority 
of attacks were from the "inside".  With the advent of worms and viruses 
(spread by insecure host software), "outside" attackers are almost 
irrelevant compared to "inside" attackers.


Also, consumer networks are not the only relevant networks.  There are 
arguably just as many hosts on enterprise networks, and the attitudes and 
practices of their admins (regardless of technical correctness) need to be 
considered.


Also, why would you be able to trust what's inside the control  protocol 
that the ALG looks at any better than anything else?


You can't completely, and obviously ALGs would fail completely if IPsec ever 
took off (in fact, that ma

Re: Creating demand for IPv6, and saving the planet

2007-10-03 Thread Stephen Sprunk


Thus spake "Daniel Senie" <[EMAIL PROTECTED]>
A number of people have bemoaned the lack of any IPv6-only killer-content 
that would drive a demand for IPv6. I've thought about this, and about the 
government's push to make IPv6 a reality. What occurred to me is there is 
a satellite sitting in storage that would provide such content:


  http://en.wikipedia.org/wiki/Triana_(satellite)

Al Gore pushed for this satellite, Triana, to provide those on earth
with a view of the planet among its scientific goals. The Republicans
referred to it as an "overpriced screen saver," though the effect even
of just the camera component on people's lives and how they treat the
planet could be considerable.

By combining the launch of Triana with feeding the still images and video 
from servers only connected to native IPv6 bandwidth, the government would 
provide both a strong incentive for end users to want to move to IPv6, and 
a way to get the people of this planet to stop from time to time and 
ponder the future of the earth.


Here's a simple question that applies to every "killer app" that's been 
proposed for IPv6: if you're going to the trouble of making a killer app and 
giving/selling it to the public, why wouldn't you include support for IPv4?


Virtually every "unique" feature of IPv6, except the number of bits in the 
address, has been back-ported to IPv4.  There is simply no other advantage 
left, and thus no room for apps that "require" IPv6.


S






Re: Creating demand for IPv6

2007-10-02 Thread Stephen Sprunk


Thus spake "Seth Mattinen" <[EMAIL PROTECTED]>

Stephen Sprunk wrote:

If you feel ARIN has not solved the PIv6 issue sufficiently well,
please take that argument to PPML.  As of today, if you qualify
for PIv4 space, you qualify for PIv6 space automatically -- and
you only have to pay the fees for one of them.


Really? As far as I understood it, I still had to pay $500 for end-user 
allocations.


If you're an end user, you pay $100/yr for _all_ your resources.  If you're 
an LIR, you pay either your v4 or v6 maintenance fees, whichever is greater.


I don't know the status of the v6 initial assignment fee; I think that the 
v6 initial allocation fee was waived at one point.  If they're not waived 
now, that'd be a one-time cost of $1250.


The only $500/yr fee is to be a "General Member", which is how non-LIRs get 
to vote in ARIN elections.  You don't need to be a member to get a v6 
assignment.


S






Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-02 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 2-okt-2007, at 11:36, John Curran wrote:

The proxy&tunnel vs NAT-PT differences of opinion are entirely
based on deployment model... proxy has the same drawbacks
as NAT-PT,


The main issue with a proxy is that it's TCP-only. The main issue with
NAT-PT is that the applications don't know what's going on.

Rather different drawbacks, I'd say.


There are several different mechanisms devices can use to discover they're 
behind a NAT(-PT) if they care.  Most do not, and those that do often can't 
do anything about it even if they know.



only without the attention to ALG's that NAT-PT will receive,


ALGs are not the solution. They turn the internet into a telco-like
network where you only get to deploy new applications when the powers
that be permit you to.


That's somewhat true if you rely on a NAT-PT upstream.  However, you can run 
your own NAT-PT box, decide what ALGs to run, and bypass the upstream NAT-PT 
since you will _appear_ to be a natively dual-stacked site.  Of course, 
you're limited by the vendor writing the ALGs in the first place, but that's 
just an argument for OSS.  Or perhaps it's an argument for deploying real v6 
support and getting rid of NAT-PT entirely.


The alternative to NAT-PT is multilayered v4 NAT, which has the same problem 
you describe except there's no way out.



and tunnelling is still going to require NAT in the deployment
mode once IPv4 addresses are readily available.


Yes, but it's the IPv4 NAT we all know and love (to hate). So this
means all the ALGs you can think of already exist and we get to leave
that problem behind when we turn off IPv4.


We'll still need all those ALGs for v6 stateful firewalls.  Might as well 
put them to use in NAT-PT during the transition between the ALG'd starting 
phase (all v4) and the ALG'd ending phase (all v6).



Also, not unimportant: it allows IPv4-only applications to work
trivially.


Any applications that work "trivially" through v4 NAT will also work 
"trivially" through NAT-PT and v6 stateful firewalls.  The interesting apps 
are the ones that don't work through NAT or firewalls without ALGs.


If you're making some silly argument about non-NAT v4 access, well, you're 
over a decade out of touch with reality.  The number of v4 hosts that are 
_not_ behind a NAT is negligible today.


S






Re: Creating demand for IPv6

2007-10-02 Thread Stephen Sprunk


Thus spake "William Herrin" <[EMAIL PROTECTED]>

As far as I can tell, IPv6 is at least theoretically capable of
offering exactly two things that IPv4 does not offer and can't easily
be made to offer:

1. More addresses.
2. Provider independent addresses

At the customer level, #1 has been thoroughly mitigated by NAT,
eliminating demand. Indeed, the lack of IPv6 NAT creates a
negative demand: folks used to NAT don't want to give it up.

This community (network operators) has refused to permit #2, even to
the extent that it's present in IPv4, eliminating that source of
demand as well.


If you feel ARIN has not solved the PIv6 issue sufficiently well, please 
take that argument to PPML.  As of today, if you qualify for PIv4 space, you 
qualify for PIv6 space automatically -- and you only have to pay the fees 
for one of them.


If you're claiming that you have a PIv6 block and ISPs won't route it, 
please publicly shame the offending parties here so the rest of us will know 
not to give them our money.


S






Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-02 Thread Stephen Sprunk


Thus spake Duane Waddle

On 10/2/07, Stephen Sprunk <[EMAIL PROTECTED]> wrote:

If you think anyone will be deploying v6 without a stateful firewall,
you're delusional.  That battle is long over.  The best we can hope
for is that those personal firewalls won't do NAT as well.


Vendor C claims to support v6 (without NAT) in their "enterprise
class" stateful firewall appliance as of OS version 7.2 (or
thereabouts, perhaps 7.0).  I've not tried it out yet to see how
well it works.


Good for them.  Perhaps one day their Division L will wake up and do the
same for consumer products.



But, as far as the home/home office goes -- will my cable/dsl
provider be able (willing?) to route a small v6 prefix to my home
so that I can use a bitty-box stateful v6 firewall without NAT?
What will be the cost to me, the home subscriber, to get said
routable prefix?  I am sure it increases the operator's expense
to route a prefix to most (if not every) broadband subscriber in
an area.


Pricing is, of course, up to the vendors and operators in question.

One possibility is that your CPE box would do a DHCP PD request for a
/64 upstream; the /64 would come out of a pool for your POP.  As the
response came back downstream from whatever box managed the pool,
routers would install the /64 in their tables to make it reachable.  It
wouldn't need to propagate any higher than the POP, since the POP's
routers would be advertising a constant aggregate for the pool into the
core.


Another possibility is that the operator would assign a /48 (or /56) to your 
cable/DSL modem, which would handle the above functions at the home level 
instead of the POP level.  It would provide a /64 natively on its own 
interface, and delegate /64s to downstream devices on request.  If 
customer-owned CPE boxes did the same thing, you could chain hundreds of 
them together and have a network that Just Worked(tm).
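The delegation arithmetic behind that chaining is simple; a sketch with Python's ipaddress module, using the 2001:db8::/32 documentation range and an assumed /48-per-subscriber, /56-per-downstream-router split:

```python
import ipaddress

# Assumed assignment to one subscriber's modem (documentation space).
site = ipaddress.IPv6Network("2001:db8:1200::/48")

# The modem delegates a /56 to each downstream customer-owned router...
downstream = list(site.subnets(new_prefix=56))
print(len(downstream))  # 256 delegable /56s per /48

# ...and each of those routers can hand a /64 to every LAN behind it.
lans = list(downstream[0].subnets(new_prefix=64))
print(len(lans))        # 256 /64s per /56
print(lans[0])          # 2001:db8:1200::/64
```

With each box re-delegating out of whatever it received, hundreds of chained routers never exhaust the space, and nothing above the modem ever sees more than the single aggregate.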



In the beginning, cable operators were reluctant to support home
customers using NAT routers to share their access.


Of course -- they were used to charging per television.  However, they 
learned over time that they really wanted to charge for usage and the 
per-computer model didn't work like the per-television model did.  Now they 
don't care about how many computers you have, just how many bits you move. 
That's a good thing.



Now, renting/selling NAT routers to customers has become a
revenue stream for some.


I bet they break even at best on the rentals, given how often the darn 
things die.  One shipment and/or truck roll eliminates a year's profit 
margin on the equipment, even if the replacement box itself is free.



How does lack of v6 NAT affect all of this?


It prevents them from being characteristically stupid.  However, I
wouldn't be surprised if one or more of them demanded it from their
vendors, or if their vendors caved to win a deal.


S






Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-02 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 2-okt-2007, at 15:05, Adrian Chadd wrote:

Please explain how you plan on getting rid of those protocol-
aware plugins when IPv6 is widely deployed in environments
with -stateful firewalls-.


You just open up a hole in the firewall where appropriate.

You can have an ALG, the application or the OS do this. As you  probably 
know by now, I don't favor the ALG approach.


You obviously have no experience working in security.  You can't trust the 
OS (Microsoft?  hah!), you can't trust the application (malware), and you 
sure as heck can't trust the user (industrial espionage and/or social 
engineering).  The only way that address-embedding protocols can work 
through a firewall, whether it's doing NAT or not, is to use an ALG.


The defense and healthcare industries will force vendors to write those ALGs 
(actually, make minor changes to existing ones) if they care about the 
protocols in question because they have no choice -- security is the law. 
And, once those ALGs are available, everyone else will use them.


Even for home users, most have zero clue how to "open a hole" in their home 
firewall.  Consumer OSes are far, far too insecure to let them sit exposed 
without a firewall by default (you can't even patch a Windows system before 
it's hacked), and we can't trust end users not to run malware that will open 
holes for them.



End-to-end-ness is and has been "busted" in the corporate world AFAICT
for a number of years. IPv6 "people" seem to think that simply
providing globally unique addressing to all endpoints will remove NAT
and all associated trouble. Guess what - it probably won't.


If you don't want end-to-end, be a man (or woman) and use a proxy.
Don't tell the applications that they are connected to the rest of the
world and then pull the rug from under them. This works in IPv4 today,
but don't expect this to carry over to IPv6.  At least not without a
long, bloody fight.


If you think anyone will be deploying v6 without a stateful firewall, you're 
delusional.  That battle is long over.  The best we can hope for is that 
those personal firewalls won't do NAT as well.


S






Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-02 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 1-okt-2007, at 19:56, Stephen Sprunk wrote:
There is no "IPv6 world".  I've heard reference over and over to how
developers shouldn't add "NAT support" into v6 apps, but the reality is
that there are no "v6 apps".  There are IPv4 apps and IP apps that are
version agnostic.  The NAT code is there and waiting to be used whether
the socket underneath happens to be v4 or v6 at any given time.


I could talk about APIs and how IPv6 addresses are embedded
in protocols, but let me suffice to say that although your
applications may work over both IPv4 and IPv6, this doesn't
mean that the two protocols are completely interchangeable.
NATs and their ALGs as well as applications WILL have to be
changed to make protocols that embed IP addresses work
through NAT-PT (or IPv6 NAT).


First, there really aren't that many apps today that embed IP addresses or 
don't follow the traditional client-server model.  We don't have more of 
them because of v4 NAT.


Second, the ALGs will have to be (re)written anyways to deal with IPv6 
stateful firewalls, whether or not NAT-PT happens.


The other thing is NAT is only a small fraction of the problem;  most of 
the same code will be required to work around stateful  firewalls even in 
v6.


There are different approaches possible for this. Opening up
holes in the firewall is probably better than ALGs.


That's the purpose of an ALG.  Requiring users to modify their home router 
config or put in a change request with their IT department for a firewall 
exception is a non-starter if you want your app to be accepted.  Whether the 
pinhole is needed because of a NAT or a stateful firewall is irrelevant; 
what matters is having an ALG create the pinhole _automatically_.
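The classic case is FTP's PORT command, which embeds an IPv4 address and port in the application payload; the ALG's job is to spot it and open (and, behind NAT, rewrite) the pinhole before forwarding.  A minimal sketch of the parsing half, in Python (function name and structure are illustrative, not from any real ALG):

```python
def parse_ftp_port(cmd: str):
    """Parse an FTP 'PORT h1,h2,h3,h4,p1,p2' command and return the
    embedded (address, port) an ALG would open a pinhole for."""
    args = cmd.strip().split(None, 1)[1]           # drop the PORT verb
    h1, h2, h3, h4, p1, p2 = (int(x) for x in args.split(","))
    addr = f"{h1}.{h2}.{h3}.{h4}"
    port = p1 * 256 + p2                           # port sent as two octets
    return addr, port

# An ALG sitting on the firewall would now permit the inbound data
# connection to addr:port automatically -- no user configuration.
addr, port = parse_ftp_port("PORT 192,168,1,2,4,1")
```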



1. for IPv6-only hosts with modest needs: use an HTTPS proxy
to relay TCP connections



2. for hosts that are connected to IPv6-only networks but with
needs that can't be met by 1., obtain real IPv6 connectivity
tunneled on-demand over IPv6



Neither solves the problem of v6-only hosts talking to v4-only hosts.


Huh? They both do, that's the point. (Although the former doesn't  work 
for everything and the latter removes the "IPv6-only" status  from the 
host if not from the network it connects to.)


The former only handles outbound TCP traffic, which works through pure NAT 
boxes as it is.  The latter "solution" ignores the problem space by telling 
people to not be v4-only anymore.



NAT-PT gives hosts the _appearance_ of being dual-stacked
at very little up-front cost.


Again, you're right. The costs will be ongoing in the form of
reduced transparency (both in the technical/architectural sense
and in the sense that applications behave unexpectedly) and the
continuous need to accommodate workarounds in applications.


Agreed.  People have shown they're willing to accept those costs in a 
v4-only network.  Extending that to the transition phase adds zero _new_ 
costs.  Providing a way out for people if they deploy v6 is a new _benefit_.



Could you please explain what problems you see with the
proxy/tunnel approach and why you think NAT-PT doesn't have
these problems?


NAT-PT works for more apps/protocols.  It definitely has its own problems, 
though.  That's why I view it as a transition technology, not a desirable 
end state.  If it's successful, it will drive itself out of existence.



When v4-only users get sick of going through a NAT-PT
because it breaks a few things, that will be their motivation to
get real IPv6 connectivity and turn the NAT-PT box off -- or
switch it around so they can be a v6-only site internally.


Yeah right. Youtube is going to switch to IPv6 because I have
trouble viewing their stuff through NAT-PT. (Well, they use
flash/HTTP so I guess I wouldn't.)


Either YouTube won't care, in which case NAT-PT obviously isn't as evil as 
people claim, or they will care and they'll deploy v6.  I don't claim to 
know which scenario is correct, but I assert that it's one of the two.



No, what's going to happen is that users will demand IPv4
connectivity from their service providers if IPv6-only doesn't
work well enough.


This is one place where the duopoly will work in our favor -- most people 
(at least in the US) only have two choices, and if neither of them has new 
IPv4 addresses available due to exhaustion, people simply can't buy 
non-NATed v4 access.  The choices will be native v6, NAT-PT to v4, or 
multilayered v4 NAT.


If that doesn't work "well enough", the people at the other end will be 
motivated to deploy native v6 on their end to make their service work better 
than their competitors' -- and all the evil NAT(-PT) stuff is bypassed.



On 1-okt-2007, at 20:15, Stephen Sprunk wrote:
The issue is that introducin

Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-01 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

"Historic" usually refers to "stuff we've managed to mostly stamp
out production use".

So it boils down to "Do you think that once that camel has gotten
its nose into the tent, he'll ever actually leave?".


This particular camel will be here until we manage to get v4 turned off, 
regardless of what status the IETF dogmatists assign it.  Once that happens, 
though, there will be no need for NAT-PT anymore  :-)



(Consider that if (for example) enough ISPs deploy that sort of
migration tool, then Amazon has no incentive to move to IPv6, and
then the ISP is stuck keeping it around because they don't dare
turn off Amazon).


That depends.  If Amazon sees absolutely no ill effects from v6 users 
reaching it via v4, then they obviously have little technical incentive to 
migrate.  OTOH, if that is true, then all the whining about how "evil" 
NAT-PT is, is obviously bunk.  We can't have it both ways, folks: either 
NAT-PT breaks things and people would move to native v6 to get away from it, 
or NAT-PT doesn't break things and there's no reason not to use it.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-01 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>
For the purpose of this particular discussion, NAT in IPv4 is  basically a 
given: coming up with an IPv4-IPv6 transition

mechanism that only works if no IPv4 NAT is present both
defeats the purpose (if we had that kind of address space we
wouldn't have a problem in the first place) and it's completely
unrealistic.

The issue is that introducing NAT in IPv6, even if it's only in the 
context of translating IPv6 to IPv4, for a number of protocols,  requires 
ALGs in the middle and/or application awareness. These  things don't exist 
in IPv6, but they do exist in IPv4. So it's a  better engineering choice 
to have IPv4 NAT than IPv6 NAT.


Of course ALGs will exist in IPv6: they'll be needed for stateful firewalls, 
which aren't going away in even the most optimistic ideas of what an 
IPv6-only network will look like.


I don't see the problem with proxying, except that it only works for  TCP. 
Yes, you need a box in the middle, but that's true of any  solution where 
you have an IPv6-only host talk to an IPv4-only

host.  If both sides use a dual stack proxy, it's even possible to
use address-based referrals. E.g., the IPv4 host asks the proxy
to set up a session towards 2001:db8:31::1 and voila, the IPv4
host can talk to the IPv6 internet. Not possible with a NAT-PT
like solution.


Only one side needs to proxy/translate; if both sides have a device to do 
it, one of them will not be used.  Better, if both sides support the same 
version (either v4 or v6), that would be used without any proxying or 
translating at all.


Tunneling IPv4 over IPv6 is a lot cleaner than translating between  the 
two. It preserves IPv4 end-to-end.  :-)


And when we run out of v4 addresses in a few years, what do you propose we 
do?  It makes little sense to tunnel v4 over v6 until v6 packets become the 
majority on the backbones -- and the only way that'll happen is if everyone 
dual-stacks or is v6-only.  If everyone has v6 connectivity, then why do we 
need to route v4 anymore, even over tunnels?


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-01 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 28-sep-2007, at 6:25, Jari Arkko wrote:

And make it work both ways, v4 to v6 and v6 to v4.
And also don’t call it NAT-PT. That name is dead.



For what it is worth, this is one of the things that I want
to do. I don't want to give you an impression that NAT-PT++
will solve all the IPv6 transition issues; I suspect dual stack
is a better answer. But nevertheless, the IETF needs to
produce a revised spec for the translation case. Fred and
I are organizing an effort to do this.


The problem with NAT-PT (translating between IPv6 and IPv4
similar to IPv4 NAT) was that it basically introduces all the NAT
ugliness that we know in IPv4 into the IPv6 world.


There is no "IPv6 world".  I've heard reference over and over to how 
developers shouldn't add "NAT support" into v6 apps, but the reality is that 
there are no "v6 apps".  There are IPv4 apps and IP apps that are version 
agnostic.  The NAT code is there and waiting to be used whether the socket 
underneath happens to be v4 or v6 at any given time.


Yes, ideally the NAT code wouldn't get used if the socket were v6.  The 
other thing is NAT is only a small fraction of the problem; most of the same 
code will be required to work around stateful firewalls even in v6.



Rather than "solving" this issue by trying harder, I would like
the IETF to adopt the following approach:

1. for IPv6-only hosts with modest needs: use an HTTPS proxy
to relay TCP connections

2. for hosts that are connected to IPv6-only networks but with
needs that can't be met by 1., obtain real IPv6 connectivity
tunneled on-demand over IPv6


Neither solves the problem of v6-only hosts talking to v4-only hosts.

The fundamental flaw in the transition plan is that it assumes every host 
will dual-stack before the first v6-only node appears.  At this point, I 
think we can all agree it's obvious that isn't going to happen.


NAT-PT gives hosts the _appearance_ of being dual-stacked at very little 
up-front cost.  It allows v6-only hosts to appear even if there still remain 
hosts that are v4-only, as long as one end or the other has a NAT-PT box. 
The chicken and egg problem is _solved_.  When v4-only users get sick of 
going through a NAT-PT because it breaks a few things, that will be their 
motivation to get real IPv6 connectivity and turn the NAT-PT box off -- or 
switch it around so they can be a v6-only site internally.


The alternative is that everyone just deploys multi-layered v4 NAT boxes and 
v6 dies with a whimper.  Tell me, which is the lesser of the two evils?


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Question on Loosely Synchronized Router Clocks

2007-09-18 Thread Stephen Sprunk


Thus spake "Xin Liu" <[EMAIL PROTECTED]>

Ideally, yes, a protocol should not rely on clock synchronization
at all. However, to ensure freshness of messages, we don't have
many choices, and clock synchronization seems to be the least
painful one.  So we asked about router clocks on the current
Internet. If normally router clocks are synchronized and we have
a mechanism to detect and fix out-of-sync clocks, ...


Your protocol should not attempt to "fix" clocks that aren't in sync unless 
it's specifically labeled as a time-distribution protocol.  If people wanted 
that, they'd be using NTP.  Do not surprise them with unexpected behavior.



... is it reasonable to assume clock synchronization in the rest
of our design?


In general, it is not.  I can't think of any existing protocol that does, 
actually.


There are basically two common methods for determining "freshness": 
liveness-based and duration-based.  BGP, for instance, uses the model where 
the most recent message regarding a particular route is assumed to be fresh 
until the peer is detected to be dead, in which case all messages from that 
peer become stale.  RIP, on the other hand, uses the model where a message 
is fresh (unless updated) for a certain duration and it becomes stale when 
that duration expires.
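The duration-based model can be sketched in a few lines; the `Route` class and the 180-second lifetime below are illustrative, not taken from any particular protocol:

```python
class Route:
    """RIP-style freshness: a message is fresh for a fixed duration
    after receipt, measured entirely on the receiver's own clock."""
    def __init__(self, prefix: str, lifetime_s: float, received_at: float):
        self.prefix = prefix
        self.expires_at = received_at + lifetime_s   # local clock only

    def is_fresh(self, now: float) -> bool:
        return now < self.expires_at

r = Route("192.0.2.0/24", lifetime_s=180.0, received_at=1000.0)
assert r.is_fresh(now=1100.0)        # 100 s after receipt: still fresh
assert not r.is_fresh(now=1200.0)    # 200 s after receipt: stale
```

Note that the sender's absolute clock never appears anywhere.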


Notice that neither requires the sender or receiver to agree on the time. 
Even in protocols like SIP, which include an explicit validity duration for 
some messages, that duration is specified as the number of seconds after 
transmission, not a fixed point in time.  You don't need to agree on what 
time it is to agree when "180 seconds from now" is.  The receiver takes the 
current local time, adds the duration specified, and that's the local 
expiration time.


HTTP muddles this a bit by allowing absolute time/date expiration; however, 
it requires the server to send what it thinks the current time/date is.  The 
client should then calculate the difference and use it as if it were a 
duration as above.  (i.e. if the server says it's now 1 Jan 1980 00:00:00 and 
an object expires on 31 Jan 1980 00:00:00, and my local time is now 18 Sep 
2007 19:49:00, my client should actually use an expiration of 18 Oct 2007 
19:49:00.)  That's ugly.
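That skew correction works out to a few lines of arithmetic (the dates below match the example above; the function name is illustrative):

```python
from datetime import datetime

def corrected_expiry(server_now, server_expires, local_now):
    """Treat the server's absolute Expires as a duration relative to the
    server's own idea of "now", then re-apply it to the local clock."""
    lifetime = server_expires - server_now    # duration, per the server
    return local_now + lifetime               # expiry, per the local clock

expiry = corrected_expiry(
    server_now=datetime(1980, 1, 1, 0, 0, 0),
    server_expires=datetime(1980, 1, 31, 0, 0, 0),
    local_now=datetime(2007, 9, 18, 19, 49, 0),
)
# The 30-day lifetime lands 30 days after the *local* clock's now:
assert expiry == datetime(2007, 10, 18, 19, 49, 0)
```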


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Question on Loosely Synchronized Router Clocks

2007-09-18 Thread Stephen Sprunk


Thus spake "Xin Liu" <[EMAIL PROTECTED]>

Sorry for the confusion. Let me clarify.

We are interested in a number of questions:
1. Can we assume loosely synchronized router clocks in the
Internet, or we have to make absolutely no assumption about
router clocks at all?


That assumption is _generally_ true, but not often enough that you can rely 
on it.



2. If the router clocks are indeed loosely synchronized, what is
the granularity we can assume? Particularly, we are interested in
whether we can assume router clocks are synchronized within
10 minutes.


My experience is they'll either be within a few seconds or off by several 
days to years.  There's not much middle ground.



3. It's always possible that a router's clock goes wrong. In
practice, how often does this happen?


It's unlikely to "go wrong" to any noticeable degree _if it was ever correct 
in the first place_.  However, many people do not bother setting the clocks 
at all (which will often result in a clock that's off by a decade or more), 
or intentionally set them to be wrong.  A lot of folks had to set their 
clocks back a few years around Y2k, for instance.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Route table growth and hardware limits...talk to the filter

2007-09-10 Thread Stephen Sprunk


Thus spake "Kevin Loch" <[EMAIL PROTECTED]>

Stephen Sprunk wrote:

Sucks to be them.  If they do not have enough PA space to meet
the RIR minima, the community has decided they're not "worthy"
of a slot in the DFZ by denying them PI space.


Not true, there is an ARIN policy that allows you to get a /24 from
one of your providers even if you only need 1 IP address:

NPRM 4.2.3.6

"This policy allows a downstream customer's multihoming
requirement to serve as justification for a /24 reassignment from
their upstream ISP, regardless of host requirements."

http://www.arin.net/policy/nrpm.html


If the PA /24 is under 199/8 or 204-207/8, then the filters being discussed 
would allow their advertisement through, because ARIN's minimum allocation 
for those blocks is /24.  In ARIN's 22 other /8s, the filters would not 
because the minimum is /20 (or /22, for 208/8).
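Such a filter might look like the following, in IOS-style prefix-list syntax.  This is a sketch only: the blocks and minima shown are the ones discussed above, and anyone deploying this would need to build the full list from ARIN's published minimum-allocation data.

```
! Accept down to the RIR minimum per block (illustrative entries only):
ip prefix-list arin-minima permit 199.0.0.0/8 le 24    ! /24 minimum
ip prefix-list arin-minima permit 204.0.0.0/6 le 24    ! 204-207/8, /24 min
ip prefix-list arin-minima permit 208.0.0.0/8 le 22    ! /22 minimum
! ...remaining ARIN /8s with "le 20" for the /20 minimum...
```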


Let's also keep in mind that if other folks block a PA more-specific, the 
site doesn't lose connectivity unless they lose their upstream connection to 
the LIR that assigned them the block.  I suspect that many of them already 
see that behavior today, at least partially; we're really discussing making 
it a near-complete outage versus a semi-outage.  That's life if you don't 
qualify for a real routing slot via PI.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Route table growth and hardware limits...talk to the filter

2007-09-10 Thread Stephen Sprunk


Thus spake "Jon Lewis" <[EMAIL PROTECTED]>

The trouble is, it turns out there are a number of networks where
CIDR isn't spoken.  They get their IP space from their RIR, break
it up into /24s, and announce those /24s (the ones they're using
anyway) into BGP as /24s with no covering CIDR.


IMHO, such networks are broken and they should be filtered.  If people doing 
this found themselves unable to reach the significant fraction of the Net 
(or certain key sites), they would add the covering route even if they were 
hoping people would accept their incompetent/TE /24s.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: Route table growth and hardware limits...talk to the filter

2007-09-10 Thread Stephen Sprunk


Thus spake "Forrest" <[EMAIL PROTECTED]>

With the option of filtering on the RIR minimums, I'm not terribly
worried about breaking connectivity to the people announcing all
/24s instead of their /19.  Broken connectivity for them is probably
the only way they will ever look at cleaning up their announcements.
 The organizations that are hurt unnecessarily by filtering on the
RIR minimums are the ones multi-homing with smaller PA space


Sucks to be them.  If they do not have enough PA space to meet the RIR 
minima, the community has decided they're not "worthy" of a slot in the DFZ 
by denying them PI space.  OTOH, most providers are happy to give out as 
much PA space as you need, as long as you pay for it.  If you only have a 
/25 today and you need a /24 for your PA route to be heard, call your 
upstream's sales droid and request a /24.



or announcing a few more specifics here and there for traffic
engineering.


Such folks would lose the effects of their TE, if their TE routes are longer 
than RIR minima, but not connectivity in general.  Also, TE is only useful a 
few AS hops away, so the filter being discussed could be combined with 
another solution being proposed to allow longer-than-RIR-minima routes with 
a short AS PATH.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: [funsec] The "Great IPv6 experiment" (fwd)

2007-09-05 Thread Stephen Sprunk


Thus spake "Deepak Jain" <[EMAIL PROTECTED]>

Crap. Now people are going to start asking if the ipv6 platform
does ipv6 forwarding in hardware or software. :|


We'll all have to answer "hard"ware of course, since admitting we forward 
the Experiment's traffic in "soft"ware would be rather embarrassing, right?


S

P.S.  I'm writing this from behind a monopoly ISP who deliberately blocks 
all proto 41 traffic, and thus 6to4, so I have no idea what content, if any, 
the Experiment is actually providing...  Anyone want to give me a Teredo 
relay for "research" purposes?  :)


Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: For want of a single ethernet card, an airport was lost ...

2007-08-20 Thread Stephen Sprunk


Thus spake "Bill Stewart" <[EMAIL PROTECTED]>

While the goals of the system, as identified by the GAO, include
a brief phrase about "facilitate legitimate travel and trade", the
rest of the report appears to entirely ignore it.
... it appears that the designers of both the technical and
operational sides are also ignoring the goal of facilitating
legitimate travel and trade.
... Certainly the operational side didn't have processes for
supporting travellers with reasonable-looking papers in the
event of a computer failure.


The problem is that if you have a second path of entry with lesser security 
protocols, attackers will find a way to get themselves onto that path.  For 
instance, imagine the terrorists have papers that look legit but they know 
won't pass computer cross-references; any time they want to come in, they 
would just disrupt the computer network and force the agents to rely on the 
papers alone.  That's why people get stuck on the runways waiting for the 
computers to come back up.


Such secondary procedures are okay in the banking world, where you can back 
out transactions that an audit reveals are fraudulent after the fact.  The 
same does not apply to letting persons across a border where you can't 
retroactively deny them entry after they've killed a bunch of people (and, 
most likely, martyred themselves).  It's the same problem with voting 
systems, actually: the anonymity requirements mean all security hinges on 
making sure only authorized people vote, and only once at that; you can't 
back out fraudulent votes after they're cast, which is why all of the 
attacks are on the authorization system and being undetected in an audit 
doesn't matter.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking 





Re: An Internet IPv6 Transition Plan

2007-07-25 Thread Stephen Sprunk


Thus spake "Adrian Chadd" <[EMAIL PROTECTED]>

I'm not sure what your definition of "really tiny" is, but out here
IPs are a dollar or two each a year from APNIC. I'm sure ARIN's IP
charges aren't $0.00.


The 73 "Xtra Large" LIRs that consume 79% of ARIN's v4 space today are 
paying no more than USD 0.03 per IP per year.  That's not quite zero, but 
it's close enough that the effect is the same.  Until the cost of v4 space to 
these folks is more than a rounding error, they have absolutely no incentive 
to conserve.  It doesn't matter what the other 2550 LIRs do because they're 
insignificant factors in overall consumption.
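The per-address arithmetic is easy to check.  Assuming the 2007-era ARIN "X-large" subscription fee of USD 18,000/yr (the fee figure and example prefix sizes here are assumptions for illustration):

```python
XLARGE_FEE = 18_000            # USD/yr, assumed X-large tier fee

def cost_per_ip(prefix_len: int) -> float:
    """Annual per-address cost for an org holding a single /prefix_len."""
    return XLARGE_FEE / 2 ** (32 - prefix_len)

# The cost falls toward zero as holdings grow: a /12 holder is already
# under 2 cents per IP per year, and a /10 holder under half a cent.
assert cost_per_ip(12) < 0.03
assert cost_per_ip(10) < 0.005
```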


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: The Choice: IPv4 Exhaustion or Transition to IPv6

2007-06-28 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>
The Comcasts of this world burn addresses by the millions. If they  can't 
have new ones for (almost) free, they'll have to stick multiple  customers 
behind a single IPv4 address. If you have to share your IP  address with 
several of your neighbors, it becomes attractive to add  IPv6 to the mix 
to make peer to peer stuff, including VoIP, work more  reliably. QED.


More likely, one day $BIG_ISP is going to go to ARIN with justification for 
another /12 or so and they're going to get a few hundred /20s instead 
because that's all that's left.  Lather, rinse, repeat, and watch the v4 DFZ 
implode.  This will happen _before_ RIR exhaustion hits.


Hopefully, the $BIG_ISP folks of the world see this coming and are starting 
to tell CPE vendors that they will not buy/resell anything that doesn't do 
v6 after some fixed date.  The abject lack of v6 support in ISP-supplied CPE 
devices shows that if they are, that date is not yet imminent.


It's great we all have v6-capable hosts and core routers now, but that isn't 
enough; it's the CPE boxes (firewalls, DSL modems, etc.) that're going to 
eat us all alive in 3-4 years if things don't change Real Soon Now(tm). 
Kudos to Apple for being the first vendor to wake up; let's hope the others 
follow their lead in time to make a difference.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 









Re: The Choice: IPv4 Exhaustion or Transition to IPv6

2007-06-28 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>
How about this: when the OS only has an IPv6 address, and an  application 
wants to talk to an IPv4-only destination, automatically  proxy the TCP 
session through an HTTPS proxy. This catches

anything that uses TCP and doesn't need to know its own IPv4
address (hard to know if you don't have one) which would be
upwards of 95% of all protocols in widespread use. So we only
have to fix that other 5%.


If you're going to go that route, you might as well just deploy a v6-to-v4 
NAT device.  It'll break all the same protocols (though you could add ALG 
code for popular ones if desired) and, for those that won't, doesn't require 
any end host or application knowledge of what's going on.


I've bounced around some ideas privately on how such devices would work, 
probably have it defined well enough now to make a draft, and even managed 
to come up with a snappy backronym for it, but the IETF does not appear to 
be interested in any v6 transition model that doesn't require dual-stacking 
every single existing host on the Internet before the first v6-only host 
appears -- and certainly not one that's an adaptation of that evil NAT 
stuff.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: UK ISPs v. US ISPs (was RE: Network Level Content Blocking)

2007-06-09 Thread Stephen Sprunk


Thus spake "Kradorex Xeron" <[EMAIL PROTECTED]>

From my view, ISPs should continue their role as "passing the
packets" and not say what their users can or cannot view. It's
when ISPs start interfering with what their users do that we
will run into legal, political, and other issues that I'm sure
none of us want to see.


IIRC, AOL got whacked by a court years ago because they censored some chat 
rooms and not others.  The court held that since they censored some content, 
they lost their status as a common carrier and were liable for other content 
they didn't censor (either by intent or mistake).  This was a particularly 
interesting case, since the implication was that ISPs who _don't_ censor 
content _are_ common carriers, which I don't think has otherwise been 
touched upon in the US.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Security gain from NAT

2007-06-06 Thread Stephen Sprunk


Thus spake "Roger Marquis" <[EMAIL PROTECTED]>

I, for one, give up. No matter what you say I will never
implement NAT, and you may or may not implement it if people
make boxes that support it.


Most of the rest of us will continue to listen to both sides and
continue to prefer NAT, in no small part because of the absurd
examples and inconsistent terminology NATophobes seem to feel is
necessary to make their case.


The thing is, with IPv6 there's no need to do NAT.  What vendors have (so 
far) failed to deliver is a consumer-grade firewall that does SI with the 
same rules on by default that v4 NAT devices have.  Throw in DHCP PD and 
addressing (and renumbering) are automatic.  This is simpler than NAT 
because no "fixup" is required; a v6 firewall with SI and public addresses 
on both sides just needs to inspect packets, not modify them.
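The default such a consumer v6 firewall would ship with is essentially "allow out, allow return traffic, drop unsolicited in", with no address rewriting anywhere.  A sketch in ip6tables syntax (illustrative only; exact syntax varies by platform, and a real CPE default would also need NDP and rate-limited ICMPv6 handling):

```
# Default-deny stateful IPv6 forwarding, no NAT:
ip6tables -P FORWARD DROP
ip6tables -A FORWARD -i $LAN -o $WAN -j ACCEPT                       # outbound OK
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT  # return traffic
ip6tables -A FORWARD -p icmpv6 -j ACCEPT                             # don't break PMTUD
```

The same connection-tracking rules a v4 NAT box applies, minus the rewriting.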


The same device will probably be a v4 NAT device; nobody is trying to take 
that away because it's a necessary evil.  However, NAT in v6 is not 
necessary, and it's still evil.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov




Re: IPv6 Advertisements

2007-06-02 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>
So I expect people who are in your position to start requesting  blocks 
larger than /32 or /48 in order to be able to deaggregate, or  even 
request multiple independent PI blocks. It will be interesting  to see 
what this means for the number of PI requests and speed at  which the 
global IPv6 routing table grows.


This is the motivation for the suggestion that folks accept a few extra bits 
for routes with a short AS_PATH length; that gets you the benefits of TE 
without cluttering distant ASes with deaggregates.  This may also be 
motivation for RIR policies that explicitly disallow TE as a justification 
for a larger-than-minimum block.



... so it's not necessary for a router on one side of the globe to
have all the more specifics that are only relevant on the opposite
side of the globe.  ... common sense suggests that there is some
middle ground where it's possible to have address space that's
at least portable within a certain region, but we get to prune the
routing tables elsewhere.


In theory this can be done at the RIR region level; what's to stop RIPE 
members from blocking all ARIN routes and just having a top-level route for 
each of ARIN's blocks pointing towards North America, and ARIN members 
blocking all RIPE routes and having a top-level route for each of RIPE's 
blocks pointing towards Europe?  If we can't get this working at a 
continental level, considering how good the aggregation is on paper, how do 
we ever expect to get it working within a region?


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: NANOG 40 agenda posted

2007-06-01 Thread Stephen Sprunk


Thus spake "Randy Bush" <[EMAIL PROTECTED]>

the average number of v4 prefixes per AS is ~10, and it's
rising.  In v6, the goal is that every PI site can use a single
prefix**, meaning the v6 routing table will be at least one (and
two or even three eventually) orders of magnitude smaller
than the v4 one.


how much of the v4 prefix count is de-aggregation for te or by
TWits?


A quick look at this week's CIDR Report says 35.9%, or 78,738 routes.

[Update to earlier stats: The current v4 prefix/AS ratio is 8.7.
However, there are ~11k ASes only announcing a single v4 route, so that 
means the other ~14k ASes are at a v4 ratio of 14.3.  In contrast, the 
current v6 ratio is 1.1 and the deaggregate rate is 1.2%.]



why won't they do this in v6?


The simplistic answer is that nearly all assigned/allocated blocks will be 
minimum-sized, which means ISPs will be capable of filtering deaggregates if 
they wish.  Some folks have proposed allowing a few extra bits for routes 
with short AS_PATHs to allow TE to extend a few ASes away without impacting 
the entire community.


While many have derided the "classful" nature of IPv6 policies, the above 
was one of the reasons that it's being done.  The other, obviously, is that 
IPv6 is big enough we can do it that way and skip all the administrative 
hassle of worrying about how much space people "need" and focus on whether 
they "need" a routing slot (as much as the RIRs pretend they don't care 
about routability).


I said "simplistic" above because there will be a few extremely large orgs 
that will end up getting larger-than-minimum blocks, and they could 
deaggregate if they want to -- or deaggregate more bits than other folks get 
to.  There's not much that can be done about that (without vendors inventing 
cool new knobs), and I already addressed why it shouldn't be that big a deal 
anyways in the ** footnote you snipped.


However, this relies on RIRs rejecting "but we need to deaggregate" as 
justification for larger-than-minimum blocks.  OTOH, the community may see 
how small the v6 table is and decide that N bits of deaggregation wouldn't 
hurt.  After all, with ~25k ASes today, and router vendors claiming to be 
able to handle 1M+ routes, it seems we could tolerate up to 5 bits of 
deaggregation -- and 3 bits would leave us with a table smaller than v4 has 
today.
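The arithmetic behind that estimate is simple enough to sanity-check. A rough sketch (the 25k AS count, 1M-route router capacity, and ~220k v4 table size are the approximate figures from the text, not authoritative data):

```python
# Rough check of the deaggregation headroom discussed above; the figures
# are the approximate 2007 numbers from the text, not authoritative data.

def table_size(as_count: int, deagg_bits: int) -> int:
    """Worst-case route count if every AS deaggregates its block by N bits."""
    return as_count * (2 ** deagg_bits)

AS_COUNT = 25_000

assert table_size(AS_COUNT, 5) == 800_000  # 5 bits: fits a 1M-route router
assert table_size(AS_COUNT, 3) == 200_000  # 3 bits: smaller than v4's ~220k
```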


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: NANOG 40 agenda posted

2007-06-01 Thread Stephen Sprunk


Thus spake "Vince Fuller" <[EMAIL PROTECTED]>

Yes, as NAT becomes ubiquitous, a larger number of private
networks will be behind ever smaller prefixes that are assigned
to sites so the per-site prefix length will decrease.


I think you mean increase.  Even without NAT, this is going to happen 
because big blocks will no longer be available, even if someone qualifies 
for one, and multiple smaller blocks will need to be assigned.  Route count 
will grow increasingly faster as we approach and pass exhaustion as RIRs (or 
the black markets) have to chop up big blocks into smaller and smaller 
chunks to meet our needs.



The logical end state for this would be /32s.  In some cases,
multi-homed end-sites may wish to have those /32s advertised
into the global routing system. If, on the other hand, those end
sites were to transition to ipv6, they would instead obtain "PI"
/48s and advertise those into the global routing system. How
is the former any worse than the latter?


For one thing, the former further weakens the end-to-end principle and 
entrenches the idea of "servers" on public addresses and "clients" behind 
NATs, with those clients becoming second-class citizens.  Today, this 
practice is voluntary for most users and under their control, so users are 
to blame for their own segregation, but tomorrow it will be forced on them 
by their ISPs*, which is a BadThing(tm).


Also, a site will need dozens, perhaps reaching hundreds, of v4 prefixes to 
address all of their hosts if they do use public addresses, e.g. content 
hosters.  Already, the average number of v4 prefixes per AS is ~10, and it's 
rising.  In v6, the goal is that every PI site can use a single prefix**, 
meaning the v6 routing table will be at least one (and two or even three 
eventually) orders of magnitude smaller than the v4 one.


(* If those ISPs also wisely deploy native v6 at the time they deploy NATs 
for v4, customers would be motivated, or would at least have the option, of 
getting out of the NAT jail by upgrading to v6.  That might end up being the 
final straw that makes the masses move -- or it might have no effect except 
on geeks who'd find a way to use v6 even if their ISP didn't offer it 
natively.)


(** Except perhaps for a handful of gargantuan ISPs that manage to assign 
more than a /32's worth of addresses, most likely residential DSL/cable 
providers who're going to burn through millions of /56s and /64s per month 
when they roll out v6.  Even so, those ISPs are still going to be rare 
enough they shouldn't affect the average number of prefixes per AS 
noticeably.)



If you think about it, the NAT approach actually offers the
possibility of improved routing scalability: site multihomed with
NATs connected to each of its providers could use topologically-
significant (read "PA") global addresses on the NATs while
using the same private address space on their network. This
reduces any renumbering problem to just that for a NAT that
moves (or is replaced) during a provider change.


This is also what some of the IETF's ideas on v6 multihoming amount to --  
though at least they leave ports and the low N bits of the address alone and 
thus don't break two-way connectivity, just protocols/apps that embed raw 
addresses in their payload, which is a marked improvement over v4 NAT if not 
quite perfect.  One might question that approach, and it's one of the uses 
feared for ULAs: since you're translating the top bits anyways, you might as 
well use private addresses on the back side.  Some who opposed SLAs and ULAs 
did so because they realized such addresses enable, and perhaps even 
encourage, IPv6 NAT "solutions", just as RFC 1918 enables and encourages IPv4 NAT.



Yes, this sort of poor man's identifier/locator separation has
all sorts of ugliness but it can probably be made to work. It may
even be the path of least resistance versus fixing ipv6's routing
scalability and deploying ipv6.


Unless we fix IPv6's routing and get a real EID/RID split, that's what IPv6 
is going to _be_ for folks too small to get PI or who live in regions that 
don't have PI.  That's not what the IETF promised us, and it makes many 
folks wonder why they should pay the costs of upgrading if the situation 
with v6 won't be much better than it is with v4+NAT.  ARIN partially solved 
this, i.e. for the larger sites that will probably constitute a critical 
mass of clients and servers, but it doesn't solve the problem for the 
growing underclass of folks who are and always will be stuck on PA and have 
to suffer all the bad side effects of that.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-06-01 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

If an ISP wants to aggregate their IPv6 traffic, they will announce
one block for their entire global network. Then, internally, they
will assign /48s in LA from a western USA internal allocation
and /48s in Hamburg from a northwestern Europe internal
allocation.


Bad example, since (a) blocks from different RIRs aren't going to aggregate 
and (b) RIPE doesn't assign /48s anyway.


If we were talking about a company with sites on the east and left coasts of 
the US, then IMHO they should get a single /48 if they have internal 
connectivity (single site) and two /48s if not (two sites).


However, I wouldn't argue (much) with ARIN issuing a /47 even in the former 
case on the logic that such constitutes two "sites", particularly if they 
had separate management; it's when we get to the level of hundreds or 
thousands of locations (with internal connectivity) that I have a problem 
with calling each location a "site".  Below that, it doesn't do much harm.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Jeroen Massar" <[EMAIL PROTECTED]>

Stephen Sprunk wrote:

First of all, there's disagreement about the definition of "site",


The general definition of a site that I find appropriate is and
works pretty well as a rule of thumb:

"A site is defined by it having a single administrative domain".


That's a good rule of thumb; I'm curious how close it is to what ARIN staff 
uses when evaluating requests, though.  Or if staff let the requestor define 
"site" themselves since policy doesn't.



As such, if you have for example an NREN, most likely every
University will have their own Networking Department, with their
own administrators of that network. As such, every university is
a site.


That's reasonable, if for no other reason than the number of universities is 
manageable and there's no doubt they're independently managed -- follow the 
money.  However, I'd argue that NREN is an LIR and the universities are 
their customers.



When the University is very large, it will have multiple
administrative portions, eg generally Computer Science will
have their own folks managing the network.


That can be handled by subdividing the /48 that goes to the U.


When you have a large company, the company is also split
over several administrative sites, in some cases you might
have a single administrative group covering several sites
though, this allows you to provide them with a single /48 as
they are one group they will know how to properly divide that
address space up.


In my experience, there tends to be one corporate IT group that handles 
stuff like connectivity to other orgs, and several subordinate IT groups 
that manage their part of the network.  That can be handled by chopping up 
their /48.


In the case of the rare (typically multinational) org where the groups run 
independent networks that talk BGP to each other and/or have their own 
uplinks, it'd be fair for ARIN to consider each group a separate site or 
even org if requested.  Ditto if a single org had multiple separate networks 
but only one IT group (e.g. hosters).



It comes sort of close to an AS actually, except that an AS
tends to cover a lot of sites.


An end-user AS tends to cover a lot of locations.  By definition, it 
describes an area with a single coherent routing policy and administrator.


An ISP AS may cover a lot of sites because leaf sites are part of their 
upstream's AS as far as routing is concerned.



If you have 40k sites, then a /32 is a perfect fit for you. There
are not too many organizations that come close to that though,
making /32's excellent for most organizations, except the very
small ones.  These can request a /48, or something up to a /40
for that purpose.


Let's take our canonical example of McDonald's.  Does each store (let's 
assume they're all company-owned, not franchisees, for a moment) really 
count as a "site"?  It's definitely a location, but if there's a single IT 
group that manages all 100k or so of them, I'd argue they're all one "site", 
certainly one org, and not an LIR.  Give each store a /60 (to make rDNS easy 
and allow for growth), and McD's as a whole would get a /40 or so (to allow 
for internal aggregation).
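The numbers work out comfortably; a quick sketch of the prefix arithmetic (the /60-per-store and /40-per-org sizes are just the illustrative figures from the paragraph above):

```python
# Prefix arithmetic for the McDonald's example above; the /60-per-store and
# /40-per-org sizes are the illustrative figures from the text.

def prefix_count(container_len: int, child_len: int) -> int:
    """How many /child_len prefixes fit inside one /container_len prefix."""
    return 2 ** (child_len - container_len)

assert prefix_count(60, 64) == 16          # 16 /64 subnets per store
assert prefix_count(40, 60) == 1_048_576   # over 1M stores fit in one /40
```

So ~100k stores would use under a tenth of the /40, leaving ample room for internal aggregation.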


However, as I noted, some folks would consider McD's an LIR and want to give 
them a /30 or shorter.  I think that's wrong, but policy doesn't clearly say 
either way.  Looking at WHOIS doesn't help much, since many obvious end-user 
orgs like Cisco got LIR allocations back when there was no end-user PIv6 
policy; who knows what they'd be told today if they applied with the same 
rationale.  (Though presumably they wouldn't try since assignments are far 
cheaper to renew)


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Stephen Sprunk" <[EMAIL PROTECTED]>
Someone recently posted a link (either on PPML or here -- I can't find it 
now) that showed ARIN's minima for the various v4 and v6 blocks.  The v4 
ones were all over the map, but there are relatively few v6 blocks and all 
of them are /32 except for one that's /48.


This is the page I was thinking of:

http://www.arin.net/reference/ip_blocks.html

The v6 list doesn't show explicit minima like the v4 list, but it's /32 for 
non-micro allocations (the block for which, unfortunately, isn't noted on 
this page) and /48 for direct assignments under current policy.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Donald Stahl" <[EMAIL PROTECTED]>

Current policy allows for greater-than-/48 PI assignments if the
org can justify it.  However, since we haven't told staff (via
policy) what that justification should look like, they are currently
approving all requests and several orgs have taken advantage
of that.


I can't imagine what an end-user could come up with to justify
more than a /48 but what do I know.


First of all, there's disagreement about the definition of "site", and some 
folks hold the opinion that means physical location.  Thus, if you have 100 
sites, those folks would claim you have justified 100 /48s (or one /41). 
Other folks, like me, disagree with that, but there are orgs out there that 
have tens of thousands of locations with a need for multiple subnets per 
location, and that could justify more than a /48 as well via pure subnet 
counts.



And if ARIN's primary goal is to prevent de-aggregation then
shouldn't there be another fixed allocation size (/40) and block
to prevent this?


ARIN's goal in v6 is to try to issue blocks so that aggregation is 
_possible_, by reserving a larger block to allow growth, but ARIN can't 
prevent intentional (or accidental) deaggregation, and there's too many 
folks who want to deaggregate for TE purposes to pass a policy officially 
condemning it.



So, it's entirely possible someone could get a /40 and
deaggregate that into 256 routes if they wanted to.  Given
the entire v6 routing table is around 700 routes today, it's
obviously not a problem yet :)


Obviously that's short sighted :) As for the deaggregation-
anyone deaggregating a /40 into 256 routes should have
their AS permanently blackholed :)


I'd agree in principle, but all it takes is a brief look at the CIDR report 
and you'll see that nobody does anything in response to far more flagrant 
examples in v4.  If everyone aggregated properly, we could drop over a third 
of the current v4 table.  This makes me extremely suspicious of ISPs that 
continually whine about routing table bloat whenever loosening policies for 
small orgs is discussed.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Donald Stahl" <[EMAIL PROTECTED]>

The upside is that in the block you're expected to accept /48s,
nobody will have a /32.  The downside is that anyone who gets
a larger-than-minimum sized allocation/assignment can
deaggregate down to that level.


I don't think ARIN is planning on giving out blocks larger than
a /48 but smaller than a /32- at least that was the impression I
got. End sites get a /48- ISPs get a /32 or larger- and that's it
(I could certainly be wrong). As such, deaggregation in the /48
block should not be an issue because no one will have more
than a /48 in the first place.


Current policy allows for greater-than-/48 PI assignments if the org can 
justify it.  However, since we haven't told staff (via policy) what that 
justification should look like, they are currently approving all requests 
and several orgs have taken advantage of that.


So, it's entirely possible someone could get a /40 and deaggregate that into 
256 routes if they wanted to.  Given the entire v6 routing table is around 
700 routes today, it's obviously not a problem yet :)


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Microsoft and Teredo

2007-05-31 Thread Stephen Sprunk


Thus spake "Adrian Chadd" <[EMAIL PROTECTED]>

On Thu, May 31, 2007, JORDI PALET MARTINEZ wrote:

In windows, you have IPv6 firewall, so even if Teredo traverses
the "IPv4 security", there is still something there.

A good description of all this is available at:
http://www.microsoft.com/technet/network/ipv6/teredo.mspx


I've read that; but again enterprise and ISPs may impose restrictions
on the types of traffic to/from end users, and this circumvents that.
Host-based firewalls are not the be all or end all of network security.


The simplistic answer is that a site with IPv4-only security devices has to 
choose whether they're going to allow or block all Teredo/6to4 traffic.  If 
they want finer control, they need to upgrade to a native v6 network and 
native v6 security devices.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Randy Bush" <[EMAIL PROTECTED]>

small site.  so public servers provide multiple and diverse
services.  if a hostname has a v6 address, then all services
must be v6 capable because clients do not retry the A record.


This seems to argue for having "service" hostnames, which has been standard 
practice at many sites for a long, long time.  For instance, if you have a 
single box which does mail and news, and you have v6 mail but only v4 news, 
then mail.example.com should have both A and AAAA records but 
news.example.com should have A records only.


(Yes, this means you can't just CNAME the service hostname to the real 
hostname, but there are several other strategies.)
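The service-hostname workaround exists because naive clients stop at the AAAA answer. A client that walks every address family the resolver returns avoids the problem entirely; a minimal sketch using only the standard library (the fallback logic here is a generic illustration, not any particular stack's implementation):

```python
import socket

def connect_any(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Try every resolved address (AAAA and A) in order until one connects."""
    last_err = None
    for family, stype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(addr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # fall through to the next resolved address
    raise last_err if last_err else OSError("no addresses for " + host)
```

A client built this way keeps working whether a given service name publishes AAAA records, A records, or both.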


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IPv6 Advertisements

2007-05-31 Thread Stephen Sprunk


Thus spake "Brandon Butterworth" <[EMAIL PROTECTED]>

> Don't give people an excuse to deagg their /32


RIPE may only give out /32's but ARIN gives out /48's so there
wouldn't be any deaggregation in that case.


That's not what I said. If /48s are accepted by * then people with
a /32 or whatever will deagg to /48.


The general rule, for both v4 and v6, is that people should filter at 
whatever the minimum allocation/assignment size is for each RIR block.  It's 
also been suggested that you accept a few more bits for routes with a short 
AS_PATH length to help with TE, but that's optional.
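That suggestion is easy to express as a filter rule. A sketch of the logic (the two-bit relaxation and two-hop AS_PATH threshold are arbitrary example values, not anything any RIR or operator has standardized):

```python
# Illustrative filter: accept routes no longer than the RIR block minimum,
# relaxed by a few bits of TE deaggregation when the origin is nearby.
# te_bits and te_path_limit are example values, not standardized numbers.

def accept_route(block_min: int, route_len: int, as_path_len: int,
                 te_bits: int = 2, te_path_limit: int = 2) -> bool:
    limit = block_min
    if as_path_len <= te_path_limit:  # origin only a few ASes away
        limit += te_bits
    return route_len <= limit

assert accept_route(32, 32, as_path_len=5)      # exact minimum: always in
assert not accept_route(32, 34, as_path_len=5)  # distant deaggregate: out
assert accept_route(32, 34, as_path_len=1)      # nearby TE deaggregate: in
```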


Someone recently posted a link (either on PPML or here -- I can't find it 
now) that showed ARIN's minima for the various v4 and v6 blocks.  The v4 
ones were all over the map, but there are relatively few v6 blocks and all 
of them are /32 except for one that's /48.


The upside is that in the block you're expected to accept /48s, nobody will 
have a /32.  The downside is that anyone who gets a larger-than-minimum 
sized allocation/assignment can deaggregate down to that level.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: RTT from NY to New Delhi?

2007-05-16 Thread Stephen Sprunk


Thus spake "Tim Franklin" <[EMAIL PROTECTED]>

Going east from NY, you'd add 70 or 80ms to that - and a quick
look suggests routes going west instead.  (Test from home to .IN
NS goes London -> NY -> West Coast -> Singtel -> India, for
~370ms)

It's starting to head a bit towards walkie-talkie mode for VoIP,
but not too bad other than that...


You'd be surprised what people are willing to accept when the alternatives 
are worse.  I had a customer install VSAT in India just so they could use IP 
phones -- and their only gateway was in the US.  Apparently the audio 
quality and reliability of the PTT was so bad that they were willing to 
_stand in line_ to use the two IP phones there to make calls, even with the 
walkie-talkie effect in full force.  It was cheaper too, despite the 
outrageous cost of VSAT bandwidth.


US telcos and engineers tend to overestimate the importance of audio quality 
and reliability on VoIP; we have an entire generation of people now who have 
been trained by wireless carriers to _expect_ to pay through the nose for 
bad quality.  VoIP across the Internet, even with no QoS at all, looks great 
in comparison because it's cheaper and sounds better.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: ISP CALEA compliance

2007-05-10 Thread Stephen Sprunk


Thus spake "Donald Stahl" <[EMAIL PROTECTED]>

Working hard to defend privacy does not automatically equal
protecting people who exploit children- and I'm getting sick and
tired of people screaming "Think of the children!" It's a stupid,
fear mongering tactic- and hopefully one day people will think
of it in the same way as crying wolf.


Ditto; I'm sick of all the programs that are pushed with that justification. 
People are all too happy to give up their privacy to "protect" kids, rather 
than just doing a decent job of parenting themselves.



If you don't have anything to hide- then why should you care right?

On the other hand- these sorts of laws may just be enough to
push everyone to use encryption- and then what will LE do?


Arrest everyone!  Have you forgotten the court ruling a year or two ago that 
using PGP was evidence of covering up a crime?


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: IP Block 99/8 (DHS insanity - offtopic)

2007-04-23 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

On Mon, Apr 23, 2007 at 05:23:03PM -0400, Sandy Murphy wrote:

You might try taking a look at the various presentations at
NANOG/RIPE/ARIN/APNIC/APRICOT about the whole idea.
Central point: the entity that gives you a suballocation of its
own address space signs something that says you now hold it.

No governments involved.


no problemo...  when i hand out a block of space, i'll expect
my clients to hand me a DS record ...  then I sign the DS.
and I'll hand a DS to my parent, which they sign.
That works a treat today (if you run current code)
and gives you exactly what you describe above.


That roughly matches what I expect, but the process seems backwards.  If 
IANA hands, say, 99/8 to ARIN, I'd expect that to come with a certificate 
saying so.  Then, if ARIN hands 99.1/16 to an ISP, they'd hand a certificate 
saying so to the ISP, which could be linked somehow to ARIN's authority to 
issue certificates under 99/8.  And so on down the line.  Then, when the 
final holder advertises their 99.1.1/24 route via BGP, receivers would check 
that it was signed by a certificate that had a verifiable path all the way 
back to IANA.


Of course, one must be prepared to accept unsigned routes since they'll be 
the majority for a long time, which means you still run afoul of the 
longest-match rule.  If someone has a signed route for 99.1/16, and someone 
else has unsigned routes for one or more (or all) of 99.1.0/24 through 
99.1.255/24, what do you do?  Do you block an unsigned route from entering 
the FIB if there's a signed aggregate present?  Doesn't that break common 
forms of TE and multihoming?  If you don't, doesn't that defeat signing in 
general since hijackers would merely need to use longer routes than the real 
holders of the space?
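To make the dilemma concrete, here is a toy receiver policy, purely illustrative (no deployed validation scheme's semantics are implied): in strict mode, drop unsigned more-specifics covered by a signed aggregate; otherwise accept everything.

```python
import ipaddress

def usable(route: str, signed: bool, signed_aggregates, strict: bool) -> bool:
    """Toy policy: in strict mode, reject an unsigned route that falls
    under a signed aggregate; otherwise accept everything."""
    if signed or not strict:
        return True
    net = ipaddress.ip_network(route)
    return not any(net.subnet_of(ipaddress.ip_network(agg))
                   for agg in signed_aggregates)

# Strict mode drops the unsigned /24 under signed 99.1/16 -- exactly the
# TE/multihoming breakage described above; loose mode lets a hijacker's
# longer unsigned route win on longest match.
assert not usable("99.1.1.0/24", False, ["99.1.0.0/16"], strict=True)
assert usable("99.1.1.0/24", False, ["99.1.0.0/16"], strict=False)
```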


To paraphrase Barbie, "security is hard; let's go shopping!"

S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: DHCPv6, was: Re: IPv6 Finally gets off the ground

2007-04-16 Thread Stephen Sprunk


Thus spake "Jeroen Massar" <[EMAIL PROTECTED]>

But for the rest it all seems pretty fine to me...

or do you mean that those ibahn things see "NOERROR" and
then no answers, thus wrongly cache that as label has 0 answers
at all? or what I mention above with the redirect?


They do the same thing for requests that don't involve a CNAME, so they're 
either choking on the AAAA query or a NOERROR response in general; it's hard 
to tell which since I can only see one side of their box.  I also don't know 
how they react when you try to contact a site that _does_ have AAAA records, 
since no major content site has them (which is a whole 'nother discussion).


What's weird is that they don't just return a 0-record NOERROR when you do 
the follow-up A query, which would be the most logical failure mode -- they 
return an authoritative answer of 0.0.0.1 instead.


Of course, dealing with idiot consumers on a regular basis, their tech 
support folks insist the problem is on the user's machine and that it's a 
bug in their v6 stack, despite Ethereal captures showing the bad DNS 
response packets coming from their box...


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: DHCPv6, was: Re: IPv6 Finally gets off the ground

2007-04-16 Thread Stephen Sprunk


Thus spake "Jeroen Massar" <[EMAIL PROTECTED]>

Fred Heutte wrote:
> I spent a couple hours in a hotel recently trying to untangle why
> using the DSL system I could see the net but couldn't get to any
> sites other than a few I tried at random like the BBC, Yahoo
> and Google.
>
> That's because they are among the few that apparently have
> IPv6 enabled web systems.

They don't have "IPv6 enabled web systems", a lot of people
wished that they did. What your problem most likely was, was
a broken DNS server, which, when queried for an AAAA, simply
doesn't respond.


In fact, it's one particular vendor (whose name I haven't been able to 
discover) of pay-for-Internet transparent proxy/NAT devices, such as those 
commonly used in hotels and at hotspots, that's messing the whole thing up. 
They return an address of 0.0.0.1 in response to any DNS query from an 
IPv6-capable client, and they've decided to train their service providers' 
tech support departments to tell customers to turn off v6 rather than fix 
what should be a very simple bug.


(Granted that's a passable workaround for a few months while a vendor 
prepares a patch, but this issue has been around for _years_.)



I know it is always fun to blame M$ but really it isn't true.


Agreed.  MS is sending a proper query, and every other DNS server on the 
face of the planet responds correctly.  There are a few random apps that 
still bomb when both ends have IPv6 and there's only a v4 path between them 
(though most have been fixed over the last few years), but the OS is working 
correctly.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Thoughts on increasing MTUs on the internet

2007-04-14 Thread Stephen Sprunk


Thus spake "Bill Stewart" <[EMAIL PROTECTED]>

One of my customers comments that he doesn't care about
jumbograms of 9K or 4K - what he really wants is to be sure the
networks support MTUs of at least 1600-1700 bytes, so that
various combinations of IPSEC, UDP-padding, PPPoE, etc.
don't break the real 1500-byte packets underneath.


This is a more realistic case, and support for "baby jumbos" of 2kB to 3kB 
is almost universal even on mid-range networking gear.  However, the 
problems of getting it deployed are mostly the same, except one can take the 
end nodes out of the picture in the simplest case.


OTOH, if we had a viable solution to the variable-MTU mess in the first 
place, you could just upgrade every network to the largest MTU possible and 
hosts would figure out what the PMTU was and nobody would be sending 
1500-byte packets; they'd be either something like 1400 bytes or 9000 bytes, 
depending on whether the path included segments that hadn't been upgraded 
yet...


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Thoughts on increasing MTUs on the internet

2007-04-13 Thread Stephen Sprunk


Thus spake "Lasher, Donn" <[EMAIL PROTECTED]>

PMTU Black Hole Detection works well in my experience, but unfortunately
MS doesn't turn it on by default, which is where all of the L2VPN with <1500
MTU issues come from; turn BHD on and the problems just go away...  (And,
as others have noted, there's better PMTUD algorithms that are designed to
work _with_ black holes, but IME they're not really needed)


I wish I'd had your experience. PMTU _can_ work well, but on the internet
as a whole, far too many ignorant paranoid admins block PMTU, mostly by
accident, causing all sorts of unpleasantness.


You can't block PMTUD per se, just the ICMP messages that dumber 
implementations rely on.  And, as I noted, MS's implementation is dumb by 
default, which leads to the problems we're all familiar with.  "PMTU Black 
Hole Detection" is appropriately named; one registry change* and a reboot is 
all you need to solve the problem.  Of course, that's non-trivial to 
implement when there's hundreds of millions of boxes with the wrong 
setting...



Clearing DF only takes you so far. Unless both ends are aware, and respond
appropriately to the squeeze in the middle, you're back to square one.


Smarter implementations still set DF.  The difference is that when they get 
neither an ACK nor an ICMP, they try progressively smaller sizes until they 
do get a response of some kind.  They make a note of what works and continue 
on with that, with the occasional larger probe in case the problem was 
transient.
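The probing loop those smarter implementations use can be sketched abstractly. Here `probe` stands in for "send a DF-set segment of the given size and wait for an ACK" (a hypothetical hook, not a real API), and the candidate sizes are common media MTUs chosen for illustration:

```python
# Abstract sketch of probe-down path MTU discovery; `probe(size)` is a
# hypothetical stand-in for sending a DF-set packet and awaiting an ACK.

def discover_pmtu(probe, candidates=(9000, 4352, 1500, 1400, 1280, 576)) -> int:
    """Walk down a list of plausible MTUs until a probe gets a response."""
    for size in candidates:
        if probe(size):
            return size
    return candidates[-1]  # give up at the smallest size we'll try

# Simulate a path whose real MTU is 1400: larger probes black-hole silently.
assert discover_pmtu(lambda size: size <= 1400) == 1400
```

A real stack would also re-probe larger sizes occasionally, in case the black hole was transient.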


In fact, one could consider Lorier's "mtud" to be roughly the same idea; 
it's only needed because the stack's own PMTUD code is typically bypassed 
for on-subnet destinations and/or not as smart as it should be.


S

* HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\
Parameters\EnablePMTUBHDetect=1

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Thoughts on increasing MTUs on the internet

2007-04-13 Thread Stephen Sprunk


Thus spake "Mikael Abrahamsson" <[EMAIL PROTECTED]>

The internet is a very diverse and complicated beast and if end
systems can properly detect PMTU by doing discovery of this, it
might work.  ... Make sure they can properly detect PMTU by
use of nothing more than "is this packet size getting thru" (ie
no ICMP-NEED-TO-FRAG) or alike, then we might see partial
adoption of larger MTU in some parts and if this becomes a
major customer requirement then it might spread.


PMTU Black Hole Detection works well in my experience, but unfortunately MS 
doesn't turn it on by default, which is where all of the L2VPN with <1500 
MTU issues come from; turn BHD on and the problems just go away...  (And, as 
others have noted, there's better PMTUD algorithms that are designed to work 
_with_ black holes, but IME they're not really needed)


Still, we have a (mostly) working solution for wide-area use; what's missing 
is the critical step in getting varying MTUs working on a single subnet. 
All the solutions so far have required setting a higher, but still fixed, 
MTU for every device and that isn't realistic on the edge except in tightly 
controlled environments like HPC or internal datacenters.


Perry Lorier's solution is rather clever; perhaps we don't even need a 
protocol sanctioned by the IEEE or IETF?


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Jumbo frames

2007-03-30 Thread Stephen Sprunk


Thus spake "Andy Davidson" <[EMAIL PROTECTED]>
The original poster was talking about a streaming application - 
increasing the frame size can cause it to take longer for frames to fill a 
packet and then hit the wire, increasing actual latency in your 
application.


Probably doesn't matter when the stream is text, but as voice and  video 
get pushed around via IP more and more, this will matter.


It's a serious issue for voice due to the (relatively) low bandwidth, which 
is why most voice products only put 10-30ms of data in each packet.


Video, OTOH, requires sufficient bandwidth that packetization time is almost 
irrelevant.  With a highly compressed 1Mbit/s stream you're looking at 12ms 
to fill a 1500B packet vs 82ms to fill a 10kB packet.  It's longer, yes, but 
you need jitter buffers of 100-200ms to do real-time media across the 
Internet, so that and speed-of-light issues are the dominant factors in 
application latency.  And, as bandwidth inevitably grows (e.g. ATSC 1080i or 
720p take up to 19Mbit/s), packetization time quickly fades into the 
background noise.
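Working through the arithmetic above (taking "10kB" as 10,240 bytes, which 
matches the 82ms figure):

```python
def packetization_delay_ms(payload_bytes, stream_bps):
    """Time for a constant-rate stream to fill one packet's payload."""
    return payload_bytes * 8 / stream_bps * 1000

print(round(packetization_delay_ms(1500, 1_000_000)))        # 12 ms
print(round(packetization_delay_ms(10_240, 1_000_000)))      # 82 ms
# At ATSC HD rates, jumbo packetization delay fades into the noise:
print(round(packetization_delay_ms(10_240, 19_000_000), 1))  # 4.3 ms
```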


Now, if we were talking about greater-than-64kB jumbograms, that might be 
another story, but most folks today use "jumbo" to mean packets of 8kB to 
10kB, and "baby jumbos" to mean 2kB to 3kB.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Ethernet won (was: RE: [funsec] Not so fast, broadband...)

2007-03-14 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

perhaps not.  but there is a real issue w/ the number
of businesses that operate from the home (according to
some numbers this is as high as 65% of all US business)
and the telcos still retain a mindset of business areas
and residential areas.  It is not possible to get some
"business services" deployed in a "residential" area.

...

persuading a telco, one home-based business at a time,
that regardless of the zoning - there are really 65% of
those apartments running businesses and want business-class
services is an exercise in futility.


It depends what "business" services you mean.  If you want a T1 or SONET 
pipe, yeah, you're going to hit a serious wall even if the fiber runs 
through your property.


However, most telcos have "business" DSL and "residential" DSL, and the 
physical layer is the same (ditto for cable, all the way back to @Home vs 
@Work).  The only differences are the AUP, the price tag, and the ability to 
get static IPs.  Expect to pay 2-3x for the same bit rate; higher bitrates 
may be available with "business" service, but the upload rates still suck 
because their gear is designed for consumers.  Sticking with "residential" 
service for your home office will pay for basic server colo space somewhere 
else, and you'll get more for your money.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Stephen Sprunk


Thus spake "Jack Bates" <[EMAIL PROTECTED]>

I would like to blame the idiots that decided that of the signal range
to be used on copper for dsl, only a certain amount would be
dedicated to upload instead of negotiating. What on earth do I
want to do with 24Mb down and 1Mb up?  Can't I have 12 and 12?
Someone please tell me there's a valid reason why the download
range couldn't be variable and negotiated and that it's completely 
impossible for one to have 20Mb up and 1.5 Mb down.


That's ADSL.  I have 25+25 VDSL at home.  My ISP frowns on "excessive" 
uploading, but they were kind enough to tell me what "excessive" means, and I 
happily capped my uploads at that rate.  Everyone wins.


So why has Ma Bell chosen to only use ADSL for consumers?  Economics.  Their 
model of having business customers subsidize residential customers relies on 
having at least one end of every conversation be a business customer.  When 
both ends are residential, as in P2P, there's nobody to pay the bills and 
keep them afloat.  That's also where the net neutrality and peering disputes 
come from; you only care about people using your pipes "for free" when your 
customers aren't paying the true cost to get bits to/from the peering point. 
By limiting residential upload speeds, they make it difficult to source 
content and thus keep their subsidy model on life support.


At least the cablecos have a decent excuse for bad upload speeds; shared 
bandwidth is bad enough, but in addition 1000 nodes transmitting to 1 node 
is much tougher electrically than 1 node transmitting to 1000 nodes.  Sooner 
or later, they're going to have to start shrinking cell sizes and/or 
allocating a heck of a lot more channels to data to keep up with demand.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: Do routers prioritize control traffic?

2007-02-17 Thread Stephen Sprunk


Thus spake "Christos Papadopoulos" <[EMAIL PROTECTED]>
I know routers today have the ability to prioritize traffic, but last I
heard, these controls are not often used for user traffic (let's not
discuss net neutrality here).


They're not often used on _public_ networks for user traffic.  They're used 
extensively on _private_ networks, though, because the people paying the 
bills for the network do so for a particular business purpose and they want 
to make sure it's met.



Are they used for control (e.g., routing) traffic?


Many routers automatically put control traffic to/from the local node into a 
separate path that completely bypasses the standard queueing mechanisms (and 
predates operator-accessible QOS).  In other routers, the control plane and 
forwarding plane are segregated, which achieves the same goal but with a 
rather different approach.


S

Stephen Sprunk  "Those people who think they know everything
CCIE #3723 are a great annoyance to those of us who do."
K5SSS --Isaac Asimov 





Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Stephen Sprunk


Thus spake "Daniel Senie" <[EMAIL PROTECTED]>

At 02:57 PM 2/11/2007, Paul Vixie wrote:
...wouldn't there be, if interdomain multicast existed and had a billing
model that could lead to a compelling business model?  right now, to the
best of my knowledge, all large multicast flows are still intradomain.


IP Multicast as a solution to video distribution is a non-starter. IP 
Multicast for the wide area is a failure. It assumes large numbers of 
people will watch the same content at the same time. The usage model 
that could work for it most mimics the broadcast environment before 
cable TV, when there were anywhere from three to ten channels to 
choose from, and everyone watched one of those. That model has not 
made sense in a long time. The proponents of IP Multicast seem to have 
failed to notice this.


IPmc would be useful for sports, news, and other live events.  Think 
about how many people sit around their TVs staring at such things; it's 
probably a significant fraction of all TV-watching time.  Better yet, 
folks who want to watch particular sports games will be concentrated in 
the two cities that are playing (i.e. high fanout at the bottom of the 
tree), which multicast delivery excels at compared to unicast.


For non-live content, even if one assumes people want their next episode 
of "24" on demand, wouldn't it make more sense to multicast it to STBs 
that are set to record it (or that predict their owners will want to see 
it), vs. using P2P or direct download?  That'll save you gobs and gobs 
of bandwidth _immediately following the new program's release_.  After 
that majority of viewers get their copy, you can transition the program 
to another system (e.g. P2P) that is more amenable to on-demand 
downloading of "old" content.


Of course, this is a pointless discussion since residential multicast is 
virtually non-existent today, and there's no sign of it being imminent. 
Anyone want to take bets on whether IPmc or IPv6 shows up first?  ;-)


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 





Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Stephen Sprunk

Thus spake "Geo." <[EMAIL PROTECTED]>

TIVO type setup with a massive archive of every show so you can not
only watch this week's episode but you can tivo download any show
from the last 6 years' worth of your favorite series is one heck of a
draw over cable or satellite and might be enough to motivate the public
to move to a different service. A better tivo than tivo.


As I've pointed out before, the pirates _are already doing this_, and it
works.  Unfortunately, it remains to be seen if the Net will survive
1000x as many users.  P2P networks have interesting scaling characteristics:
1000x as many users doesn't mean 1000x as many bit-miles.  In fact,
higher densities may reduce the bit-miles -- and network operators pay
for bit-miles, not bits.
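A toy model of that scaling claim, assuming peers scattered uniformly in a 
plane and each user fetching the whole file from its nearest peer.  This is 
purely illustrative, not a model of any real topology:

```python
import math
import random

def total_bit_miles(n_users, file_bits, trials=100, seed=1):
    """Toy estimate: nearest-peer distance shrinks roughly as 1/sqrt(N),
    so total bit-miles grow like sqrt(N), not N."""
    rng = random.Random(seed)
    dist_sum = 0.0
    for _ in range(trials):
        # Scatter the swarm in a unit square; measure one user's
        # distance to its nearest peer.
        pts = [(rng.random(), rng.random()) for _ in range(n_users)]
        x0, y0 = pts[0]
        dist_sum += min(math.hypot(x0 - x, y0 - y) for x, y in pts[1:])
    return n_users * file_bits * (dist_sum / trials)

# 1000x the users is nowhere near 1000x the bit-miles:
small = total_bit_miles(10, 1.0)
big = total_bit_miles(10_000, 1.0)
print(big / small)   # roughly sqrt(1000), far below 1000
```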


As for making money, just stick a commercial on the front of every
download.


BitTorrent, Inc. is working deals to distribute DRMed files freely over
P2P; individual users would then purchase a license to view the files
after the download is complete.  (Of course, I assume this'll be cracked
relatively soon, but as with iTMS, most people will pay anyway.)

The alternative is free viewing with more product placement, inline ads
at the top/bottom of the screen, or a little header with "this program
is brought to you commercial-free by ", like I've seen on
soccer (football to non-US folks) games.  Commercials in their present
form are dying fast with the advent of DVRs, and on-demand shows will
destroy them -- though that won't stop some dinosaurs from trying it.

S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking




Re: broken DNS proxying at public wireless hotspots

2007-02-02 Thread Stephen Sprunk


Thus spake "Trent Lloyd" <[EMAIL PROTECTED]>

One thing I have noticed to be unfortunately more common than I would
like is routers that misunderstand IPv6 AAAA requests and return an
A record of 0.0.0.1

So if you are using (for the most part) anything other than Windows, or
Windows Vista, this may be related to what you are seeing.


The same is true if you've enabled IPv6 on XP.  Unfortunately, it's hard 
to find a hotel network these days that _doesn't_ break when presented 
with AAAA queries.


I'm hoping that the flood of support calls from Vista users will 
pressure them to get their systems fixed, but I'm not holding my breath. 
They'll probably just make "disable IPv6" part of their standard 
troubleshooting routine, just like telling you to reboot your PC.  After 
all, nobody uses it, right?


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 





Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Stephen Sprunk


Thus spake "Joe Abley" <[EMAIL PROTECTED]>
If there was a big fast server in every ISP with a monstrous pile of 
disk which retrieved torrents automatically from a selection of 
popular RSS feeds, which kept seeding torrents for as long as there 
was interest and/or disk, and which had some rate shaping installed 
on the host such that traffic that wasn't on-net (e.g. to/from 
customers) or free (e.g. to/from peers) was rate-crippled, how far 
would that go to emulating this behaviour with existing live 
torrents?


Every torrent indexing site I'm aware of has RSS feeds for newly-added 
torrents, categorized many different ways.  Any ISP that wanted to set 
up such a service could do so _today_ with _existing_ tools.  All that's 
missing is the budget and a go-ahead from the lawyers.



Speaking from a technical perspective only, and ignoring the legal
minefield.


Aside from that, Mrs. Lincoln, how was the play?

If anybody has tried this, I'd be interested to hear whether on-net 
clients actually take advantage of the local monster seed, or whether 
they persist in pulling data from elsewhere.


Clients pull data from everywhere that'll send it to them.  The 
important thing is what percentage of the bits come from where.  If I 
can reach local peers at 90kB/s and remote peers at 10kB/s, then local 
peers will end up accounting for 90% of the bits I download. 
Unfortunately, due to asymmetric connections, rate limiting, etc. it 
frequently turns out that remote peers perform better than local ones in 
today's consumer networks.


Uploading doesn't work exactly the same way, but it's similar.  During 
the leeching phase, clients will upload to a handful of peers that they 
get the best download rates from.  However, the "optimistic unchoke" 
algorithm will lead to some bits heading off to poorer-performing peers. 
During the seeding phase, clients will upload to a handful of peers that 
they get the best _upload_ rates to, plus a few bits off to "optimistic 
unchoke" peers.
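The selection behavior described above can be sketched roughly as follows. 
This is a simplified illustration of the idea, not BitTorrent's actual 
implementation; the peer names and rates are invented:

```python
import random

def pick_unchoked(peers, rates, slots=4, rng=random):
    """Unchoke the `slots` peers with the best observed rates (download
    rates while leeching, upload rates while seeding), plus one randomly
    chosen 'optimistic unchoke' so untested peers get a chance."""
    ranked = sorted(peers, key=lambda p: rates[p], reverse=True)
    unchoked = ranked[:slots]
    leftovers = ranked[slots:]
    if leftovers:
        unchoked.append(rng.choice(leftovers))   # optimistic unchoke
    return unchoked

rates = {'a': 60, 'b': 50, 'c': 40, 'd': 30, 'e': 20, 'f': 10}
print(pick_unchoked(list(rates), rates))  # ['a','b','c','d'] + one of e/f
```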


Do I have hard data?  No.  Is there any reason to think real-world 
behavior doesn't match theory?  No.  I frequently stare at the "Peer" 
stats window on my BT client and it's doing exactly what Bram's original 
paper says it should be doing.  That I get better transfer rates with 
people in Malaysia and Poland than with my next-door neighbor is the 
ISPs' fault, not Bram's.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Stephen Sprunk


[ Note: please do not send MIME/HTML messages to mailing lists ]

Thus spake Alexander Harrowell
Good thinking. Where do I sign? Regarding your first point, it's really
surprising that existing P2P applications don't include topology
awareness. After all, the underlying TCP already has mechanisms
to perceive the relative nearness of a network entity - counting hops
or round-trip latency. Imagine a BT-like client that searches for
available torrents, and records the round-trip time to each host it
contacts. These it places in a lookup table and picks the fastest
responders to initiate the data transfer. Those are likely to be the
closest, if not in distance then topologically, and the ones with the
most bandwidth.


The BT algorithm favors peers with the best performance, not peers that 
are close.  You can rail against this all you want, but expecting users 
to do anything other than local optimization is a losing proposition.


The key is tuning the network so that local optimization coincides with 
global optimization.  As I said, I often get 10x the throughput with 
peers in Europe vs. peers in my own city.  You don't like that?  Well, 
rate-limit BT traffic at the ISP border and _don't_ rate-limit within 
the ISP.  (s/ISP/POP/ if desired)  Make the cheap bits fast and the 
expensive bits slow, and clients will automatically select the cheapest 
path.



Further, imagine that it caches the search - so when you next seek
a file, it checks for it first on the hosts nearest to it in its "routing
table", stepping down progressively if it's not there. It's a form of
local-pref.


Experience shows that it's not necessary, though if it has a non-trivial 
positive effect I wouldn't be surprised if it shows up someday.



It's a nice idea to collect popularity data at the ISP level, because
the decision on what to load into the local torrent servers could be
automated.


Note that collecting popularity data could be done at the edges without 
forcing all tracker requests through a transparent proxy.


Once torrent X reaches a certain trigger level of popularity, the local
server grabs it and begins serving, and the local-pref function on the
clients finds it. Meanwhile, we drink coffee.  However, it's a potential
DOS magnet - after all, P2P is really a botnet with a badge.


I don't see how.  If you detect that N customers are downloading a 
torrent, then having the ISP's peer download that torrent and serve it 
to the customers means you consume 1/N upstream bandwidth.  That's an 
anti-DOS :)



And the point of a topology-aware P2P client is that it seeks the
nearest host, so if you constrain it to the ISP local server only, you're
losing part of the point of P2P for no great saving in peering/transit.


That's why I don't like the idea of transparent proxies for P2P; you can 
get 90% of the effect with 10% of the evilness by setting up sane 
rate-limits.


As long as they don't interfere with the user's right to choose someone
else's content, fine.


If you're getting it from an STB, well, there may not be a way for users 
to add 3rd party torrents; how many users will be able to figure out how 
to add the torrent URLs (or know where to find said URLs) even if there 
is an option?  Remember, we're talking about Joe Sixpack here, not 
techies.


You would, however, be able to pick whatever STB you wanted (unless ISPs 
deliberately blocked competitors' services).


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 



Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake "Jeremy Chadwick" <[EMAIL PROTECTED]>

Chances are that other torrent client authors are going to see
[BitThief] as "major defiance" and start implementing things like
filtering what client can connect to who based on the client name/ID
string (ex. uTorrent, Azureus, MainLine), which as we all know, is
going to last maybe 3 weeks.


BitComet has virtually dropped off the face of the 'net since the 
authors decided to not honor the "private" flag.  Even public trackers 
_that do not serve private torrents_ frequently block it out of 
community solidarity.  Note that the blocking hasn't been incorporated 
into clients, because it's largely unnecessary.



This in turn will solicit the BitThief authors implementing a feature
that allows the client to either spoof its client name or use
randomly-generated ones.  Rinse, lather, repeat, until everyone is
fighting rather than cooperating.

Will the BT protocol be reformed to address this?  50/50 chance.


There are lots of smart folks working on improving the tit-for-tat 
mechanism, and I bet the algorithm (but _not_ the protocol) implemented 
in popular clients will be tuned to adjust for freeloaders over time. 
However, the vast majority of people are going to use clients that 
implement things as intended because (a) it's simpler, and (b) it 
performs better.  Freeloading does work, but it takes several times as 
long to download files even with the existing, easily-exploited 
mechanisms.


Note that all it takes to turn any standard client into a BitThief is 
tuning a few of the easily-accessible parameters (e.g. max connections, 
connection rate, and upload rate).  As many folks have found out with 
various P2P clients over the years, doing so really hurts you in 
practice, but you can freeload anything you want if you have patience. 
This is not particularly novel research; it just quantifies common 
knowledge.



The result of these items already been shown: BT encryption.  I
personally know of 3 individuals who have their client use encryption
only (disabling non-encrypted connection support).  For
security?  Nope -- solely because their ISP uses a rate limiting
device.

Bram Cohen's official statement is that using encryption to get
around this "is silly" because "not many ISPs are implementing
such devices" (maybe not *right now*, Bram, but in the next year
or two, they likely will):

http://bramcohen.livejournal.com/29886.html


Bram is delusional; few ISPs these days _don't_ implement rate-limiting 
for BT traffic.  And, in response, nearly every client implements 
encryption to get around it.  The root problem is ISPs aren't trying to 
solve the problem the right way -- they're seeing BT taking up huge 
amounts of BW and are trying to stop that, instead of trying to divert 
that traffic so that it costs them less to deliver.


( My ISP doesn't limit BT, but I've talked with their tech support folks 
and the response was that if I use "excessive" bandwidth they'll 
rate-limit my entire port regardless of protocol.  They gave me a 
ballpark of what "excessive" means to them, I set my client below that 
level, and I've never had a problem.  This works better for me since all 
my non-BT traffic isn't competing for limited port bandwidth, and it 
works better for them since my BT traffic is unencrypted and easy to 
de-prioritize -- but they don't limit it per se, just mark it to be 
dropped first during congestion, which is fair.  Everyone wins. )


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 



Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake "Adrian Chadd" <[EMAIL PROTECTED]>

On Sun, Jan 21, 2007, Charlie Allom wrote:
> This is a pure example of a problem from the operational front which
> can be floated to research and the industry, with smarter solutions
> than port blocking and QoS.

This is what I am interested/scared by.


It's not that hard a problem to get on top of. Caching, unfortunately,
continues to be viewed as anathema by ISP network operators in the
US. Strangely enough the caching technologies aren't a problem with
the content -delivery- people.


US ISPs get paid on bits sent, so they're going to be _against_ caching 
because caching reduces revenue.  Content providers, OTOH, pay the ISPs 
for bits sent, so they're going to be _for_ caching because it increases 
profits.  The resulting stalemate isn't hard to predict.



I've had a few ISPs out here in Australia indicate interest in a cache
that could do the normal stuff (http, rtsp, wma) and some of the p2p
stuff (bittorrent especially) with a smattering of QoS/shaping/control -
but not cost upwards of USD$100,000 a box. Lots of interest, no
commitment.


Basically, they're looking for a box that delivers what P2P networks 
inherently do by default.  If the rate-limiting is sane, then only a 
copy (or two) will need to come in over the slow overseas pipes, and 
it'll be replicated and reassembled locally over fast pipes.  What, 
exactly, is a middlebox supposed to add to this picture?



It doesn't help (at least in Australia) where the wholesale model of
ADSL isn't content-replication-friendly: we have to buy ATM or
ethernet pipes to upstreams and then receive each session via L2TP.
Fine from an aggregation point of view, but missing the true usefulness
of content replication and caching - right at the point where your
customers connect in.


So what you have is a Layer 8 problem due to not letting the logical 
topology match the physical topology.  No magical box is going to save 
you from hairpinning traffic between a thousand different L2TP pipes. 
The best you can hope for is that the rate limits for those L2TP pipes 
will be orders of magnitude larger than the rate limit for them to talk 
upstream -- and you don't need any new tools to do that, just 
intelligent use of what you already have.


(Disclaimer: I'm one of the Squid developers. I'm getting an increasing
amount of interest from CDN/content origination players but none from
ISPs. I'd love to know why ISPs don't view caching as a viable option
in today's world and what we could do to make it easier for y'all.)


As someone who voluntarily used a proxy and gave up, and has worked in 
an IT dept that did the same thing, it's pretty easy to explain: there 
are too many sites that aren't cache-friendly.  It's easy for content 
folks to put up their own caches (or Akamaize) because they can design 
their sites to account for it, but an ISP runs too much risk of breaking 
users' experiences when they apply caching indiscriminately to the 
entire Web.  Non-idempotent GET requests are the single biggest breakage 
I ran into, and the proliferation of dynamically-generated "Web 2.0" 
pages (or faulty Expires values) are the biggest factor that wastes 
bandwidth by preventing caching.
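The checks described above boil down to a cacheability decision per request. 
A grossly simplified sketch (the real HTTP caching rules are far more 
involved, and this ignores Vary, no-cache, validators, etc.):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_cacheable(method, headers, now):
    """Simplified sketch of the freshness checks a shared cache applies."""
    if method != "GET":                      # non-idempotent: never cache
        return False
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "private" in cc:
        return False
    if "max-age" in cc:
        return True                          # explicit freshness lifetime
    expires = headers.get("Expires")
    if expires:
        try:
            return parsedate_to_datetime(expires) > now
        except (TypeError, ValueError):
            return False                     # faulty Expires: treat as stale
    return False                             # no freshness info: don't cache

# A dynamically generated page with no freshness info is uncacheable:
now = datetime(2007, 2, 1, tzinfo=timezone.utc)
print(is_cacheable("GET", {}, now))          # False
```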


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-20 Thread Stephen Sprunk


Thus spake "Dave Israel" <[EMAIL PROTECTED]>

The past solution to repetitive requests for the same content has been
caching, either reactive (webcaching) or proactive (Akamaizing.)  I
think it is the latter we will see; service providers will push
reasonably cheap servers close to the edge where they aren't too
oversubscribed, and stuff their content there.  A cluster of servers
with terabytes of disk at a regional POP will cost a lot less than
upgrading the upstream links.  And even if the SPs do not want to
invest in developing this product platform for themselves, the price
will likely be paid by the content providers who need performance to
keep subscribers.


Caching per se doesn't apply to P2P networks, since they already do that 
as part of their normal operation.  The key is getting users to contact 
peers who are topologically closer, limiting the bits * distance 
product.  It's ridiculous that I often get better transfer rates with 
peers in Europe than with ones a few miles away.  The key to making 
things more efficient is not to limit the bandwidth to/from the customer 
premise, but limit it leaving the POP and between ISPs.  If I can 
transfer at 100kB/s from my neighbors but only 10kB/s from another 
continent, my opportunistic client will naturally do what my ISP wants 
as a side effect.
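That side effect falls out of simple arithmetic, assuming the client 
saturates both connections at once:

```python
def local_fraction(local_rate_kBps, remote_rate_kBps):
    """Share of bits arriving from the local peer when an opportunistic
    client pulls from one local and one remote peer at full speed."""
    return local_rate_kBps / (local_rate_kBps + remote_rate_kBps)

# 100 kB/s from a neighbor vs. 10 kB/s from another continent:
print(local_fraction(100, 10))   # ~0.91 of the bits stay local
```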


The second step, after you've relocated the rate limiting points, is for 
ISPs to add their own peers in each POP.  Edge devices would passively 
detect when more than N customers have accessed the same torrent, and 
they'd signal the ISP's peer to add them to its list.  That peer would 
then download the content, and those N customers would get it from the 
ISP's peer.  Creative use of rate limits and access control could make it 
even more efficient, but they're not strictly necessary.
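A sketch of that edge-device trigger logic.  All of the names here 
(TorrentTrigger, the callback) are invented for illustration, not drawn from 
any real product:

```python
from collections import defaultdict

class TorrentTrigger:
    """Count distinct customers seen per torrent (by infohash) and signal
    the ISP's own peer once a threshold is crossed."""
    def __init__(self, threshold, signal_isp_peer):
        self.threshold = threshold
        self.signal = signal_isp_peer        # callback into the ISP peer
        self.seen = defaultdict(set)         # infohash -> customer IDs
        self.triggered = set()

    def observe(self, infohash, customer_id):
        self.seen[infohash].add(customer_id)
        if (infohash not in self.triggered
                and len(self.seen[infohash]) >= self.threshold):
            self.triggered.add(infohash)
            self.signal(infohash)            # ISP peer joins this swarm
```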


The third step is for content producers to directly add their torrents 
to the ISP peers before releasing the torrent directly to the public. 
This gets "official" content pre-positioned for efficient distribution, 
making it perform better (from a user's perspective) than pirated 
content.


The two great things about this are (a) it doesn't require _any_ changes 
to existing clients or protocols since it exploits existing behavior, 
and (b) it doesn't need to cover 100% of the content or be 100% 
reliable, since if a local peer isn't found with the torrent, the 
clients will fall back to their existing behavior (albeit with lower 
performance).


One thing that _does_ potentially break existing clients is forcing all 
of the tracker (including DHT) requests through an ISP server.  The ISP 
could then collect torrent popularity data in one place, but more 
importantly it could (a) forward the request upstream, replacing the IP 
with its own peer, and (b) only inform clients of other peers (including 
the ISP one) using the same intercept point.  This looks a lot more like 
a traditional transparent cache, with the attendant reliability and 
capacity concerns, but I wouldn't be surprised if this were the first 
mechanism to make it to market.



I think the biggest stumbling block isn't technical.  It is a question
of getting enough content to attract viewers, or alternately, getting
enough viewers to attract content.  Plus, you're going to a format
where the ability to fast-forward commercials is a fact, not a risk,
and you'll have to find a way to get advertisers' products in front of
the viewer to move past pay-per-view.  It's all economics and politics
now.


I think BitTorrent Inc's recent move is the wave of the short-term 
future: distribute files freely (and at low cost) via P2P, but 
DRM-protect the files so that people have to acquire a license to open 
the files.  I can see a variety of subscription models that could pay 
for content effectively under that scheme.


However, it's going to be competing with a deeply-entrenched pirate 
culture, so the key will be attracting new users who aren't technical 
enough to use the existing tools, via an easy-to-use interface.  Not 
surprisingly, the same folks are working on deals to integrate BT (the 
protocol) into STBs, routers, etc. so that users won't even know what's 
going on beneath the surface -- they'll just see a TiVo-like interface 
and pay a monthly fee like with cable.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Stephen Sprunk


[ Note: Please don't send MIME/HTML messages to mailing lists ]

Thus spake Gian Constantine:

The cable companies have been chomping at the bit for unbundled
channels for years, so have consumers. The content providers will
never let it happen. Their claim is the popular channels support the
diversity of not-so-popular channels. Apparently, production costs
are high all around (not surprising) and most channels do not support
themselves entirely.


Regulators too.  The city here tried forcing the MSOs to unbundle, and 
the result was that a single channel cost the same as the bundle it 
normally came in -- the content providers weren't willing to license 
them individually.  The city gave in and dropped it.


Just like the providers want to force people to pay for unpopular 
channels to subsidize the popular ones, they likewise want people to pay 
for unpopular programs to subsidize the popular ones.  Consumers, OTOH, 
want to buy _programs_, not _channels_.  Hollywood isn't dumb enough to 
fall for that, since they know 90% (okay, that's being conservative) of 
what they produce is crap and the only way to get people to pay for it 
is to jack up the price of the 10% that isn't crap and give the other 
90% away.


Of course, the logical solution is to quit producing crap so that such 
games aren't necessary, but since when has any MPAA or RIAA member 
decided to go that route?


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSSdice at every possible opportunity." --Stephen Hawking 





Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-11 Thread Stephen Sprunk


Thus spake "Marshall Eubanks" <[EMAIL PROTECTED]>

On Jan 10, 2007, at 11:19 PM, Thomas Leavitt wrote:
I don't think consumers are going to accept having to wait for a 
"scheduled broadcast" of whatever piece of video content they want 
to view - at least if the alternative is being able to download and 
watch it nearly


That's the pull model. The push model will also exist. Both will make 
money.


There's a severe Layer 8 problem, though, because most businesses seem 
to pursue only one delivery strategy, instead of viewing them as 
complementary and using _all_ of them as appropriate.


When IP STBs start appearing, most of them _should_ have some sort of 
feature to subscribe to certain programs.  That means when a program is 
released for distribution, there will be millions of people waiting for 
it.  Push it out via mcast or P2P at 3am and it'll be waiting for them 
when they wake up (or 3pm, ready when they come home from work).  Folks 
who want older programs would need to select a show and the STB would 
grab it via P2P or pull methods.


Mcast has the advantage that STBs could opportunistically cache all 
"recent" content in case the user wants to browse the latest programs 
they haven't subscribed to, aka channel surfing.  This doesn't make 
sense with P2P due to the waste of bandwidth, and it's not very 
effective with pull content because most folks still can't get a high 
enough bitrate from some distant colo into their homes to pull content 
as fast as they consume it.


The TV pirates have figured most of this out.  Most BitTorrent clients 
these days support RSS feeds, and there are dozens of sites that will 
give you a feed for particular shows (at least those popular enough to 
be pirated) so that your client will start pulling it as soon as it hits 
the 'net; shows like "24" will have _tens of thousands_ of clients 
downloading a new episode within minutes.  Likewise, the same sites 
offer catalogs going back several years so that you can pick nearly any 
episode and watch it within a couple hours.  Mcast is the one piece 
missing, but perhaps if it's not being used that's just yet another sign 
it's a solution in search of a problem, as critics have been saying for 
the last decade?


There is no technical challenge here; what the pirates are already doing 
works pretty well, and with a little UI work it'd even be ready for the 
mass market.  The challenges are figuring out how to pay for the pipes 
needed to deliver all these bits at consumer rates, and how to collect 
revenue from all the viewers to fairly compensate the producers -- both 
business problems, though for different folks.  Interesting problems to 
solve, but NANOG probably isn't the appropriate forum.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: Boeing's Connexion announcement

2006-10-15 Thread Stephen Sprunk


Thus spake "Kevin Day" <[EMAIL PROTECTED]>

There are two different things that are being talked about here. If
your seat has an obviously-meant-for-customer-use outlet, it's
definitely going to be 60Hz.


... or DC.


There are other outlets that look like regular North American
outlets, but hidden behind an access panel. Usually on the floor or
near a door, with no markings on the outside as to what they're for.
These *are* 400Hz, and are meant for support crew to clean the
aircraft with, maintenance tools, etc.


I've seen many outlets on planes marked 400Hz, usually in the galleys.
I've never seen one that a customer could use without running an
extension cord down the aisle, though.

I agree that power isn't as critical on board as the network access; my
laptop battery lasts about 6hrs, and I've got a second one in my bag I
keep charged just in case.  Many airports provide outlets at the gates
you can use to charge phones and laptops before takeoff, and for
non-transoceanic flights that's good enough for virtually everyone these
days.

My problem with Connexion was that it's (a) too pricey for my company's
expense rules, and (b) not available on enough planes to factor into my
travel plans anyways.  I don't doubt that it's worth the $27.95/flight,
but my company allows a max of $10.00 for internet access.  Even if I
could somehow convince the trolls in accounting to accept triple the
standard hotspot rate because it's on a plane, the IRS requires an
original receipt for any expenses over $25 and Connexion doesn't
provide one.  No dice.  Three dollars cheaper and I'd use it regularly;
$9.95 and I'd use it every single flight.

Instead, I use my company's corporate account at the departure airport 
hotspot to grab all my mail, work on it during the flight, and then use 
the hotspot at the other end to send it all when I land.  That's good 
enough for a 2-5hr flight, and it doesn't get me in trouble with 
accounting.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking




Re: IPv6 PI block is announced - update your filters 2620:0000::/23

2006-09-13 Thread Stephen Sprunk


Thus spake "Jeroen Massar" <[EMAIL PROTECTED]>

8<-
IPv6 Assignment Blocks   CIDR Block
2620::/23
->8
Expect blocks in between /40 and /48 there.


Expect mostly /48s and /44s, given that ARIN has not defined any 
criteria for what justifies more than a /48.  Of course, some folks will 
announce a /44 instead since the block is reserved, but it should still 
only be one route.


Still, even if every org that qualified for an assignment today got one, 
you're still only looking at a couple tens of thousands of routes max. 
ARIN using a /23 for PIv6 is either serious overkill or "we'll never 
need to allocate another block" at work.



That is enough space for best-case 2^(40-23) = 131,072 routes, worst
case 2^(48-23) = 33,554,432 extra routes in your routing table; I hope
Vendor C can handle it by the time that happens. In other words: better
start saving up those bonus points, you will be buying quite a lot of
new gear if this ever comes off the ground ;)

Most likely case is a bit more optimistic if one takes /44's: 2,097,152.
Still a lot more than the IPv4 routing table is now. It will take time,
and possibly a lot, but it could just happen...


IMHO, BGP will fall over and die long before we get to that many ASNs. 
Remember, the goal in giving people really big v6 blocks, vs. IPv4-style 
multiple allocations/assignments, is to reduce the necessary number of 
routes to (roughly) the number of ASNs.


If PIv6 folks start announcing absurd numbers of routes within their 
allocation, I'd expect ISPs to start filtering everything longer than 
/48 -- if they don't do so from the start.  The next step is to filter 
everything longer than /44; since everyone is getting a reserved /44 at 
a minimum, that's safe (everyone just announces the /44 in addition to 
more-specifics).  If filtering at /44 isn't enough, ISPs will just drop 
all PIv6 routes except for their customers' and the concept dies a quick 
death.  No routers will be harmed in the making of this movie.
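The arithmetic behind the quoted estimates, and the kind of length filter described here, are both easy to check (a sketch; the prefix lengths are the ones discussed in this thread, and the example routes are hypothetical):

```python
import ipaddress

PI_BLOCK = ipaddress.ip_network("2620::/23")

# How many routes the /23 could hold if it filled with uniform assignments.
for plen in (48, 44, 40):
    print(f"all /{plen}s: {2 ** (plen - PI_BLOCK.prefixlen):,} routes")
# all /48s: 33,554,432 routes
# all /44s: 2,097,152 routes
# all /40s: 131,072 routes

def accept(route, max_len=48):
    """Minimal PIv6 ingress filter: inside 2620::/23, no longer than max_len."""
    net = ipaddress.ip_network(route)
    return net.subnet_of(PI_BLOCK) and net.prefixlen <= max_len

print(accept("2620:0:30::/48"))  # True: a plausible PI assignment
print(accept("2620:0:30::/56"))  # False: more-specific than /48, dropped
```

Tightening `max_len` to 44 is the "next step" filter described above; everyone still reaches the /44 covering route.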


It just occurred to me that this policy is a perfect counterexample to 
Kremen's claims that ARIN is run by big ISPs for their own benefit.  The 
big ISPs wailed and moaned and tried to stop it, and history may even 
prove them right one day, but the little guys won for now.  Even if 
we're wrong, that's a good thing for a variety of reasons.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-13 Thread Stephen Sprunk


Thus spake "Johnny Eriksson" <[EMAIL PROTECTED]>

"D'Arcy J.M. Cain"  wrote:

If we were still calling central and asking "Hi Mabel, can you put me
through to Doc," no one would give a rat's ass about phone number
portability.  Notice that no one is getting worked up about circuit
number portability.


... or street number portability.  Thanks $deity.


That's the canonical argument against address portability -- you can't 
take your street address with you when you move.


( I suppose now would be a bad time to point out I have a portable ZIP 
code: it's mine for life as long as I pay for the service it came with, 
no matter where I move. )


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: Kremen's Buddy?

2006-09-12 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

[EMAIL PROTECTED] wrote:


Once this subject took off on nanog, I have been
oversaturated with people trying to "sell" me ip space.  I
have had offers for several /16's for 10,000.00 each that are
no longer in use by the companies who "own" lol them.


It seems to me that this nicely illustrates a major problem with the
current system.  Here we have large blocks of IP space that, by their
own rules, ARIN should take back.  It all sounds nice on paper, but
clearly there is a hole in the system whereby ARIN doesn't know and
apparently has no way of figuring out that the space is no longer in
use.  It makes me wonder just how much space like that there is out
there artificially increasing IP scarcity.  I don't know what the
solution is, but the way things currently work it seems like if you can
justify a block today, it's yours forever even if you stop actively
using it.  Maybe allowing for some kind of IP market would cut down on
that type of hoarding -- you would at the very least change the type of
value those subnets have.


ARIN's policies allow for grandfathering of allocations/assignments made 
prior to ARIN's establishment at least in part because they'd be on 
shaky ground legally trying to revoke them for noncompliance.  It's not 
like those folks would willingly sign an RSA that would immediately 
result in losing their resources.  And the community has, so far, agreed 
with this because the problem is at least getting no worse; it's 
manageable to make allowances for a fixed or shrinking number of legacy 
address space holders.


However, I do recall that ISI ran (runs?) a program trying to contact 
folks who had legacy allocations and see if they were willing to return 
the parts they didn't need.  Bill Manning reported on the progress a few 
times, and apparently a large number of those orgs either no longer 
existed or were willing to give back what they didn't need.  I think 
this approach is acceptable to everyone, though I'd like to see more 
stats on what's been done and a more official sanction for the work.


Also, IIRC, folks who have legacy allocations/assignments can't get more 
until their existing space is up to current standards, so it's not like 
they're getting a free ride on the old space _and_ getting new space. 
All we have to complain about are the folks that have so much they'll 
never need more, and those are relatively few in reality.  I'm pretty 
sure the same situation exists for non-legacy space holders; even if you 
comply at the time of the request, if you later fall below the standards 
you're safe -- but you can't get more until you're back up to the 
standards.


All in all, the process is decent, and it has community support.  Ideal? 
No, but nothing ever is when lawyers get involved.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

[ I said ]
The debate there will be around the preferential treatment that larger
ARIN members get (in terms of larger allocations, lower per address
fees, etc), which Kremen construes as being anticompetitive via
creating artificial barriers to entry.  That may end up being changed.


Your statement about preferential treatment is factually
incorrect. Larger ARIN members do not get larger allocations.
It is the larger network infrastructures that get the larger
allocations which is not directly tied to the size of the
company. Yes, larger companies often have larger infrastructures.


And that's the point: A company that is established gets preferential 
treatment over one that is not; that is called a barrier to entry by the 
anti-trust crowd.  You may feel that such a barrier is justified and 
fair, but those on the other side of it (or more importantly, their 
lawyers) are likely to disagree.



As for fees, there are no per-address fees and there
never have been. When we created ARIN, we paid special
attention to this point because we did not want to create
the erroneous impression that people were "buying" IP
addresses. The fees are related to the amount of effort
required to service an organization and that is not
directly connected to the number of addresses.


Of course it's directly connected; all you have to do is look at the 
current fee schedule and you'll see:


/24 = $4.88/IP
/23 = $2.44/IP
/22 = $1.22/IP
/21 = $0.61/IP
/20 = $0.55/IP
/19 = $0.27/IP
/18 = $0.27/IP
/17 = $0.137/IP
/16 = $0.069/IP
/15 = $0.069/IP
/14 = $0.034/IP

So, just between the two ends of the fee schedule, we have a difference 
of _two orders of magnitude_ in how much a registrant pays divided by 
how much address space they get.  Smaller folks may use this to say that 
larger ISPs, some of whose employees sit on the ARIN BOT/AC, are using 
ARIN to make it difficult for competitors to enter the market.
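The per-address figures above follow directly from the annual fee tiers in effect at the time; the check is one division per prefix length. (The tier dollar amounts below are back-computed from the per-IP table, not quoted from ARIN's published schedule, so treat them as an assumption.)

```python
# Reconstructed 2006-era ARIN annual subscription fees by allocation size
# (assumption: inferred from the per-IP figures above).
FEES = {24: 1250, 23: 1250, 22: 1250, 21: 1250,
        20: 2250, 19: 2250,
        18: 4500, 17: 4500, 16: 4500,
        15: 9000, 14: 9000}

def per_ip(plen):
    """Annual fee divided by the 2^(32 - plen) addresses in the block."""
    return FEES[plen] / 2 ** (32 - plen)

for plen in FEES:
    print(f"/{plen} = ${per_ip(plen):.3f}/IP")

# The two-orders-of-magnitude spread between the ends of the schedule:
print(f"/24 vs /14: {per_ip(24) / per_ip(14):.0f}x")  # /24 vs /14: 142x
```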


Since that argument appears to be true _on the surface_, ARIN will need 
to show how servicing smaller ISPs incurs higher costs per address and 
thus the lower fees for "large" allocations are simply passing along the 
savings from economy of scale.  Doable, but I wouldn't want to be 
responsible for coming up with that proof.


Besides the above, Kremen also points out that larger prefixes are more 
likely to be routed, therefore refusing to grant larger prefixes (which 
aren't justified, in ARIN's view) is another barrier to entry.  Again, 
since the folks deciding these policies are, by and large, folks who are 
already major players in the market, it's easy to put an anticompetitive 
slant on that.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Stephen Sprunk


Thus spake Brandon Galbraith

Two questions regarding this for the list (slightly OT):

1) Has any sort of IP address ownership precedence been set in a US
court?


Not that I'm aware of, but I've never looked.  I'm sure ARIN's lawyers 
have.



2) Isn't ARIN considered a non-profit resource management/allocation
organization? To my knowledge, there is no "marketplace" for IPs.


The entire suit is predicated on the concept that IP addresses can be 
owned and traded like other property.  The rest is a house of cards that 
will fall if ARIN can prove that to be incorrect -- and will probably 
stand if they can't.


Also, any technical expert can rip about half of the house down without 
breaking a sweat because it's flawed to the point of being 
entertaining.  It'd be fun to read the transcripts if this ever goes to 
trial, but my money says it'll be decided one way or the other before it 
actually makes it into a courtroom.


The wording of Kremen's argument made me understand why ARIN is so 
resistant to using the term "rent" for their activities, because that 
implies that there is property exchanging hands.  Courts have 
jurisdiction over property, though it's a minefield to try to dictate 
who someone must rent to.  Keeping the words in registry-speak allows 
them to differentiate the situation and insist that addresses are not 
property at all.


The anti-trust angle is interesting, but even if ARIN were found to be 
one, it's hard to convince people that a _non-profit_ monopoly acting in 
the public interest is a bad thing.  The debate there will be around the 
preferential treatment that larger ARIN members get (in terms of larger 
allocations, lower per address fees, etc), which Kremen construes as 
being anticompetitive via creating artificial barriers to entry.  That 
may end up being changed.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: comast email issues, who else has them?

2006-09-07 Thread Stephen Sprunk


Thus spake "Sean Donelan" <[EMAIL PROTECTED]>

But there is no requirement to use your ISP's mail server or any other
application from your ISP.  Likewise there is no requirement for an ISP
to offer any E-mail or Usenet, or FTP, or legal music downloads, or any
other application to its customers.  There isn't even a requirement for
it to have any customer service.  Few of the large free Email providers
have any easy way to talk to any human about mail problems.  So you
don't even get the satisfaction of yelling at a first level tech about
your frustrations.


However, the reality is that a significant fraction of users will use 
their ISP's email service, if one is provided.  They'll tolerate minor 
failures because changing your email address and distributing it to 
everyone is such a hassle.  More and more folks are wising up to this 
and switching to Yahoo mail or Gmail so they don't have to do it ever 
again, but OTOH those services are better-run than most ISP mail 
systems.


I happen to think the problem is with the bulk mail forwarding
services that don't pre-filter mail.  But that's just my opinion.  I
choose not to use unfiltered bulk mail forwarding services so I don't
have those problems.


That's not the problem, because I'm not using a bulk mail forwarding 
service.  It's just a single vanity domain hosted by a single Linux box 
with a half-dozen accounts.  And I read the mail _on that box_.  There 
is nothing complicated going on here; we're talking stuff people were 
doing just fine in the 1980s.  I can get email from and send email to 
anyone on the planet reliably except Comcast customers, which, 
unfortunately, includes several family members and friends.  And even 
that worked for years; it just broke a few months ago.


The real killer is it's broken in both directions; I can't come up with 
any legitimate reason for that.  Inbound (to comcast), I could blame on 
spam filters, but not outbound.


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: comast email issues, who else has them?

2006-09-06 Thread Stephen Sprunk


Thus spake "Sean Donelan" <[EMAIL PROTECTED]>

On Thu, 31 Aug 2006, Tony Li wrote:

I've taken the rather extreme approach of bouncing everything through
Gmail first.  Let's see them block Google.  ;-)


Patient: Doctor, Doctor, It hurts when I do this.
Doctor: Don't do that.


Not very helpful advice when Comcast's mail servers block about 75% (but 
oddly not 100%) of mail _to or from_ specific domains, and the reason 
stated for rejection is obviously false after only a couple seconds of 
investigation.


Telling half my family members they have to go get Gmail so they can 
email the other half of my family members is ridiculous.  Too bad 
Comcast has a monopoly (or, where a duopoly, the competition is just as 
incompetent) so they have no incentive to fix it.


There are lots of Mail Service Providers.  AOL, Comcast, Gmail, Yahoo, 
Outblaze, whoever, each have their own quirks and problems.  All have 
blocked various sources including each other at one time or another.

Some people complain about some of the decisions made by each of
them; while other people applaud the same decisions.


Very few people ever applaud a provider blocking legitimate mail.


Perhaps people are using the wrong tools to solve the problems?


Because Comcast's tools are broken and when other mail admins or even 
their own customers call them on it, they're not even competent enough 
to understand the complaint and refuse to escalate?


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: NNTP feed.

2006-09-05 Thread Stephen Sprunk


Thus spake "Greg Boehnlein" <[EMAIL PROTECTED]>

I came to much the same conclusion several years ago, when we finally
decommissioned our NNTP Servers and out-sourced the service to an
outside company. Running an NNTP server was a full-time job, and the
500 or so people that used it didn't generate enough revenue for us to
continue managing it inside.


That seems to be consensus among ISPs: there just aren't enough users 
today to justify the cost of hiring a full-time news admin, deal with 
abuse and customer service issues, and pay for the storage space for 
8TB/day of content.


OTOH, it might be doable if you didn't carry the alt.binaries groups; 
those account for well over 90% of the bytes on usenet today, and 
virtually all of the complainers.  If your customers want the binaries 
groups, you can easily point them to a dozen different commercial news 
providers that do carry them -- and have the economy of scale needed to 
turn a profit.


Another option is to run a caching-only news server, provided you can 
find a willing upstream.  This will save you most of the bandwidth if 
you don't have many users, though again excluding binaries may be needed 
to cut down on the whiney pirates.


(Besides, all the binaries on usenet are available via BitTorrent 
somewhere anyways; NNTP does not make a good piracy protocol from a 
technical perspective, only from an anonymity one)


S

Stephen Sprunk "God does not play dice."  --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS      dice at every possible opportunity." --Stephen Hawking





Re: Web typo-correction (Re: Sitefinder II, the sequel...)

2006-07-14 Thread Stephen Sprunk


Thus spake "Edward B. DREGER" <[EMAIL PROTECTED]>

(Note that I've not examined OpenDNS's offering, so I'm _not_ pretending
to comment on what they do.)

Let's quit looking at overly-simplistic correction mechanisms.  Do spell
checkers force autocorrection with only a single choice per misspelled
word?


Ever used Word or Outlook?  They annoyingly "fix" words as you type
without offering multiple choices or even alerting the user that
they're doing it.  I've learned to re-read what I write several times
now because I've been burned too many times by jargon being "corrected"
to unrelated "real" words -- but I type "teh" and similar things often
enough I can't afford to turn the feature off.  (And my employer
requires me to use those apps, so all you anti-MS folks please sit back
down.)


OpenDNS's typo-fixing service can supposedly be turned off, but I don't
see how that would work when you have multiple users behind a NAT or a
recursive server.  There may also be hidden problems if an ISP pushes
all of their users onto this service and the users have no clue they've
been "opted in" or how to opt back out (and we all know how well "opt
out" systems work for email in general).


Return an A RR that points -controlled system.  Said
system examines HTTP "Host" header, then returns a page listing multiple
possibilities.

"The site you specified does not exist.  Here is a list of sites that
you may be trying to access: ..."


And that solves most of my objections, at least for HTTP.  It still breaks a
lot of other protocols.


I'm generally ignoring other protocols to limit the discussion scope.
However, one can see how SMTP and FTP might be similarly handled.
(IMHO not as good as a SRV-ish system that could return NXDOMAIN
per service, but actually somewhat usable today.)


If web browsers consulted SRV records instead of blindly connecting to the
A, that would appear to solve everything: NXDOMAIN for the A but the HTTP
SRV could point to the typo-correction server.  I'd not be inclined to argue
with such a setup, but it requires a refresh of every browser out there, so
it's not realistic.
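For what it's worth, the selection rules such a browser would need already exist for SRV (RFC 2782): take the lowest-priority class, then choose within it by weighted random. A sketch of that selection, with made-up records where the typo-correction server sits at a worse priority:

```python
import random

# (priority, weight, port, target) -- hypothetical _http._tcp records.
records = [
    (10, 60, 80, "www1.example.com"),
    (10, 40, 80, "www2.example.com"),
    (20,  0, 80, "typo-fixer.example.com"),  # used only if the 10s vanish
]

def pick_srv(recs):
    """RFC 2782-style pick: best (lowest) priority, weighted within it."""
    best = min(r[0] for r in recs)
    group = [r for r in recs if r[0] == best]
    total = sum(r[1] for r in group)
    if total == 0:
        return random.choice(group)
    roll = random.uniform(0, total)
    for r in group:
        roll -= r[1]
        if roll <= 0:
            return r
    return group[-1]

print(pick_srv(records))  # one of the priority-10 hosts, www1 ~60% of the time
```

With per-service records like these, NXDOMAIN for the A record need not break anything else, which is the appeal of the SRV-ish approach mentioned above.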

S

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin 



Re: Fwd: 41/8 announcement

2006-05-26 Thread Stephen Sprunk


Thus spake "william(at)elan.net" <[EMAIL PROTECTED]>

On Fri, 26 May 2006, Bill Woodcock wrote:

Presumably they're double-natting.  I had to do that once for Y2K
compliance for three large governmental networks that were all statically
addressed in net-10 and wouldn't/couldn't renumber in time.  In fact,
there were _specific hosts_ which had the same IP address, and _had to
talk to each other_.  Gross.  But it can be done.


Please explain how. I simply can't imagine my computer communicating
with another one with exactly same ip address - the packet would never
leave it. The only way I see to achieve this is to have dns resolver
on the fly convert remote addresses from same network into some other
network and then NAT from those other addresses.


Unfortunately, I've done this several times, most notably within one company 
that had multiple instances of 10/8 that needed to talk to each other.  A 
decent (if one can use that term) NAT device will translate the addresses in 
DNS responses, so two hosts that both live at 10.1.2.3 will see the other's 
address as, for example, 192.168.1.2, both in DNS and in the IP headers.


It's extremely ugly, but that's what one gets for using private address 
space.  This exact scenario was a large part of why I supported ULAs for 
IPv6.
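The trick described above (sometimes called DNS doctoring) is just a pair of per-realm translation tables applied consistently to DNS answers and IP headers; a toy sketch of the mapping (realm names and alias prefixes are invented for illustration):

```python
import ipaddress

# Two sites both numbered out of the same 10.1.2.0/24; the NAT device gives
# each a unique alias /24 and substitutes it in A records and packet headers.
REAL = ipaddress.ip_network("10.1.2.0/24")
ALIAS = {"east": ipaddress.ip_network("192.168.1.0/24"),
         "west": ipaddress.ip_network("192.168.2.0/24")}

def doctor(real_ip, realm):
    """Translate an overlapping real address into the realm's alias space."""
    offset = int(ipaddress.ip_address(real_ip)) - int(REAL.network_address)
    return str(ALIAS[realm].network_address + offset)

# The same address exists in both realms, but each peer sees a distinct alias:
print(doctor("10.1.2.3", "east"))  # 192.168.1.3
print(doctor("10.1.2.3", "west"))  # 192.168.2.3
```

The reverse mapping runs on the return path, which is why both hosts at 10.1.2.3 can talk to each other without either ever seeing its own address on the wire.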


S

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin 



Re: MEDIA: ICANN rejects .xxx domain

2006-05-11 Thread Stephen Sprunk


Thus spake "Alain Hebert" <[EMAIL PROTECTED]>

   Why?

   If we can corral them in it and legislate to have no porn anywhere else 
than on .xxx ... should fix the issue for the prudes out there.


And exactly which legislature has the authority to prevent porn sites 
registering in any other gTLD/ccTLD?


S

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin 



Re: Open Letter to D-Link about their NTP vandalism

2006-04-13 Thread Stephen Sprunk

[ In response to Richard A Steenbergen ]

Alain Hebert said:
>
> Well,
>
> With the way you named your address book (North American Noise and
> Off-topic Gripes).
>
> We now know where to fill your futur comments.
> (In the killfile that is)

That Cc: came from my message, and RAS didn't change it back to something
inoffensive when he replied to me.  While one can certainly find reasons
to killfile RAS, this is not one of them.

Grow a sense of humor, already...

S

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin


Re: Open Letter to D-Link about their NTP vandalism

2006-04-12 Thread Stephen Sprunk


Thus spake "Alexei Roudnev" <[EMAIL PROTECTED]>

Hmm, if some idiot wrote my NTP IP into his hardware, I just stop to
monitor my NTP and make sure that it have few hours of error in time.
No one require me to CLAIM that I set up wrong time, BUT no one can
require me to maintain correct time just because some idiots use my
server.


What most people participating in this subthread seem to be missing is that 
if one did decide to send (or accidentally sent) false time to these D-Link 
devices, NOBODY WOULD EVER KNOW OR CARE.  Doing so does not solve any 
problems, so whatever the legal risk of acting is, no matter how small, it's 
not worth it.


On the plus side, after seeing D-Link's (lack of) reaction to this, I'll bet 
none of us will buy another of their products again.


S

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin 



Re: IP ranges, re- announcing 'PA space' via BGP, etc

2006-04-07 Thread Stephen Sprunk


Thus spake "Alexander Koch" <[EMAIL PROTECTED]>

On Fri, 7 April 2006 07:03:09 -0400, Patrick W. Gilmore wrote:

Can you give us some examples so us "dumb Americans" can more
precisely explain the problem? :)


When a random customer (content hoster) asks you to accept
something out of 8/8 that is Level(3) space, and there is no
route at this moment in the routing table, do you accept it,
or does Level(3) have some fancy written formal process and
they get approval to do it, etc.?

In Europe we would tell the customer this ain't gonna happen,
as we would re-announce blocks out of 'foreign' LIR allocations
and that is a no-go, unless the holder of that allocation
acks that.


Here, it seems that some ISPs will accept foreign PA routes* _as long as the 
customer is still connected to that other provider_.  Some won't under any 
circumstances.  The remainder aren't filtering their downstreams and have no 
clue what they are providing transit for (i.e. it's the customer's job to 
get it right).


If you do decide to accept a foreign PA route, you need to be very careful 
to point out to the customer (a) some people will filter your route for 
being too long and send their traffic to the owning provider, and (b) if the 
other provider doesn't announce the longer prefix in addition to their 
aggregate, anyone who accepts the longer route will send traffic only to you 
due to longest match.  Both cases can result in suboptimal routing.


The correct** solution is to help them become an LIR, assuming they qualify.

S

* meaning a route for part of another ISP's aggregate
** for some values of "correct"

Stephen Sprunk"Stupid people surround themselves with smart
CCIE #3723   people.  Smart people surround themselves with
K5SSS smart people who disagree with them."  --Aaron Sorkin 



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-06 Thread Stephen Sprunk


Thus spake "Daniel Golding" <[EMAIL PROTECTED]>

On 3/6/06 10:25 AM, "Stephen Sprunk" <[EMAIL PROTECTED]> wrote:

So, unless there's policy change, most end-user orgs will have no
choice but to pay the market rate for IPv4 addresses.  Spot markets
are good when demand is elastic, but we're faced with a market that
has growing inelastic demand that will outstrip fixed supply in a
decade.  Capitalism doesn't handle that well.


There will be an average cost per host to transition from v4 to v6.
When the cost of IPv4 addresses exceeds the transition cost, then you
have the one thing missing from IPv6 discussions: an ROI.


Please quantify the cost of not being able to multihome your 
mission-critical business.  Compare to the cost of obtaining an IPv4 PI 
block.  Both are likely to exceed the possible revenue for small businesses 
at some point not too far off.


IPv6 is not a replacement for IPv4 today; it's less attractive for a number 
of reasons, and running out of IPv4 addresses will only solve one (maybe 
two) of the problems.



Many organizations wont even look at this without an ROI. Folks who
want to see v6 adopted would be well advised to support the creation
of a hard ROI through these means.


That'd be interesting to see, but there's just too many variables we don't 
(and can't) have numbers for yet.  Maybe it'd be a useful exercise to at 
least identify what needs to be quantified...



ARIN (and/or RIPE, APNIC) should really use a bit of their budget
surplus to provide a few grants to economics professors who are experts
in commodity market issues. As engineers, we grope in the dark
concerning fairly well established scientific principles we are unfamiliar
with. It's like reinventing the wheel. :(


That would require the RIRs to admit that IP addresses are marketable 
commodities, which is something that, to date, they have refused to do.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: Welcome back, Ma Bell

2006-03-06 Thread Stephen Sprunk


Thus spake "Justin M. Streiner" <[EMAIL PROTECTED]>

On Mon, 6 Mar 2006, Christian Kuhtz wrote:
That being said, the 'new ATT' with all those assets will need to be 
integrated, and work efficiently.  Turf battles will ensue.  Tens of


Integration, going on past experience, is highly unlikely.  ... It's the
same phenomenon of having 37 different numbers to call to get
anything done at $RBOC, none of which are connected to each other. If 
their phone tree is that disorganized, I have little reason to suspect

the underlying support systems are any different, nor will they be
under the SB^H^H^HNew AT&T.


I was helping $RBOC roll out a new service, and their project schedule had 
six months allocated to determining which billing system (out of a dozen or 
so) would be used and another six months to determine what the pricing 
model would be.  It's now a year later, and last I heard they still haven't 
figured out either one.  But the technology works great...


Integration?  How are they going to do that when the first thing management 
does is lay off all the people who understand how all the systems work? 
It's all the peons can do to keep the mess running; there's nobody left to 
integrate anything and get the "synergistic cost savings" that management 
touts when they propose mergers.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-06 Thread Stephen Sprunk


Thus spake "Eliot Lear" <[EMAIL PROTECTED]>

Stephen Sprunk wrote:

Shim6 is an answer to "what kind of multihoming can we offer to sites
without PI space?"; it is yet to be seen if anyone cares about the
answer to that question.


This argument is circular.  The only real way to test demand is to offer
a service and see if customers bite.


I'm not a fan of "build it and they will come" engineering.  One might first 
talk to customers and see if your proposal is laughed at, at least.  So far, 
that's the most charitable reaction I've seen to shim6.


It's also not encouraging that many of the folks working on shim6 happen to 
have PI blocks themselves (despite not qualifying as LIRs); I'm also not a 
fan of "it's good enough for everyone else, but not good enough for me."


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-06 Thread Stephen Sprunk


Thus spake <[EMAIL PROTECTED]>

Let's face it, IPv6 is close enough to IPv4 that any
attempt to put a price on IPv4 addresses will simply
cause a massive migration to free and plentiful IPv6
addresses.


You assume that there will be a source of free and plentiful IPv6 addresses. 
AFAIK, none of them are rent-free, and they're not even available unless you 
have the clue and resources to pretend to be an LIR.


So, unless there's policy change, most end-user orgs will have no choice but 
to pay the market rate for IPv4 addresses.  Spot markets are good when 
demand is elastic, but we're faced with a market that has growing inelastic 
demand that will outstrip fixed supply in a decade.  Capitalism doesn't 
handle that well.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG

2006-03-06 Thread Stephen Sprunk


Thus spake "Tony Li" <[EMAIL PROTECTED]>

Stephen Sprunk wrote:

Who exactly has been trying to find scalable routing solutions?


Well, for the last decade or so, there's been a small group of us who
have been working towards a new routing architecture.  Primary
influences in my mind are Chiappa, O'Dell, Hain, Hinden, Nordmark,
Atkinson, Fuller, Huston, Rekhter and Meyer.  Apologies to any folks
that feel I've incorrectly included or excluded them.  ;-)


And my apologies for not recognizing the work that y'all have done; my point 
was that none of this seems to be officially supported by the IETF, and thus 
hasn't borne much fruit.  I've seen a few proposals by folks listed above, 
and it seems to be old drafts (or even napkins) that get trotted out when 
this discussion comes up.  If there's work actively going on today, it's not 
well publicized.



IPv6 advocates have been pushing a no-PI model for over a decade
because that's what ISPs told them to do.


More accurately, the routing community has been trying to avoid PI
addressing simply because of the scalability (and thus cost) concerns.


s/routing/ISP/ and I'd agree with that.  The IETF has virtually no 
enterprise representation, and those folks (a) have a lot more routers than 
the ISPs, and (b) pay the ISPs' bills.


I agree that PI has scaling/cost problems, but so far all of the 
alternatives presented by the IETF present worse problems _in the eyes of 
the people that pay the bills_.  That doesn't mean the latter are right, but 
their views should not be taken lightly.



When they found end users didn't like that, they went off and
developed what has become shim6 as a poor apology. There has
never been any significant work done on replacing CIDR with
something that scales better.


More accurately, that work (GSE/8+8) was slapped down politically
without due technical consideration.


Correction noted.


Note that replacing CIDR isn't exactly the point.  The point is to have
something that scales.  Where CIDR can't cope is exactly when we
come to multihoming.  When multihoming was a rare exception, the
small number of PI prefixes remained tolerable.  However, over time,
the continued growth in multihoming, even solely as a function of the
growth of the Internet, will come to dominate the cost structure of
the routing subsystem.


I'm not sure I agree with that.  The ISPs out there have tens of prefixes 
each even in v6 land (and hundreds in v4 land), whereas the goal is to have 
one per end site.  Until the number of multihomed end sites exceeds the 
number of ISPs by several orders of magnitude, the impact on the routing 
table will be non-dominant though certainly also non-trivial.


Also, consider how easy it is to do PI-based multihoming in v4: all you need 
is a couple pipes (or tunnels), an ASN, and enough hosts to justify a /24. 
If you believed all the chicken littles, this would leave us with millions 
of v4 PI routes and the DFZ would be in ruins, yet only a few hundred people 
have taken ARIN up on that offer.  In short, implementation of PI-based 
multihoming has ground almost to a halt even under today's liberal policies.


Now, given the floodgates are open for v4 and all we see is a trickle of 
water, why is everyone running around screaming that the sky will fall if we 
do something similar for v6?  Do we have any evidence at all that 
multihoming growth will outpace Moore and this whole debate is even 
relevant?



The only alternative to a PI-like architecture is to provide multihomed
sites with multiple prefixes, none of which need to be globally
disseminated.  Making this multiple prefix architecture work was the
charter of the multi6 group.  This was constrained in interesting ways,
as both NAT box solutions were considered politically unacceptable, as
was changing the core functionality of the v6 stack (i.e., redefining
the TCP pseudoheader).  Given these constraints, it was somewhat
unsurprising that NAT got pushed into the host.

From my perspective, we have now explored the dominant quadrants of the
solution space and various factions have vociferously denounced all
possible solutions.  You'll pardon me if some of us are feeling just a
tad frustrated.


I think we're all a bit frustrated at this point.

However, I think we haven't adequately explored several ideas that allow PI 
space for all that need it yet don't require carrying all those routes in 
every DFZ router or schemes that do away with our current idea of the DFZ 
entirely.  The solution space is a lot bigger than the few corners that 
we've explored over and over.



Every such proposal I've seen has been ignored or brushed aside by
folks who've been doing CIDR for their entire careers and refuse to
even consider that anything else might be better.


More accurately, the folks that have been CIDR advocates ...

Re: shim6 @ NANOG

2006-03-05 Thread Stephen Sprunk


Thus spake "Joe Abley" <[EMAIL PROTECTED]>
Was it not the lack of any scalable routing solution after many years  of 
trying that led people to resort to endpoint mobility in end  systems, à 
la shim6?


Who exactly has been trying to find scalable routing solutions?

IPv6 advocates have been pushing a no-PI model for over a decade because 
that's what ISPs told them to do.  When they found end users didn't like 
that, they went off and developed what has become shim6 as a poor apology. 
There has never been any significant work done on replacing CIDR with 
something that scales better.  Every such proposal I've seen has been 
ignored or brushed aside by folks who've been doing CIDR for their entire 
careers and refuse to even consider that anything else might be better.


All this time, energy, and thought spent on shim6 would have been better 
spent on a scalable IDR solution.  Luckily, we still have another decade or 
so to come up with something.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-04 Thread Stephen Sprunk


Thus spake "Mark Newton" <[EMAIL PROTECTED]>

On Fri, Mar 03, 2006 at 09:50:55PM +0100, Iljitsch van Beijnum wrote:
> On 3-mrt-2006, at 21:43, Brandon Ross wrote:
> >What's worse is that unless people start changing their tune soon
> >and make the ownership of IP space official, this will be a black
> >market (like it is now, just much bigger).
>
> But that will end as soon as interdomain routing is protected by
> certificates given out by the RIRs.

No, it'll end as soon as those certificates become mandatory.

Which will, in my humble estimation, happen at some point near the
year 4523.


I agree that RIR certs will never become truly mandatory, but it'll be a 
Good Idea(tm) to have one to prevent hijacking.


However, some bright accountant at a big telco is going to figure out it's 
not RIR certs they want to see -- they'll want to issue their own certs to 
squeeze revenue from non-customers.  "You want to buy transit from our peers 
instead of us?  That's great.  But, if you want reliable access to our 
customers from your PI block, you have to pay $100/mo for a routing slot." 
Bingo, the swamp problem becomes self-correcting.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG

2006-03-03 Thread Stephen Sprunk


Thus spake "Tony Li" <[EMAIL PROTECTED]>

I'm more confident that we'll find an answer
to the IDR problem sooner than we'll convince people to act in the good
of the community at their own expense.


The solution to the IDR problem is to have a scalable routing
architecture.  Unfortunately, that involves change from the status quo,
and thus altruistic action.


Not if/when folks understand that the implosion is imminent and the only way 
to preserve their business is to build a better routing architecture.  Only 
when self-interest and altruism are coincident is the latter consistently 
achieved.



The alternative, of course, is to wait for IDR to implode and let the
finger-pointing begin.


... which is what I expect to happen.  A few folks will see it coming, 
design a fix, and everyone will deploy it overnight when they discover they 
have no other choice.  Isn't that about what happened with CIDR, in a 
nutshell?


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: absense of multicast deployment

2006-03-03 Thread Stephen Sprunk


Thus spake "Joe Abley" <[EMAIL PROTECTED]>

On 3-Mar-2006, at 11:48, Stephen Sprunk wrote:
That depends on your perspective.  There's a compelling need for  usable 
multicast in many environments, and so far there's nobody  (in the US) 
with a compelling need for IPv6, much less shim6.


If there's such a compelling need for native multicast, why has it  seen 
such limited deployment, and why is it available to such a tiny 
proportion of the Internet?


Just because it's not widely available on the public network doesn't mean 
that it's not widely available on private networks connected to the public 
one.  There are tens of millions of users out there with access to Cisco 
IP/TV, Real, etc. over multicast, not to mention custom business apps 
(particularly common in the securities world) that use multicast.  They're 
self-contained, though, so you don't see the packets/users or even know 
they're out there.


I'm not terribly surprised the public Internet doesn't have real mcast yet; 
the cost to build replicating unicast servers is paid by content sources 
while the cost to deploy PIM SSM is paid by another, and as such the cheaper 
alternative doesn't necessarily win.  In a private network, one org can see 
the total costs for both and pick whichever one makes more sense.


If anything, it's in ISPs' interests to keep things unicast since there's 
more bits to bill for.  At least until someone figures out how to bill for 
the traffic exiting the network at the other end (and that still leaves a 
problem for peering).


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG

2006-03-03 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

Man, I hope I never become as cynical as you.


A pessimist is never disappointed.


On 2-mrt-2006, at 11:09, Stephen Sprunk wrote:
Why is it even remotely rational that a corporate admin trust 100k+ 
hosts infested with worms, virii, spam, malware, etc. to handle 
multihoming decisions?


They trust those hosts to do congestion control too, which is even  more 
important.


No, they don't.  That's why nearly every enterprise has deployed intradomain 
QoS of some sort.


Nearly everyone doing VoIP has to use QoS to prevent hosts (with "congestion 
control") from messing with their voice traffic.  Others have had to deploy 
it to prevent non-mission-critical (or even prohibited) apps from 
interfering with mission-critical stuff.  I had one customer that had to 
implement QoS on their entire WAN just to keep Outlook and web access from 
starving out their serial-over-X.25-over-IP business application.


The people who pay for the network want to have control over it.


Especially when we don't even have a sample of working code today?


The IAB goes out of its way to solicit input on ongoing work, and now  you 
whine about lack of working code?


I'm not whining (at least I don't think so), but I think it's very premature 
to talk about shim6 as the solution to IPv6 multihoming when it's not a 
deployable solution or even a fully specified one.


Now, some may take that as a sign the IETF needs to figure out how  to 
handle 10^6 BGP prefixes...  I'm not sure we'll be there for a  few years 
with IPv6, but sooner or later we will, and someone needs  to figure out 
what the Internet is going to look like at that point.


It won't look good. ISPs will have to buy much more expensive  routers. At 
some point, people will start to filter out routes that  they feel they 
can live without and universal reachability will be a  thing of the past.


That's one possible end case.  The other is that all of this is a tempest in 
a teapot and the growth of IPv6 PI routes will continue to be non-dominant 
just as PI is with IPv4.  As others have noted, one prefix per ASN (which is 
the goal of IPv6 PI policy) is nowhere near enough to create a problem 
unless there's a serious explosion in ASN assignment.  The policies for IPv4 
are pretty darn lax, so if we don't have a problem today, why do people 
think we'll have a problem with stricter policies on the IPv6 side?


And I'm the cynic...

It will be just like NAT: every individual problem will be solvable,  but 
as an industry, or even a society, we'll be wasting enormous  amounts of 
time, energy and money just because we didn't want to bite  the bullet 
earlier on.


People pay what they perceive to be the lowest cost to themselves; so far, 
NAT has that honor.  I'm more confident that we'll find an answer to the IDR 
problem sooner than we'll convince people to act in the good of the 
community at their own expense.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: 2005-1, good or bad? [Was: Re: Shim6 vs PI addressing]

2006-03-03 Thread Stephen Sprunk
8 by ARIN even  though 
their IPv6 policy (still) says:


[wait wait wait until I fall back to IPv4 because www.arin.net is 
currently unreachable over IPv6]


"6.4.3. Minimum Allocation
RIRs will apply a minimum size for IPv6 allocations, to facilitate 
prefix-based filtering.


The minimum allocation size for IPv6 address space is /32."

The ISP that I used at the time installed prefix length filters 
accordingly so I couldn't reach the F root server over IPv6.


Moral of the story: if you build in a way for people to screw up,  they'll 
do it. After that, they'll start throwing out some babies  with the bath 
water.


There's a different policy for IPv6 microallocations, and your ISP messed up 
by not noticing it.  Not surprising given how little time and attention 
folks have been spending on IPv6 to date.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: Shim6 vs PI addressing

2006-03-03 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 1-mrt-2006, at 18:05, David Barak wrote:

Is it easier to scale N routers, or scale 1*N hosts?

...
2 x relatively small is a lot less than 10 x relatively large. Or, in other
words: on the host you only pay if you actually communicate. In
routers, you pay more as there is more routing information, whether
the extra information is used or not.


OTOH, hosts go a lot longer between upgrades and generally don't have 
professional admins.  It'll be a long, long time (if ever) until shim6 is 
deployed widely enough for folks to literally bet their company on 
host-based multihoming.



If we simply moved to an "everyone with an ASN
gets a /32" model, we'd have about 30,000 /32s.  It
would be a really long time before we had as many
routes in the table as we do today, let alone the
umpteen-bazillion routes which scare everyone so
badly.


1. We've already walked the edge of the cliff several times (CIDR had  to 
be implemented in a big hurry, later flap dampening and prefix  length 
filtering were needed)


At least this time we know what the likely problems are, and we can build in 
safeguards that can be quickly implemented if we get too close to the edge. 
Not that I agree we'll even get there...



2. We'll have to live with IPv6 a long time


Perhaps.  I know the goal was for it to last 100 years, but what technology 
has ever lasted that long without significant improvements that altered it 
almost beyond recognition?



3. Route processing and FIB lookups scale worse than linear


With an mtrie+ FIB, routing lookups scale far, far better than linear.  What 
matters is the length of the prefix being matched, not how many there are.


TCAMs scale linearly, provided you can build them big enough (and costs 
certainly aren't linear).
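To make the scaling claim concrete, here is a toy longest-prefix-match trie in Python (illustrative only, nothing like production router code): a lookup walks at most 32 bits, so cost is bounded by prefix length no matter how many prefixes are installed.

```python
# Toy longest-prefix-match trie (illustrative only, not router code).
# The point: a lookup walks at most 32 bits -- cost is bounded by
# prefix length, not by how many prefixes are installed.

class TrieNode:
    def __init__(self):
        self.children = [None, None]
        self.route = None  # next-hop if a prefix ends at this node

def insert(root, prefix, plen, route):
    node = root
    for i in range(plen):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.route = route

def lookup(root, addr):
    node, best = root, None
    for i in range(32):  # bounded by address width, not table size
        bit = (addr >> (31 - i)) & 1
        if node.children[bit] is None:
            break
        node = node.children[bit]
        if node.route is not None:
            best = node.route  # remember the longest match so far
    return best

root = TrieNode()
insert(root, 0x0A000000, 8, "via A")   # 10.0.0.0/8
insert(root, 0x0A010000, 16, "via B")  # 10.1.0.0/16
print(lookup(root, 0x0A010203))        # 10.1.2.3 -> via B (longest match)
```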


4. If the global routing table meltdown happens, it will be extremely 
costly in a short time
5. Even if the meltdown doesn't happen, a smaller routing table makes
everything cheaper and gives us more implementation options (a 5000-entry
TCAM is nice, 500,000 entries not so much, as it basically uses 100 times
as much power)


Agreed.


6. Moore can't go on forever, there are physical limitations


Every time folks claim that, someone makes a breakthrough that continues the 
curve.  Surely we can't count on this forever, but so far money has 
consistently trumped "physical limitations".


But the most important thing we should remember is that currently, 
routing table growth is artificially limited by relatively strict 
requirements for getting a /24 or larger. With IPv6 this goes away,  and 
we don't know how many people will want to multihome then.


The requirements for getting a /24 are pretty darn lax, actually, and the 
current proposals for PI space being debated within ARIN are significantly 
more restrictive.


The reality today is that v4 routing tables are well within our capabilities 
and growing slowly.  If we were on the verge of another serious problem, 
like we were when the CIDR fire drill happened, ISPs could easily cut the 
tables in half simply by filtering prefixes longer than RIR minima.
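Such a filter is mechanically trivial; a toy illustration in Python (the /24 minimum and the route strings are made up for the example):

```python
import ipaddress

# Toy DFZ cleanup: keep only prefixes no longer than the RIR minimum.
# The /24 cutoff and the routes below are made up for the example.
RIR_MIN = 24

routes = [
    "198.51.100.0/24",   # at the minimum: kept
    "203.0.113.128/25",  # deaggregate: dropped
    "192.0.2.0/28",      # leaked more-specific: dropped
]

kept = [r for r in routes
        if ipaddress.ip_network(r).prefixlen <= RIR_MIN]
print(kept)  # ['198.51.100.0/24']
```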


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-03 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 3-mrt-2006, at 17:04, Stephen Sprunk wrote:
Keep in mind that current RIR allocations/assignments are  effectively 
leases (though the RIRs deny that fact) and, like any  landlord, they can 
refuse to renew a lease or increase the rent at  any point.


I can only imagine the fun the lawyers are going to have with this:

1. Get address space from Internic, no questions asked
2. ARIN is formed and starts making policies that say address space  isn't 
owned

3. ARIN never enforces these no ownership policies (that I know of)
4. ARIN tries to take away the addresses

That's the best advertisement IPv6 could ever hope for: "no lawyers!"


Thanks for silently snipping the paragraph that partially answered that.

There may be some legal battles over it, but since the orgs have no records 
of ever purchasing those legacy addresses, it's hard to claim true 
ownership -- not that one could easily establish owning a number even with a 
bill of sale.


My guess is we'll continue to grandfather them forever, but RIR policy will 
change to requiring orgs to start paying rent on them in order to receive 
any new assignments (either v4 or v6).  Wait a few years, and we can reclaim 
most of the space without the lawyers being able to interfere.


v6 does have an advantage (to the RIRs) of not having legacy issues, but 
that's a disadvantage for the orgs getting space.  Consider that the vast 
majority of orgs with multiple legacy swamp allocations haven't traded them 
in for a rent-free CIDR one; part of that is inertia, but part is the risk 
that doing so will more likely expose them to rent in the future.


So even if it's  free, deploying IPv6 today isn't all that useful.  But 
when you're the  last one running IPv4, you'll really want to  move over 
to IPv6, even  if it's very expensive.


Ah, but why?  As long as IPv4 has similar or better performance 
characteristics to IPv6, why would anyone _need_ to migrate?  Add  to 
that the near certainty that vendors will create NAT devices  that will 
allow an entire v4 enterprise to reach the v6 Internet...


Don't they teach you IPv6 network design in CCIE school?


There weren't CCIE schools back when I got mine, but my understanding is 
that the ones today still don't teach anything (or at least anything useful) 
about IPv6.



Once you've worked with link local addressing/routing and generating
addresses from EUI-64s you never want to go back to the tedious
address and subnet management that's necessary in IPv4.


When you're using RFC1918 space, as nearly all leaf orgs do today, subnet 
assignment isn't tedious: just give every VLAN a /24 or so and be done with 
it; similar to assigning /64s.  Maintaining DHCP servers sucks, but it's an 
accepted cost that doesn't amount to much in the budget since they're 
already paid for (or free with your routers).
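The two numbering plans really are the same mental model; a quick sketch with Python's ipaddress module (documentation prefixes used as placeholders):

```python
import ipaddress

# "A /24 or so per VLAN" out of RFC 1918 space...
v4_site = ipaddress.ip_network("10.0.0.0/8")
v4_vlans = v4_site.subnets(new_prefix=24)
print(next(v4_vlans))  # 10.0.0.0/24 -- first VLAN, and so on

# ...is the same plan as "a /64 per VLAN" out of a site /48.
v6_site = ipaddress.ip_network("2001:db8:1::/48")
v6_vlans = v6_site.subnets(new_prefix=64)
print(next(v6_vlans))  # 2001:db8:1::/64
```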


I agree that IPv6 is better from this perspective, but unless one is 
building out a greenfield network, the transition cost is higher than the 
cost of status quo.  Just upgrading all those L3 switches to v6-capable 
models will cost large enterprises tens of millions of dollars (and don't 
say regular upgrade cycles will fix that, as obsolete equipment just moves 
out of the core to other places).



So building boxes just so you can stick to IPv4 when the rest of the
world is already on IPv6 seems a bit backward to me.


It's not a matter of building boxes: all that needs to happen is for Cisco 
to release an upgrade for PIX (ditto for other vendors) that is free with a 
maintenance contract, and every enterprise will be doing it overnight. 
What's to stop the vendors from doing it?  All it takes is one big (or 
several small) RFP(s) asking for the feature, and it'll be there.


Since you can't express the IPv6 address space in the IPv4 address  space 
(the reverse is easy and available today), the translation  needs to 
happen a bit higher in the stack.


Off-the-cuff solution: translate all incoming v6 addresses to temporary v4 
addresses (172.16/12 will do nicely).  You'll need to intercept DNS, but 
most NAT devices do that today anyways for other reasons.
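That off-the-cuff scheme amounts to a mapping table at the border; a minimal sketch in Python (illustrative only -- a real translator would also rewrite packet headers and DNS answers and expire idle mappings):

```python
import ipaddress

class V6ToV4Mapper:
    """Hand each external IPv6 peer a temporary IPv4 address out of
    172.16.0.0/12.  Illustrative only: a real translator would also
    rewrite packet headers and DNS answers and expire idle mappings."""

    def __init__(self, pool="172.16.0.0/12"):
        self._free = ipaddress.ip_network(pool).hosts()
        self._map = {}

    def temp_v4(self, v6_addr):
        v6 = ipaddress.ip_address(v6_addr)
        if v6 not in self._map:
            self._map[v6] = next(self._free)  # StopIteration if pool empty
        return str(self._map[v6])

m = V6ToV4Mapper()
a = m.temp_v4("2001:db8::1")
b = m.temp_v4("2001:db8::2")
print(a, b)                           # 172.16.0.1 172.16.0.2
print(m.temp_v4("2001:db8::1") == a)  # mapping is stable: True
```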



When I was testing running IPv6-only I installed an Apache 2 proxy in
order to reach the IPv4 web from my IPv6-only system. But it worked
the other way around too, of course: using the proxy, I could visit
sites over IPv6 with IPv4-only systems.


Which supports my point: why upgrade when you can proxy / translate / 
whatever for (almost) free?  Especially when you're using 10/8 internally 
and thus will never directly feel any v4 exhaustion pain?


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: Shim6 vs PI addressing

2006-03-03 Thread Stephen Sprunk


Thus spake "Todd Vierling" <[EMAIL PROTECTED]>

On Wed, 1 Mar 2006, Iljitsch van Beijnum wrote:


3. Route processing and FIB lookups scale worse than linear



6. Moore can't go on forever, there are physical limitations


The funny part:  Those on this list who have cited Moore's Law don't
seem to have an understanding that it does not directly apply to
custom routing logic (since general-purpose CPUs are no longer fast
enough to do the lookups on the high end).  In addition, GP CPUs
are no longer scaling exponentially, but rather closer to quadratically
and approaching linear.

In short, Moore's Law is dying,


Moore's Law says nothing about performance; it only refers to transistor 
densities.  In fact, current CPUs are still following the predicted curve, 
but they're turning fewer and fewer of those transistors into actual 
performance improvements.  That's what the move to dual-core is about: 
finding more productive ways to use the wealth of transistors now available.


However, I agree that custom logic for routers does not necessarily follow 
the same curve; the volume is still low enough that vendors can't (or don't) 
use the best processes available.  Heck, even the best available main CPUs 
are several years behind what's available in the PC market (why ship a 2GHz 
CPU when you can ship a 500MHz one at ten times the price?).



and even if it weren't, it is not a valid argument for "let the swamp in".


One of the key attributes of the v4 swamp is that most orgs got more than 
one assignment (aka routing slot), often dozens to hundreds; the proposed 
policies for a "v6 swamp" do not allow that.


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-03 Thread Stephen Sprunk


Thus spake "Tony Li" <[EMAIL PROTECTED]>

Marshall,


That's after 6 years.

I would be surprised if Shim6 going into actual deployed boxes was any
faster.  So, if Shim6 was finalized today, which it won't be, in 2010 we
might have 70% deployment and in 2012 we might have 90% deployment.

I actually think that 2012 would be a more realistic date for 70%
deployment of Shim6, given the lack of running code and a finalized
protocol now.

In my opinion, that doesn't imply that Shim6 should be abandoned. But it
does mean IMHO that regarding it as a
means to spur IPv6 deployment is just not realistic.


Sorry, but I'm just not buying the analogy.  The market drivers for IGMP
are somewhat smaller than they are for IPv6.


That depends on your perspective.  There's a compelling need for usable 
multicast in many environments, and so far there's nobody (in the US) with a 
compelling need for IPv6, much less shim6.



Yes, it would take a couple of years for Shim6 to be implemented and
depending on where we hit Redmond's release cycle, actually
penetrate a significant number of hosts.


Shim6 needs to be finalized first, then someone has to convince MS to 
implement it.  I'd put that, conservatively, at 4 years.



6 years is probably long, and definitely long if we get a confluence of
panic about the death of v4 plus a strong endorsement about Shim6
from the IETF.


The most dire predictions of v4's death have it at least 12-15 years away. 
To companies worried about next quarter's profits, you might as well be 
talking about global warming.



Consider that the IETF *could* conceivably require every compliant v6
implementation to include it.  I grant that that's unlikely and some
lesser endorsement is probably more reasonable, but I don't think
that you should underestimate the capability of the IETF/ISP/vendor/
host community to act a bit more quickly, if there is sufficient
motivation.


Without any enforcement powers, an IETF "requirement" is pretty useless. 
Those vendors that care will merely see one more complicated thing they have 
to add to their IPv6 stack and put off adding IPv6 even longer.



I suggest that we compromise, split the difference and swag it at 4 years.


His was a minimum; I'd put the likely number at 4-6 years after shim6 is 
finally published (itself no fixed date), and potentially much longer if 
middlebox support is added (and without which shim6 will certainly never see 
the light of day).


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-03 Thread Stephen Sprunk


Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>

On 3-mrt-2006, at 0:22, Mark Newton wrote:

Right now we can hand them out to anyone who demonstrates a need
for them.  When they run out we'll need to be able to reallocate
address blocks which have already been handed out from orgs who
perhaps don't need them as much as they thought they did to orgs
which need them more.



Sounds like a marketplace to me.  How much do you think a /24 is
worth?  How many microseconds do you think it'll take for members
of each RIR to debate the policy changes needed to alter their
rules to permit trading of IPv4 resource allocations once IANA
says, "No!" for the first time?


This is what I wrote about this a couple of months ago: http://ablog.apress.com/?p=835


An interesting aspect about address trading is that some organizations 
have huge amounts of address space which didn't cost them anything, or at 
least not significantly more than what smaller blocks of address space 
cost others. Having them pocket the proceeds strikes me as rather unfair, 
and also counterproductive because it encourages hoarding. Maybe a system 
where ARIN and other RIRs buy back addresses for a price per bit of 
prefix length rather than per address makes sense.
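The asymmetry between the two buy-back schemes is easy to see with a quick 
calculation (all dollar figures below are hypothetical, chosen only to 
illustrate the shape of each curve):

```python
# Hypothetical buy-back pricing: per-address vs. per-prefix-bit.
# The dollar rates are made up for illustration.

def per_address_price(prefix_len, dollars_per_addr=0.01):
    """Price proportional to the number of addresses in the block."""
    return (2 ** (32 - prefix_len)) * dollars_per_addr

def per_bit_price(prefix_len, dollars_per_bit=1000.0):
    """Price proportional to the prefix size in bits, as suggested above."""
    return (32 - prefix_len) * dollars_per_bit

for plen in (8, 16, 24):
    print(f"/{plen}: per-address ${per_address_price(plen):>12,.2f}  "
          f"per-bit ${per_bit_price(plen):>9,.2f}")
```

Per-address pricing pays a /8 holder 65,536 times what a /24 holder gets; 
per-bit pricing compresses that ratio to 3:1, which is the point of the 
proposal.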


Keep in mind that current RIR allocations/assignments are effectively leases 
(though the RIRs deny that fact) and, like any landlord, they can refuse to 
renew a lease or increase the rent at any point.


There might be some interesting political battles when it comes to legacy 
allocations which are currently rent-free, but those tenants will find 
themselves woefully outnumbered when that day comes.


We'll also have a reasonably good idea of what it'd cost to perform an 
IPv6 migration as we gather feedback from orgs who have actually done it.


I don't think the cost is too relevant (and hard to calculate because a 
lot of it is training and other not easily quantified expenditures); what 
counts is what it buys you. I ran a web bug for a non-networking related 
page in Dutch for a while and some 0.16% of all requests were done over 
IPv6. (That's 1 in 666.) So even if it's free, deploying IPv6 today isn't 
all that useful. But when you're the last one running IPv4, you'll really 
want to move over to IPv6, even if it's very expensive.


Ah, but why?  As long as IPv4 has similar or better performance 
characteristics to IPv6, why would anyone _need_ to migrate?  Add to that 
the near certainty that vendors will create NAT devices that will allow an 
entire v4 enterprise to reach the v6 Internet...
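A translator of the sort anticipated here needs some way to represent v4 
hosts with v6 addresses; one obvious scheme is to embed the 32-bit v4 
address in the low bits of a fixed v6 prefix. A minimal sketch (the prefix 
and function name are invented for illustration, not any vendor's actual 
scheme):

```python
import ipaddress

# Sketch: embed an IPv4 address in the low 32 bits of a fixed IPv6
# prefix, so v4-only hosts can be addressed from the v6 side of a
# translator.  The /96 prefix is a documentation prefix chosen
# arbitrarily for illustration.
PREFIX = ipaddress.ip_network("2001:db8:64::/96")

def embed(v4_addr):
    """Map an IPv4 address into the translation prefix."""
    v4 = int(ipaddress.ip_address(v4_addr))
    return ipaddress.ip_address(int(PREFIX.network_address) + v4)

print(embed("192.0.2.1"))
```

The translator then rewrites headers between the two address families; the 
mapping itself is stateless.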


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin



Re: shim6 @ NANOG (forwarded note from John Payne)

2006-03-02 Thread Stephen Sprunk


Thus spake "Joe Abley" <[EMAIL PROTECTED]>

On 1-Mar-2006, at 11:55, David Barak wrote:

It isn't fearing change to ask the question "it's not
broken today, why should I fix it?"


What's broken today is that there's no mechanism available for people who 
don't qualify for v6 PI space to multi-home. That's what shim6 is trying 
to fix.


Shim6 is an answer to "what kind of multihoming can we offer to sites 
without PI space?"; it is yet to be seen if anyone cares about the answer to 
that question.


The question that folks with money are asking is "how do I ensure that any 
random user can get reliable access to my website", and that's a question 
that the IETF is, in general, uninterested in.


However, it's not hard to find examples in today's v4 Internet where 
reconvergence following a re-homing event can take 30 to 60 seconds to 
occur. In the case where such an event includes some interface flapping, 
it's not that uncommon to see paths suppressed due to dampening for 20-30 
minutes.


That may be acceptable compared to the general limitations of PA space. 
Folks have learned to deal with the limitations of BGP-based redundancy; 
asking them to give those benefits up without substantially greater benefits 
is foolhardy.


I would expect (in some future, hypothetical implementation of shim6) 
the default failure detection timers to start rotating through the 
locator set far sooner than 30-60 seconds.


If we ever see shim6 (or its equivalent) widely deployed...  So far, we 
don't even have simple IPv6 on even a noticeable fraction of end nodes.


Any solution which requires upgrading all the end nodes is a non-starter, 
and the IETF needs to wake up to that fact.  It's taken over a _decade_ for 
simple IPv6 to make it into host stacks, and it's still not viable yet.  No 
host-dependent upgrade will matter to the Internet over the long run.
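For what it's worth, the locator-rotation behavior being debated above can 
be sketched in a few lines (this is a toy illustration of the shim6 idea, 
not the actual protocol; the probe stub, function names, and timer value 
are all invented):

```python
# Toy sketch of shim6-style locator rotation: a host keeps a set of
# locators (one address per upstream's PA block) and fails over to the
# next reachable one when the current path breaks.

KEEPALIVE_TIMEOUT = 3.0   # seconds -- far below BGP's 30-60 s reconvergence

def probe(locator):
    """Placeholder reachability test; a real host would exchange probes."""
    return locator["up"]

def select_locator(locators):
    """Return the first locator that answers a probe, or None."""
    for loc in locators:
        if probe(loc):
            return loc
    return None

locators = [
    {"addr": "2001:db8:a::1", "up": False},   # primary provider's prefix (down)
    {"addr": "2001:db8:b::1", "up": True},    # second provider's prefix
]
active = select_locator(locators)
print(active["addr"])
```

The controversy isn't whether such a loop can be written; it's whether 
every end host can be trusted to run it.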



No; maintain one address per PA netblock on each host.


And so, if I have 6 upstream providers, every one of my hosts has to keep 
track of the outbound policy I want for each?  How exactly am I supposed to 
keep track of that?  Even the outbound policy for a single host (aka 
firewall) is beyond most organizations' capabilities today...


Why is it even remotely rational that a corporate admin trust 100k+ hosts 
infested with worms, virii, spam, malware, etc. to handle multihoming 
decisions?  Especially when we don't even have a sample of working code 
today?  I don't even trust the <5 PCs I have at home to make those kind of 
decisions, much less every PC in my corporate network...


There's a vast difference in impact on the state held in the core between 
deaggregating towards direct peers, and deaggregating towards transit 
providers and having the deaggregated swamp propagated globally.


Obviously, folks differ in their definition of "swamp".

I'd love a world where $large orgs could connect to N providers and not have 
to figure out the vagaries of BGP, but the reality is that if a large 
customer depends on Internet connectivity for their financial health, the 
only answer today (with either v4 or v6) is PI space.


Now, some may take that as a sign the IETF needs to figure out how to handle 
10^6 BGP prefixes...  I'm not sure we'll be there for a few years with IPv6, 
but sooner or later we will, and someone needs to figure out what the 
Internet is going to look like at that point.  If the IETF isn't interested, 
some group of vendors will, if for no other reason than that's what will be 
needed for the vendors to sell routers in a few years.  Is it any surprise 
that $vendor is pushing how many millions of routes they can handle in the 
FIB today?


IPv6 is just a convenient placeholder for all the problems that today's ISPs 
are ignoring about today's Internet.


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin



Re: Transit LAN vs. Individual LANs

2006-02-28 Thread Stephen Sprunk


Thus spake "Scott Weeks" <[EMAIL PROTECTED]>

From: "Stephen Sprunk" <[EMAIL PROTECTED]>

ITYM two big transit LANs -- one must be prepared for a
switch to fail.


These're going to be router-to-router connections (each AR
is connected to both CRs) and I had thought about tying them
all into one VLAN vs. PTP Gig-E.  I was just trying to find
out the operational benefits of either design.


If your physical topology is going to be PTP links, then you should go with 
PTP at the logical level as well.  Making one topology look like another is 
generally a bad idea.


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin



Re: Transit LAN vs. Individual LANs

2006-02-28 Thread Stephen Sprunk


Thus spake "Ejay Hire" <[EMAIL PROTECTED]>

From my perspective...
...a physical mesh requires too many ports to be economical.


But, if one has the money, it's probably the better technical choice.  Since 
his folks are already familiar with having things set up PTP using some 
other physical layer, that also reduces the odds of human error.



...a logical mesh has a couple of things against it.  It
requires a lot of configuration; each router will be
connected with a trunk interface, and (on the antique
switches I've worked with) every trunk will carry all the
traffic in the switch, so your maximum bandwidth across the
whole switch is 1 Gbps, instead of the next option, which
gives you more bandwidth across the switch.


Not true, unless you're using some antique switching gear.  Assuming all 
traffic is up/downstream and not sideways, you can get 4Gb/s in each 
direction (two CRs connected to two switches each).  Whether you break that 
into PTP VLANs or shared VLANs shouldn't affect anything.
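The 4Gb/s figure is just uplink arithmetic, under the stated assumption 
that all traffic is up/downstream:

```python
# Each core router homes one GigE uplink to each of the two transit
# switches; with no sideways (AR-to-AR) traffic, the aggregate is the
# sum of the CR uplinks.
core_routers = 2
switches = 2
link_gbps = 1

aggregate = core_routers * switches * link_gbps
print(aggregate)   # Gb/s in each direction
```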


[ Note that this is moot since the OP responded he's running a physical 
mesh ]


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin



Re: Transit LAN vs. Individual LANs

2006-02-25 Thread Stephen Sprunk


Thus spake "Patrick W. Gilmore" <[EMAIL PROTECTED]>

On Feb 24, 2006, at 9:03 PM, Scott Weeks wrote:

I have 2 core routers (CR) and 3 access routers (AR)
currently connected point-to-point where each AR connects to
each CR for a total of 6 ckts.  Now someone has decided to
connect them with Gig-E.  I was wondering about the benefits
or disadvantages of keeping the ckts each in their own
individual LANs or tying them all into one VLAN for a
"Transit LAN" as those folks that decided on going to Gig-E
aren't doing any logical network architecting (is that a
real word?).


Personally, I like to KISS, so one big 'transit LAN'.


ITYM two big transit LANs -- one must be prepared for a switch to fail.

An argument could be made for individual VLANs to keep things like b-cast 
storms isolated.  But I think the additional complexity will cause more 
problems than it will solve.


If you have broadcast storms on a subnet with five routers and nothing else 
on it, you've got bigger problems than config complexity.



Or maybe I'm just too dumb to keep up with the additional complexity. :)


One must keep in mind that human error is the dominant cause of outages, and 
since there's not likely to be backhoes running around in a data center, 
IMHO the goal should be to remove as many ways as possible that your 
coworkers can muck things up.


I'd go with two plain GigE switches, as dumb as I could find them, barely 
configured or possibly not even managed at all, and one /28 (and one /64) on 
each to allow for adding more ARs later.


There are a few advantages to going with PTP VLANs, such as eliminating 
DR/BDR elections needed on shared ones, but you'd need 10 of them to get a 
full mesh, and 15 if you add one more router.  That's just too much 
complexity for virtually no gain, and as Owen notes, it is generally bad for 
your logical topology to not match the physical one.
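The VLAN counts above are just the full-mesh formula n(n-1)/2:

```python
# Point-to-point VLANs needed for a full mesh of n routers.
def ptp_vlans(n_routers):
    return n_routers * (n_routers - 1) // 2

print(ptp_vlans(5))   # 2 CRs + 3 ARs
print(ptp_vlans(6))   # after adding one more AR
```

Each new router adds n-1 more VLANs, which is why the complexity grows 
faster than the network does.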


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin


