Re: Any sign of supply chain returning to normal?

2022-04-22 Thread George Metz
There's some queue-jumping happening for other reasons -
medical/hospital being a significant portion of that - but even there I'm
hearing 6+ months for some switch hardware and Cisco APs are pretty
uniformly "if you didn't order before March, you won't see them for
over a year".

On Fri, Apr 22, 2022 at 6:29 PM na...@jima.us  wrote:
>
> Anecdotally, I had a pair of Nexus 93180s that I ordered in May 2021 show up 
> in February 2022, so 9 months. The estimated ship date got punted several 
> times (probably due to being preempted by folks employing the approach Laura 
> outlined ;-) ).
>
> I haven't ordered anything since then, but I understand that 4-8 months is 
> still not unexpected.
>
> - Jima
>
> From: NANOG  On Behalf Of Drew Weaver
> Sent: Friday, April 22, 2022 07:24
> To: 'nanog@nanog.org' 
> Subject: Any sign of supply chain returning to normal?
>
> I'm not sure if this is the right place for this discussion but I can't think 
> of anywhere better to ask.
>
> Has anyone seen any progress whatsoever on supply chain issues with 
> networking hardware?
>
> I've noticed that primary market lead times have been increasing while, at 
> the same time, secondary market pricing has also been going higher.
>
> What have you seen?
>
>
>


Re: massive facebook outage presently

2021-10-04 Thread George Metz
Also impacting Instagram and, apparently, WhatsApp.

On Mon, Oct 4, 2021 at 12:05 PM Eric Kuhnke  wrote:
>
> https://downdetector.com/status/facebook/
>
> Normally not worth mentioning random $service having an outage here, but this 
> will undoubtedly generate a large volume of customer service calls.
>
> Appears to be failure in DNS resolution.
>


Re: Something that should put a smile on everybody's face today

2021-04-28 Thread George Metz
Respectfully Mel, the patent with Blackbird may well have been that -
my reading of the past case agrees with yours for the most part - but
the current case is Sable Networks suing Cloudflare over a patent
involving routers. Given the patent involved and the choice of
Cloudflare as a target, this well could snowball into a situation
where ANYONE using a router would be considered to be infringing, and
I submit that such a broad possible hit against the operator community
in general is most certainly a danger that operators should be aware
of, and if possible assist with defeating.

I'm well aware you said you were folding, but I think you were
accidentally looking at only the original case from a couple of years
ago, not the current case that is what brought this up - which is why
a number of us feel it meets the letter of the rules, as well as the
spirit.

George

On Wed, Apr 28, 2021 at 2:26 PM Mel Beckman  wrote:
>
> Bill,
>
> Blackbird chooses its victims based on whether any of a couple dozen vague 
> patents they hold can plausibly be used to extort money out of a victim 
> company. BB doesn’t go after service providers in particular, it just happens 
> to have chosen a service provider (unwisely, it turns out) in this case.
>
> There are no operational issues here. No individual Internet protocol or 
> technology “many of  us use” was named. The patent was invalid on its face, 
> as it only described an abstract idea — “Providing an internet third party 
> data channel” — in the most general terms possible, not as an invention, as 
> required by U.S. patent law.
>
> The only difference between Cloudflare and BB’s other victims was that, rather 
> than compute the instant cost-benefit analysis most companies do (“It will 
> cost us tens of thousands to fight this, but only a few thousand to settle”), 
> Cloudflare valiantly chose to stand on principle, rather than mathematics, 
> and fought the claim. By that simple act, the case by BB was thrown out 
> virtually instantaneously.
>
> Judge Vince Chhabria held that “abstract ideas are not patentable” and 
> Blackbird’s assertion of the patent “attempts to monopolize the abstract idea 
> of monitoring a preexisting data stream between a server and a client” was 
> not an invention. The case was rejected before it started because the court 
> found Blackbird’s patent to be invalid.
>
> The choice to fold or fight in a patent troll battle is clearly a 
> philosophical one, not a network operational decision. Now, rather than 
> lengthen this out-of-policy thread further, I will take the non-valiant 
> “fold” path, and leave the rest of you to your perpetual arguments.
>
>  -mel
>
> On Apr 28, 2021, at 10:41 AM, William Herrin  wrote:
>
> On Wed, Apr 28, 2021 at 10:20 AM Mel Beckman  wrote:
>
> This dispute is no different than if they had gotten into an argument
> over a copier toner scammer.
>
>
> Hi Mel,
>
> If the patents at issue pertained to copier toner I might agree with
> you. They're networking patents purporting to govern technologies many
> if not most of us use.
>
> Regards,
> Bill Herrin
>
>
> --
> William Herrin
> b...@herrin.us
> https://bill.herrin.us/


Re: Myanmar internet - something to think about if you're having a bad day

2021-04-26 Thread George Metz
First you say "not at all" and then you say "stop complying". If your
employees stop complying with the orders coming from the angry men
with guns held to said employees' heads, someone's going to get shot -
and it's going to be the telecom employees. That's significantly more
than a financial hardship and I cannot grasp how you think it could
possibly be otherwise.

On Mon, Apr 26, 2021 at 5:57 PM scott  wrote:
>
>
> On 4/26/2021 11:27 AM, Mel Beckman wrote:
> > Scott, are you saying that employees of Telenor and Ooredoo are 
> > “facilitating violent repression” by following the orders of soldiers 
> > holding guns to their heads?
>
> -
>
> No.  Not at all.  Of course not.  That would be ridiculous.  I meant to
> say,"Myanmar’s two foreign-owned telecom operators, Telenor and
> Ooredoo..." should stop  facilitating the repression by complying
> "...with numerous demands from the military, including instructions to
> cut off the internet each night for the past week, and block specific
> websites, such as Facebook, Twitter and Instagram."  And, yeah, that
> means financial repercussions for the companies.
>
>
> > My understanding of the rules of NANOG is that there is to be no “naming 
> > and shaming”. Please retract your post.
>
> ---
>
> What?  You know folks do that all the time.  Did I miss the change in
> rules?   If it makes you or others feel better...I retract the post.
>
>
> I was having a bad day (Monday) and saw this.  It made me feel better
> about the crap I am going through today and thought it might be the same
> for other ops.  I also found it interesting that they were manipulating
> DNS servers with false IP addresses.  I wonder if the people can use a
> different DNS server than the two ISPs?
>
> scott
>


Re: Broken Mini-SAS cable removal?

2021-04-23 Thread George Metz
One of the best DACs I've ever had - and I wish I could find them or
the manufacturer again - was one with a relatively thick metal T push
bar that you had to push in towards the switch to release the latch.
Almost impossible to break, and nearly as impossible to accidentally
get unplugged.

On Fri, Apr 23, 2021 at 12:20 PM Alain Hebert  wrote:
>
> Hi,
>
> That happened to me more often with the DAC cables I had the displeasure 
> to deal with.
>
> And yeah got old valve gap feeler gauge to the rescue =D
>
> -
> Alain Hebert    aheb...@pubnix.net
> PubNIX Inc.
> 50 boul. St-Charles
> P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
> Tel: 514-990-5911   http://www.pubnix.net   Fax: 514-990-9443
>
> On 4/23/21 11:51 AM, Ryland Kremeier wrote:
>
> Hit the wrong reply button before, but we were able to get it removed by 
> unscrewing the top latch and removing that first at an angle. Then the 
> connector was able to be pulled straight out. Plastic was very thin on the 
> pull tab and it snapped without much resistance.
>
>
>
> Thank you,
>
> -- Ryland
>
>
>
> From: Eric Litvin
> Sent: Friday, April 23, 2021 10:49 AM
> To: Joe Klein
> Cc: nanog@nanog.org
> Subject: Re: Broken Mini-SAS cable removal?
>
>
>
> Joe’s response is spot on. I would also suggest you look at the “latching 
> finger” mechanism on a spare,  then apply some of the techniques Joe suggests.
>
> Eric
> Luma optics
>
>
>
>
> Sent from my iPhone
>
> > On Apr 23, 2021, at 8:27 AM, Joe Klein  wrote:
> >
> > Try shim stock or a feeler gauge between the plug and socket to work the 
> > latching fingers. This isn't something that I've tried specifically in this 
> > case.
> >
> > You might need to put a notch in the stock or feeler gauge so that you can 
> > work the fingers from the backside. Kinda like that old trick of using a 
> > credit card to prise a door latch, except this should work since there's no 
> > deadlatch. :)
> >
> > You might also try gently twisting a small screwdriver or spudger stick 
> > between the plug and socket, too, to increase the gap between the socket 
> > and plug.
> >
> > -joe
> >
> > From: NANOG  On Behalf Of 
> > Ryland Kremeier
> > Sent: Friday, April 23, 2021 09:31
> > To: nanog@nanog.org
> > Subject: Broken Mini-SAS cable removal?
> >
> >
> > Anyone here have experience removing a mini-SAS cable when the plastic tab 
> > has broken off? Tried checking online but couldn't find anything.
> >
> > Thank you,
> > -- Ryland
> >
>
>
>
>


Re: Famous operational issues

2021-02-18 Thread George Metz
Normally I reference this as an example of terrible government
bureaucracy, but in this case it's also how said bureaucracy can delay
operational changes.

I was a contractor for one of the many branches of the DoD in charge
of the network at a moderate-sized site. I'd been there about 4
months, and it was my first job with FedGov. I was sent a pair of
Cisco 6509-E routers, with all supervisors and blades needed, along
with a small mountain of SFPs, to replace the non-E 6509s we had
installed that were still using GBICs for their downlinks. These were
the distro switches for approximately half the site.

Problem was, we needed 84 new SC-LC fiber jumpers to replace the SC-SC
we had in place for the existing switch - GBICs to SFPs remember. We
hadn't received any with the shipment. So I reached out to the project
manager to ask about getting the fiber jumpers. "Oh, that should be
coming from the server farm folks, since it's being installed in a
server farm." Okay, that seems stupid to me, but $FedGov, who knows. I
tell him we're stalled out until we get those cables - we have the
routers configured and ready to go, just need the jumpers, can he get
them from the server farm folks? He'll do that.

It took FIFTEEN MONTHS to hash out who was going to pay for and order
the fiber jumpers. Any number of times as the months dragged on, I
seriously considered ordering them on Amazon Prime using my corporate
card. We had them installed a week and a half after we got them. Why
that long? Because we had to completely reconfigure them, and after 15
months, the urgency just wasn't there.

By the way, the project ended up buying them, not the server farm team.

On Tue, Feb 16, 2021 at 2:38 PM John Kristoff  wrote:
>
> Friends,
>
> I'd like to start a thread about the most famous and widespread Internet
> operational issues, outages or implementation incompatibilities you
> have seen.
>
> Which examples would make up your top three?
>
> To get things started, I'd suggest the AS 7007 event is perhaps  the
> most notorious and likely to top many lists including mine.  So if
> that is one for you I'm asking for just two more.
>
> I'm particularly interested in this as the first step in developing a
> future NANOG session.  I'd be particularly interested in any issues
> that also identify key individuals that might still be around and
> interested in participating in a retrospective.  I already have someone
> that is willing to talk about AS 7007, which shouldn't be hard to guess
> who.
>
> Thanks in advance for your suggestions,
>
> John


Re: FCC Hurricane Michael after-action report

2019-05-14 Thread George Metz
There's more to it than this too. I was down there (I have sites I'm
responsible for in Panama City Beach) in February and I was talking to a
bunch of folks in the area as a result. This storm was fairly unusual for
the area for a number of reasons. One, it normally doesn't hit the
panhandle at anywhere near a category 5, and two, it was still a high
category 3 by the time it hit Georgia. The amount of damage done was
immense, is still not cleaned up (I drove past multiple buildings that were
still piles of rubble, 4 months after the storm), and I was seeing forests
full of damaged and destroyed trees all the way to I-10.

All in all, the vast majority of Panama City looked much more like 4 months
after a tornado than a hurricane, and all that damage continued all
the way into Georgia. Thinking this was just like any other hurricane to
hit the area is the absolute wrong tack to take - from what I heard there
was some discussion of whether it was worth it to reopen Tyndall AFB,
because the only thing left standing was some WWII era bomb-proof concrete
hangars.

On the flip side, improvements in response are a good thing - as long as
people aren't beating up on the people who did the responding in the first
place without cause.

On Tue, May 14, 2019 at 9:52 AM Rich Kulawiec  wrote:

> On Mon, May 13, 2019 at 11:48:02PM -0500, frnk...@iname.com wrote:
> > One of my takeaways from that article was that burying fiber underground
> > could likely have avoided many/most of these fiber cuts, though I'm
> > not familiar enough with the terrain to know how feasible that is.
>
> I suspect that may not be possible in (parts of) Florida.
>
> However, even in places where it's possible, fiber installation is
> sometimes miserably executed.  Like my neighborhood.  A couple of
> years ago, Verizon decided to finally bring FIOS in.  They put in the
> appropriate calls to utility services, who dutifully marked all the
> existing power/cable/gas/etc. lines and then their contractors (or
> sub-sub-contractors) showed up.
>
> The principal outcome of their efforts quickly became clear, as one
> Comcast cable line after another was severed.   Not a handful, not even
> dozens: well over a hundred.  They managed to cut mine in three places,
> which was truly impressive.  (Thanks for the extended outage, Verizon.)
> After this had gone on for a month, Comcast caught on and took the
> expedient route of just rolling a truck every morning.  They'd park at
> the end of the road and just wait for the service calls that they knew
> were coming.  Of course Comcast's lines were not the only victims of
> this incompetence and negligence.  Amusingly, sometimes Verizon had to
> send its own repair crews for their copper lines.
>
> There's a lot more but let me skip to the end result.  After inflicting
> months of outages on everyone, after tearing up lots of lawns, after all
> of this, many of the fiber conduits that are allegedly underground: aren't.
>
> ---rsk
>


Re: Waste will kill ipv6 too

2017-12-20 Thread George Metz
I think he's referring to all the Unicast IPv6 outside of 2000::/3 getting
designated as "reserved", and therefore no gear will ever successfully
route it... just like happened with the Class E space.

You'd think we would know better than to let that happen, but there's a lot
of things you'd think we would know better than to let happen, and yet it
still happens, with dreary regularity.
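
(For scale, a quick sketch using Python's stdlib ipaddress module - the prefix
is the real IANA global unicast block, the rest is just arithmetic - showing
how much of the space sits outside 2000::/3 today:)

    import ipaddress

    global_unicast = ipaddress.ip_network("2000::/3")  # the block currently allocated from
    total_addresses = 2 ** 128                          # the whole IPv6 space

    # 1/8 of the space is in play; the other 7/8 is "reserved" by default
    print(global_unicast.num_addresses / total_addresses)      # 0.125
    print(1 - global_unicast.num_addresses / total_addresses)  # 0.875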

On Wed, Dec 20, 2017 at 7:14 PM,  wrote:

> On Wed, 20 Dec 2017 18:15:44 -0500, Joe Maimon said:
>
> > There is plenty more to wonder about, for example, will the rest of the
> > unicast space get Class E'd?
>
> That's a non-starter, as pretty much all the gear out there has code that
> says "Class E is reserved" (including gear that's *already* doing production
> IPv6).  If you're going to upgrade everything *anyhow*, deploying IPv6 has
> better bang for the buck than Class E support.
>


Re: Why the US Government has so many data centers

2016-03-14 Thread George Metz
Datacenter isn't actually an issue since there's room in the same racks
(ironically, where the previous fileservers were) as the Domain
Controllers and WAN Accelerators. Based on the "standard" (per the Windows
admins) file storage space of 700 meg, that sounds like 3TB for user
storage. Even if it were 30TB, I still can't see a proper setup costing
more than the OC-12 after a period of two years.

Org is within the Federal Government, so they're not allowed to buy
non-top-line anything. I agree we should check how much bandwidth is
storage, but since there's a snowball's chance in hell of them actually
making a change, it's almost certainly not worth the paperwork.
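
(Back-of-the-envelope on that storage figure - a rough sketch, assuming the
Windows admins' 700 meg per user and the roughly 4,000 seats mentioned below:)

    seats = 4000        # approximate seat count at the site
    per_user_mb = 700   # "standard" per-user allotment, per the Windows admins

    total_tb = seats * per_user_mb / 1_000_000  # MB -> TB, decimal units
    print(total_tb)                             # 2.8, i.e. the ~3TB above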

On Mon, Mar 14, 2016 at 1:28 PM, George Herbert 
wrote:

>
> At enterprise storage costs, that much storage will cost more than the
> OC-12, and then add datacenter and backups.  Total could be 2-3x OC-12
> annual costs.
>
> If your org can afford to buy non-top-line storage then it would probably
> be cheaper to go local.
>
> However, you should check how much of the bandwidth is actually storage.
> I see multimillion dollar projects without basic demand / needs analysis or
> statistics more often than not.
>
>
> George William Herbert
> Sent from my iPhone
>
> > On Mar 14, 2016, at 10:01 AM, George Metz  wrote:
> >
> >> On Mon, Mar 14, 2016 at 12:44 PM, Lee  wrote:
> >>
> >>
> >> Yes, *sigh*, another what kind of people _do_ we have running the govt
> >> story.  Altho, looking on the bright side, it could have been much
> >> worse than a final summing up of "With the current closing having been
> >> reported to have saved over $2.5 billion it is clear that inroads are
> >> being made, but ... one has to wonder exactly how effective the
> >> initiative will be at achieving a more effective and efficient use of
> >> government monies in providing technology services."
> >>
> >> Best Regards,
> >> Lee
> >
> > That's most likely an inaccurate cost-savings figure, though; it probably
> > doesn't take into account the impact of the consolidation on other items. As a
> > personal example, we're in the middle of upgrading my site from an OC-3 to
> > an OC-12, because we're running routinely at 95+% utilization on the OC-3
> > with 4,000+ seats at the site. The reason we're running that high is
> > because several years ago, they "consolidated" our file storage, so instead
> > of file storage (and, actually, dot1x authentication though that's
> > relatively minor) being local, everyone has to hit a datacenter some 500+
> > miles away over that OC-3 every time they have to access a file share. And
> > since they're supposed to save everything to their personal share drive
> > instead of the actual machine they're sitting at, the results are
> > predictable.
> >
> > So how much is it going to cost for the OC-12 over the OC-3 annually? Is
> > that difference higher or lower than the cost to run a couple of storage
> > servers on-site? I don't know the math personally, but I do know that if we
> > had storage (and RADIUS auth and hell, even a shell server) on site, we
> > wouldn't be needing to upgrade to an OC-12.
>


Re: Why the US Government has so many data centers

2016-03-14 Thread George Metz
On Mon, Mar 14, 2016 at 12:44 PM, Lee  wrote:

>
> Yes, *sigh*, another what kind of people _do_ we have running the govt
> story.  Altho, looking on the bright side, it could have been much
> worse than a final summing up of "With the current closing having been
> reported to have saved over $2.5 billion it is clear that inroads are
> being made, but ... one has to wonder exactly how effective the
> initiative will be at achieving a more effective and efficient use of
> government monies in providing technology services."
>
> Best Regards,
> Lee
>

That's most likely an inaccurate cost-savings figure, though; it probably
doesn't take into account the impact of the consolidation on other items. As a
personal example, we're in the middle of upgrading my site from an OC-3 to
an OC-12, because we're running routinely at 95+% utilization on the OC-3
with 4,000+ seats at the site. The reason we're running that high is
because several years ago, they "consolidated" our file storage, so instead
of file storage (and, actually, dot1x authentication though that's
relatively minor) being local, everyone has to hit a datacenter some 500+
miles away over that OC-3 every time they have to access a file share. And
since they're supposed to save everything to their personal share drive
instead of the actual machine they're sitting at, the results are
predictable.

So how much is it going to cost for the OC-12 over the OC-3 annually? Is
that difference higher or lower than the cost to run a couple of storage
servers on-site? I don't know the math personally, but I do know that if we
had storage (and RADIUS auth and hell, even a shell server) on site, we
wouldn't be needing to upgrade to an OC-12.
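
(For anyone checking the math at home - a rough sketch using nominal SONET
line rates, ignoring overhead and any proper demand analysis:)

    OC3_MBPS = 155.52    # nominal OC-3 line rate
    OC12_MBPS = 622.08   # nominal OC-12 line rate
    seats = 4000         # per the 4,000+ figure above

    # Raw per-seat share of the pipe, in kbps
    print(OC3_MBPS * 1000 / seats)   # ~38.9 kbps/seat, hence the congestion
    print(OC12_MBPS * 1000 / seats)  # ~155.5 kbps/seat after the upgrade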


Re: Another Big day for IPv6 - 10% native penetration

2016-01-04 Thread George Metz
On Mon, Jan 4, 2016 at 9:37 PM, Randy Bush  wrote:

> the more interesting question to me is: what can we, ops and ietf, do
> to make it operationally and financially easier for providers and
> enterprises to go to ipv6 instead of ipv4 nat?  carrot not stick.
>
> randy
>

The problem is, the only way to make it easier for providers and
enterprises to switch is to make it less scary looking and less complicated
sounding. That door closed when it was decided to go with hex and 128-bit
numbering. *I* know it's not nearly as bad as it seems and why it was done,
and their network folks by and large know it's not as bad as it seems, but
the people making the decisions to spend large sums of money upgrading
stuff that works just fine thank-you-very-much are looking at it and saying
"Ye gods... I sort of understand what IP means but that looks like an alien
language!"

At which point the ugly duckling gets tossed out on its ear before it has
a chance to become a swan.
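
(To be fair to the duckling, the notation is mostly bark - a two-line sketch
with Python's ipaddress module and the 2001:db8::/32 documentation prefix:)

    import ipaddress

    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.compressed)  # 2001:db8::1 - what you actually type
    print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001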


Re: Re: SEC webpages inaccessible due to Firefox blocking servers with weak DH ciphers

2015-07-18 Thread George Metz
Federal government lands on you like a sack of bricks if you don't provide
this information through their (in)secure website. No exceptions.

Sometimes you can't fire the vendor because they're not a vendor, they're a
freaking regulatory agency with the power to crush you like a bug, and a 5
year approval process to get anything done, never mind a month turnaround
for a recently discovered exploit.

On Fri, Jul 17, 2015 at 10:50 PM,  wrote:

> Weak ciphers? Old (insecure) protocol versions? Open security issues?
> Vendor
> will never provide a patch? Trash goes in the trash bin, no exceptions.
>


Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread George Metz
On Wed, Jul 15, 2015 at 2:11 PM, Doug Barton  wrote:

> On 7/15/15 8:20 AM, George Metz wrote:
>
>>
>>
Snip!


> Also, as Owen pointed out, the original concept for IPv6 networking was a
> 64 bit address space all along. The "extra" (or some would say, "wasted")
> 64 bits were tacked on later.
>
>> Still oodles of addresses, but worth
>> noting and is probably one reason why some of the "conservationists"
>> react the way they do.
>>
>
> It's easy to look at the mandatory /64 limit and say "See, the address
> space is cut in half to start with!" but it's not accurate. Depending on
> who's using it a single /64 could have thousands of devices, up to the
> limit of the broadcast domain on the network gear. At minimum even for a
> home user you're going to get "several" devices.


Allow me to rephrase: "A single /32 could have thousands of devices, up to
the limit of a 10/8 NATted behind it". This, plus the fact that it WAS
originally 64-bit and was expanded to include RA/SLAAC, is why I chose that
analogy.
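
(Putting numbers on that analogy - a sketch comparing the two "buckets":)

    import ipaddress

    nat_pool = ipaddress.ip_network("10.0.0.0/8")    # what one NATted v4 /32 can hide
    one_lan = ipaddress.ip_network("2001:db8::/64")  # the minimum SLAAC subnet

    print(nat_pool.num_addresses)  # 16777216 (~16.8M)
    print(one_lan.num_addresses)   # 18446744073709551616 (2**64)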


>> Next, let's look at the wildest dreams aspect. The current
>> "implementation" I'm thinking of in modern pop culture is Big Hero 6
>> (the movie, not the comics as I've never read them). Specifically,
>> Hiro's "microbots". Each one needs an address to be able to communicate
>> with the controller device. Even with the numbers of them, can probably
>> be handled with a /64, but you'd also probably want them in separate
>> "buckets" if you're doing separated tasks. Even so, a /48 could EASILY
>> handle it.
>>
>
> Right, 65k /64s in a /48.
>
>> Now make them the size of a large-ish molecule. Or atom. Or protons.
>> Nanotech or femtotech that's advanced enough gets into Clarke's Law -
>> any sufficiently advanced technology is indistinguishable from magic -
>> but in order to do that they need to communicate. If you think that
>> won't be possible in the next 30 years, you probably haven't been paying
>> attention.
>>
>
> I do see that as a possibility, however in this world that you're
> positing, how many of those molecules need to talk to the big-I Internet?
> Certainly they need to communicate internally, but do they need routable
> space? Also, stay tuned for some math homework. :)


So, you're advising that all these trillions of nanites should, what, use
NAT? Unroutable IP space of another kind? Why would we do that when we've
already got virtually unlimited v6 address space?

See what I mean? Personally I'd suspect something involving quantum states
would be more likely for information passage, but who knows what the end
result is?


>> I wrote my email as a way of pointing out that maybe the concerns (on
>> both sides) aren't baseless,
>>
>
> Please note that I try very hard not to dismiss anyone's concerns as
> baseless, whether I agree with them or not. As I mentioned in my previous
> message, I believe I have a pretty good understanding of how the "IPv6
> conservationists" think. My concern however is that while their concerns
> have a basis, their premise is wrong.


I wasn't intending you as the recipient, keep in mind. However, IS
their premise wrong? Is prudence looking at incomprehensible numbers and
saying "we're so unlikely to run out that it just doesn't matter" or is
prudence "Well, we have no idea what's coming, so let's be a little less
wild-haired in the early periods"? The theory being it's a lot harder to
take away that /48 30 years from now than it is to just assign the rest of
it to go along with the /56 (or /52 or whatever) if it turns out they're
needed. I personally like your idea of reserving the /48 and issuing the
/56.


> So you asked an interesting question about whether or not we NEED to give
> everyone a /48. Based on the math, I think the more interesting question
> is, what reason is there NOT to give everyone a /48? You want to future
> proof it to 20 billion people? Ok, that's 1,600+ /48s per person. You want
> to future proof it more to 25% sparse allocation? Ok, that's 400+ /48s per
> person (at 20 billion people).
>
> At those levels even if you gave every person's every device a /48, we're
> still not going to run out, in the first 1/8 of the available space.
>
>> Split the difference, go with a /52
>>
>
> That's not splitting the difference. :)  A /56 is half way between a /48
> and a /64. That's 256 /64s, for those keeping score at home.
>

It's splitting the difference between a /56 and a /48. I can'

Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread George Metz
Reasonability, like beauty, is in the eye of the beholder, but I thank you
for the compliment. :)

The short answer is "yes, that constitutes being prudent". The longer
answer is "it depends on what you consider the wildest dreams".

There's a couple of factors playing in. First, look at every /64 that is
assigned as an IPv4 /32 that someone is running NAT behind. This is flat
out WRONG from a routing perspective, but from an allocation perspective,
it's very much exactly what's happening because of SLAAC and the 48-bit MAC
address basis for it. Since /64 is the minimum, that leaves us with less
than half of the available bit mask in which to hand out that 1/8th of the
address space. Still oodles of addresses, but worth noting, and probably
one reason why some of the "conservationists" react the way they do.

Next, let's look at the wildest dreams aspect. The current "implementation"
I'm thinking of in modern pop culture is Big Hero 6 (the movie, not the
comics as I've never read them). Specifically, Hiro's "microbots". Each one
needs an address to be able to communicate with the controller device. Even
with the numbers of them, can probably be handled with a /64, but you'd
also probably want them in separate "buckets" if you're doing separated
tasks. Even so, a /48 could EASILY handle it.

Now make them the size of a large-ish molecule. Or atom. Or protons.
Nanotech or femtotech that's advanced enough gets into Clarke's Law - any
sufficiently advanced technology is indistinguishable from magic - but in
order to do that they need to communicate. If you think that won't be
possible in the next 30 years, you probably haven't been paying attention.

However, that's - barring a fundamental breakthrough - probably a decade or
two off. Meanwhile we've got connected soda cans to worry about.

I wrote my email as a way of pointing out that maybe the concerns (on both
sides) aren't baseless, but at the same time maybe there's a way to split
the difference. It's not too much of a stretch to see that, soon, 256
subnets may not actually be enough to deal with the connected world and
"Internet of Things" that's currently being developed. But would 1024? How
about 4096? Is there any need in the next 10-15 years for EVERYONE to be
getting handed 65,536 /64 subnets? Split the difference, go with a /52 and
suddenly you've got FOUR THOUSAND subnets for individual users so that
their soda cans can tell the suspension on their car that it's been opened
and please smooth out the ride.
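
(The subnet counts in question, for the record - one line of arithmetic:)

    for plen in (56, 52, 48):
        print(f"/{plen}: {2 ** (64 - plen):,} /64 subnets")  # 256 / 4,096 / 65,536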

Frankly, both sides seem intent on overkill in their preferred direction,
and it's not particularly hard to meet in the middle.

On Tue, Jul 14, 2015 at 8:38 PM, Doug Barton  wrote:

> On 7/14/15 6:23 AM, George Metz wrote:
>
>> It's always easier to be prudent from the get-go than it is to rein in the
>> insanity at a later date. Just because we can't imagine a world where IPv6
>> depletion is possible doesn't mean it can't exist, and exist far sooner
>> than one might expect.
>>
>
> I've been trying to stay out of this Nth repetition of the same
> nonsensical debate, since neither side has anything new to add. However
> George makes a valid point, which is "learn from the mistakes of the past."
>
> So let me ask George, who seems like a reasonable fellow ... do you think
> that creating an addressing plan that is more than adequate for even the
> wildest dreams of current users and future growth out of just 1/8 of the
> available space (meaning of course that we have 7/8 left to work with
> should we make a complete crap-show out of 2000::/3) constitutes being
> prudent, or not?
>
> And please note, this is not a snark, I am genuinely interested in the
> answer. I used to be one of the people responsible for the prudent use of
> the integers (as the former IANA GM) so this is something I've put a lot of
> thought into, and care deeply about. If there is something we've missed in
> concocting the current plan, I definitely want to know about it.
>
> Even taking into account some of the dubious decisions that were made 20
> years ago, the numbers involved in IPv6 deployment are literally so
> overwhelming that the human brain has a hard time conceiving of them.
> Combine that with the conservation mindset that's been drilled into
> everyone regarding IPv4 resources, and a certain degree of over-enthusiasm
> for conserving IPv6 resources is understandable. But at the same time,
> because the volume of integers is so vast, it could be just as easy to slip
> into the early-days v4 mindset of "infinite," which is why I like to hear a
> good reality check now and again. :)
>
> Doug
>
> --
> I am conducting an experiment in the efficacy of PGP/MIME signatures. This
> message should be signed. If it is not, or the signature does not validate,
> please let me know how you received this message (direct, or to a list) and
> the mail software you use. Thanks!
>
>


Re: Dual stack IPv6 for IPv4 depletion

2015-07-14 Thread George Metz
That's all well and good Owen, and the math is compelling, but 30 years ago
if you'd told anyone that we'd go through all four billion IPv4 addresses
in anyone's lifetime, they'd have looked at you like you were stark raving
mad. That's what's really got most of the people who want more restrictive
(dare I say saner?) allocations to be the default concerned; 30 years
ago the math for how long IPv4 would last would have been compelling as
well, which is why we have the entire Class E block just unusable and large
blocks of IP address space that people were handed for no particular reason
than it sounded like a good idea at the time.

It's always easier to be prudent from the get-go than it is to rein in the
insanity at a later date. Just because we can't imagine a world where IPv6
depletion is possible doesn't mean it can't exist, and exist far sooner
than one might expect.
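
(For what it's worth, Owen's arithmetic below does check out, taking his POP
and per-person figures as given - a quick sketch:)

    slash16s_in_2000_3 = 2 ** (16 - 3)     # 8,192 /16s inside 2000::/3
    end_sites_per_isp = 25 * 5_000_000     # 25 POPs at 5M end-sites each
    total_end_sites = 7_000_000_000 * 32   # 7B people, 32 end-sites apiece

    print(slash16s_in_2000_3)                   # 8192
    print(total_end_sites / end_sites_per_isp)  # 1792.0 /16s consumed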

On Tue, Jul 14, 2015 at 12:22 AM, Owen DeLong  wrote:

> How so?
>
> There are 8192 /16s in the current /3.
>
> ISPs with that many pops at 5,000,000 end-sites per POP, even assuming 32
> end-sites per person
> can’t really be all that many…
>
>
> 25 POPS at 5,000,000 end-sites each is 125,000,000 end-sites per ISP.
>
> 7,000,000,000 * 32 = 224,000,000,000 end-sites; 224,000,000,000 /
> 125,000,000 = 1,792 total /16s consumed.
>
> Really, if we burn through all 8,192 of them in less than 50 years and I’m
> still alive
> when we do, I’ll help you promote more restrictive policy to be enacted
> while we
> burn through the second /3. That’ll still leave us 75% of the address
> space to work
> with on that new policy.
>
> If you want to look at places where IPv6 is really getting wasted, let’s
> talk about
> an entire /9 reserved without an RFC to make it usable or it’s partner /9
> with an
> RFC to make it mostly useless, but popular among those few remaining NAT
> fanboys. Together that constitutes 1/256th of the address space cast off to
> waste.
>
> Yeah, I’m not too worried about the ISPs that can legitimately justify a
> /16.
>
> Owen
>
> > On Jul 13, 2015, at 16:16 , Joe Maimon  wrote:
> >
> >
> >
> > Owen DeLong wrote:
> >> JimBob’s ISP can apply to ARIN for a /16
> >
> > Like I said, very possibly not a good thing for the address space.
>
>