Re: 10 years from now... (was: internet futures)

2021-03-29 Thread Michael Thomas



On 3/29/21 11:36 AM, Matt Erculiani wrote:



We might be talking a lot more about RPKI as it becomes compulsory, 
maybe 400G transit links will start being standard across the 
industry. If we're lucky (or unlucky, depending on how you look at it) 
maybe a whole new routing protocol will be introduced and rapidly gain 
popularity.


One interesting observation is that QUIC has the potential to open the 
floodgates for new purpose-built transport protocols for things other 
than http that have their own requirements. It also shows that a new 
transport can navigate the problem of needing new kernel code and of 
firewalls that block unknown (to them) IP protocol numbers. It's my 
guess that those were what really sank SCTP. Another thing that is 
coming up is that with increasingly high bandwidth, the TCP checksum is 
showing its age and we'd probably like to leverage crypto-grade hashes 
instead of being at the mercy of a 40 year old algorithm.
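
For context on just how weak that 40 year old algorithm is, here's a
minimal sketch of the RFC 1071 Internet checksum that TCP still uses: a
16-bit one's-complement sum, so (for example) reordering the 16-bit
words of a segment doesn't change the result at all.

def internet_checksum(data: bytes) -> int:
    # RFC 1071: one's-complement sum of 16-bit words, then complemented
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# swapping 16-bit words produces the same checksum -- a collision by design
assert internet_checksum(b"ABCD") == internet_checksum(b"CDAB")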


Mike



Re: OT: Re: Facebook and other walled gardens

2021-03-22 Thread Michael Thomas



On 3/22/21 11:41 AM, William Herrin wrote:

On Mon, Mar 22, 2021 at 10:23 AM Andy Ringsmuth  wrote:

No. Use a communication method that is available globally, not proprietary and 
doesn’t require me to sell my soul to the devil simply to participate.

Hi Andy,

I refused to get a Facebook account until I was paid to. Now that I
have one, I wonder why I bothered. I isolate it in its own browser
profile so it can't snoop the rest of my web activity and I gave it an
alias email address that only they have. I mostly  control what
information I give them. I like having an effortless way to keep up
with my extended friends and family. In spite of that, I was surprised
how good a job Facebook did targeting ads to my interests -- the
knight hoodies were just too cool.


Air-gapping the browsers is certainly best, but Firefox has been really 
clamping down on super-cookies and the like. It will be interesting to 
see what the overall effect is.


My husband and friends have been having good laughs lately about some of 
the weird targeted ads they get. They are definitely not perfect. I use 
Facebook Purity and don't see their ads at all.


Mike


Re: OT: Re: Younger generations preferring social media(esque) interactions.

2021-03-23 Thread Michael Thomas



On 3/23/21 2:55 PM, Grant Taylor via NANOG wrote:

On 3/23/21 1:40 PM, Michael Thomas wrote:
The big problem with mailing lists is that they screw up security by 
changing the subject/body and breaking DKIM signatures.


What you are describing is a capability, configuration, execution 
issue with the mailing list manager software.


Said another way, what you are describing is *NOT* a problem with the 
concept of mailing lists.


MLMs can easily receive messages -- after their MTA imposes all 
germane filtering -- and generate /new/ but *completely* *independent* 
messages substantially based on the incoming message's content.  These 
/new/ messages come /from/ /the/ /mailing/ /list/!  Thus the mailing 
list operators can leverage all the aforementioned security / safety 
measures for the mailing list.
But they still have the originating domain's From: address. Using MLM 
signatures as a means of doing a reputation check is a previously 
unsolved problem, hence the silliness of the ARC experiment, which 
relies on the same assumption you are making here. Since Google 
participated in ARC, that is a pretty tacit admission they don't know 
how to do mailing list reputation either.


SPF / DKIM / DMARC are meant to enable detection (and optionally 
blocking) of messages that do not come from their original source. 
Mailing lists are inherently contrary to this.  But the mailing list 
can be a /new/ source.
The sticking point is the From: address. If I set up a DMARC p=reject 
policy, I should not be surprised that the receiver does what I asked 
and trashes mailing list traffic. The point in my blog post is that 
after over 15 years a solution is not going to be found, and trust me, 
I tried back in the day. We should just give up caring about mailing 
list traversal and put the burden on MLMs to figure it out, either by 
not changing the message body/subject or by using that horrible hack 
of rewriting the From: address.
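
For anyone who hasn't seen it, the rewriting hack amounts to something
like this (a rough sketch, not any particular MLM's actual code; the
names and domains are made up):

from email.message import EmailMessage
from email.utils import parseaddr

def rewrite_from_for_dmarc(msg: EmailMessage, list_domain: str) -> None:
    # replace the author's From: with an address in the list's own domain
    # so the list's re-signed message can pass DMARC alignment
    name, addr = parseaddr(msg["From"])
    local, _, author_domain = addr.partition("@")
    msg["Reply-To"] = addr  # keep the author reachable
    del msg["From"]
    msg["From"] = f'"{name or local} via list" <{local}={author_domain}@{list_domain}>'

msg = EmailMessage()
msg["From"] = "Jane Doe <jane@example.com>"   # hypothetical author
msg["To"] = "list@lists.example.net"          # hypothetical list
msg.set_content("hello")
rewrite_from_for_dmarc(msg, "lists.example.net")
print(msg["From"])  # "Jane Doe via list" <jane=example.com@lists.example.net>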


This makes companies leery of setting the signing policy to reject 
which makes it much easier for scammers to phish.


Hence, having the mailing list send out /new/ messages with /new/ 
protection measures means less breakage for people that send messages 
to the mailing list.


Mailing lists have been sending out resigned messages for over a decade. 
We still have really low adoption of the p=reject signing policy, and at 
least part of the problem is fear of mailing lists affecting users.




Treating the mailing list as its own independent entity actually 
enables overall better security.


Aside:  It is trivial to remove things that cause heartburn (DKIM) 
/after/ NANOG's SMTP server applies filtering /before/ it goes into 
Mailman.



An unsigned message is treated the same as a broken signature. That 
doesn't help from the From: signing policy standpoint.


Mike


Re: OT: Re: Younger generations preferring social media(esque) interactions.

2021-03-24 Thread Michael Thomas



On 3/24/21 5:38 PM, Bryan Fields wrote:

On 3/23/21 8:04 PM, Michael Thomas wrote:

This has the unfortunate downside of teaching people not to pay
attention to the From: domain. For mailing lists maybe that's an OK
tradeoff, but it's definitely not a good thing overall. I noticed that the
IETF list does From: re-writing for DMARC domains that are p=reject.

This is another reason why DMARC is a shitty solution.

NANOG will rewrite the From: as well in this case.

What's your solution to phishing then? FWIW, nanog doesn't alter 
messages. All lists have the option to follow suit.


Mike



Re: OT: Re: Younger generations preferring social media(esque) interactions.

2021-03-23 Thread Michael Thomas



On 3/23/21 1:44 AM, Mikael Abrahamsson via NANOG wrote:

On Mon, 22 Mar 2021, Grant Taylor via NANOG wrote:

If it's the latter, does that mean that you have to constantly keep 
changing /where/ messages are sent to in order to keep up with the 
latest and greatest or at least most popular (in your audience) 
flavor of the day / week / month / year social media site?


All good questions. I've been using IRC+email for 25+ years now and 
from what I can see, IRC has been replaced by slack/discord etc, and 
email has been replaced by Reddit or Github Issues discussions etc. I 
was on a project where the mailing list was shut down and all further 
discussions were pushed to github instead.


I personally think the "web forum" format is inferior but that might 
be a way to reach out as well...


The big problem with mailing lists is that they screw up security by 
changing the subject/body and breaking DKIM signatures. This makes 
companies leery of setting the signing policy to reject which makes it 
much easier for scammers to phish. The Nanog list is something of an 
outlier in that they don't do modifications and the DKIM signature survives.


I wrote a piece about this a while back that companies should just set 
p=reject and ignore the mailing list problem.


https://rip-van-webble.blogspot.com/2020/12/are-mailing-lists-toast.html
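
The receiver-side logic being argued about is roughly this (a minimal
sketch with a hypothetical domain and a stubbed DNS lookup, not a full
DMARC implementation):

def lookup_txt(name: str) -> str:
    # stand-in for a real DNS TXT query
    records = {"_dmarc.example.com": "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"}
    return records.get(name, "")

def dmarc_disposition(from_domain: str, dkim_aligned: bool, spf_aligned: bool) -> str:
    record = lookup_txt(f"_dmarc.{from_domain}")
    tags = dict(kv.strip().split("=", 1) for kv in record.split(";") if "=" in kv)
    if dkim_aligned or spf_aligned:
        return "accept"
    return tags.get("p", "none")  # none / quarantine / reject

# A list that alters the body (breaking DKIM) but keeps the author's From:
# leaves the receiver with no aligned authentication at all:
print(dmarc_disposition("example.com", dkim_aligned=False, spf_aligned=False))  # reject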

Mike




Re: OT: Re: Younger generations preferring social media(esque) interactions.

2021-03-23 Thread Michael Thomas



On 3/23/21 4:34 PM, Grant Taylor via NANOG wrote:

On 3/23/21 4:16 PM, Michael Thomas wrote:

But they still have the originating domain's From: address.


My opinion is that messages from the mailing list should not have the 
originating domain in the From: address.  The message from the mailing 
list should be from the mailing list's domain.


This has the unfortunate downside of teaching people not to pay 
attention to the From: domain. For mailing lists maybe that's an OK 
tradeoff, but it's definitely not a good thing overall. I noticed that the 
IETF list does From: re-writing for DMARC domains that are p=reject.





Don't try to graft "can I trust what the mailing list purports or not" 
question onto the problem.  Simplify it to "does this message (from 
the mailing list) pass current best practice security tests or not".  
Notice how the second question is the same question that is already 
being posed about all email (presuming receiving server is doing so).



That is the essence of the problem and always has been. If somebody 
resigns an altered message, does that change my decision of what to do 
in the face of DMARC p=reject? That means I need to know something about 
that mailing list if the answer is yes. Best practices have nothing to 
do with it. It is all about reputation. A message mangler can be Lawful 
Evil, after all.




Since Google participated in ARC, that is a pretty tacit admission 
they don't know how to do mailing list reputation either.


IMHO ARC has at least a priming / bootstrapping problem.  How does a 
receiver know whether it can trust the purported information it receives 
from the sending system?  Simply put, it doesn't.  Hence why I 
think that ARC, as I understand it, is going to fail to thrive.


I went back to the DMARC mailing list wondering what magic ARC 
provided that we didn't think about 15 years ago, only to be disappointed 
that the answer was "none". I really don't understand how this got past 
IESG muster, but it was an experiment.





I personally believe that the mailing list manager, or better its 
underlying SMTP server infrastructure, should uphold strict 
requirements on the incoming messages.  Only clean messages should be 
emitted from the mailing list manager.  Further, those messages should 
themselves adhere to the same high security standards.


Yes, I think that's a given and feeds into their reputation.




Think about it this way:  Is there really anything (of significant 
value) different between a mailing list manager and a person (or other 
form of automation) receiving a message from a mailbox, copying and 
pasting it (work with me here) into a new message and sending it 
$NumberOfSubscribers times per message to the mailing list?  --  I 
don't think there is.


From the standpoint of the receiving domain, it has no clue who mangled 
the original message. The only thing they know is that there isn't a 
valid signature from the originating domain and what the originating 
domain's advice is for that situation.





What would you want SPF / DKIM / DMARC to do if I took a message from 
you (directly vs passing through the mailing list manager) and changed 
the recipient(s) and re-sent it out to one or more other people?  -- 
I'd wager a reasonable lunch that most people would want SPF / DKIM / 
DMARC to detect and possibly thwart such forwarding.  --  So why is a 
mailing list held to different (lower) standards?


This is the so-called replay attack. It's nonsense. Email has always 
been essentially multicast.



An unsigned message is treated the same as a broken signature. That 
doesn't help from the From: signing policy standpoint.


The original From: signature should have been validated, weighted, and 
judged /before/ it made it to the mailing list manager. Further, the 
mailing list manager should have removed any reference to the original 
signature.  


Signatures shouldn't be removed: a broken signature is identical to a 
missing signature security-wise, but broken signatures can be used for 
forensics. I, for example, could reconstruct a very large percentage of 
mailing list messages to unbreak signatures. It was to the point that it 
was quite tempting to use that approach to deal with MLM traversal.
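
A toy sketch of that forensic idea, with a hypothetical subject tag and
footer and a stubbed-out verifier (a real attempt would use an actual
DKIM verifier and respect the signature's canonicalization rules):

def verify_dkim(message: bytes) -> bool:
    return False  # stand-in; a real version would call an actual DKIM verifier

def candidates(message: bytes, tag=b"[list-name] ", footer=b"\n-- \nList footer\n"):
    yield message                                  # maybe nothing was changed
    untagged = message.replace(b"Subject: " + tag, b"Subject: ", 1)
    yield untagged                                 # subject tag stripped
    if footer in untagged:
        yield untagged.replace(footer, b"", 1)     # footer stripped as well

def try_unbreak(message: bytes) -> bool:
    # try the plausible reconstructions until one verifies
    return any(verify_dkim(c) for c in candidates(message))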


Mike


Re: OT: Re: Facebook and other walled gardens

2021-03-22 Thread Michael Thomas



On 3/22/21 9:02 AM, Grant Taylor via NANOG wrote:

On 3/22/21 8:00 AM, Mike Hammett wrote:

most discussion in the WISP space has moved to Facebook


So ... a walled garden.

I have a severe problem with professional communities /requiring/ me 
to have a Facebook, et al., account to participate in community 
discussions.


What am I supposed to do if I don't have / can't get / want to avoid 
$WalledGardenInQuestion?  Am I forced to choose to break down and get 
an account with $WalledGardenInQuestion XOR not participate in said 
professional community?


What happens when $WalledGardenInQuestion changes policies or 
otherwise disagrees with something or is decides to ban the country 
where I'm at?


I think that such walled gardens are a Bad Idea™.



That's especially true since their moderation AIs are terrible and 
arbitrary and absolutely not up to the task. Getting put into FB jail 
for something outside of work affecting you at work is not OK. And since 
they use browser fingerprinting, etc., having two separate accounts while 
in FB jail looks like ban evasion to them and will get you site banned.


Mike



Re: Texas ERCOT power shortages (again) April 13

2021-04-15 Thread Michael Thomas



On 4/14/21 7:00 AM, Brian Johnson wrote:
There is no profit motive for a non-profit company. It’s completely 
relevant to your response.



This is patently absurd. It's an industry group/organization. Its 
raison d'être is to serve its industry, which definitely has a profit 
motive. That, and even non-profits have a profit motive to stay afloat. 
See the NRA for one that has gone terribly wrong.


Mike



Re: Malicious SS7 activity and why SMS should never by used for 2FA

2021-04-18 Thread Michael Thomas
I wonder how much of this is moot because the amount of actual SS7 is 
low and getting lower every day. Aren't most "SMS" messages these days 
just SIP MESSAGE transactions, or maybe they use XMPP? As I understand 
it, a lot of the cell carriers are using SIPoLTE directly to your phone.


Mike

On 4/18/21 8:24 AM, Mel Beckman wrote:
Although NIST “softened” its stance on SMS for 2FA, it’s still a bad 
choice for 2FA. There are many ways to attack SMS, not the least of 
which is social engineering of the security-unconscious cellular 
carriers. The bottom line is, why use an insecure form of 
communication for 2FA at all? Since very good hardware-token-quality 
OTP apps are freely available, why be so lazy as to implement 2FA 
using radically insecure SMS?
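
For what it's worth, the kind of OTP app being referred to is usually
just RFC 6238 TOTP, which fits in a few lines; the secret below is a
made-up test value, not anyone's real credential:

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation, RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same code an authenticator app shows for this test secret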


Your argument that 2FA is only meant to “enhance” the security of a 
memorized password is just wrong. 2FA is meant as a /bulwark /against 
passwords that very often are disclosed by data breaches, through no 
fault of the password owner. 2FA enhances nothing. It guards against 
the abject security failures of others.


Consider this sage advice from 2020, long after NIST caved to industry 
pressure on its recommendations.


https://blog.sucuri.net/2020/01/why-2fa-sms-is-a-bad-idea.html 



  -mel

On Apr 18, 2021, at 8:02 AM, William Herrin wrote:


On Sun, Apr 18, 2021 at 7:32 AM Mel Beckman wrote:
SMS for 2FA is not fine. I recommend you study the issue in more 
depth. It’s not just me who disagrees with you:


https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html 



Mel,

That Schneier article is from 2016. The 3/2020 update to the NIST
recommendation (four years later and the currently active one) still
allows the use of SMS specifically and the PSTN in general as an out
of band authenticator in part of a two-factor authentication scheme.
The guidance includes a note explaining the social engineering threat
to SMS authenticators: "An out of band secret sent via SMS is received
by an attacker who has convinced the mobile operator to redirect the
victim’s mobile phone to the attacker."

https://pages.nist.gov/800-63-3/sp800-63b.html#63bSec8-Table1 



The bottom line is that an out-of-band authenticator like SMS is meant
to -enhance- the security of a memorized secret authenticator, not
replace it. If properly used, it does exactly that. If misused, it of
course weakens your security.

Regards,
Bill Herrin



--
William Herrin
b...@herrin.us
https://bill.herrin.us/




Re: Texas internet connectivity declining due to blackouts

2021-02-16 Thread Michael Thomas



On 2/16/21 3:05 AM, Jared Mauch wrote:
Almost exactly 4 years ago we were out up here in Michigan for over 
120 hours after a wind storm took out power to 1 million homes. Large 
scale restoration takes time. When the load and supply are imbalanced 
it can make things worse as well.


I'm hoping things return to normal soon but also am reminded it can 
take some time.


We now have a large generator with automatic switchover after that 
event. Filling gas cans every 12 hours to feed the generator is no fun.


We use propane. It's less dense energy-wise than gasoline, but it's 
really easy to switch over.


Mike



Re: Texas internet connectivity declining due to blackouts

2021-02-16 Thread Michael Thomas



On 2/16/21 8:50 AM, John Von Essen wrote:
I just assumed most people in Texas have heat pumps- AC in the summer 
and minimal heating in the winter when needed. When the entire state 
gets a deep freeze, everybody is running those heat pumps non-stop, 
and the generation capacity simply wasn’t there. i.e. coal or natural 
gas plants have some turbines offline, etc.,. in the winter because 
historically power use is much much less. The odd thing is its been 
days now, those plants should be able to ramp back up to capacity - 
but clearly they haven’t. Blaming this on wind turbines is BS. In 
fact, if it weren’t for so many people in Texas with grid-tie solar 
systems, the situation would be even worse.


You'd think that mid-summer Texas chews a lot more peak capacity than 
the middle of winter. Plus I would think a lot of Texas uses natural gas 
for heat rather than electricity further mitigating its effect on the grid.


Mike


Re: Texas internet connectivity declining due to blackouts

2021-02-16 Thread Michael Thomas



On 2/16/21 3:19 PM, Sabri Berisha wrote:

- On Feb 16, 2021, at 6:28 AM, Michael Thomas m...@mtcc.com wrote:


We use propane. It's less dense energy-wise than gasoline, but it's
really easy to switch over.

Why not use both? Plenty of generators that are dual fuel out there.
Last year I converted my Duramax to dual fuel by replacing the
carburator. Easy-peasy.


gasoline has a shelf life, though with PG that isn't a problem :/

but the larger issue is that i really would prefer not have a bunch of 
gasoline around. it's messier too in comparison to just switching a 
propane tank. we have like three or four 5 gallon tanks which we use in 
the mean time for bbq's, etc. we manage to run the things we need for 
about 24 hours on one tank.


Mike



dumb question: are any of the RIR's out of IPv4 addresses?

2021-02-16 Thread Michael Thomas



Basically are there places that you can't get allocations? If so, what 
is happening?


Mike



Re: dumb question: are any of the RIR's out of IPv4 addresses?

2021-02-16 Thread Michael Thomas



On 2/16/21 4:18 PM, Fred Baker wrote:
You may find this article interesting: 
https://blog.apnic.net/2019/12/13/keep-calm-and-carry-on-the-status-of-ipv4-address-allocation/ 


So aside from Afrinic, this is all being done on the gray market? 
Wouldn't you expect that price to follow something like an exponential 
curve as available addresses become more and more scarce and unavailable 
for essentially any price?


Mike



Sent from my iPad


On Feb 16, 2021, at 3:07 PM, Michael Thomas  wrote:


Basically are there places that you can't get allocations? If so, 
what is happening?


Mike



Re: Texas internet connectivity declining due to blackouts

2021-02-17 Thread Michael Thomas



On 2/17/21 7:15 AM, Sean Donelan wrote:


The price of electricity is a major component of the decision where 
data centers operators choose to build large data centers.



Total electric price to end consumer (residential).  Although 
industrial electric prices are usually lower, its easier to compare 
residential prices across countries.


Europe (Residential):
Lowest Bulgaria: EU 9.97 cents/kWh (USD 12.0 cents/kWh)
Highest Germany: EU 30.88 cents/kWh (USD 37.33 cents/kWh)

Average: EU 20.5 cents/kWh (USD 25.2 cents/kWh)

USA (Residential):
Lowest Idaho: USD 9.67 cents/kWh (EU 8.3 cents/kWh)
Highest Hawaii: USD 28.84 cents/kWh (EU 24.07 cents/kWh)

Average: USD 13.25 cents/kWh (EU 10.79 cents/kWh)


Texas is slightly below the US average at
Texas: USD 12.2 cents/kWh (EU 9.96 cents/kWh)



Here in California it's like $.20 - $.30 with pg. I recently looked up 
Oregon and it was like $.03, which is why you probably see data centers 
being built by The Dalles and Prineville.


Mike



Re: Texas internet connectivity declining due to blackouts

2021-02-17 Thread Michael Thomas



On 2/17/21 1:23 PM, b...@uu3.net wrote:

Hold on.. Math doesn't add up here.
Are you telling me that a gallon propane tank (3.8l) can last
24 hours for about 1000W power generation. Are you sure?
I could believe for 6 hours... maybe 8.. not 24 hours.
So either you are using up 200-300W.. or you have a superior power
generator. Can you share what you are using?


Sorry I noticed my error right after I hit send. I meant a 5 gallon 
tank, not 1. Inverter generators are definitely worth the extra cost though.
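
Rough back-of-the-envelope on the corrected figure, using ballpark
assumptions (propane energy content, small-generator efficiency,
average draw), not measurements:

gallons = 5
kwh_thermal_per_gal = 26.8   # ~91,500 BTU per gallon of propane
efficiency = 0.18            # assumed inverter-generator efficiency at light load
avg_load_kw = 0.8            # assumed average draw ("about a kW", mostly less)

hours = gallons * kwh_thermal_per_gal * efficiency / avg_load_kw
print(round(hours))          # ~30, so roughly a day on a 5 gallon tank is plausible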


Mike




-- Original message --

From: Michael Thomas 
To: nanog@nanog.org
Subject: Re: Texas internet connectivity declining due to blackouts
Date: Wed, 17 Feb 2021 09:56:06 -0800

We just run extension cords and don't have a transfer switch. It's pretty
surprising what you can run on about a kw. A gallon propane tank lasts close to
24 for us.

Mike



Re: Texas internet connectivity declining due to blackouts

2021-02-17 Thread Michael Thomas



On 2/17/21 2:37 PM, Carsten Bormann wrote:


I actually tend to believe that buried HVDC is the future of long-distance 
power transmission.
We might be able to pull off that this transitions from a niche technology to 
the mainstream, like we did with photovoltaics (at the cost of 200 G€).
Let’s see...

I wonder what a world with 5V DC distributed within the house would look 
like. All of those power adapters are both ugly and a PITA. Of course 
that wouldn't have to come in from the grid, but still. I found a 
powerstrip which has a couple of USB slots in it and it's very nice. It 
also allows the AC plugs to be rotated which is nice for the remaining 
adapters.


Mike



Re: Texas internet connectivity declining due to blackouts

2021-02-17 Thread Michael Thomas



On 2/17/21 9:40 AM, Aaron C. de Bruyn via NANOG wrote:
It might not be an easy fix in the moment, but in the long run, buy a 
generator and install a propane tank.
When power prices spike to insane levels like this, just flip your 
transfer switch over and run off propane.

When utility power becomes cheaper, switch back to the grid.

Maybe some sort of Raspberry Pi to monitor the current prices and do 
the transfer automatically.  (language warning: 
https://www.youtube.com/watch?v=gz7IPTf1uts 
)


Protip: If you're blacked out, it doesn't matter what the price of 
power is.


We just run extension cords and don't have a transfer switch. It's 
pretty surprising what you can run on about a kw. A gallon propane tank 
lasts close to 24 for us.


Mike



Re: DoD IP Space

2021-02-11 Thread Michael Thomas



On 2/11/21 5:41 PM, Izaac wrote:



IPv6 restores that ability and RFC-1918 is a bandaid for an obsolete protocol.

So, in your mind, IPv4 was "obsolete" in 1996 -- almost three years
before IPv6 was even specified?  Fascinating.  I could be in no way
mistaken for an IPv4/NAT apologist, but that one's new on me.


ipv6 was on my radar in the early 90's. it was definitely at least 1993, 
maybe earlier.


Mike



Re: An update on the AfriNIC situation

2021-08-27 Thread Michael Thomas



On 8/27/21 2:58 PM, Sabri Berisha wrote:

- On Aug 27, 2021, at 8:36 AM, Bill Woodcock wo...@pch.net wrote:

Hi,


If, like me, you feel like chipping in a little bit of money to help AfriNIC
make payroll despite Heng having gotten their bank accounts frozen, some of the
African ISP associations have put together a fund, which you can donate to
here:

   https://www.tespok.co.ke/?page_id=14001

Top Gear Top Tip: set a "travel notification" on your credit card prior to
donating. It took me 3 failed attempts and 2 fraud notifications to get a
payment through. The fraud notifications were delayed as well. Chase credit 
card.

"Verified by VISA". Right.

And yes Mel, you're right about NANOG's AUP but this is not a legal matter,
this is to keep AfriNIC in business...

Yeah, I'd think that not being able to do business allocating blocks is 
definitionally an operator issue.


Mike



Re: The great Netflix vpn debacle! (geofeeds)

2021-08-31 Thread Michael Thomas



On 8/31/21 4:40 PM, Owen DeLong via NANOG wrote:

On the other hand, the last time I went looking for a 27” monitor, I ended up 
buying a 44” smart television because it was a cheaper HDMI 4K monitor than the 
27” alternatives that weren’t televisions. (It also ended up being cheaper than 
the 27” televisions which didn’t do 4K only 1080p, but I digress).


Back when 4k just came out and they were really expensive, I found a 
"TV" by an obscure brand called Seiki which was super cheap. It was a 
39" model. It's just a monitor to me, but I have gotten really used to 
its size and not needing two different monitors (and the gfx card to 
support it). What's distressing is that I was looking at what would 
happen if I needed to replace it and there is this gigantic gap where 
there are 30" monitors (= expensive) and 50" TV's which are relatively 
cheap. The problem is that 40" is sort of Goldilocks with 4k where 50" 
is way too big and 30" is too small. Thankfully it's going on 10 years 
old and still working fine.


Mike




Re: The great Netflix vpn debacle! (geofeeds)

2021-08-31 Thread Michael Thomas



On 8/31/21 5:13 PM, Jay Hennigan wrote:

On 8/31/21 16:32, Jeroen Massar via NANOG wrote:

Fun part being that it is hard to get a Dumb TV... though that is 
primarily simply because of all the tracking non-sense in them that 
makes them 'cheaper'... (still wonder how well that tracking stuff 
complies with GDPR, I am thinking it does not ... Schrems anyone? :) )


Just get a "smart" TV, don't connect it to the Internet, and use its 
HDMI ports for your cable box, Apple TV, etc. and/or antenna input for 
local off-air reception.




Yeah, until TV manufacturers actually start incorporating, oh say, 
Google TV (which is just a form of Android) they are always going to be 
inferior. Having the TV just be a monitor is a feature, not a bug. It's 
a lot cheaper to upgrade a $50 HDMI-based dongle than the whole TV, 
doubly so since manufacturers have a bad reputation for not supporting 
upgrades beyond the sell date. I have no idea whether any of the 
external ones support v6 though.


One thing that might be nice is for routers to internally number using 
v6 in preference to v4 and NAT that (if needed). Then you can easily 
tell what is still a laggard. My wifi cams might be poorly supported, 
but they don't need to interoperate with much on the Internet.


Mike, Google TV has been pretty nice since the Amazon feud finally 
ended, though I hate that the protocol is still pretty proprietary




Re: Reminder: Never connect a generator to home wiring without transfer switch

2021-08-25 Thread Michael Thomas



On 8/25/21 11:11 AM, Jay Hennigan wrote:


The question that Ethan raised makes sense, however. If power to 
several blocks is out and I connect my little 2KW Honda to my house 
wiring without a transfer switch, because transformers work in both 
directions my generator will see the load of the whole neighborhood. 
This will immediately and severely overload the generator and at best 
cause it to stall out or trip its output breaker, at worst to fail 
catastrophically.


I know that a nearby house burned down and was blamed on back feeding 
the grid. I assume that it failed catastrophically. Bad idea all around.


Mike



Re: The great Netflix vpn debacle! (geofeeds)

2021-09-03 Thread Michael Thomas


On 9/1/21 7:58 PM, Lady Benjamin Cannon of Glencoe, ASCE wrote:
At the risk of going off-topic, there must be an over-representation 
of network engineers as their customer: because I bought the same TV 
to also use as a 4k monitor.


And the power supply on it just died.  Samsung makes a 39” 4k and I 
haven’t been able to find it.


How’s this relevant?  We’ve been using them as 4k desktop monitors 
visualizing fiber routing for years now.


Haha, I'm not a network engineer, much more of a software engineer with 
lots of networking. The ability to get three browser windows up side by 
side is really nice for writing and testing code. There's probably more 
of a market out there than they realize. If you build it, we will come...


Mike





—L.B.

Ms. Lady Benjamin PD Cannon of Glencoe, ASCE
6x7 Networks & 6x7 Telecom, LLC
CEO
l...@6by7.net
"The only fully end-to-end encrypted global telecommunications company 
in the world.”

FCC License KJ6FJJ

On Aug 31, 2021, at 6:01 PM, Michael Thomas wrote:



On 8/31/21 4:40 PM, Owen DeLong via NANOG wrote:
On the other hand, the last time I went looking for a 27” monitor, I 
ended up buying a 44” smart television because it was a cheaper HDMI 
4K monitor than the 27” alternatives that weren’t televisions. (It 
also ended up being cheaper than the 27” televisions which didn’t do 
4K only 1080p, but I digress).


Back when 4k just came out and they were really expensive, I found a 
"TV" by an obscure brand called Seiki which was super cheap. It was a 
39" model. It's just a monitor to me, but I have gotten really used 
to its size and not needing two different monitors (and the gfx card 
to support it). What's distressing is that I was looking at what 
would happen if I needed to replace it and there is this gigantic gap 
where there are 30" monitors (= expensive) and 50" TV's which are 
relatively cheap. The problem is that 40" is sort of Goldilocks with 
4k where 50" is way too big and 30" is too small. Thankfully it's 
going on 10 years old and still working fine.


Mike






Re: The great Netflix vpn debacle! (geofeeds)

2021-09-01 Thread Michael Thomas



On 9/1/21 11:49 AM, Matthew Huff wrote:

IPv6 tunnels work great for network geeks, but rather poorly for home users 
with streaming, gaming etc...It's not necessarily the performance, it's either 
the geolocation, latency, or the very issue that started this thread - VPN 
banning.

Remember, the streaming services couldn't care less about geolocation or VPN 
banning, it's the contractual obligations with the content providers. The 
content providers care about vpn banning because it gets around geolocation, 
which interferes with their business models (different release schedules to 
different regions, etc..)

Been there, done that...Stuck on Fios with no IPv6. Ran into rather 
"interesting" problems with various streaming services with IPv6 configured.

Well, my point is that a properly pre-configured home router could 
probably make this plug and play. Openwrt can probably do what I'm 
thinking. Streaming should not be a problem but gaming/latency 
definitely is.


I frankly don't understand why these home router vendors don't just 
adopt Openwrt and the like instead of maintaining their own code. They 
are extremely cost sensitive, so you'd think it would be a big win 
(yes, I know some do, but, say, Linksys doesn't, and their software is 
complete shit and I know this first hand). Why can't I have router 
distros just like Linux distros, where somebody with clue does the work 
to customize them with various features? My ISP could then just point at 
the ones they like too. It's really sad that home routers are completely 
treated like black boxes when people have no problem customizing their 
other devices to their taste. My suspicion is this is all a 
self-fulfilling prophecy.


Mike



Re: The great Netflix vpn debacle! (geofeeds)

2021-09-01 Thread Michael Thomas


On 9/1/21 3:17 PM, Warren Kumari wrote:



On Wed, Sep 1, 2021 at 2:28 PM  wrote:



Every time I've read a thread about using TVs for monitors several
people who'd tried would say don't do it.


And everytime I see an email thread about the difference or not 
between monitors and TVs I'm taken over by an all consuming rage...
I have a **monitor** I purchased it from Dell, and it clearly said 
"monitor" on the box, it identifies itself somewhere display settings 
as a "monitor", and even says "monitor" in small letters somewhere on 
the back It's a MONITOR dagnabit... but, for some unfathomable 
reason it has some tiny little speakers in it, and every time I 
connect it via HDMI to my Mac laptop, the machine decides to 
completely ignore the fact that I've told it that I want to use a 
specific sound output, and starts playing all audio though the 
monitors speakers. Oh, and because this is HDMI, and Apple apparently 
follows the HDMI spec, the Mac volume controls won't work ("This 
device has no audio level control" or something...) and I have to go 
scrummaging around in some horrendous on-screen monitor menu to make 
it less obnoxiously loud...


Huh. I have a Mac and my monitor was definitely marketed as a TV, and all 
I do is just turn the volume down on the TV remote and don't have issues 
with the Mac not honoring where its audio output is. So there is 
obviously something different between our two setups. It does, like you 
say, not have the ability to control volume from the Mac, which I don't 
understand because my Chromecast can do that and its only cable is HDMI, 
so obviously the Mac could too.





All attempts to get this less stupid result in Apple pointing at the 
HDMI spec and saying that if a device advertises audio capabilities 
they list it as an output device, and Dell pointing out that they 
simply advertise the fact that the device has a speaker, and, well, 
shrug, not their issue if things try and use it.


I can understand why they have speakers and all of that even if it's 
just a monitor because it's probably cheaper to just have one model to 
manufacture and just rebrand it. There was some device -- gad I want to 
think it was an old DEC terminal server -- that just filled in the 
serial ports with glue or something so that you couldn't use them. That 
was pretty shameless.


Mike




Re: Happy 40th anniversary RFC 791!

2021-09-01 Thread Michael Thomas



On 9/1/21 11:42 AM, Mel Beckman wrote:

For anyone unaware, Jon Postel, a good friend and mentor to many of us at the 
dawn of the Internet, was the primary editor of this landmark document.

Those were the days we thought ARPAnet would never be allowed to go commercial. 
Thanks to Jon’s tireless campaigning (among others), not to mention meticulous 
documentation, it did.

He was taken from us too soon.

https://www.ietfjournal.org/in-memory-of-jon-postel/

RFC 791 and 793 are remarkably easy to read and follow what's going on. 
And that's not just hindsight speaking. We implemented both a few years 
later purely from the RFCs, with no connection to anybody else, and when 
we actually had kit that was connected to the Internet it just worked. 
One of the most remarkable things in my life was that I wrote a debugger 
for the OS I wrote for the Lantronix terminal servers, which I ended up 
remotely debugging in New Zealand. That was just mind blowing. Of course 
you'd call that a backdoor these days, but they were very happy that I 
could figure out what was going on.


I never had the pleasure of meeting Jon, though I saw him at IETF meetings.

Mike



Re: Happy 40th anniversary RFC 791!

2021-09-01 Thread Michael Thomas



On 9/1/21 12:26 PM, Mel Beckman wrote:

I still have a slew of Lantronix terminal servers :)


A few years back I was shocked to hear that the original OS that I wrote 
-- called whimsically Punix for Puny Unix -- which was used by Lantronix 
was still being sold. I mean, that's over 30 years ago. I doubt our IP 
and TCP drivers changed much in that time. One of the cute things I did 
was pump out most of the character IO in the null job so that it didn't 
have to do interrupt based IO for the most part, which kept the costs 
low (= no DMA silicon, cheaper processors).


It still amazes me that we built the internet basically without the 
internet. I came >.< close to driving up to ISI to attend an IETF 
meeting around 1987 or maybe 88 to complain about the shitty LPR 
interface on Unix and what could be done about it. I'm not sure whether 
it would have fallen on receptive ears, but I was totally naive about 
how new this all was.


Mike




if not v6, what?

2021-09-05 Thread Michael Thomas



On 9/4/21 10:43 PM, Saku Ytti wrote:


I view IPv6 as the biggest mistake of my career and feel responsible
for this horrible outcome and I do apologise to Internet users for
it. This dual-stack is the worst possible outcome, and we've been here
over two decades, increasing cost and reducing service quality. We
should have performed better, we should have been IPv6 only years ago.

I wish 20 years ago big SPs would have signed a contract to drop IPv4
at the edge 20 years from now, so that we'd given everyone a 20 year
deadline to migrate away.
20 years ago was the best time to do it, the 2nd best time is today.
If we don't do it, 20 years from now, we are in the same position,
inflating costs and reducing quality and transferring those costs to
our end users who have to suffer from our incompetence.

I can't see how an "end of the tunnel" clause would be helpful. As with 
everything, nothing would be done until the very end, and then they'd 
just extend the tunnel again, which is functionally no different than 
running out of IPv4 addresses.


I looked up CGNs this morning and the thing that struck me the most was 
losing port forwarding. It's probably a small thing to most people, but 
losing it means that an incoming session always has to be mediated by 
something on the outside. Yuck. So I hope that is not what the future 
holds, though it probably does.


So is there anything we could have done differently? Was this ever really 
a technical issue? Even if we bolted two more bytes onto an IPv4 address 
and nothing more, would that have been adopted either?


Mike



Happy 40th anniversary RFC 791!

2021-09-01 Thread Michael Thomas



aka IPv4. The RFC doesn't have the exact date it was published, but the 
internet as we know it was being born. What a journey it's been.


https://datatracker.ietf.org/doc/html/rfc791

Mike



Re: The great Netflix vpn debacle! (geofeeds)

2021-09-01 Thread Michael Thomas



On 9/1/21 10:59 AM, Nimrod Levy wrote:
All this chatter about IPv6 support on devices is fun and all, but 
there are providers still not on board.

They operate in my neighborhood and they know who they are...

This is about inside your premises before any NATs enter the picture. 
What would be nice is if home routers offered up v6 as the default way 
to number and tunneled v6 past ISPs that don't have v6. Home routers 
could make that all rather seamless, where users wouldn't need to know 
that was happening. It's really a pity that home routers are a race to 
the bottom while everything else in networking is expected to evolve 
over time.


Mike


Re: The great Netflix vpn debacle! (geofeeds)

2021-09-01 Thread Michael Thomas



On 9/1/21 11:25 AM, b...@theworld.com wrote:

Every time I've read a thread about using TVs for monitors several
people who'd tried would say don't do it. I think the gist was that
the image processors in the TVs would fuzz text or something like
that. That it was usable but they were unhappy with their attempts, it
was tiring on the eyes.

Maybe that's changed or maybe people happy with this don't do a lot of
text? Or maybe there are settings involved they weren't aware of, or
some TVs (other than superficial specs like 4K vs 720p) are better for
this than others so some will say they're happy and others not so
much?


It's been a while but there was a setting for mine that I had to futz 
with so that didn't happen. You're right that you should definitely check.


Mike



Re: IPv6 woes - RFC

2021-09-12 Thread Michael Thomas



On 9/12/21 1:08 PM, Randy Bush wrote:

If the mid size eyeballs knew ipv4 is going away in 10, 15, 20 years
whichever it is, then they'd of course have to start moving too,
because no upstream.

And they would fight it tooth and nail, just like they do now

you speak as if it was religion or they are bad people.  neither is the
case.  it is what they see as their best business interest.

being from the pacific northwest, i have learned not to try to push
water uphill.  so, until we can tilt the hill, it's probably best for
one's health if one gets over it.

If vendors actually cared they could make the CGNAT's and other hacks 
ridiculously buggy and really expensive to deploy and maintain. I doubt 
many vendors were chomping at the bit to support CGNAT and are probably 
wondering what fresh hell is next to keep ipv4 limping along.


Mike



Re: IPv6 woes - RFC

2021-09-12 Thread Michael Thomas



On 9/12/21 4:59 PM, Randy Bush wrote:

I doubt many vendors were chomping at the bit to support CGNAT

definitely.  they hate to sell big expensive boxes.


Back in the early 2000's the first rumblings of what would eventually 
turn into CGN started popping up at Cablelabs. I went to the EVP of 
Service Provider and basically told him that he had a choice between 
that mess or developing ipv6. I doubt he was interested in doing 
anything at all, but he chose ipv6, at least in the abstract. Steve 
Deering and I then went around to all of the BU's trying to figure out 
what it would take for them to implement ipv6 in the routing plane. 
Cablelabs was also pretty ipv6-focused, making a similar calculation.


So no, they weren't interested in it either. They were completely driven 
by what the providers wanted and what a large group of providers have 
since made pretty clear is that horrible hacks are fine by them if it 
gets them out of a short term bind. But it's hardly uniform across the 
industry. This is a classic reverse-tragedy of the commons.


Mike



Re: IPv6 woes - RFC

2021-09-14 Thread Michael Thomas


On 9/14/21 1:06 PM, Owen DeLong wrote:



On Sep 14, 2021, at 12:58, Michael Thomas wrote:



On 9/14/21 5:37 AM, Eliot Lear wrote:


8+8 came *MUCH* later than that, and really wasn't ready for prime 
time.  The reason we know that is that work was the basis of LISP 
and ILNP.  Yes, standing on the shoulders of giants. And there 
certainly were poor design decisions in IPv6, bundling IPsec being 
one.  But the idea that operators were ignored?  Feh.


I wasn't there at the actual meetings at the time, but I find the notion 
that operators were ignored pretty preposterous too. There was a 
significant amount of bleed-over between the two as I recall from 
going to Interops. What incentive do vendors have to ignore their 
customers? Vendors have an incentive to listen to customer requirements 
and abstract them to take into account things customers can't see on the 
outside, but to actually give the finger to them? And given how small 
the internet community was back while this was happening, I find it 
even more unlikely.




You’d be surprised… Vendors often get well down a path before exposing 
enough information to the community to get the negative feedback their 
solution so richly deserves. At that point, they have rather strong 
incentives to push for the IETF adopting their solution over customer 
objections because of entrenched code-base and a desire not to go back 
and explain to management that the idea they’ve been working on for 
the last 6 months is stillborn.


But we're talking almost 30 years ago when the internet was tiny. And 
it's not like operators were some fount of experience and wisdom back 
then: everybody was making it up along the way including operators which 
barely even existed back then. I mean, we're talking about the netcom 
days here. That's why this stinks of revisionist history to me.


Mike



Re: IPv6 woes - RFC

2021-09-14 Thread Michael Thomas



On 9/14/21 2:17 PM, Randy Bush wrote:

Just because I didn't attend IETF meetings doesn't mean that I didn't
read drafts, etc. Lurkers are a thing and lurkers are allowed opinions
too.

i missed the rfc where the chair of the v6 wg said the ops did not
understand the h ratio because we did not understand logarithms.


So in order to have an opinion, I have to know all of the politics along 
with the documents produced. Got it. Thank goodness we didn't know that 
when we were implementing IPv4 in the mid 80's.


Frankly I had always regretted not going to an IETF meeting at ISI 
around 88 or so, but given the silliness of IETF politics maybe that was 
a blessing.


Mike



Re: IPv6 woes - RFC

2021-09-14 Thread Michael Thomas



On 9/14/21 2:08 PM, Randy Bush wrote:

I wasn't there at actual meetings at the time

but your opinion was?


Just because I didn't attend IETF meetings doesn't mean that I didn't 
read drafts, etc. Lurkers are a thing and lurkers are allowed opinions too.


Mike




Re: IPv6 woes - RFC

2021-09-14 Thread Michael Thomas



On 9/14/21 2:07 PM, Owen DeLong wrote:
You’d be surprised… Vendors often get well down a path before exposing 
enough information to the community to get the negative feedback their 
solution so richly deserves. At that point, they have rather strong 
incentives to push for the IETF adopting their solution over customer 
objections because of entrenched code-base and a desire not to go back 
and explain to management that the idea they’ve been working on for 
the last 6 months is stillborn.


But we're talking almost 30 years ago when the internet was tiny. And 
it's not like operators were some fount of experience and wisdom back 
then: everybody was making it up along the way including operators 
which barely even existed back then. I mean, we're talking about the 
netcom days here. That's why this stinks of revisionist history to me.




I was there for parts of it. Even then, the vendors were entrenched in 
their views and dominated many of the conversations.


Vendors have to be able to implement things so of course there is going 
to be push back when it's not technically feasible or far more 
expensive. Nobody expects customers putting out the actual $$$ to be up 
to speed on the intricacies of TCAM's and other such considerations so 
there is always going to be back and forth.


And none of this alters that nobody has given a scenario where their 
$SOLUTION would have fared any better than ipv6.


Mike



Re: IPv6 woes - RFC

2021-09-15 Thread Michael Thomas


On 9/15/21 4:26 PM, Owen DeLong wrote:



On Sep 15, 2021, at 16:20, Michael Thomas wrote:



On 9/14/21 12:44 AM, Eliot Lear wrote:


There were four proposals for the IPng:

  * NIMROD, PIP, SIP, and TUBA

SIP was the one that was chosen, supported by endpoint manufacturers 
such as Sun and SGI, and it was the MOST compatible.  Operators and 
router manufacturers at the time pushed TUBA, which was considerably 
less compatible with the concepts used in v4 because of variable 
length addressing.   If we endpoints had some notion that v6 would 
take as long as it has to diffuse, perhaps we all might have thought 
differently.  I don't know.



So I'm beginning to think that the reason ipv6 didn't take off is one 
simple thing: time. All of the infighting took years and by then that 
ship had long sailed. The basic mechanisms for v6 for hosts were not 
complicated, and all of the second system syndrome fluff could mostly 
be ignored or implemented when it actually made sense. If this had been 
settled within a year instead of five, there may have been a chance, 
especially since specialized hardware was either nonexistent or just 
coming on the scene. I mean, Kalpana was still pretty new when a lot of 
this was being first discussed from what I can tell. Maybe somebody else 
knows when hardware routing came on the scene, but there were still lots 
of software forwarding planes when I started at Cisco in 1998 just as 
broadband was starting to flow.


Most of it was settled fairly quickly, actually. The bigger delays 
were software vendors, network infrastructure product vendors (DSLAMs 
and the like), etc. who even after it was well settled simply didn’t 
feel a need to incorporate it into their products until about a year 
after IANA runout.


I was aware of ipv6 -- or at least what would become ipv6 -- in 1992, 
and absolutely no later than 1993. The internet was still tiny then, I 
don't think CIDR was even a thing yet, and broadband was still 5 years 
away. Had something simple like the ipv6 headers been settled and the 
silliness of the SLAAC vs. DHCP wars been resolved quickly, there was 
a very good chance that vendors would have adopted it if customers 
wanted it or could be cajoled into wanting it. At that time IP was still 
"the path to OSI" iirc, so none of this was in cement from a vendor 
standpoint (and sorry, there are lots more vendors than just router 
vendors). But the entire thing dragged on and on and on and by then it 
was far too late. Then the invention of NAT sealed that fate.



The IETF was a victim of its own dysfunction, film at 11 and now 
we're having a 30 year reunion.


I’m not sure we can put all (or even most) of the blame on IETF 
dysfunction here. Don’t get me wrong, IMHO there’s plenty of IETF 
dysfunction and it is partially responsible. However, I suspect that 
if IETF had rolled out the model of perfection and an ideal protocol 1 
month after the IPNG working group started, we’d still be pretty much 
where we are today because of the procrastination model of addressing 
major transitions that is baked into human nature.


No, I think the best was the enemy of the good. As ever it was. Nobody 
seems to have appreciated what the real problem was -- time. Hindsight 
is 20/20, but it wasn't hard to see that the internet was exploding and 
wasn't in fact the "path to OSI", and by ignoring time there wasn't going 
to be a path to anything.


Mike



Re: IPv6 woes - RFC

2021-09-15 Thread Michael Thomas


On 9/14/21 12:44 AM, Eliot Lear wrote:


There were four proposals for the IPng:

  * NIMROD, PIP, SIP, and TUBA

SIP was the one that was chosen, supported by endpoint manufacturers 
such as Sun and SGI, and it was the MOST compatible.  Operators and 
router manufacturers at the time pushed TUBA, which was considerably 
less compatible with the concepts used in v4 because of variable 
length addressing.   If we endpoints had some notion that v6 would 
take as long as it has to diffuse, perhaps we all might have thought 
differently. I don't know.



So I'm beginning to think that the reason ipv6 didn't take off is one 
simple thing: time. All of the infighting took years and by then that 
ship had long sailed. The basic mechanisms for v6 for hosts were not 
complicated, and all of the second system syndrome fluff could mostly 
be ignored or implemented when it actually made sense. If this had been 
settled within a year instead of five, there may have been a chance, 
especially since specialized hardware was either nonexistent or just 
coming on the scene. I mean, Kalpana was still pretty new when a lot of 
this was being first discussed from what I can tell. Maybe somebody else 
knows when hardware routing came on the scene, but there were still lots 
of software forwarding planes when I started at Cisco in 1998 just as 
broadband was starting to flow.


The IETF was a victim of its own dysfunction, film at 11 and now we're 
having a 30 year reunion.


Mike



Re: IPv6 woes - RFC

2021-09-14 Thread Michael Thomas


On 9/14/21 5:37 AM, Eliot Lear wrote:


8+8 came *MUCH* later than that, and really wasn't ready for prime 
time.  The reason we know that is that work was the basis of LISP and 
ILNP.  Yes, standing on the shoulders of giants.  And there certainly 
were poor design decisions in IPv6, bundling IPsec being one.  But the 
idea that operators were ignored?  Feh.


I wasn't there at the actual meetings at the time, but I find the notion 
that operators were ignored pretty preposterous too. There was a 
significant amount of bleed-over between the two as I recall from going 
to Interops. What incentive do vendors have to ignore their customers? 
Vendors have an incentive to listen to customer requirements and abstract 
them to take into account things customers can't see on the outside, but 
to actually give the finger to them? And given how small the internet 
community was back while this was happening, I find it even more unlikely.


But Randy still hasn't told us what would have worked and why it would 
have succeeded.


Mike



On 14.09.21 14:10, Randy Bush wrote:

and 8+8, variable length, ... just didn't happen, eh?

the nice thing about revisionist history is that anybody can play.

randy



Re: IPv6 woes - RFC

2021-09-13 Thread Michael Thomas



On 9/13/21 11:22 AM, Randy Bush wrote:

< rant >

ipv6 was designed at a time where the internet futurists/idealists had
disdain for operators and vendors, and thought we were evil money
grabbers who had to be brought under control.

the specs as originally RFCed by the ietf is very telling.  for your
amusement, take a look at rfc 2450.  it took five years of war to get
rid of the tla/sla crap.  and look at the /64 religion today[0].

real compatibility with ipv4 was disdained.  the transition plan was
dual stack and v4 would go away in a handful of years.  the 93
transition mechanisms were desperate add-ons when v4 did not go away.
and dual stack does not scale, as it requires v4 space proportional to
deployed v6 space.

we are left to make the mess work for the users, while being excoriated
for not doing it quickly or well enough, and for trying to make ends
meet financially.

This is really easy to say in hindsight. 30 years ago it wasn't even 
vaguely a given that the Internet would even win and the size of the IP 
universe was still tiny. The main problem is that the internet was a 
classic success disaster where you're going as fast as possible and 
falling farther and farther behind. All of the gripes about particulars 
strike me as utterly irrelevant in the global scheme of things. As I 
mentioned, if they had done nothing more than bolt on two more address 
bytes, it still would have been just as impossible to get vendors and 
providers to care because everybody was heads down trying to deal with 
the success disaster. It's really easy to say that ipv6 suffers from 
second system syndrome -- which it does -- but that doesn't provide any 
concrete strategy for what would have been "better" in both getting 
vendors and providers to care. None of them wanted to do anything other 
than crank out kit that could be sold in the here and now that providers 
were willing to buy. That was certainly my experience at Cisco. As I 
said, the exec I talked to didn't actually want to do anything at all 
but was willing to let a couple of engineers navel gaze if it gave him 
something to talk about were the subject to actually come up with 
customers and a bludgeon against 3COM (iirc) at the time.


None of this is technical. It was: which short term hack is going to keep 
the gravy train flowing? I was a developer at the time keeping an eye on 
the drafts as they were coming out. They didn't strike me as overly 
difficult to implement nor did they strike me as particularly 
overwrought. From a host standpoint, I didn't think it would take too 
much effort to get something up and running, but I waited until somebody 
started asking for it. That never came. Nothing ever came. Then NATs 
came and kicked the can down the road some more. Now we have mega-yacht 
NATs to kick it down the road even farther. Tell us what else would 
have prevented that?


Mike




Re: IPv6 woes - RFC

2021-09-13 Thread Michael Thomas


On 9/13/21 2:52 PM, Baldur Norddahl wrote:



On Mon, Sep 13, 2021 at 8:22 PM Randy Bush wrote:


real compatibility with ipv4 was disdained.  the transition plan was
dual stack and v4 would go away in a handful of years.  the 93
transition mechanisms were desperate add-ons when v4 did not go away.
and dual stack does not scale, as it requires v4 space proportional to
deployed v6 space.


What I find most peculiar about this whole rant (not just yours but 
the whole thread) is that I may be the only one who found implementing 
IPv6 with dual stack completely trivial and a non issue? There is no 
scale issue nor any of the other rubbish.


I agree on the host side. It didn't even occur to me at the time I was 
looking at it that it would be any sort of issue -- we had all kinds of 
other protocols on our boxes like SMB, Netware, DEC LAT, etc. We would 
have done it if customers told us they wanted it, just like we 
implemented ACL's not realizing why they were especially important. Back 
in the early days all routing was done in software so it wouldn't have 
been hard to squeeze v6 in. All of that changed when the forwarding 
plane got cast in silicon though which made it far, far more difficult 
to get anybody to stick their necks out vs a skunk works software 
project. But before that it would have been completely doable if 
somebody was willing to throw money at it.


Mike


Re: Where to get IPv4 block these day

2021-08-06 Thread Michael Thomas



On 8/6/21 8:35 AM, Fred Baker wrote:



On Aug 6, 2021, at 8:22 AM, Noah  wrote:

Do majority of smart handsets OS today support v6?

Majority of people I know (due to economic factors) own lowend android handsets 
with no support for v6. This group forms majority of eyeballs that contribute 
revenue to local Telecoms whose network is heavily CGNAT.

Handsets - Cameron would be in a better place than I to discuss this, but 
certainly anything used to connect to his network (T-Mobile) does, and enables 
access with IPv4 turned off. That includes at least iPhone (the handset I use 
to access his network), and Android. 
https://thirdinternet.com/ipv6-on-mobile-devices/

As to other systems, Apple and Linux platforms, and more recently Windows, 
supports IPv6, and has for quite a while. Issues there tend to be in specific 
applications (due to the socket interface).


I thought I had heard that there were carriers out there that are mainly 
(always?) using v6 to the phones? I assume they just NAT somewhere for 
v4 sites?


And wouldn't it take effort to *disable* v6 capability for iPhones and 
Android? With happy eyeballs it just sort of works.
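
(For what it's worth, the happy eyeballs idea is simple enough to sketch: 
give v6 a small head start and fall back to v4 if it loses the race. A toy 
Python version with a hypothetical host; nowhere near what a real handset 
stack actually does:)

# Toy sketch of the happy eyeballs idea (RFC 8305-ish): try IPv6 first with
# a small head start, fall back to IPv4 if v6 hasn't connected in time.
# Host and port are hypothetical; real stacks do much more (address sorting,
# multiple candidates per family, cancelling the loser, etc.).
import concurrent.futures
import socket
import time

HEAD_START = 0.25  # seconds of advantage given to IPv6

def connect(family, host, port):
    info = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0]
    s = socket.socket(info[0], info[1], info[2])
    s.settimeout(5)
    s.connect(info[4])
    return s

def happy_eyeballs(host, port=443):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    v6 = pool.submit(connect, socket.AF_INET6, host, port)
    time.sleep(HEAD_START)                  # let a healthy v6 path win
    v4 = pool.submit(connect, socket.AF_INET, host, port)
    last_error = None
    for fut in concurrent.futures.as_completed([v6, v4]):
        try:
            sock = fut.result()
            pool.shutdown(wait=False)       # toy code: the loser is abandoned
            return sock
        except OSError as err:
            last_error = err
    pool.shutdown(wait=False)
    raise last_error

if __name__ == "__main__":
    s = happy_eyeballs("example.com")
    print("connected over", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
    s.close()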


Mike



Re: IPv6 woes - RFC

2021-09-24 Thread Michael Thomas



On 9/24/21 10:53 AM, b...@uu3.net wrote:

Well, I see IPv6 as double failure really. First, IPv6 itself is too
different from IPv4. What Internet wanted is IPv4+ (aka IPv4 with
bigger address space, likely 64bit). Of course we could not extend IPv4,
so having new protocol is fine. It should just fix problem (do we have other
problems I am not aware of with IPv4?) of address space and thats it.
Im happy with IPv4, after 30+ years of usage we pretty much fixed all
problems we had.

But that is what ipv6 delivers -- a 64 bit routing prefix. Am I to take 
it that a whopping 16 bytes of extra addressing information breaks the 
internet? And all of the second system syndrome stuff was always 
separable just like any other IETF protocol. you implement what is 
needed and ignore all of the rest -- there is no IETF police after all.


I can understand the sound and fury when people were trying to make this 
work on 56kb modems, but with speeds well over 1G it seems sort of archaic.


Mike




Re: Rack rails on network equipment

2021-09-25 Thread Michael Thomas



On 9/25/21 2:08 PM, Jay Hennigan wrote:

On 9/25/21 13:55, Baldur Norddahl wrote:

My personal itch is how new equipment seems to have even worse boot 
time than previous generations. I am currently installing juniper 
acx710 and while they are nice, they also make me wait 15 minutes to 
boot. This is a tremendous waste of time during installation. I can 
not leave the site without verification and typically I also have 
some tasks to do after boot.


Besides if you have a crash or power interruption, the customers are 
not happy to wait additionally 15 minutes to get online again.


Switches in particular have a lot of ASICs that need to be loaded on 
boot. This takes time and they're really not optimized for speed on a 
process that occurs once.


It doesn't seem like it would take too many reboots to really mess with 
your reliability numbers for uptime. And what on earth are the 
developers doing with that kind of debug cycle time?


Mike



Re: S.Korea broadband firm sues Netflix after traffic surge

2021-10-11 Thread Michael Thomas


On 10/11/21 12:49 AM, Matthew Petach wrote:


Instead of a 4K stream, drop it to 480 or 240; the eyeball network
should be happy at the reduced strain the resulting stream puts
on their network.


As a consumer paying for my 4k stream, I know who I'm calling when it 
drops to 480 and it ain't Netflix. The eyeballs are most definitely not 
happy.


Mike




Re: S.Korea broadband firm sues Netflix after traffic surge

2021-10-10 Thread Michael Thomas


On 10/10/21 12:57 PM, Mark Tinka wrote:



On 10/10/21 21:33, Matthew Petach wrote:


If you sell a service for less than it costs to provide, simply
based on the hopes that people won't actually *use* it, that's
called "gambling", and I have very little sympathy for businesses
that gamble and lose.


You arrived at the crux of the issue, quickly, which was the basis of 
my initial response last week - infrastructure is dying. And we simply 
aren't motivated enough to figure it out.


When you spend 25+ years sitting in a chair waiting for the phone to 
ring or the door to open, for someone to ask, "How much for 5Mbps?", 
your misfortune will never be your own fault.


Isn't that what Erlang numbers are all about? My suspicion is that after 
about 100 Mbps most people wouldn't notice the difference in most cases. 
My ISP is about 25 Mbps on a good day (DSL) and it serves our needs fine; 
we have never run into bandwidth constraints. Maybe if we were 
streaming 4k all of the time it might be different, but frankly the 
difference for 4k isn't all that big. It's sort of like phone screen 
resolution: at some point it just doesn't matter and becomes marketing hype.
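
(For anyone who wants to play with the math, the classic Erlang B blocking 
recurrence is a few lines of Python. The loads and trunk counts below are 
made up, purely to illustrate how blocking collapses once there is headroom; 
applying it to statistical multiplexing of bandwidth is of course a loose 
analogy:)

# Erlang B: probability that offered load (in erlangs) is blocked when
# carried on a given number of circuits, via the standard recurrence
# B(E,0) = 1, B(E,k) = E*B(E,k-1) / (k + E*B(E,k-1)).
def erlang_b(erlangs: float, circuits: int) -> float:
    b = 1.0
    for k in range(1, circuits + 1):
        b = (erlangs * b) / (k + erlangs * b)
    return b

if __name__ == "__main__":
    # Made-up numbers: ~1000 erlangs of offered load against various trunk
    # counts; compare how quickly the blocking probability falls off.
    for circuits in (1000, 1050, 1200, 2000):
        print(circuits, erlang_b(1000.0, circuits))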


Mike



Re: Network visibility

2021-10-20 Thread Michael Thomas


On 10/20/21 8:26 AM, Mel Beckman wrote:

Mark,

As long as we’re being pedantic, January 1, 1983 is considered the 
official birthday of the Internet, when TCP/IP first let different 
kinds of computers on different networks talk to each other.


It’s 2021, hence the Internet is /less/ than, not more than, 40 years 
old.  Given your mathematical skills, I put no stock in your claim 
that we still can’t “buy an NMS that just works.” :)


Pedantically, IP is 40 years old as of last month. What you're talking 
about is the flag day. People including myself were looking into 
internet protocols well before the flag day.


Mike


Re: Network visibility

2021-10-20 Thread Michael Thomas


On 10/20/21 12:38 PM, james.cut...@consultant.com wrote:
I don’t remember hearing about IP for VAX/VMS 2.4, but I was part of a 
group at Intel in 1981 looking at ARPAnet for moving designer tools 
and design files as an alternate to leased bandwidth from $TELCOs 
using DECnet and BiSync HASP. The costs of switching from 56 Kbps to 
ARPAnet’s 50 Kbps convinced us to wait. Clearly, private demand drove 
the subsequent transition as the TCP/IP stack became effectively free.


I'm not sure how we heard and got a copy of the CMU IP stack, but it was 
probably Mark Reinhold who now owns Java. It was definitely after 1981 
and definitely before 1985, probably somewhere in the middle. Just the 
fact that we could get such a thing was sort of remarkable in those 
early days, and especially for VMS which was, I won't say hostile, but 
had their own ideas. I don't know when early routing came about, but DEC 
charged extra for routing for DECnet, so that was yet another reason IP 
was interesting: it took little investment to check it all out.





 I miss DECUS, but not DELNIs.


Yeah, I miss DECUS too. I remember one plenary when somebody asked when 
the VAX would support the full 4G address space to laughs and guffaws 
from the panel.


Mike




-
James R. Cutler - james.cut...@consultant.com
GPG keys: hkps://hkps.pool.sks-keyservers.net
cell 734-673-5462


On Oct 20, 2021, at 3:09 PM, Michael Thomas  wrote:

I think the issuing of rfc 791 was much more important than the flag 
day. ARPAnet was a tiny, tiny universe but there were a lot of people 
interested in networking at the time wondering what to do with our 
neat new DEUNA and DEQNA adapters. There was tons of interest in all 
of the various protocols coming out around then because nobody knew 
what was going to win, or whether there would be *a* winner at all. 
Being able to get a spec to write to was pretty novel at the time 
because all of the rest of them were proprietary so you had to 
reverse engineer them for the most part. It may be that alone that 
pushed IP along well before the public could hook up to the Internet. 
We had lots of customers asking for IP protocols in the mid to late 
80's and I can guarantee you most weren't part of the Internet. They 
were using IP as the interoperating system glue on their own networks.


Also: the flag day was pretty much an example of how not to do a 
transition. as in, let's not do that again.


Mike, trying to remember when CMU shipped their first version of 
their IP stack for VMS



On 10/20/21 11:47 AM, Miles Fidelman wrote:

Since we seem to be getting pedantic...

There's "The (capital I) Internet" - which, most date to the flag 
day, and the "Public Internet" (the Internet after policies changed 
and allowed commercial & public use over the NSFnet backbone - in 
1992f, as I recall).


Then there's the more general notion of "internetworking" - of which 
there was a considerable amount of experimental work going on, in 
parallel with TCP/IP.  And of (small i) "internets" - essentially 
any Catenet style network-of-networks.


Miles Fidelman

Mel Beckman wrote:

Michael,

“Looking into” isn’t “is” :)

 -mel


On Oct 20, 2021, at 10:39 AM, Michael Thomas  wrote:




On 10/20/21 8:26 AM, Mel Beckman wrote:

Mark,

As long as we’re being pedantic, January 1, 1983 is considered 
the official birthday of the Internet, when TCP/IP first let 
different kinds of computers on different networks talk to each 
other.


It’s 2021, hence the Internet is /less/ than, not more than, 40 
years old.  Given your mathematical skills, I put no stock in 
your claim that we still can’t “buy an NMS that just works.” :)


Pedantically, IP is 40 years old as of last month. What you're 
talking about is the flag day. People including myself were 
looking into internet protocols well before the flag day.


Mike




--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown


Re: Network visibility

2021-10-20 Thread Michael Thomas
I think the issuing of rfc 791 was much more important than the flag 
day. ARPAnet was a tiny, tiny universe but there were a lot of people 
interested in networking at the time wondering what to do with our neat 
new DEUNA and DEQNA adapters. There was tons of interest in all of the 
various protocols coming out around then because nobody knew what was 
going to win, or whether there would be *a* winner at all. Being able to 
get a spec to write to was pretty novel at the time because all of the 
rest of them were proprietary so you had to reverse engineer them for 
the most part. It may be that alone that pushed IP along well before the 
public could hook up to the Internet. We had lots of customers asking 
for IP protocols in the mid to late 80's and I can guarantee you most 
weren't part of the Internet. They were using IP as the interoperating 
system glue on their own networks.


Also: the flag day was pretty much an example of how not to do a 
transition. as in, let's not do that again.


Mike, trying to remember when CMU shipped their first version of their 
IP stack for VMS



On 10/20/21 11:47 AM, Miles Fidelman wrote:

Since we seem to be getting pedantic...

There's "The (capital I) Internet" - which, most date to the flag day, 
and the "Public Internet" (the Internet after policies changed and 
allowed commercial & public use over the NSFnet backbone - in 1992f, 
as I recall).


Then there's the more general notion of "internetworking" - of which 
there was a considerable amount of experimental work going on, in 
parallel with TCP/IP.  And of (small i) "internets" - essentially any 
Catenet style network-of-networks.


Miles Fidelman

Mel Beckman wrote:

Michael,

“Looking into” isn’t “is” :)

 -mel


On Oct 20, 2021, at 10:39 AM, Michael Thomas  wrote:




On 10/20/21 8:26 AM, Mel Beckman wrote:

Mark,

As long as we’re being pedantic, January 1, 1983 is considered the 
official birthday of the Internet, when TCP/IP first let different 
kinds of computers on different networks talk to each other.


It’s 2021, hence the Internet is /less/ than, not more than, 40 
years old.  Given your mathematical skills, I put no stock in your 
claim that we still can’t “buy an NMS that just works.” :)


Pedantically, IP is 40 years old as of last month. What you're 
talking about is the flag day. People including myself were looking 
into internet protocols well before the flag day.


Mike




--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown

Re:

2021-10-20 Thread Michael Thomas
Just as an interesting aside if you're interested in the history of 
networking, When Wizards Stayed Up Late is quite elucidating.


Mike

On 10/20/21 2:16 PM, Miles Fidelman wrote:
*Mel Beckman* mel at beckman.org wrote:

Mark,

Before 1983, the ARPANET wasn’t an internet, let alone The Internet. Each 
ARPANET connection required a host-specific interface (the “IMP”) and simplex 
Network Control Protocol (NCP). NCP used users' email addresses, and routing 
had to be specified in advance within each NCP message.


This is just so completely wrong as to be ludicrous.

First of all, the IMP was the box.  Computers connected using the 
protocols specified in BBN Report 1822 
(https://walden-family.com/impcode/BBN1822_Jan1976.pdf)


NCP was alternately referred to as the Network Control PROGRAM and the 
Network Control Protocol.  It essentially played the role of TCP, 
managing pairs of simplex connections.


Routing was completely dynamic - that was the whole point of the IMP 
software. And routing did NOT require email addresses - those operated 
much further up the protocol stack.


Perhaps you're confusing this with UUCP mail addressing ("bang" 
paths).  Or possibly BITNET or FidoNet - which I believe also were 
source routed (but memory fails on that).


re.

Even so, the Internet as a platform open to anyone didn’t start until 1992. I 
know you joined late, in 1999, so you probably missed out on this history. :)
You know, there are people on this list who were back there in 1969, 
and actually wrote some of that code - so you might want to stop 
spouting nonsense.  (Not me, I was a user, starting in 1971, didn't 
get to BBN until 1985 - when we were still dealing with stragglers who 
didn't quite manage to cutover to TCP/IP on the Flag Day.)


--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown

Internet history

2021-10-21 Thread Michael Thomas

[changed to a more appropriate subject]

On 10/20/21 3:52 PM, Grant Taylor via NANOG wrote:

On 10/20/21 3:26 PM, Michael Thomas wrote:
Just as an interesting aside if you're interested in the history of 
networking, When Wizards Stayed Up Late is quite elucidating.


+10 to Where Wizards Stay Up Late.

I recently re-acquired (multiple copies of) it.  (Multiple because I 
wanted the same edition that I couldn't locate after multiple moves.)



One of the things about the book was that it finally confirmed for me 
what I had heard but thought might be apocryphal which was that one of 
my co-workers at Cisco (Charlie Klein) was the first one to receive a 
packet on ARPAnet. I guess it sent an "l" and then immediately crashed. 
They fixed the problem and the next time they got "login:". It also 
casts shade on another early well known person which gives me some 
amount of schadenfreude.


Mike



Re: Internet history

2021-10-21 Thread Michael Thomas



On 10/21/21 11:52 AM, Patrick W. Gilmore wrote:

On Oct 21, 2021, at 2:37 PM, Michael Thomas  wrote:

[changed to a more appropriate subject]

On 10/20/21 3:52 PM, Grant Taylor via NANOG wrote:

On 10/20/21 3:26 PM, Michael Thomas wrote:

Just as an interesting aside if you're interested in the history of networking, 
When Wizards Stayed Up Late is quite elucidating.

+10 to Where Wizards Stay Up Late.

I recently re-acquired (multiple copies of) it.  (Multiple because I wanted the 
same edition that I couldn't locate after multiple moves.)



One of the things about the book was that it finally confirmed for me what I had heard but thought 
might be apocryphal which was that one of my co-workers at Cisco (Charlie Klein) was the first one 
to receive a packet on ARPAnet. I guess it sent an "l" and then immediately crashed. They 
fixed the problem and the next time they got "login:". It also casts shade on another 
early well known person which gives me some amount of schadenfreude.

It was “LO”, and Mr. Kline sent the packets, but you got it essentially right.

Source: https://uclaconnectionlab.org/internet-museum/

The last picture confirms Mr. Kline sent the LO and crashed the WHOLE INTERNET 
(FSVO “Internet”) just a couple seconds after it started. I wonder if he will 
ever live it down. :-) Apparently at the time it was not that big a deal. He 
did the test at 10:30 PM. He did not call and wake anyone up, everyone had to 
read about it in the notes the next day.

My understanding is that really is IMP No. 1. Someone found it in the “to be scrapped” 
pile & rescued it, then they closed off room 3420 & made it a micro-museum. I 
believe the teletype is not the original, but is a real ASR-33. The Sigma 7 is a prop, 
I believe.

Anyone can visit it for free (other than parking, which is expensive near 
UCLA!). If you are near UClA, you should stop by. To be honest, it is both 
overwhelming and underwhelming. Overwhelming because of what it was and 
represents. Underwhelming because it is a tiny classroom with a half-glass 
locked door and a plaque in the basement of the mathematics department at a 
public university that looks like it was built in the 40s. I went to UCLA for 
mathematics, and spent quite a bit of time in that hallway without even 
realizing what that room was. (It was not a museum at the time.)

The destination was SRI (Stanford Research Institute), iirc. I've always thought it was pretty 
odd that BBN essentially threw the IMP over the wall clear to the west 
coast. You would think they'd want something a lot closer to Boston to 
make for easier debugging. But I knew that I should have looked up the 
details :)


Mike



Re: DOJ files suit to enforce FCC penalty for robocalls

2021-10-21 Thread Michael Thomas



On 10/21/21 10:57 AM, Sean Donelan wrote:


The multi-million dollar fines announced with great fanfare by the 
Federal Communication Commission are almost never collected. The FCC 
doesn't have enforcement authority to collect fines. The FCC usually 
withholds license renewals until penalties are paid. If the violator 
doesn't have any FCC licenses (or doesn't care), the FCC is powerless.


The FCC refers uncollected penalties to the Department of Justice. In 
the past, DOJ didn't prioritize uncollected penalties and most fines 
were never enforced.



The Department of Justice Files Suit to Recover $9.9 Million 
Forfeiture Penalty for Nearly 5,000 Illegally Spoofed Robocalls


https://www.justice.gov/opa/pr/department-justice-files-suit-recover-forfeiture-penalty-nearly-5000-illegally-spoofed 



So has any of the STIR/SHAKEN stuff that was mandated made any 
difference on the ground yet? I assume this is different than what you 
posted about though.


Mike



Re: DNS pulling BGP routes?

2021-10-18 Thread Michael Thomas



On 10/18/21 11:09 AM, Sabri Berisha wrote:


The term "network neutrality" was invented by people who want to control
a network owned and paid for by someone else.

Your version of "unreasonable" and my version of "unreasonable" are on the
opposite end of the spectrum. I think it is unreasonable for you to tell me
how to run configure my routers, and you think it is unreasonable for me
to configure my routers that I pay for the way that I want to.


Yeahbut, for the last mile that network is often a monopoly or maybe a 
duopoly if you're lucky. If streaming provider 1 pays ISP to give 
priority over streaming provider 2 -- maybe by severely rate limiting 
provider 2 -- the people who get screwed are end users without a way to 
vote with their feet. That sort of monopolistic behavior is bad for end 
users. Mostly I want ISP's to be dumb bit providers and stay out of 
shady deals that enrich ISP's at my expense. And if it takes regulation 
to do that, bring it.


Mike




Re: DNS pulling BGP routes?

2021-10-18 Thread Michael Thomas



On 10/18/21 12:22 PM, Sabri Berisha wrote:

- On Oct 18, 2021, at 11:51 AM, Michael Thomas m...@mtcc.com wrote:

Hi,


On 10/18/21 11:09 AM, Sabri Berisha wrote:

The term "network neutrality" was invented by people who want to control
a network owned and paid for by someone else.

Your version of "unreasonable" and my version of "unreasonable" are on the
opposite end of the spectrum. I think it is unreasonable for you to tell me
how to run configure my routers, and you think it is unreasonable for me
to configure my routers that I pay for the way that I want to.

Yeahbut, for the last mile that network is often a monopoly or maybe a
duopoly if you're lucky. If streaming provider 1 pays ISP to give
priority over streaming provider 2 -- maybe by severely rate limiting
provider 2 -- the people who get screwed are end users without a way to
vote with their feet. That sort of monopolistic behavior is bad for end
users. Mostly I want ISP's to be dumb bit providers and stay out of
shady deals that enrich ISP's at my expense. And if it takes regulation
to do that, bring it.

I totally agree. 100%. Now we just have to agree on the regulation that
we're talking about.

My idea of regulation in this context is to get rid of the monopoly/duopoly
so that users actually do have a way out and can vote with their feet. From
that perspective, the NBN model isn't that bad (not trying to start an NBN
flamewar here).

But, I would be opposed to regulation that prevents a network operator from
going into enable mode.

There are more reasons than "government intervention into a privately owned
network" / "network neutrality" to want more competition. Lower prices and
better service, for example. Have you ever tried calling Comcast/Spectrum?

I'd love to get involved (privately, not professionally) in a municipal
broadband project where I live. We have 1 fiber duct for the entire town.
That got cut last year, and literally everyone was without internet access
for many hours. We don't need net neutrality. We need competition. The FCC
sucks, and so does the CPUC.


I know that there are a lot of risks with hamfisted gubbermint 
regulations. But even when StarLink turns the sky into perpetual 
daylight and we get another provider, there are going to still be 
painfully few choices, and too often the response to $EVIL is not "oh 
great, more customers for us!" but "oh great, let's do that too!". 
Witness airlines and the race to the bottom with various fees -- and 
that's in a field where there is plenty of competition.


This is obviously complicated and one of the complications is QoS in the 
last mile. DOCSIS has a lot of QoS machinery so that MSOs could get 
CBR-like flows for voice back in the day. I'm not sure whether this ever got 
deployed because as is often the case, brute force and ignorance (ie, 
make the wire faster) wins, mooting the need. Is there even a 
constructive use of QoS in the last mile these days that isn't niche? 
Maybe gaming? Would any sizable set of customers buy it if it were offered?


If there isn't, a regulation that just says "don't cut deals to 
prioritize one traffic source at the expense of others" seems pretty 
reasonable, and probably reflects the status quo anyway.


Mike




Re: DNS pulling BGP routes?

2021-10-18 Thread Michael Thomas



On 10/18/21 1:51 PM, Sabri Berisha wrote:

I know that there are a lot of risks with hamfisted gubbermint

regulations. But even when StarLink turns the sky into perpetual
daylight and we get another provider, there are going to still be
painfully few choices, and too often the response to $EVIL is not "oh
great, more customers for us!" but "oh great, let's do that too!".

That's the point where MBAs take over from engineering to squeeze every last
penny out of the customer. And that usually happens when a company gets large.


So what's the counter? I mean, MSO's already pull that kind of shitty 
behavior with their "fees" cloaked as taxes.


Maybe a better argument is that this is all theoretical since to my 
knowledge it's not being done on any large scale, so let's not fix 
theoretical problems.






This is obviously complicated and one of the complications is QoS in the
last mile. DOCSIS has a lot of QoS machinery so that MSO's could get CBR
like flows for voice back in the day. I'm not sure whether this ever got
deployed because as is often the case, brute force and ignorance (ie,
make the wire faster) wins, mooting the need. Is there even a
constructive use of QoS in the last mile these days that isn't niche?
Maybe gaming? Would any sizable set of customers buy it if it were offered?

It's been a few years since I've worked for a residential service provider,
but to the best of my memory, congestion was rarely found in the last mile.


That's what I figured. I remember talking to some Sprint architect types 
around the same time when I told them all of their insistence on AAL2 
was useless because voice was going to be a drop in the bucket. They 
looked at me as if I was completely insane.


Mike


Re: IPv6 woes - RFC

2021-09-28 Thread Michael Thomas


On 9/28/21 1:06 PM, Christopher Morrow wrote:



On Tue, Sep 28, 2021 at 3:02 PM Randy Bush wrote:


> Heh, NAT is not that evil after all. Do you expect that all the home
> people will get routable public IPs for all they toys inside house?

in ipv6 they can.  and it can have consequences, see

    NATting Else Matters: Evaluating IPv6 Access Control Policies in
    Residential Networks;
    Karl Olson, Jack Wampler, Fan Shen, and Nolen Scaife

https://link.springer.com/content/pdf/10.1007%2F978-3-030-72582-2_22.pdf


the ietf did not give guidance to cpe vendors to protect toys inside
your LAN


guidance aside... 'Time To Market' (or "Minimum Viable Product - MVP"!) 
is likely to impact all of our security 'requirements'. :(
I also thought 'homenet' (https://datatracker.ietf.org/wg/homenet) was 
supposed to have provided the guidance you seek here?



What I wonder is which string the IETF has to push on to get CPE vendors 
to... anything.


Anecdotally, I've seen firewall controls on all of the CPE I've had and 
no IPv6 (at least commercially).


Mike



Re: IPv6 woes - RFC

2021-09-29 Thread Michael Thomas


On 9/29/21 1:09 PM, Victor Kuarsingh wrote:



On Wed, Sep 29, 2021 at 3:22 PM Owen DeLong wrote:





On Sep 29, 2021, at 09:25, Victor Kuarsingh wrote:




On Wed, Sep 29, 2021 at 10:55 AM Owen DeLong via NANOG wrote:

Use SLAAC, allocate prefixes from both providers. If you are
using multiple routers, set the priority of the preferred
router to high in the RAs. If you’re using one router, set
the preferred prefix as desired in the RAs.

Owen
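
(To make the knobs concrete: a rough scapy sketch of the two RAs Owen is 
describing, one per provider, using the RFC 4191 default-router preference 
plus per-prefix preferred lifetimes. The prefixes and interface are made 
up, and this is only an illustration of the fields involved, not anyone's 
production config:)

# Rough scapy sketch of the knobs above: default router preference and
# per-prefix preferred lifetimes in the RA, which is what steers SLAAC
# hosts toward the "primary" prefix. Prefixes/interface are hypothetical.
from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, send

ra_primary = (
    IPv6(dst="ff02::1", hlim=255)
    / ICMPv6ND_RA(routerlifetime=1800, prf=1)           # prf 1 = High
    / ICMPv6NDOptPrefixInfo(prefix="2001:db8:aaaa::", prefixlen=64,
                            validlifetime=86400, preferredlifetime=14400)
)

# The backup provider's prefix stays valid but is deprecated (preferred
# lifetime 0), so hosts only source from it when the primary goes away.
ra_backup = (
    IPv6(dst="ff02::1", hlim=255)
    / ICMPv6ND_RA(routerlifetime=1800, prf=3)           # prf 3 = Low
    / ICMPv6NDOptPrefixInfo(prefix="2001:db8:bbbb::", prefixlen=64,
                            validlifetime=86400, preferredlifetime=0)
)

if __name__ == "__main__":
    ra_primary.show()
    ra_backup.show()
    # send(ra_primary, iface="eth0")  # would need root and a real interface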


I agree this works, but I assume that we would not consider this
a consumer level solution (requires an administrator to make it
work).  It also assumes the local network policy allows for
auto-addressing vs. requirement for DHCP.


It shouldn’t require an administrator if there’s just one router.
If there are two routers, I’d say we’re beyond the average consumer.


In the consumer world (Where a consumer has no idea who we are, what 
IP is and the Internet is a wireless thing they attach to).


I am only considering one router (consumer level stuff). Here is my 
example:
- Mr/Ms/Ze. Smith is a consumer (lawyer) wants to work from home and 
buy a local cable service and/or DSL service, and/or xPON service


Isn't the easier (and cheaper) thing to do here just to use a VPN to get 
behind the corpro firewall? Or, as is probably happening more and more, 
there is no corpro network at all, since everything is outsourced on the 
net for smaller companies like your law firm.


The use cases that stuck in my mind as the justification for the need 
for routing were things like Zigbee and other low-power networks 
where you want them isolated from the chatter of the local LAN. Not 
saying that I agree with the justification, but that was it iirc.


Mike



Re: IPv6 woes - RFC

2021-09-29 Thread Michael Thomas


On 9/29/21 12:22 PM, Owen DeLong via NANOG wrote:




On Sep 29, 2021, at 09:25, Victor Kuarsingh  wrote:




On Wed, Sep 29, 2021 at 10:55 AM Owen DeLong via NANOG wrote:


Use SLAAC, allocate prefixes from both providers. If you are
using multiple routers, set the priority of the preferred router
to high in the RAs. If you’re using one router, set the preferred
prefix as desired in the RAs.

Owen


I agree this works, but I assume that we would not consider this a 
consumer level solution (requires an administrator to make it work).  
It also assumes the local network policy allows for auto-addressing 
vs. requirement for DHCP.


It shouldn’t require an administrator if there’s just one router. If 
there are two routers, I’d say we’re beyond the average consumer.



I think the multiple router problem is one of the things that homenet 
was supposed to be solving for such that it is plug and play. But I 
share some of your skepticism.


I wonder if anybody has run an experiment wider than one or two people 
where the home router implements a 6-4 NAT and the default numbering is 
v6 instead of v4. That is, run everything that can run on v6 and NAT it 
to v4 on the wan side (assuming there isn't v6 there). There are lots of 
v6 stacks out there for all of the common OS's and supposedly they 
prefer v6 in a happy eyeballs race. I mean, if we have to NAT why not v6 
NAT the devices that support it and v4 NAT the ones that can't.


I'm not sure if Cablelabs is active with v6 -- last I heard they were 
pushing v6, but that's been ages -- but that would really put their 
money where their mouth is if it really worked well at scale. It would 
also give some incentive to have v6 in the last mile so you don't even 
need the 6-4 NAT. Didn't somebody like Comcast go to a complete v6 
network internally to simplify their network? That sounds like it would 
push the simplification even farther.


Mike



Re: IPv6 woes - RFC

2021-09-29 Thread Michael Thomas


On 9/29/21 2:23 PM, Victor Kuarsingh wrote:



On Wed, Sep 29, 2021 at 4:51 PM Michael Thomas wrote:



On 9/29/21 1:09 PM, Victor Kuarsingh wrote:



On Wed, Sep 29, 2021 at 3:22 PM Owen DeLong wrote:




On Sep 29, 2021, at 09:25, Victor Kuarsingh wrote:




On Wed, Sep 29, 2021 at 10:55 AM Owen DeLong via NANOG wrote:

Use SLAAC, allocate prefixes from both providers. If you
are using multiple routers, set the priority of the
preferred router to high in the RAs. If you’re using one
router, set the preferred prefix as desired in the RAs.

Owen


I agree this works, but I assume that we would not consider
this a consumer level solution (requires an administrator to
make it work).  It also assumes the local network policy
allows for auto-addressing vs. requirement for DHCP.


It shouldn’t require an administrator if there’s just one
router. If there are two routers, I’d say we’re beyond the
average consumer.


In the consumer world (Where a consumer has no idea who we are,
what IP is and the Internet is a wireless thing they attach to).

I am only considering one router (consumer level stuff).  Here is
my example:
- Mr/Ms/Ze. Smith is a consumer (lawyer) wants to work from home
and buy a local cable service and/or DSL service, and/or xPON service


Isn't the easier (and cheaper) thing to do here is just use a VPN
to get behind the corpro firewall? Or as is probably happening
more and more there is no corpro network at all since everything
is outsourced on the net for smaller companies like your law firm.


For shops with IT departments, sure that can make sense. For many 
mom/pop setups, maybe less likely.  The challenge for us (in this 
industry) is that we need to address not just the top use cases, but 
the long tail as well (especially in this new climate of more WFH).


At the last startup I worked for, a customer wanted audit info on our 
corporate network. We didn't have one. We just used various cloud based 
services to get our jobs done and rented cloud based vm's for the 
customer facing services. I would imagine that a mom/pop setup would do 
the same thing these days. Having a corpro network in the small probably 
doesn't make much sense anymore let alone the fancy multihoming 
scenarios to access it. There are security implications with all of 
this, of course, but that's probably the path of least resistance.


Mike



DNS pulling BGP routes?

2021-10-06 Thread Michael Thomas
So if I understand their post correctly, their DNS servers have the 
ability to withdraw routes if they determine them to be sub-optimal (fsvo). 
I can certainly understand the DNS servers not giving answers they 
think are unreachable, but there is always the problem that the servers 
themselves may be partitioned rather than the routes being bad. At a 
minimum, I would think they'd need some consensus protocol that says 
it's broken across multiple servers.
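
Something like the sketch below is roughly the pattern I'm imagining: an 
ExaBGP-style health-check helper that announces or withdraws the anycast 
prefix, with a crude version of that quorum check bolted on. The prefix, 
backend and peer addresses are all invented; it's only meant to illustrate 
the mechanism and the failure mode, not what Facebook actually runs.

# Sketch of an anycast health-check helper in the style of an ExaBGP
# process: it writes announce/withdraw commands for the service prefix to
# stdout and lets the local BGP speaker do the rest. Addresses are made up;
# the "quorum" check is the naive version of the consensus idea above
# (don't withdraw if you can't see anyone else -- you may be the one who
# is partitioned).
import socket
import time

PREFIX = "192.0.2.0/24"                     # hypothetical anycast prefix
BACKEND = ("203.0.113.10", 53)              # hypothetical backend dependency
PEERS = ["198.51.100.2", "198.51.100.3"]    # sibling health-checkers

def tcp_ok(addr, timeout=2):
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def main():
    announced = False
    while True:
        healthy = tcp_ok(BACKEND)
        if healthy and not announced:
            print(f"announce route {PREFIX} next-hop self", flush=True)
            announced = True
        elif not healthy and announced:
            peers_visible = sum(tcp_ok((p, 8080)) for p in PEERS)
            if peers_visible > 0:
                print(f"withdraw route {PREFIX} next-hop self", flush=True)
                announced = False
            # else: we look isolated; keep the route up rather than have
            # every node withdraw at once and take the whole service down.
        time.sleep(5)

if __name__ == "__main__":
    main()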


But I just don't understand why this is a good idea at all. Network 
topology is not DNS's bailiwick so using it as a trigger to withdraw 
routes seems really strange and fraught with unintended consequences. 
Why is it a good idea to withdraw the route if it doesn't seem reachable 
from the DNS server? Give answers that are reachable, sure, but to 
actually make a topology decision? Yikes. And what happens to the cached 
answers that still point to the supposedly dead route? They're going to 
fail until the TTL expires anyway, so why is it preferable to withdraw the 
route too?


My guess is that their post, while more clear than most, doesn't go into 
enough detail, but is it me or does it seem like this is a really weird 
thing to do?


Mike


On 10/5/21 11:56 PM, Bjørn Mork wrote:

Masataka Ohta  writes:


As long as name servers with expired zone data won't serve
request from outside of facebook, whether BGP routes to the
name servers are announced or not is unimportant.

I am not convinced this is true.  You'd normally serve some semi-static
content, especially wrt stuff you need yourself to manage your network.
Removing all DNS servers at the same time is never a good idea, even in
the situation where you believe they are all failing.

The problem is of course that you can't let the servers take the
decision to withdraw from anycast if you want to prevent this
catastrophe.  The servers have no knowledge of the rest of the network.
They only know that they've lost contact with it.  So they all make the
same stupid decision.

But if the servers can't withdraw, then they will serve stale content if
the data center loses backbone access. And with a large enough network
then that is probably something which happens on a regular basis.

This is a very hard problem to solve.

Thanks a lot to facebook for making the detailed explanation available
to the public.  I'm crossing my fingers hoping they follow up with
details about the solutions they come up with.  The problem affects any
critical anycast DNS service. And it doesn't have to be as big as
facebook to be locally critical to an enterprise, ISP or whatever.



Bjørn


Re: DNS pulling BGP routes?

2021-10-06 Thread Michael Thomas



On 10/6/21 2:33 PM, William Herrin wrote:

On Wed, Oct 6, 2021 at 10:43 AM Michael Thomas  wrote:

So if I understand their post correctly, their DNS servers have the
ability to withdraw routes if they determine are sub-optimal (fsvo).

The servers' IP addresses are anycasted. When one data center
determines itself to be malfunctioning, it withdraws the routes so
that users will reach a different data center that is, in theory,
still functioning.

Ah, I was wondering if the anycast part was the relevant bit. But 
doesn't it seem odd that it would be intertwined with the DNS 
infrastructure?


Mike



Re: DNS pulling BGP routes?

2021-10-06 Thread Michael Thomas



On 10/6/21 2:58 PM, Jon Lewis wrote:

On Wed, 6 Oct 2021, Michael Thomas wrote:



On 10/6/21 2:33 PM, William Herrin wrote:

 On Wed, Oct 6, 2021 at 10:43 AM Michael Thomas  wrote:

 So if I understand their post correctly, their DNS servers have the
 ability to withdraw routes if they determine are sub-optimal (fsvo).

 The servers' IP addresses are anycasted. When one data center
 determines itself to be malfunctioning, it withdraws the routes so
 that users will reach a different data center that is, in theory,
 still functioning.

Ah, I was wondering if the anycast part was the relevant bit. But 
doesn't it seem odd that it would be intertwined with the DNS 
infrastructure?


People have been anycasting DNS server IPs for years (decades?). So, no.

But it wasn't just their DNS subnets that were pulled, I thought. I'm 
obviously really confused. With anycast DNS it makes sense that 
they'd pull out if they couldn't contact the backend. But I thought that 
almost all of their routes to the backend were pulled? That is, the DFZ 
was emptied of FB routes.


Mike



Re: DNS pulling BGP routes?

2021-10-06 Thread Michael Thomas



On 10/6/21 3:33 PM, Jon Lewis wrote:

On Wed, 6 Oct 2021, Michael Thomas wrote:

 People have been anycasting DNS server IPs for years (decades?). 
So, no.


But it wasn't just their DNS subnets that were pulled, I thought. I'm 
obviously really confused. Anycast to a DNS server makes sense that 
they'd pull out if they couldn't contact the backend. But I thought 
that almost all of their routes to the backend were pulled? That is, 
the DFZ was emptied of FB routes.


Well, as someone else said, DNS wasn't the problem...it was just one 
of the more noticeable casualties.  Whatever they did broke the 
network rather completely, and that took out all of their DNS, which 
broke lots of other things that depend on DNS.


Maybe the problem here is that two things happened and the article 
conflated the two: the DNS infrastructure pulled its routes from the 
anycast address and something else pulled all of the other routes but 
wasn't mentioned in the article.


Mike



Re: VoIP Provider DDoSes

2021-09-21 Thread Michael Thomas



On 9/21/21 4:09 PM, Eric Kuhnke wrote:
Unlike http based services which can be placed behind cloudflare or 
similar, harder to protect sip trunking servers.


The provider in question makes use of third party hosting services for 
each of their cities' POPs. It is my understanding that for the most 
part they do not run their own infrastructure but either rent 
dedicated servers or a few rack units of Colo in each city.


I question whether some or any of those hosting companies have 
sufficient inbound (200-400Gbps) capacity to weather a moderately 
sized DDoS.



Which makes SIPoHTTP an inevitability.

Mike



Re: VoIP Provider DDoSes

2021-09-21 Thread Michael Thomas



On 9/21/21 6:46 PM, Brandon Svec via NANOG wrote:

Never heard of that one. WebRTC is maybe easier to protect from DDOS?


I was just kidding/2. But WebRTC doesn't have a signaling protocol. It can 
be SIP but it can be completely home-brewed too.


Mike




Brandon


On Sep 21, 2021, at 5:37 PM, Michael Thomas  wrote:

Which makes SIPoHTTP an inevitability.

Mike


Re: update - Re: Facebook post-mortems...

2021-10-04 Thread Michael Thomas


On 10/4/21 6:07 PM, jcur...@istaff.org wrote:
On 4 Oct 2021, at 8:58 PM, jcur...@istaff.org wrote:


Fairly abstract - Facebook Engineering - 
https://m.facebook.com/nt/screen/?params=%7B%22note_id%22%3A10158791436142200%7D=%2Fnotes%2Fnote%2F&_rdr 



My bad - might be best to ignore the above post as it is an 
unconfirmed/undated post-mortem that may reference a different event.


One of the replies says it's from February, so yeah.

Mike




Re: facebook outage

2021-10-04 Thread Michael Thomas


On 10/4/21 2:41 PM, Baldur Norddahl wrote:



On Mon, Oct 4, 2021 at 23:33 Bill Woodcock wrote:




> On Oct 4, 2021, at 11:21 PM, Bill Woodcock wrote:
>
>
>
>> On Oct 4, 2021, at 11:10 PM, Bill Woodcock wrote:
>>
>> They’re starting to pick themselves back up off the floor in
the last two or three minutes.  A few answers getting out.  I
imagine it’ll take a while before things stabilize, though.
>
> nd we’re back:
>
> WoodyNet-2:.ssh woody$ dig www.facebook.com @9.9.9.9

So that was, what…  15:50 UTC to 21:05 UTC, more or less… five
hours and fifteen minutes.

That’s a lot of hair burnt all the way to the scalp, and some
third-degree burns beyond that.

Maybe they’ll get one or two independent secondary authoritatives,
so this doesn’t happen again.  :-)



We have had dns back for a while here but the site is still down. Not 
counting this as over yet.





I got a page to load. Probably trickling out.

Mike



Re: Facebook post-mortems...

2021-10-04 Thread Michael Thomas


On 10/4/21 5:58 PM, jcur...@istaff.org wrote:
Fairly abstract - Facebook Engineering - 
https://m.facebook.com/nt/screen/?params=%7B%22note_id%22%3A10158791436142200%7D=%2Fnotes%2Fnote%2F&_rdr 



Also, Cloudflare’s take on the outage - 
https://blog.cloudflare.com/october-2021-facebook-outage/ 





They have a monkey patch subsystem. Lol.

Mike



Re: Facebook post-mortems...

2021-10-05 Thread Michael Thomas



On 10/4/21 10:42 PM, William Herrin wrote:

On Mon, Oct 4, 2021 at 6:15 PM Michael Thomas  wrote:

They have a monkey patch subsystem. Lol.

Yes, actually, they do. They use Chef extensively to configure
operating systems. Chef is written in Ruby. Ruby has something called
Monkey Patches. This is where at an arbitrary location in the code you
re-open an object defined elsewhere and change its methods.

Chef doesn't always do the right thing. You tell Chef to remove an RPM
and it does. Even if it has to remove half the operating system to
satisfy the dependencies. If you want it to do something reasonable,
say throw an error because you didn't actually tell it to remove half
the operating system, you have a choice: spin up a fork of chef with a
couple patches to the chef-rpm interaction or just monkey-patch it in
one of your chef recipes.
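
(If you haven't seen the trick, here is the same re-open-a-class-and-swap-a-
method idea sketched in Python rather than Ruby. A toy example only; this 
is not Chef's or Facebook's actual code:)

# Toy illustration of monkey patching: re-open someone else's class at an
# arbitrary point in your own code and swap out one of its methods.
class PackageManager:              # pretend this lives in a library you use
    def remove(self, name, deps):
        return f"removing {name} and {len(deps)} dependencies"

def cautious_remove(self, name, deps):
    # Patched behaviour: refuse surprising dependency removals instead of
    # silently doing them.
    if len(deps) > 5:
        raise RuntimeError(f"refusing to remove {name}: {len(deps)} deps")
    return f"removing {name} and {len(deps)} dependencies"

# The "monkey patch": rebinding the method on the existing class changes it
# for every caller of PackageManager.remove from this point on.
PackageManager.remove = cautious_remove

if __name__ == "__main__":
    pm = PackageManager()
    print(pm.remove("somepkg", deps=["a", "b"]))
    try:
        pm.remove("glibc", deps=list("abcdefg"))
    except RuntimeError as err:
        print("blocked:", err)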


Just because a language allows monkey patching doesn't mean that you 
should use it. In that particular outage they said that they fix up 
errant looking config files rather than throw an error and make somebody 
fix it. That is an extremely bad practice and frankly looks like amateur 
hour to me.


Mike



Re: Facebook post-mortems...

2021-10-05 Thread Michael Thomas



On 10/5/21 12:17 AM, Carsten Bormann wrote:

On 5. Oct 2021, at 07:42, William Herrin  wrote:

On Mon, Oct 4, 2021 at 6:15 PM Michael Thomas  wrote:

They have a monkey patch subsystem. Lol.

Yes, actually, they do. They use Chef extensively to configure
operating systems. Chef is written in Ruby. Ruby has something called
Monkey Patches.

While Ruby indeed has a chain-saw (read: powerful, dangerous, still the tool of 
choice in certain cases) in its toolkit that is generally called 
“monkey-patching”, I think Michael was actually thinking about the “chaos 
monkey”,
https://en.wikipedia.org/wiki/Chaos_engineering#Chaos_Monkey
https://netflix.github.io/chaosmonkey/


No, chaos monkey is a purposeful thing to induce corner case errors so 
they can be fixed. The earlier outage involved a config sanitizer that 
screwed up and then pushed it out. I can't get my head around why 
anybody thought that was a good idea vs rejecting it and making somebody 
fix the config.


Mike




Re: Facebook post-mortems...

2021-10-05 Thread Michael Thomas
Actually for card readers, the offline verification nature of 
certificates is probably a nice property. But client certs pose all 
sorts of other problems like their scalability, ease of making changes 
(roles, etc), and other kinds of considerations that make you want to 
fetch more information online... which completely negates the advantages 
of offline verification. Just the CRL problem would probably sink you 
since when you fire an employee you want access to be cut off immediately.


The other thing that would scare me in general with expecting offline 
verification is that the *reason* it's being used -- offline operation -- 
might get forgotten, and back come the online dependencies while nobody 
is looking.


BTW: you don't need to reach the trust anchor, though you almost 
certainly need to run OCSP or something like it if you have client certs.
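
(Concretely, an online check looks something like this: a minimal OCSP 
query using the Python cryptography library. The file names and responder 
URL are hypothetical; real setups pull the responder out of the cert's AIA 
extension and cache or staple the response:)

# Minimal sketch of an online revocation check (OCSP) for a client cert.
# Hypothetical file names and responder URL, purely illustrative.
import urllib.request
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

with open("client.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request_der = builder.build().public_bytes(serialization.Encoding.DER)

req = urllib.request.Request(
    "http://ocsp.example.net",                 # hypothetical responder
    data=request_der,
    headers={"Content-Type": "application/ocsp-request"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    ocsp_resp = ocsp.load_der_ocsp_response(resp.read())

# If this lookup can't complete, the "offline" cert check is stuck -- which
# is exactly the online dependency being discussed above.
print("response status:", ocsp_resp.response_status)
if ocsp_resp.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    print("certificate status:", ocsp_resp.certificate_status)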


Mike

On 10/5/21 1:34 PM, Matthew Petach wrote:



On Tue, Oct 5, 2021 at 8:57 AM Kain, Becki (.) wrote:


Why ever would have a card reader on your external facing network,
if that was really the case why they couldn't get in to fix it?


Let's hypothesize for a moment.

Let's suppose you've decided that certificate-based
authentication is the cat's meow, and so you've got
dot1x authentication on every network port in your
corporate environment, all your users are authenticated
via certificates, all properly signed all the way up the
chain to the root trust anchor.

Life is good.

But then you have a bad network day.  Suddenly,
you can't talk to upstream registries/registrars,
you can't reach the trust anchor for your certificates,
and you discover that all the laptops plugged into
your network switches are failing to validate their
authenticity; sure, you're on the network, but you're
in a guest vlan, with no access.  Your user credentials
aren't able to be validated, so you're stuck with the
base level of access, which doesn't let you into the
OOB network.

Turns out your card readers were all counting on
dot1x authentication to get them into the right vlan
as well, and with the network buggered up, the
switches can't validate *their* certificates either,
so the door badge card readers just flash their
LEDs impotently when you wave your badge at
them.

Remember, one attribute of certificates is that they are
designated as valid for a particular domain, or set of
subdomains with a wildcard; that is, an authenticator needs
to know where the certificate is being presented to know if
it is valid within that scope or not.   You can do that scope
validation through several different mechanisms,
such as through a chain of trust to a certificate authority,
or through DNSSEC with DANE--but fundamentally,
all certificates have a scope within which they are valid,
and a means to identify in which scope they are being
used.  And wether your certificate chain of trust is
being determined by certificate authorities or DANE,
they all require that trust to be validated by something
other than the client and server alone--which generally
makes them dependent on some level of external
network connectivity being present in order to properly
function.   [yes, yes, we can have a side discussion about
having every authentication server self-sign certificates
as its own CA, and thus eliminate external network
connectivity dependencies--but that's an administrative
nightmare that I don't think any large organization would
sign up for.]

So, all of the client certificates and authorization servers
we're talking about exist on your internal network, but they
all counted on reachability to your infrastructure
servers in order to properly authenticate and grant
access to devices and people.  If your BGP update
made your infrastructure servers, such as DNS servers,
become unreachable, then suddenly you might well
find yourself locked out both physically and logically
from your own network.

Again, this is purely hypothetical, but it's one scenario
in which a routing-level "oops" could end up causing
physical-entry denial, as well as logical network access
level denial, without actually having those authentication
systems on external facing networks.

Certificate-based authentication is scalable and cool, but
it's really important to think about even generally "that'll
never happen" failure scenarios when deploying it into
critical systems.  It's always good to have the "break glass
in case of emergency" network that doesn't rely on dot1x,
that works without DNS, without NTP, without RADIUS,
or any other external system, with a binder with printouts
of the IP addresses of all your really critical servers and
routers in it which gets updated a few times a year, so that
when the SHTF, a person sitting at a laptop plugged into
that network with the binder next to them can get into the
emergency-only local account on each router to fix things.

And yes, you want every command that local emergency-only
user types into a router to be logged, because someone

Better description of what happened

2021-10-05 Thread Michael Thomas
This bit posted by Randy might get lost in the other thread, but it 
appears that their DNS withdraws BGP routes for prefixes that it can't 
reach or that seem flaky. Apparently that goes for the prefixes that 
the name servers are on too. This caused internal outages as well, as it 
seems they use their front-facing DNS just like everybody else.


Sounds like they might consider having at least one split horizon server 
internally. Lots of fodder here.


Mike

On 10/5/21 11:11 AM, Randy Monroe wrote:
Updated: 
https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/ 


On Tue, Oct 5, 2021 at 1:26 PM Michael Thomas wrote:



On 10/5/21 12:17 AM, Carsten Bormann wrote:
> On 5. Oct 2021, at 07:42, William Herrin wrote:
>> On Mon, Oct 4, 2021 at 6:15 PM Michael Thomas wrote:
>>> They have a monkey patch subsystem. Lol.
>> Yes, actually, they do. They use Chef extensively to configure
>> operating systems. Chef is written in Ruby. Ruby has something
called
>> Monkey Patches.
> While Ruby indeed has a chain-saw (read: powerful, dangerous,
still the tool of choice in certain cases) in its toolkit that is
generally called “monkey-patching”, I think Michael was actually
thinking about the “chaos monkey”,
> https://en.wikipedia.org/wiki/Chaos_engineering#Chaos_Monkey
> https://netflix.github.io/chaosmonkey/

No, chaos monkey is a purposeful thing to induce corner case
errors so
they can be fixed. The earlier outage involved a config sanitizer
that
screwed up and then pushed it out. I can't get my head around why
anybody thought that was a good idea vs rejecting it and making
somebody
fix the config.

Mike




--

Randy Monroe

Network Engineering

Uber <https://uber.com/>








Re: massive facebook outage presently

2021-10-04 Thread Michael Thomas


On 10/4/21 11:48 AM, Luke Guillory wrote:


I believe the original change was 'automatic' (as in configuration 
done via a web interface). However, now that connection to the outside 
world is down, remote access to those tools don't exist anymore, so 
the emergency procedure is to gain physical access to the peering 
routers and do all the configuration locally.


Assuming that this is what actually happened, what should fb have done 
differently (beyond the obvious of not screwing up the immediate issue)? 
This seems like it's a single point of failure. Should all of the BGP 
speakers have been dual homed or something like that? Or should they not 
have been mixing ops and production networks? Sorry if this sounds dumb.


Mike



Re: Better description of what happened

2021-10-05 Thread Michael Thomas


On 10/5/21 3:09 PM, Andy Brezinsky wrote:


It's a few years old, but Facebook has talked a little bit about their 
DNS infrastructure before.  Here's a little clip that talks about 
Cartographer: https://youtu.be/bxhYNfFeVF4?t=2073


From their outage report, it sounds like their authoritative DNS 
servers withdraw their anycast announcements when they're unhealthy.  
The health check from those servers must have relied on something 
upstream.  Maybe they couldn't talk to Cartographer for a few minutes 
so they thought they might be isolated from the rest of the network 
and they decided to withdraw their routes instead of serving stale 
data.  Makes sense when a single node does it, not so much when the 
entire fleet thinks that they're out on their own.


A performance issue in Cartographer (or whatever manages this fleet 
these days) could have been the ticking time bomb that set the whole 
thing in motion.


Rereading it, it said that their internal (?) backbone went down, so 
pulling the routes was arguably the right thing to do. Or at least not 
flat-out wrong. Taking out their nameserver subnets was clearly a 
problem, though a fix is probably tricky since you clearly want to be 
able to take down errant nameservers too.



Mike









Re: Powerline Broadband Usecases

2021-09-27 Thread Michael Thomas


On 9/27/21 1:35 PM, JORDI PALET MARTINEZ via NANOG wrote:


It may be still interesting in some remote areas if you can bring 
fiber to the nearest medium to low voltage transformer.


The company that designed the chip set (DS2, a Spanish company), was 
acquired by Marvell 
(https://www.marvell.com/company/newsroom/marvell-acquires-ds2-technology.html 
). 
I’m sure that they have many products (semiconductors) as a result of 
what we did in the project, but I don’t know the status, which are 
used in in-home PLC for sure. However, I’m not sure if there are many 
device providers using them for PLC services, CPEs, etc.





Very topical for California is whether there would be enough bandwidth 
to support remote sensors on power poles, like heat, IR shots, circuit 
open, etc. A couple thousand every pole or two would probably be way 
cheaper than undergrounding, which has its own set of issues.


Mike




Re: Class D addresses? was: Redploying most of 127/8 as unicast public

2021-11-20 Thread Michael Thomas



On 11/20/21 11:41 AM, Jay Hennigan wrote:

On 11/20/21 11:01, Michael Thomas wrote:

There is just as big a block of addresses with class D addresses for 
broadcast. Is broadcast really even a thing these days? I know tons 
of work went into it, but it always seemed that brute force and 
ignorance won out using unicast. Even if it has some niche uses, I 
seriously doubt that it needs 400M addresses. If you wanted to 
reclaim ipv4 addresses it seems that class D and class E would be a 
much better target than loopback.


It's multicast, not broadcast. A very small chunk is used by some 
routing protocols and it has uses in several streaming applications, 
but indeed it's much larger than it practically needs to be.


However, IMNSHO, all of these proposals if adopted are really just 
going to make a few people richer in the short term after their 
adoption and will not do anything significant to solve the problem of 
IPv4 exhaustion long-term.


Yeah, sorry brain fart. I'm mostly in the camp of just getting on with 
it with ipv6, but starving the beast doesn't have a great track record. 
We are talking about 20% of the address space that's being wasted so 
it's not nothing.


Mike



Re: Class D addresses? was: Redploying most of 127/8 as unicast public

2021-11-20 Thread Michael Thomas



On 11/20/21 12:37 PM, William Herrin wrote:

On Sat, Nov 20, 2021 at 12:03 PM Michael Thomas  wrote:

Was it the politics of ipv6 that
this didn't get resolved in the 90's when it was a lot more tractable?

No, in the '90s we didn't have nearly the basis for looking ahead. We
might still have invented a new way to use IP addresses that required
a block that wasn't unicast. It was politics in the 2000's and the
2010's, as it is today.


In the early to mid 90's it was still a crap shoot whether IP was going 
to win (though it was really the only game in town for non-LAN), but by 
the time I started at Cisco in 1998 it was the clear winner, with 
broadband starting to roll out. It was also obvious that v4 address 
space was going to run out, which of course was the core reason for v6. 
So I don't understand why this didn't get done then, when it was a *lot* 
easier. It sure smacks of politics.


Mike



Re: Class D addresses? was: Redploying most of 127/8 as unicast public

2021-11-20 Thread Michael Thomas



On 11/20/21 11:51 AM, William Herrin wrote:


If I had to guess, changing 224/4 is probably the biggest lift. The
other proposals mainly involve altering configuration, removing some
possibly hardcoded filters and in a few cases waiting for silicon to
age out of the system. Changing 224/4 means following a different code
path which does something fundamentally different with the packets --
unicast instead of multicast.


Yes, I agree it's the hardest. But if you're going to make changes at 
all, you might as well get all of them. Was it the politics of ipv6 that 
kept this from getting resolved in the 90's, when it was a lot more 
tractable?


Mike





Class D addresses? was: Redploying most of 127/8 as unicast public

2021-11-20 Thread Michael Thomas



On 11/20/21 10:44 AM, Chris Adams wrote:

[]

There is just as big a block of addresses with class D addresses for 
broadcast. Is broadcast really even a thing these days? I know tons of 
work went into it, but it always seemed that brute force and ignorance 
won out using unicast. Even if it has some niche uses, I seriously doubt 
that it needs 400M addresses. If you wanted to reclaim ipv4 addresses it 
seems that class D and class E would be a much better target than loopback.


Mike, not that I have any stake in this



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-21 Thread Michael Thomas



On 11/20/21 9:29 PM, Jay Hennigan wrote:

On 11/19/21 10:27, William Herrin wrote:

Howdy,

That depends on your timeline. Do you know many non-technical people
still using their Pentium III computers with circa 2001 software
versions? Connected to the Internet?


There are lots of very old networked industrial machines with embedded 
computers operated by non-network-savvy people that are still very 
much in use.


Think CNC machines in machine shops, SCADA systems, etc. I wouldn't be 
a bit surprised to find quite a few 2001-era boxes still in service.


At some level I think there's a good chance that they'd just work. I 
wrote a significant amount of the Lantronix terminal server code and it 
never occurred to me that I should enforce rules about 127.0.0.0 or 
Class D or Class E. It really didn't have much bearing on a terminal 
server or the other host-like things we built. If you typed it in, it 
would work; if you listened on a port, it wouldn't care what the address 
was. I would imagine that lots of stacks from back in the day were just 
like that.
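
If somebody wanted to test that assumption on a modern stack, a quick 
probe is easy enough. A minimal sketch (Python; the addresses are just 
examples out of 127/8, 0/8 and 240/4) that asks the local stack whether 
it will even try to send to those ranges:

    import socket

    # Ask the local stack whether it accepts these as unicast destinations.
    for addr in ("127.1.2.3", "0.1.2.3", "240.0.0.1"):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1)
        try:
            # sendto forces the stack to validate and route the destination
            s.sendto(b"probe", (addr, 9))
            print(addr, "accepted by the stack")
        except OSError as e:
            print(addr, "rejected:", e)
        finally:
            s.close()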


Mike



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread Michael Thomas



On 11/19/21 8:27 AM, Randy Bush wrote:

these measurements would be great if there could be a full research-
style paper, with methodology artifacts, and reproducible results.
otherwise it disappears in the gossip stream of mailimg lists.

Maybe an experimental RFC making it an RFC 1918-like subnet, implemented 
on OpenWrt or something like that to see what happens: how many IP 
cameras and the like roll over and die? Same for Class E addresses too, 
I suppose. The question with anything that asks about legacy is how long 
the long tail actually is.


Mike, not that I have any position on whether this is a good idea or not



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread Michael Thomas



On 11/19/21 7:38 AM, Owen DeLong via NANOG wrote:

Actually, CIDR didn’t require upgrading every end-node, just some of them.

That’s what made it doable… Updating only routers, not end-nodes.

Another thing that made it doable is that there were a LOT fewer end-nodes
and a much smaller vendor space when it came to the source of routers
that needed to be updated.

Further, in the CIDR deployment days, routers were almost entirely still
CPU-switched rather than ASIC or even line-card switched. Heck, the
workhorse backbone router that stimulated the development of CIDR
was built on an open-standard Mutlibus backplane with a MIPS CPU
IIRC. That also made widespread software updates a much simpler
proposition. Hardly anyone had a backbone router that was older than
an AGS (in fact, even the AGS was relatively rare in favor of the AGS+).


I don't think you can overstate how ASICs made changing anything pretty 
much impossible. I'm not sure exactly when the cutover to ASICs started 
to happen in the 90's, but once it did it was pretty much game over for 
ipv6. Instead of slipping an implementation into a release train and 
seeing what happens, it was getting buy-in from a product manager who 
had absolutely no interest in respinning silicon. I remember when 
Deering and I were talking to the GSR folks (iirc) and it was hopeless 
since it would have to use the software path, and nobody was going to 
buy a GSR for its software path.


It's why all of the pissing and moaning about what ipv6 looked like 
completely missed the point. There was a fuse lit in 1992 that ran until 
the first hardware-based routing shipped. *Anything* that extended the 
address space would have been better.


Mike



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread Michael Thomas



On 11/19/21 10:15 AM, William Herrin wrote:

On Fri, Nov 19, 2021 at 10:04 AM Michael Thomas  wrote:

I don't think you can overstate how ASIC's made changing anything pretty
much impossible.
It's why all of the pissing and moaning about what ipv6 looked like
completely missed the point. There was a fuse lit in 1992 to when the
first hardware based routing was done. *Anything* that extended the
address space would have been better.

Obligatory 2007 plug: https://bill.herrin.us/network/ipxl.html


And just as impossible since it would pop it out of the fast path. Does 
big iron support ipv6 these days?


Mike



Re: IPv6 and CDN's

2021-11-27 Thread Michael Thomas


On 11/27/21 7:46 AM, Scott Morizot wrote:
On Fri, Nov 26, 2021 at 6:51 PM Oliver O'Boyle 
 wrote:


They're getting better at it, at least. They also recently added
v6 support in their NLBs and you can get a /56 for every VPC for
direct access. I don't think they offer BYO v6 yet, as they do for
v4, but it will come.


Since we are deploying BYO IPv6 in AWS, I can assure you they do offer 
it now. That was a blocker for us.


I thought it had to be some virtual private cloud setup? To get the long 
tail it needs to be a lot simpler. Like "here is the AAAA record" after 
autoconf.
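
In other words, the client side ought to be about this boring once the 
AAAA record is published (a sketch; the hostname is just a stand-in for 
whatever host you brought up):

    import socket

    # With an autoconf address and an AAAA record published for it, a client
    # shouldn't need to know anything about VPCs or translation gateways.
    host = "www.example.com"  # stand-in hostname
    for *_, sockaddr in socket.getaddrinfo(host, 443, socket.AF_INET6,
                                           socket.SOCK_STREAM):
        print("AAAA ->", sockaddr[0])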


Mike


Re: IPv6 and CDN's

2021-11-27 Thread Michael Thomas



On 11/27/21 12:16 PM, William Herrin wrote:

On Fri, Nov 26, 2021 at 3:07 PM Michael Thomas  wrote:

On 11/26/21 1:44 PM, Jean St-Laurent via NANOG wrote:
Here are some maths and 1 argument kicking ass pitch for CFO’s that use iphones.
Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4

This really hits my bs meter big time.

If I had to guess, this is an example of correlation is not causation.
Folks with IPv6 tend to be on savvier service providers who have
better performance for both IPv4 and IPv6.  To find out for sure,
you'd have to do an experiment where same-user-same-server connections
are split between IPv4 and IPv6 and then measure the performance
difference. I don't know if anyone has done that but these particular
articles look like someone is just looking at the high-level metrics.
Those won't hold any statistical validity because they're not actually
random samples.


I agree. It's pretty suspect that they didn't give the reason it was 
happening. I mean, why the incuriosity?
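
FWIW, the experiment Bill describes isn't hard to run yourself. A rough 
sketch (Python; the hostname is a placeholder, and you'd want many more 
samples, interleaving and some care before drawing any conclusions):

    import socket
    import statistics
    import time

    HOST, PORT, SAMPLES = "dualstack.example.com", 443, 20  # placeholder

    def connect_time(family: int) -> float:
        # Resolve over the chosen family and time just the TCP handshake.
        addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4][0]
        start = time.monotonic()
        with socket.create_connection((addr, PORT), timeout=5):
            pass
        return time.monotonic() - start

    for family, label in ((socket.AF_INET, "ipv4"), (socket.AF_INET6, "ipv6")):
        times = [connect_time(family) for _ in range(SAMPLES)]
        print(label, "median connect:", statistics.median(times))

Same client, same dual-stack server, same path as far as you can tell; 
anything less than that and you're back to comparing populations rather 
than protocols.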


Mike



Re: IPv6 and CDN's

2021-11-27 Thread Michael Thomas



On 11/27/21 2:22 PM, Owen DeLong via NANOG wrote:

Actually, I think it’s in the fine print here…

“Connection setup is 1.4 times faster”. I can believe that NAT adds almost 40% 
overhead to the connection setup (3-way handshake) and some
of the differences in packet handling in the fast path between v4 and v6 could 
contribute the small remaining difference.

I doubt it is due to different connections, since we’re talking about 
measurements against dual-stack sites reached from dual-stack end-users,
very likely traversing similar paths.

40% in isolation is pretty meaningless. If it's 40% of .1% overall it's 
called a rounding error.


Mike



Re: AWS and IPv6

2021-11-28 Thread Michael Thomas



On 11/28/21 1:17 PM, Karl Auer wrote:

On Sun, 2021-11-28 at 12:53 -0800, Michael Thomas wrote:

I was reading their howto yesterday and it seems they are only
allocating a /64? Why?

That's a /64 *per subnet*...

But the size of a VPC's IPv6 CIDR block does seem to be fixed at /56.
Would have been nice to see /48 instead.


Ah ok, I must have missed that.

Mike



Re: AWS and IPv6

2021-11-28 Thread Michael Thomas


On 11/27/21 2:44 PM, Fletcher Kittredge wrote:


The Register  says: AWS claims 
'monumental step forward' with optional IPv6-only networks 




I was reading their howto yesterday and it seems they are only 
allocating a /64? Why?


I guess I just don't get the point of the VPC in the first place. I get 
the firewall aspect but it seems to be more than that.


Mike


Re: AWS and IPv6

2021-11-28 Thread Michael Thomas



On 11/28/21 3:50 PM, Matt Palmer wrote:

On Sun, Nov 28, 2021 at 02:10:40PM -0800, William Herrin wrote:

On Sun, Nov 28, 2021 at 1:18 PM Karl Auer  wrote:

On Sun, 2021-11-28 at 12:53 -0800, Michael Thomas wrote:

I was reading their howto yesterday and it seems they are only
allocating a /64? Why?

That's a /64 *per subnet*...

But the size of a VPC's IPv6 CIDR block does seem to be fixed at /56.
Would have been nice to see /48 instead.

To what purpose? You can't alter the VPC routing of any of the IP
addresses (v4 or v6) assigned to an AWS VPC.

Which is, fundamentally, half the problem with IPv6 in AWS.  I'd have much
preferred that they'd added the ability to do actually-useful IPv6 routing
rather than IPv6-only subnets, which strikes me as more of a toy than
something *actually* useful.

Maybe they're future proofing themselves until they can figure out how 
to put a meter on it for more $$$?


Mike



Re: IPv6 and CDN's

2021-11-26 Thread Michael Thomas


On 11/26/21 1:44 PM, Jean St-Laurent via NANOG wrote:


Here are some maths and 1 argument kicking ass pitch for CFO’s that 
use iphones.


*Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4*

https://www.zdnet.com/article/apple-tells-app-devs-to-use-ipv6-as-its-1-4-times-faster-than-ipv4/

Build around that maybe?


This really hits my bs meter big time. I can't see how nat'ing is going 
to cause a 40% performance hit during connections. The article also 
mentions http2 (and later v3) which definitely make big improvements so 
I'm suspecting that the author is conflating them.


Mike


Re: IPv6 and CDN's

2021-11-26 Thread Michael Thomas


On 11/26/21 4:39 PM, Jean St-Laurent wrote:


But CFOs like monetization. Was that thread about IPv6 or CFO?



Amazon's in this case. They are monetizing their lack of v6 support by 
requiring you to go through all kinds of expensive hoops instead of 
doing the obvious and routing v6 packets.


Mike


*From:*Michael Thomas 
*Sent:* November 26, 2021 7:37 PM
*To:* Oliver O'Boyle 
*Cc:* Jean St-Laurent ; Ca By ; 
North American Network Operators' Group 

*Subject:* Re: IPv6 and CDN's

That's a start, I guess. Before all they had was some weird VPN 
something or other. Let me guess though: they are monetizing their 
market failure.


Re: IPv6 and CDN's

2021-11-26 Thread Michael Thomas


On 11/26/21 3:11 PM, Ca By wrote:



On Fri, Nov 26, 2021 at 6:07 PM Michael Thomas  wrote:


On 11/26/21 1:44 PM, Jean St-Laurent via NANOG wrote:


Here are some maths and 1 argument kicking ass pitch for CFO’s
that use iphones.

*Apple tells app devs to use IPv6 as it's 1.4 times faster than IPv4*


https://www.zdnet.com/article/apple-tells-app-devs-to-use-ipv6-as-its-1-4-times-faster-than-ipv4/

Build around that maybe?



This really hits my bs meter big time. I can't see how nat'ing is
going to cause a 40% performance hit during connections. The
article also mentions http2 (and later v3) which definitely make
big improvements so I'm suspecting that the author is conflating them.

Mike


Ok, take the same ipv6 is faster claim from facebook

https://www.internetsociety.org/blog/2015/04/facebook-news-feeds-load-20-40-faster-over-ipv6/


Still really thin with details of why. At least this says that they are 
NAT'ing v4 at *their* edge. But 99% of the lag of filling your newsfeed 
is their backend and transport, not connection times so who knows what 
they are actually measuring. Most NAT'ing is done at the consumer end by 
your home router in any case.


Mike


Re: IPv6 and CDN's

2021-11-26 Thread Michael Thomas


On 11/26/21 4:15 PM, Jean St-Laurent wrote:


We now have apple and fb saying ipv6 is faster than ipv4.

If we can onboard Amazon, Netflix, Google and some others, then it is 
a done deal that ipv6 is indeed faster than ipv4.


Hence, an easy argument to tell your CFO that you need IPv6 for your CDN.

Netflix is already v6 ready. The biggest obstacle is probably aws 
because that's where a lot of the long tail of the internet resides. 
Lobbying them would get the most bang for the buck.


Mike



Re: IPv6 and CDN's

2021-11-26 Thread Michael Thomas


On 11/26/21 4:30 PM, Oliver O'Boyle wrote:
AWS has been gradually improving support and adding features. They 
just announced this service, which might help with adoption:


https://aws.amazon.com/about-aws/whats-new/2021/11/aws-nat64-dns64-communication-ipv6-ipv4-services/


That's a start, I guess. Before, all they had was some weird VPN 
something or other. Let me guess though: they are monetizing their 
market failure.


Mike





On Fri., Nov. 26, 2021, 19:28 Michael Thomas,  wrote:


On 11/26/21 4:15 PM, Jean St-Laurent wrote:


We now have apple and fb saying ipv6 is faster than ipv4.

If we can onboard Amazon, Netflix, Google and some others, then
it is a done deal that ipv6 is indeed faster than ipv4.

Hence, an easy argument to tell your CFO that you need IPv6 for
your CDN.


Netflix is already v6 ready. The biggest obstacle is probably aws
because that's where a lot of the long tail of the internet
resides. Lobbying them would get the most bang for the buck.

Mike



Re: is ipv6 fast, was silly Redeploying

2021-11-19 Thread Michael Thomas



On 11/19/21 2:44 PM, John Levine wrote:

It appears that Michael Thomas  said:

And just as impossible since it would pop it out of the fast path. Does
big iron support ipv6 these days?

My research associate Ms. Google advises me that Juniper does:

https://www.juniper.net/documentation/us/en/software/junos/routing-overview/topics/concept/ipv6-technology-overview.html

As does Cisco:

https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9600-series-switches/nb-06-cat9600-ser-sup-eng-data-sheet-cte-en.pdf

R's,


Both have sprawling product lines though, even with fsvo "big iron". It 
would be nice to hear that they can build out big networks, but given 
the use of ipv6 in mobile I assume they can. I wonder what the situation 
is for enterprise, which doesn't have any direct drivers that I know of.


Mike



Re: multihoming

2021-11-25 Thread Michael Thomas



On 11/25/21 11:54 AM, Bjørn Mork wrote:

Christopher Morrow  writes:


Also, for completeness, MP-TCP clearly does not help UDP or ICMP flows...
nor IPSEC nor GRE nor...
unless you HTTP over MP-TCP and encap UDP/ICMP/GRE/IPSEC over that!

IP over DNS has been a thing forever.  IP over DoH should work just
fine.


Talk about layer violations! talk about fun!

Yes, fun...


Feh. I've written transistors over http. Beat that.

Mike



Re: ipv4 on mobile networks

2021-10-23 Thread Michael Thomas


On 10/23/21 11:52 AM, Ca By wrote:



On Sat, Oct 23, 2021 at 10:33 AM Michael Thomas  wrote:

So I'm curious how the mobile operators deploying ipv6 to the
handsets are dealing with ipv4. The simplest would be to get the
phone a routable ipv4 address, but that would seemingly exacerbate
the reason they went to v6 in the first place.

First, consider that the 3  major cell carriers in the usa each have 
100 million customers.  Also, consider they all now have a home 
broadband angle. Where do 100 million ipv4 addresses come from?  Not 
rfc 1918, not arin, … and we are just talking about customer ip 
addresses, not considering towers, backend systems, call centers, 
retail ….


So the genesis of 464xlat / rfc 6877 is that ipv4 cannot go where we 
need to go, the mobile architecture must be ipv6 to be comply with the 
e2e principle and not constrain the scaling of the customers / edge. 
Other cell carriers believe in operating many unique ipv4 networks … 
like a 10.0.0.0/8 per metro, but even that breaks 
down and cannot scale… and you end up with proxies / nats / sbcs 
everywhere just to make internal apps like ims work, which is a lot of 
state.


464, that's what I was looking for... there are so many transition 
schemes I wasn't sure which one they chose. So it's essentially double 
NAT'ing. Does that require TURN too for streaming? I can't remember what 
the limitations of STUN are.
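
For reference, the piece the handset-side CLAT and the provider-side 
NAT64 share is a stateless embedding of the v4 address into a /96 prefix 
(RFC 6052; the well-known prefix is shown here, though carriers can use 
their own). A sketch with a made-up address:

    import ipaddress

    # Embed an IPv4 address into the NAT64 well-known prefix 64:ff9b::/96.
    # The CLAT on the handset maps the app's v4 destination into v6 this way;
    # the provider-side PLAT (a stateful NAT64) maps it back on the way out.
    def embed_v4(v4: str, prefix: str = "64:ff9b::") -> str:
        value = int(ipaddress.IPv6Address(prefix)) | int(ipaddress.IPv4Address(v4))
        return str(ipaddress.IPv6Address(value))

    print(embed_v4("198.51.100.7"))  # -> 64:ff9b::c633:6407

So the stateful NAT happens once, at the PLAT; the CLAT side is just 
this mapping plus header translation.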




Are carriers NAT'ing somewhere along the line? If so, where? Like
does the phone encapsulate v4 in 4-in-6? Or does the phone get a
net 10 address and it gets NAT'd by the carrier?


~80% of traffic goes to fb, goog, yt, netflix, bing, o364, hbomax, 
apple tv, … all of which are ipv6. So, only 20% of traffic requires 
nat, when you have ipv6. I am hoping tiktoc and aws move to be default 
on for ipv6 soon.


Yeah, aws is the most glaring since it probably hosts a significant 
portion of the long tail. It appears that aws only supports v6 with 
vpn's. Google only appears to support v6 if you use their load balancer. 
Sad.


Mike
