Re: Stealthy Overlay Network Re: 202401100645.AYC Re: IPv4 address block

2024-01-18 Thread Christopher Morrow
Why is this conversation even still going on?
It's been established ~100 messages ago that the plan here is nonsense.
it's been established ~80 messages ago that the 'lemme swap subjects to
confuse the issue' is nonsense.

stop feeding the troll.

On Thu, Jan 18, 2024 at 11:20 PM Christopher Hawker 
wrote:

> According to the diagram on page 8 of the presentation on your website at
> https://www.avinta.com/phoenix-1/home/EzIPenhancedInternet.pdf, it simply
> identifies 240/4 as CGNAT space. Routing between regional access networks
> typically doesn't take place when using such space on an ISP network, and
> most ISPs (that I know of) will offer public addressing when it is
> required. Further, if you think the need for DHCP will be eliminated
> through the use of your solution, I hate to say it, but ISPs will not
> statically configure WAN addressing on CPE for residential services. It
> would simply increase the workload of their support and provisioning teams.
> Right now, in cases where ISPs use DHCP, they can simply ship a router to
> an end-user, the user plugs it in, turns it on, and away they go.
> Connectivity to the internet.
>
> If an end-user has a router that does not support OpenWRT, it will require
> the end-user to replace their router with one that does in order to connect
> to an EzIP-enabled network. This is not reasonably practical. This would
> also require router vendors to support connectivity to a proprietary
> "semi-public router".
>
> Again, for the sake of completeness, this solution is a waste of time and
> resources. A carrier would not have a need for more than ~4.1m devices on a
> single regional access network and some may run more than one in a single
> region, so as not to put all of their proverbial eggs into the same basket.
>
> Regards,
> Christopher Hawker
>
> On Fri, 19 Jan 2024 at 14:49, Abraham Y. Chen  wrote:
>
>> Hi, Christopher:
>>
>> 1)" If "EzIP" is about using 240/4 as CGNAT space, ...   ":
>>
>> This correlation is just the starting point for EzIP deployment, so
>> that it would not be regarded as a base-less crazy dream. Once a 240/4
>> enabled RAN is established as a new network overlaying on the CG-NAT
>> infrastructure, the benefits of making use of the 240/4 resources can begin
>> to be considered. For example, with sufficient addresses, static address
>> administration can be practiced within a RAN which will remove the need for
>> DHCP service. From this, related consequences may be discussed.
>>
>> 2)" I don't think you quite grasp the concept that OpenWRT is not
>> compatible with devices that do not support it.  it would not be
>> appropriate to expect every device vendor to support it.  ...   ":
>>
>> Perhaps we have some offset about the terminology of "who supports
>> whom?" My understanding of the OpenWrt project is that it is an open-source
>> program code that supports a long list (but not all) of primarily
>> commercial RGs (Residential/Routing Gateways) and WiFi routers that serve /
>> support CPE devices (on-premises IoTs). Its basic purpose is to let private
>> network owners replace the firmware code in the RGs with the OpenWrt
>> equivalent so that they will have full control of their RGs and then modify
>> them if desired. Thus, the basic release of each OpenWrt code maintains
>> most of the original functionalities in the OEM device. So, neither the
>> original RG nor any IoT manufacturers need be involved with the OpenWrt,
>> let alone supporting it. My reference to its V19.07.3 was the version that
>> expanded its usable address pool to include 240/4. That was all.
>>
>> For sure, OpenWrt does not run on all RGs in the field. But, this
>> does not restrict an overlay network like RAN from starting to network only
>> those premises with RGs that run on OpenWrt (plus those RGs compatible with
>> 240/4 from the factories). Since the existing CG-NAT is not disturbed and
>> daily Internet services are going normally, RAN growth can take its time.
>> 3)" You've provided a link to a D-Link managed switch, not a router.
>> Just because it can support L2 routing, doesn't make it a router.   ":
>>
>> Correct, this is just a basic example for networking the RGs to
>> experiment with the RAN configuration. It is not intended to be a full-fledged
>> router which will have other considerations that are way beyond what EzIP
>> should be involved with.
>>
>>
>> Regards,
>>
>>
>> Abe (2024-01-18 22:48)
>>
>>
>> On 2024-01-15 18:33, Christopher Hawker wrote:
>>
>> If "EzIP" is about using 240/4 as CGNAT space, let's call it what it is,
>> not rename something that already exists and attempt to claim it as a new
>> idea.
>>
>> It is completely unnecessary to use 240/4 as CGNAT space. Here are a few
>> reasons why:
>>
>>1. There are 4,194,304 IPv4 addresses in a /10 prefix. Allowing for a
>>/24 from this to be used for CGNAT gateways, load balancing, etc. this
>>still allows for 4,194,048 usable addresses for 

Re: Stealthy Overlay Network Re: 202401100645.AYC Re: IPv4 address block

2024-01-18 Thread Christopher Hawker
According to the diagram on page 8 of the presentation on your website at
https://www.avinta.com/phoenix-1/home/EzIPenhancedInternet.pdf, it simply
identifies 240/4 as CGNAT space. Routing between regional access networks
typically doesn't take place when using such space on an ISP network, and
most ISPs (that I know of) will offer public addressing when it is
required. Further, if you think the need for DHCP will be eliminated
through the use of your solution, I hate to say it, but ISPs will not
statically configure WAN addressing on CPE for residential services. It
would simply increase the workload of their support and provisioning teams.
Right now, in cases where ISPs use DHCP, they can simply ship a router to
an end-user, the user plugs it in, turns it on, and away they go.
Connectivity to the internet.

If an end-user has a router that does not support OpenWRT, it will require
the end-user to replace their router with one that does in order to connect
to an EzIP-enabled network. This is not reasonably practical. This would
also require router vendors to support connectivity to a proprietary
"semi-public router".

Again, for the sake of completeness, this solution is a waste of time and
resources. A carrier would not have a need for more than ~4.1m devices on a
single regional access network and some may run more than one in a single
region, so as not to put all of their proverbial eggs into the same basket.

Regards,
Christopher Hawker

On Fri, 19 Jan 2024 at 14:49, Abraham Y. Chen  wrote:

> Hi, Christopher:
>
> 1)" If "EzIP" is about using 240/4 as CGNAT space, ...   ":
>
> This correlation is just the starting point for EzIP deployment, so
> that it would not be regarded as a base-less crazy dream. Once a 240/4
> enabled RAN is established as a new network overlaying on the CG-NAT
> infrastructure, the benefits of making use of the 240/4 resources can begin
> to be considered. For example, with sufficient addresses, static address
> administration can be practiced within a RAN which will remove the need for
> DHCP service. From this, related consequences may be discussed.
>
> 2)" I don't think you quite grasp the concept that OpenWRT is not
> compatible with devices that do not support it.  it would not be
> appropriate to expect every device vendor to support it.  ...   ":
>
> Perhaps we have some offset about the terminology of "who supports
> whom?" My understanding of the OpenWrt project is that it is an open-source
> program code that supports a long list (but not all) of primarily
> commercial RGs (Residential/Routing Gateways) and WiFi routers that serve /
> support CPE devices (on-premises IoTs). Its basic purpose is to let private
> network owners replace the firmware code in the RGs with the OpenWrt
> equivalent so that they will have full control of their RGs and then modify
> them if desired. Thus, the basic release of each OpenWrt code maintains
> most of the original functionalities in the OEM device. So, neither the
> original RG nor any IoT manufacturers need be involved with the OpenWrt,
> let alone supporting it. My reference to its V19.07.3 was the version that
> expanded its usable address pool to include 240/4. That was all.
>
> For sure, OpenWrt does not run on all RGs in the field. But, this does
> not restrict an overlay network like RAN from starting to network only
> those premises with RGs that run on OpenWrt (plus those RGs compatible with
> 240/4 from the factories). Since the existing CG-NAT is not disturbed and
> daily Internet services are going normally, RAN growth can take its time.
> 3)" You've provided a link to a D-Link managed switch, not a router.
> Just because it can support L2 routing, doesn't make it a router.   ":
>
> Correct, this is just a basic example for networking the RGs to
> experiment with the RAN configuration. It is not intended to be a full-fledged
> router which will have other considerations that are way beyond what EzIP
> should be involved with.
>
>
> Regards,
>
>
> Abe (2024-01-18 22:48)
>
>
> On 2024-01-15 18:33, Christopher Hawker wrote:
>
> If "EzIP" is about using 240/4 as CGNAT space, let's call it what it is,
> not rename something that already exists and attempt to claim it as a new
> idea.
>
> It is completely unnecessary to use 240/4 as CGNAT space. Here are a few
> reasons why:
>
>1. There are 4,194,304 IPv4 addresses in a /10 prefix. Allowing for a
>/24 from this to be used for CGNAT gateways, load balancing, etc. this
>still allows for 4,194,048 usable addresses for CPE. When performing NAT,
>you would need to allocate each subscriber approximately 1000 ports for NAT
>to work successfully. The entire /10 (less the /24) would require the
>equivalent of a /16 public IPv4 prefix to use the entire 100.64/10 space in
>one region. To put this into comparison, you would use the entire 100.64/10
>space in a city the size of New York or Los Angeles allowing for one
>

Re: One Can't Have It Both Ways Re: Streamline the CG-NAT Re: EzIP Re: IPv4 address block

2024-01-18 Thread Abraham Y. Chen

Hi, Forrest:

1) "  if you have IPv6 service and I have IPv6 service, our IPv6 devices 
can talk directly to each other without needing any VPN or similar. ":


Thanks. So, is it true that the reason IPv4 could not do so is solely 
because it does not have enough static addresses for every subscriber?


2)    " ...  taking other security/safety steps.  (Like the PSTN, the 
internet can be tapped).  ":


Agreed. However, the extra steps should be for those who have some 
secret to hide. In the PSTN days, most traffic was voice and no 
encryption. For the Internet, everything is digitized. Distinguishing 
among voice and data becomes extra work. So, I see the tendency to 
encrypt everything.



Regards,


Abe (2024-01-18 23:15)


On 2024-01-16 01:38, Forrest Christian (List Account) wrote:



On Mon, Jan 15, 2024, 1:21 PM Abraham Y. Chen  wrote:

    If I subscribe to IPv6, can I contact another similar
subscriber to communicate (voice and data) directly between two
homes in private like the dial-up modem operations in the PSTN? If
so, is it available anywhere right now?


Yes,  if you have IPv6 service and I have IPv6 service, our IPv6 
devices can talk directly to each other without needing any VPN or 
similar.  And yes, this is available today.


Note that just like the PSTN you might not want to do this without 
encryption and taking other security/safety steps.   (Like the PSTN, 
the internet can be tapped).
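
To make the "talk directly" part concrete, here is a minimal sketch in Python, 
assuming both homes have working global IPv6 and the chosen port is not 
filtered; the address shown is a documentation-prefix placeholder, not a real host:

# One side listens on its global IPv6 address, the other connects straight
# to it: no NAT traversal, no VPN, no rendezvous server in between.
import socket

def serve(port=9000):
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.bind(("::", port))                     # all local IPv6 addresses
    srv.listen(1)
    conn, peer = srv.accept()
    print("direct connection from", peer[0])
    conn.sendall(b"hello over native IPv6\n")
    conn.close()

def call(peer_ipv6, port=9000):
    with socket.create_connection((peer_ipv6, port)) as c:
        print(c.recv(64).decode())

# e.g. call("2001:db8::1234")                  # the other subscriber's address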






Re: Stealthy Overlay Network Re: 202401100645.AYC Re: IPv4 address block

2024-01-18 Thread Abraham Y. Chen

Hi, Christopher:

1) " If "EzIP" is about using 240/4 as CGNAT space, ...   ":

    This correlation is just the starting point for EzIP deployment, so 
that it would not be regarded as a base-less crazy dream. Once a 240/4 
enabled RAN is established as a new network overlaying on the CG-NAT 
infrastructure, the benefits of making use of the 240/4 resources can 
begin to be considered. For example, with sufficient addresses, static 
address administration can be practiced within a RAN which will remove 
the need for DHCP service. From this, related consequences may be 
discussed.



2)    " I don't think you quite grasp the concept that OpenWRT is not 
compatible with devices that do not support it.  it would not be 
appropriate to expect every device vendor to support it. ...   ":


    Perhaps we have some offset about the terminology of "who supports 
whom?" My understanding of the OpenWrt project is that it is an 
open-source program code that supports a long list (but not all) of 
primarily commercial RGs (Residential/Routing Gateways) and WiFi routers 
that serve / support CPE devices (on-premises IoTs). Its basic purpose 
is to let private network owners replace the firmware code in the RGs 
with the OpenWrt equivalent so that they will have full control of their 
RGs and then modify them if desired. Thus, the basic release of each 
OpenWrt code maintains most of the original functionalities in the OEM 
device. So, neither the original RG nor any IoT manufacturers need be 
involved with the OpenWrt, let alone supporting it. My reference to its 
V19.07.3 was the version that expanded its usable address pool to 
include 240/4. That was all.


    For sure, OpenWrt does not run on all RGs in the field. But, this 
does not restrict an overlay network like RAN from starting to network 
only those premises with RGs that run on OpenWrt (plus those RGs 
compatible with 240/4 from the factories). Since the existing CG-NAT is 
not disturbed and daily Internet services are going normally, RAN growth 
can take its time.


3)    " You've provided a link to a D-Link managed switch, not a router. 
Just because it can support L2 routing, doesn't make it a router.   ":


    Correct, this is just a basic example for networking the RGs to 
experiment with the RAN configuration. It is not intended to be a 
full-fledged router which will have other considerations that are way 
beyond what EzIP should be involved with.




Regards,


Abe (2024-01-18 22:48)


On 2024-01-15 18:33, Christopher Hawker wrote:
If "EzIP" is about using 240/4 as CGNAT space, let's call it what it 
is, not rename something that already exists and attempt to claim it 
as a new idea.


It is completely unnecessary to use 240/4 as CGNAT space. Here are a 
few reasons why:


 1. There are 4,194,304 IPv4 addresses in a /10 prefix. Allowing for a
/24 from this to be used for CGNAT gateways, load balancing, etc.
this still allows for 4,194,048 usable addresses for CPE. When
performing NAT, you would need to allocate each subscriber
approximately 1000 ports for NAT to work successfully. The entire
/10 (less the /24) would require the equivalent of a /16 public
IPv4 prefix to use the entire 100.64/10 space in one region. To
put this into comparison, you would use the entire 100.64/10 space
in a city the size of New York or Los Angeles allowing for one
internet service per 4 or 2 people respectively. It's not practical.
 2. Multiple CGNAT regions that are at capacity would not have a need
for uniquely routable IP space between them. It's heavily designed
for traffic from the user to the wider internet, not for
inter-region routing. Carriers already have systems in place where
subscribers can request a public address if they need it (such as
working from home with advanced corporate networks, etc).
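
A quick sanity check of the arithmetic in item 1, sketched in Python; the ~1000 
ports per subscriber is the figure from the post, while treating roughly 64,000 
ports of each public IPv4 address as usable for NAT is an assumption:

# A /10 of CGNAT space behind public IPv4: how much public space does it imply?
subscribers = 4_194_304 - 256                 # a /10 minus a /24 for infrastructure
ports_per_subscriber = 1_000                  # figure quoted in the post
usable_ports_per_public_ip = 64_000           # assumed NAT port budget per address
subs_per_public_ip = usable_ports_per_public_ip // ports_per_subscriber   # 64
public_ips_needed = subscribers / subs_per_public_ip                      # ~65,532
print(public_ips_needed, 2 ** 16)             # ~65,532 vs. 65,536 addresses in a /16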

100.64/10 is not public IP space, because it is not usable in the DFZ. 
I don't believe there is any confusion or ambiguity about this space 
because if you do a Whois lookup on 100.64.0.0/10 
 at any one of the five RIRs, it reflects that 
it is IANA shared address space for service providers. Footnote 6 on 
the page you referenced reads "100.64.0.0/10  
reserved for Shared Address Space". It has not been delegated to ARIN. 
Rather clear as to its use case.
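
As an aside, this "neither private nor public" status is even visible from 
Python's standard library; a small sketch (behaviour as documented for recent 
CPython releases, worth verifying on your own version):

import ipaddress

cgn = ipaddress.ip_address("100.64.1.1")        # shared address space (RFC 6598)
rfc1918 = ipaddress.ip_address("10.1.2.3")      # ordinary private space, for contrast
print(cgn.is_private, cgn.is_global)            # expected: False False
print(rfc1918.is_private, rfc1918.is_global)    # expected: True False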


I don't think you quite grasp the concept that OpenWRT is not 
compatible with devices that do not support it. It would only work on 
routers for which it is compatible and it would not be appropriate to 
expect every device vendor to support it. To add-on to this, why would 
vendors need to enable 240/4 CGNAT support when their customers don't 
have a need for it?


You've provided a link to a D-Link managed switch, not a router. Just 
because it can support L2 routing, doesn't make it a router.


I'm all for discussing ideas and suggestions and working towards 
proper IPv6 deployment. It certainly 

Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Glenn McGurrin via NANOG
I'm actually referring to something like the unit below. I've not yet had a 
system where they made sense; I mostly deal with either places where I have 
no say in the HVAC or very small server rooms. But I've thought these were 
an interesting concept since I first saw them years ago.


https://www.chiltrix.com/server-room-chiller.html

quoting from the page:

The Chiltrix SE Server Room Edition adds a "free cooling" option to CX34.

Server rooms need cooling all year, even when it is cold outside. If you 
operate in a northern area with cold winters, this option is for you.


When outdoor temperatures drop below 38F, the CX34 glycol-water loop is 
automatically extended through a special water-to-air heat exchanger to 
harvest outdoor cold ambient conditions to pre-cool the glycol-water 
loop so that the CX34 variable speed compressor can drop to a very slow 
speed and consume less power. This can save about 50% off of its 
already low power consumption without lowering capacity.


At and below 28F, the CX34 chiller with Free Cooling SE add-on will turn 
off the compressor entirely and still be able to maintain its rated 
cooling capacity using only the variable speed pump and fan motors. At 
this point, the CX34 achieves a COP of of >41 and EER of >141.


Enjoy the savings of 2 tons of cooling for less than 75 watts. The 
colder it gets, the less water flow rate is needed, allowing the VSD 
pump power draw to drop under 20 watts.


Depending on location, for some customers free cooling mode can be 
active up to 3 months per year during the daytime and up to 5 months per 
year at night.
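
For what it's worth, those numbers hang together arithmetically; a quick check 
using standard unit conversions (1 ton of refrigeration = 3.517 kW thermal, 
EER = 3.412 x COP), nothing here beyond the figures quoted above:

tons, watts = 2, 75
cooling_kw = tons * 3.517
cop_at_75w = cooling_kw * 1000 / watts     # ~94 at the "2 tons for 75 W" point
eer_for_cop_41 = 41 * 3.412                # ~140, consistent with "EER of >141"
print(round(cop_at_75w, 1), round(eer_for_cop_41, 1))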


On 1/17/2024 3:10 PM, Izaac wrote:

On Wed, Jan 17, 2024 at 12:07:42AM -0500, Glenn McGurrin via NANOG wrote:

Free air cooling loops maybe? (Not direct free air cooling with air
exchange, the version with something much like an air handler outside with a
coil and an fan running cold outside air over the coil with the water/glycol
that would normally be the loop off of the chiller) the primary use of them
is cost savings by using less energy to cool when it's fairly cold out, but
it can also prevent low temperature issues on compressors by not running
them when it's cold.  I'd expect it would not require the same sort of
facade changes as it could be on the roof and depending only need
water/glycol lines into the space, depending on cooling tower vs air cooled
and chiller location it could also potentially use the same piping (which I
think is the traditional use).


You're looking for these: https://en.wikipedia.org/wiki/Thermal_wheel

Basically, an aluminum honeycomb wheel.  One half of its housing is an
air duct "outside" while the other half is an air duct that's "inside."
Cold outside air blows through the straws and cools the metal.  Wheel
rotates slowly.  That straw is now "inside."  Inside air blows through
it and deposits heat onto the metal.  Turn turn turn.

A surprisingly effective way to lower heating/cooling costs.  Basically
"free," as you just need to turn it on the bearing.  Do you get deposits
in the comb?  Yes, if you don't filter properly.  Do you get
condensation in the comb?  Yeah.  Treat it with desiccants.
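
For a rough sense of what a wheel like that buys you, here is a ballpark 
sketch; every input below is an illustrative assumption, not a figure from 
this thread:

# Sensible heat recovered = airflow * density * specific heat * effectiveness * dT
airflow_m3_per_s = 2.4            # roughly 5,000 CFM of exhaust air
rho, cp = 1.2, 1005               # air density (kg/m3) and specific heat (J/(kg*K))
effectiveness = 0.75              # typical sensible effectiveness for a rotary wheel
delta_t = 22 - (-5)               # 22 C exhaust air against -5 C outdoor air
recovered_kw = airflow_m3_per_s * rho * cp * effectiveness * delta_t / 1000
print(round(recovered_kw), "kW recovered for a few hundred watts of drive and fan power")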



[NANOG-announce] “Exploring the Internet History of North Carolina,” Hackathon, Sneak Preview of N90 + More

2024-01-18 Thread Nanog News
*“Exploring the Internet History of North Carolina”*
*Keynote Investigates Tech Legacy of N90 Meeting Location*

One of the two NANOG 90 Keynote speakers, Mark Johnson, will present
Exploring the Internet History of North Carolina at NANOG's 90th
community-wide meeting in Charlotte, NC, 12 – 14 Feb.

"It's always good to have a strong sense of where you came from to move
forward," Johnson said about his talk. "This is a unique community I
probably can't over-geek out on. You must often dumb things down for more
general audiences, but this is a technologically knowledgeable crowd, so
that'll add some fun to it," he continued.

*READ MORE *


*Register Now for N90 Hackathon!*
*Network, Learn Hands-On, + Have Fun!*

*Theme: *New Year - New Hack Format!

The NANOG 90 Hackathon will focus on "Problem Solving/Troubleshooting"
competitions. During this Hackathon, teams will collaborate to solve the
posed problems.

Scoring will be based on network reachability and how fast participants
solve the problems. Prizes will be provided to the top finishers!

*REGISTER NOW* 

*Sneak Preview of NANOG 90!*
Please note: standard registration rates will end on 21 January.

+ Automating Internet2's Nationwide Network with Cisco NSO
+ Measuring RPKI ROV deployment and edge cases
+ BGP in 2023
+ AI Data Center networks
+ Using NetFlow to fight DDoS at the source
+ Much, much more!

*MORE INFO * 

*Sponsorships Still Available for N90!*
*Invest in the Community We Have Built!*

Building the Internet of Tomorrow takes a village. Our sponsors make these
meetings possible, and we appreciate their support. Get your company logo
in front of your target audience and become a difference-maker today!

Email swinst...@nanog.org for more information.

*Deadline Approaching for Peering Forum*
*90-Minute Session, to be Held on 12-Feb During NANOG 90 Conference*

Meet and network with others in the peering community! Applications will
remain open until 15 applications are received or 2-Feb-2024, whichever is
first.

*MORE INFO * 
___
NANOG-announce mailing list
NANOG-announce@nanog.org
https://mailman.nanog.org/mailman/listinfo/nanog-announce


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Jérôme Nicolle

Hi Nick,

Thanks for your remarks. It's actually an ongoing discussion.

On 18/01/2024 at 18:24, Nick Hilliard wrote:
two issues here: the smaller issue is that CDNs sometimes want their own 
routable IP address blocks, especially if they're connecting directly to 
the IXP, which usually means /24 in practice. It doesn't always happen, 
and sometimes the CDN is happy to use provider address space (i.e. IXP), 
or smaller address blocks. But it's something to note.


I'd rather have CDNs use some of their anycast /24s to peer with the IX, 
with back-end connectivity for their control plane and back-feeding.


The bigger issue is: who pays the transit costs for the CDN's cache-fill 
requirements? CDNs typically won't pay for cache-fill for installations 
like this, and if one local ISP is pulling disproportionate quantities 
of data compared to other ISPs at the IXP, then this can cause problems 
unless there's a shared billing mechanism built in.


We're willing to provide a dedicated LAN, with routed access, to fill 
caches and administer the machines. It would be fully dissociated from 
the IXP though, unless we could find a way to make it work while meeting 
the extra redundancy requirements.


Best regards,

--
Jérôme Nicolle
+33 6 19 31 27 14


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Nick Hilliard

Jérôme Nicolle wrote on 18/01/2024 14:38:
Those I'm nearly sure I could get, if I can pool caches amongst ISPs. 
The current constraints are issues to any content provider, not just for 
local ISPs.


two issues here: the smaller issue is that CDNs sometimes want their own 
routable IP address blocks, especially if they're connecting directly to 
the IXP, which usually means /24 in practice. It doesn't always happen, 
and sometimes the CDN is happy to use provider address space (i.e. IXP), 
or smaller address blocks. But it's something to note.


The bigger issue is: who pays the transit costs for the CDN's cache-fill 
requirements? CDNs typically won't pay for cache-fill for installations 
like this, and if one local ISP is pulling disproportionate quantities 
of data compared to other ISPs at the IXP, then this can cause problems 
unless there's a shared billing mechanism built in.


Nick


Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Lamar Owen

On 1/15/24 10:14, sro...@ronan-online.com wrote:

I’m more interested in how you lose six chillers all at once.

According to a post on a support forum for one of the clients in that 
space: "We understand the issue is due to snow on the roof affecting the 
cooling equipment."


Never overlook the simplest single points of failure.  Snow on cooling 
tower fan blades... failed fan motors are possible or even likely at 
that point, assuming the airflow isn't simply clogged; conceptually much 
like the issue of having multiple providers for redundancy but they're 
all in the same cable or conduit.




Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Lamar Owen

On 1/17/24 20:06, Tom Beecher wrote:


If these chillers are connected to BACnet or similar network, then
I wouldn't rule out the possibility of an attack.


Don't insinuate something like this without evidence. Completely 
unreasonable and inappropriate.


I wasn't meaning to insinuate anything; it's as much of a reasonable 
possibility as any other these days.


Perhaps I should have worded it differently: "if my small data centers' 
chillers were connected to some building management network such as 
BACnet and all of them went down concurrently I would be investigating 
my building management network for signs of intrusion in addition to 
checking other items, such as shared points of failure in things like 
chilled water pumps, electrical supply, emergency shut-off circuits, 
chiller/closed-loop configurations for various temperature, pressure, 
and flow set points, etc."  Bit more wordy, but doesn't have the same 
implication.  But I would think it unreasonable, if I were to find 
myself in this situation in my own operations, to rule any possibility 
out that can explain simultaneous shutdowns.


And this week we did have a chiller go out on freeze warning, but the DC 
temp never made it quite up to 80F before the temperature rose back 
into double digits and the chiller restarted.


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Gael Hernandez
Hosting authoritative and recursive DNS servers at the IXP would
drastically improve the experience of users most of the time.

Of course, Stephane's considerations are correct, and there's no solution for
when global connectivity is lost and responses will stop being sent.

Gaël


On Thu 18 Jan 2024 at 14:42, Jérôme Nicolle  wrote:

> Hi Gael,
>
> On 18/01/2024 at 13:48, Gael Hernandez wrote:
> > Friends from PCH (www.pch.net ) operate backend
> > services for DNS authoritative ccTLDs and the Quad9 DNS resolver. They
> > would be very happy to help.
>
> I'm sure they would, I'm a big fan of their work BTW. Though hosting
> them in a densely connected area isn't the same as it will be in remote
> locations, I guess there could be some work to be done to get it running
> properly, as Stephane wrote.
>
> How do you think we could work on that? I mean, disconnected or
> extremely high latency scenarios should be on a research roadmap by
> SpaceX's standards, right? ;-)
>
> Best regards,
>
> --
> Jérôme Nicolle
> +33 6 19 31 27 14
>


Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Mike Hammett





- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 

- Original Message -

From: "Tom Beecher"  
To: "Mike Hammett"  
Cc: sro...@ronan-online.com, "NANOG"  
Sent: Thursday, January 18, 2024 9:19:09 AM 
Subject: Re: "Hypothetical" Datacenter Overheating 




Well right, which came well after the question was posited here. 




Wasn't poo pooing the question, just sharing the information as I didn't see 
that cited otherwise in this thread. 


On Thu, Jan 18, 2024 at 10:15 AM Mike Hammett < na...@ics-il.net > wrote: 





Well right, which came well after the question was posited here. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 



From: "Tom Beecher" < beec...@beecher.cc > 
To: "Mike Hammett" < na...@ics-il.net > 
Cc: sro...@ronan-online.com , "NANOG" < nanog@nanog.org > 
Sent: Thursday, January 18, 2024 9:00:34 AM 
Subject: Re: "Hypothetical" Datacenter Overheating 




and none in the other two facilities you operate in that same building had any 
failures. 




Quoting directly from their outage ticket updates : 



CH2 does not have chillers, cooling arrangement is DX CRACs manufactured by 
another company. CH3 has Smart chillers but are water cooled not air cooled so 
not susceptible to cold ambient air temps as they are indoor chillers. 







On Mon, Jan 15, 2024 at 10:19 AM Mike Hammett < na...@ics-il.net > wrote: 





and none in the other two facilities you operate in that same building had any 
failures. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 



From: sro...@ronan-online.com 
To: "Mike Hammett" < na...@ics-il.net > 
Cc: "NANOG" < nanog@nanog.org > 
Sent: Monday, January 15, 2024 9:14:49 AM 
Subject: Re: "Hypothetical" Datacenter Overheating 



I’m more interested in how you lose six chillers all at once. 


Shane 



On Jan 15, 2024, at 9:11 AM, Mike Hammett < na...@ics-il.net > wrote: 







Let's say that hypothetically, a datacenter you're in had a cooling failure and 
escalated to an average of 120 degrees before mitigations started having an 
effect. What are normal QA procedures on your behalf? What is the facility 
likely to be doing? What should be expected in the aftermath? 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 












Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Tom Beecher
>
> Well right, which came well after the question was posited here.


Wasn't poo pooing the question, just sharing the information as I didn't
see that cited otherwise in this thread.

On Thu, Jan 18, 2024 at 10:15 AM Mike Hammett  wrote:

> Well right, which came well after the question was posited here.
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions 
> 
> 
> 
> 
> Midwest Internet Exchange 
> 
> 
> 
> The Brothers WISP 
> 
> 
> --
> *From: *"Tom Beecher" 
> *To: *"Mike Hammett" 
> *Cc: *sro...@ronan-online.com, "NANOG" 
> *Sent: *Thursday, January 18, 2024 9:00:34 AM
> *Subject: *Re: "Hypothetical" Datacenter Overheating
>
> and none in the other two facilities you operate in that same building had
>> any failures.
>
>
> Quoting directly from their outage ticket updates :
>
> CH2 does not have chillers, cooling arrangement is DX CRACs manufactured
>> by another company. CH3 has Smart chillers but are water cooled not air
>> cooled so not susceptible to cold ambient air temps as they are indoor
>> chillers.
>
>
>
>
> On Mon, Jan 15, 2024 at 10:19 AM Mike Hammett  wrote:
>
>> and none in the other two facilities you operate in that same building
>> had any failures.
>>
>>
>>
>> -
>> Mike Hammett
>> Intelligent Computing Solutions 
>> 
>> 
>> 
>> 
>> Midwest Internet Exchange 
>> 
>> 
>> 
>> The Brothers WISP 
>> 
>> 
>> --
>> *From: *sro...@ronan-online.com
>> *To: *"Mike Hammett" 
>> *Cc: *"NANOG" 
>> *Sent: *Monday, January 15, 2024 9:14:49 AM
>> *Subject: *Re: "Hypothetical" Datacenter Overheating
>>
>> I’m more interested in how you lose six chillers all at once.
>>
>> Shane
>>
>> On Jan 15, 2024, at 9:11 AM, Mike Hammett  wrote:
>>
>> 
>> Let's say that hypothetically, a datacenter you're in had a cooling
>> failure and escalated to an average of 120 degrees before mitigations
>> started having an effect. What are normal QA procedures on your behalf?
>> What is the facility likely to be doing? What  should be expected in the
>> aftermath?
>>
>>
>>
>> -
>> Mike Hammett
>> Intelligent Computing Solutions 
>> 
>> 
>> 
>> 
>> Midwest Internet Exchange 
>> 
>> 
>> 
>> The Brothers WISP 
>> 
>> 
>>
>>
>>
>


Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Mike Hammett
Well right, which came well after the question was posited here. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 

- Original Message -

From: "Tom Beecher"  
To: "Mike Hammett"  
Cc: sro...@ronan-online.com, "NANOG"  
Sent: Thursday, January 18, 2024 9:00:34 AM 
Subject: Re: "Hypothetical" Datacenter Overheating 




and none in the other two facilities you operate in that same building had any 
failures. 




Quoting directly from their outage ticket updates : 



CH2 does not have chillers, cooling arrangement is DX CRACs manufactured by 
another company. CH3 has Smart chillers but are water cooled not air cooled so 
not susceptible to cold ambient air temps as they are indoor chillers. 







On Mon, Jan 15, 2024 at 10:19 AM Mike Hammett < na...@ics-il.net > wrote: 





and none in the other two facilities you operate in that same building had any 
failures. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 



From: sro...@ronan-online.com 
To: "Mike Hammett" < na...@ics-il.net > 
Cc: "NANOG" < nanog@nanog.org > 
Sent: Monday, January 15, 2024 9:14:49 AM 
Subject: Re: "Hypothetical" Datacenter Overheating 



I’m more interested in how you lose six chillers all at once. 


Shane 



On Jan 15, 2024, at 9:11 AM, Mike Hammett < na...@ics-il.net > wrote: 







Let's say that hypothetically, a datacenter you're in had a cooling failure and 
escalated to an average of 120 degrees before mitigations started having an 
effect. What are normal QA procedures on your behalf? What is the facility 
likely to be doing? What should be expected in the aftermath? 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 









Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Tom Beecher
>
> and none in the other two facilities you operate in that same building had
> any failures.


Quoting directly from their outage ticket updates :

CH2 does not have chillers, cooling arrangement is DX CRACs manufactured by
> another company. CH3 has Smart chillers but are water cooled not air cooled
> so not susceptible to cold ambient air temps as they are indoor chillers.




On Mon, Jan 15, 2024 at 10:19 AM Mike Hammett  wrote:

> and none in the other two facilities you operate in that same building had
> any failures.
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions 
> 
> 
> 
> 
> Midwest Internet Exchange 
> 
> 
> 
> The Brothers WISP 
> 
> 
> --
> *From: *sro...@ronan-online.com
> *To: *"Mike Hammett" 
> *Cc: *"NANOG" 
> *Sent: *Monday, January 15, 2024 9:14:49 AM
> *Subject: *Re: "Hypothetical" Datacenter Overheating
>
> I’m more interested in how you lose six chillers all at once.
>
> Shane
>
> On Jan 15, 2024, at 9:11 AM, Mike Hammett  wrote:
>
> 
> Let's say that hypothetically, a datacenter you're in had a cooling
> failure and escalated to an average of 120 degrees before mitigations
> started having an effect. What are normal QA procedures on your behalf?
> What is the facility likely to be doing? What  should be expected in the
> aftermath?
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions 
> 
> 
> 
> 
> Midwest Internet Exchange 
> 
> 
> 
> The Brothers WISP 
> 
> 
>
>
>


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Jérôme Nicolle

Hi Tom,

On 18/01/2024 at 15:20, Tom Beecher wrote:
Many CDNs have hardware options for self hosted caches. I think there 
would likely be concerns about <20G of connectivity to those caches 
though. With an incorrect setup, you could end up maxing out those links 
just with cache fill traffic itself.


In a case where these servers are on a dedicated network peering with 
the ISPs, I think it would be safe to throttle the sync feeds to not 
saturate actual uplinks.


At least that much we can do, but throttling uncached content to customers 
is forbidden (net neutrality).
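
As a toy illustration of throttling the sync feeds, a token-bucket pacer in 
Python; the rate, burst size and the commented-out usage names are placeholders, 
and in practice this would be done with router or host traffic shaping rather 
than application code:

import time

class TokenBucket:
    def __init__(self, rate_bps=2_000_000_000, burst_bytes=64 * 2**20):
        self.rate = rate_bps / 8                   # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def consume(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes > self.tokens:
            time.sleep((nbytes - self.tokens) / self.rate)   # pace the cache fill
            self.tokens = 0.0
        else:
            self.tokens -= nbytes

# bucket = TokenBucket(rate_bps=2_000_000_000)     # cap the fill at ~2 Gbps
# for chunk in fill_stream: bucket.consume(len(chunk)); send(chunk)  # hypothetical names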


Though Netflix is supposedly sending pre-loaded servers, and I think 
that, in this location, it's going to mean a lot already. The question is: 
how would the servers peer with local ISPs?


Best regards,

--
Jérôme Nicolle
+33 6 19 31 27 14


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Mike Hammett
Some will work directly on the IX via BGP. Others have to go behind a member of 
the IX. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 

- Original Message -

From: "Jérôme Nicolle"  
To: "Mehmet"  
Cc: nanog@nanog.org 
Sent: Thursday, January 18, 2024 8:38:31 AM 
Subject: Re: Shared cache servers on an island's IXP 

Hello Mehmet, 

On 18/01/2024 at 12:58, Mehmet wrote: 
> VMs are no go for big content companies except Microsoft. You can run 
> Microsoft CDN on VM but rest of the content will need to be cached. You 
> can actually install this yourself 

I've already read most docs for caching servers provided by major 
actors. What I'm mostly concerned about is their ability to peer with 
multiple ASes on the local IXP, so as not to over-replicate them. 

Should I establish a dedicated network to peer on the IXP? Or will they 
come with their own ASNs? The peering case is not well documented in 
publicly available specs, if it is even possible. 

> Depending on how much traffic you have, you may be able to get 
> Facebook, YouTube, Netflix caches; I think the minimum bandwidth requirement 
> changes per region 

Those I'm nearly sure I could get, if I can pool caches amongst ISPs. 
The current constraints are issues to any content provider, not just for 
local ISPs. 

Best regards, 

-- 
Jérôme Nicolle 
+33 6 19 31 27 14 



Re: Shared cache servers on an island's IXP

2024-01-18 Thread Jérôme Nicolle

Hi Gael,

On 18/01/2024 at 13:48, Gael Hernandez wrote:
Friends from PCH (www.pch.net ) operate backend 
services for DNS authoritative ccTLDs and the Quad9 DNS resolver. They 
would be very happy to help.


I'm sure they would, I'm a big fan of their work BTW. Though hosting 
them in a densely connected area isn't the same as it will be in remote 
locations, I guess there could be some work to be done to get it running 
properly, as Stephane wrote.


How do you think we could work on that? I mean, disconnected or 
extremely high latency scenarios should be on a research roadmap by 
SpaceX's standards, right? ;-)


Best regards,

--
Jérôme Nicolle
+33 6 19 31 27 14


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Jérôme Nicolle

Hello Mehmet,

On 18/01/2024 at 12:58, Mehmet wrote:
VMs are no go for big content companies except Microsoft. You can run 
Microsoft CDN on VM but rest of the content will need to be cached. You 
can actually install this yourself


I've already read most docs for caching servers provided by major 
actors. What I'm mostly concerned about is their ability to peer with 
multiple ASes on the local IXP, so as not to over-replicate them.


Should I establish a dedicated network to peer on the IXP? Or will they 
come with their own ASNs? The peering case is not well documented in 
publicly available specs, if it is even possible.


Depending on how much traffic you have, you may be able to get 
Facebook, YouTube, Netflix caches; I think the minimum bandwidth requirement 
changes per region


Those I'm nearly sure I could get, if I can pool caches amongst ISPs. 
The current constraints are issues to any content provider, not just for 
local ISPs.


Best regards,

--
Jérôme Nicolle
+33 6 19 31 27 14


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Tom Beecher
Many CDNs have hardware options for self hosted caches. I think there would
likely be concerns about <20G of connectivity to those caches though. With
an incorrect setup, you could end up maxing out those links just with cache
fill traffic itself.


On Thu, Jan 18, 2024 at 6:54 AM Jérôme Nicolle  wrote:

> Hello,
>
> I'm trying to find out the best way to consolidate connectivity on an
> island.
>
> The current issues are :
> - Low redundancy of old cables (2)
> - Low system capacity of said cables (<=20Gbps)
> - Total service loss when both cables are down because of congestion on
> satellite backups
> - Sheer price of bandwidth
>
> On the plus side, there are over 5 AS on the island, an IXP and
> small-ish collocation capacity (approx 10kW available, could be
> upgraded, second site available later this year).
>
> We'd like to host cache servers and/or VMs on the IXP, with an option to
> anycast many services - without hijacking them, that goes without saying
> - such as quad-whatever DNS resolvers, NTP servers and whatever else
> could be useful for weather-induced disaster recovery and/or to offload
> the cables.
>
> Do you think most CDNs, stream services and CSPs could accommodate a
> scenario where we'd host their gear or provide VMs for them to announce
> on the local route-servers ? If not, what could be a reasonable
> technical arrangement ?
>
> Thanks !
>
> --
> Jérôme Nicolle
> +33 6 19 31 27 14
>


Re: Shared cache servers on an island's IXP

2024-01-18 Thread Stephane Bortzmeyer
On Thu, Jan 18, 2024 at 12:53:19PM +0100,
 Jérôme Nicolle  wrote 
 a message of 36 lines which said:

> - Low redundancy of old cables (2)
> - Total service loss when both cables are down because of congestion on
> satelite backups

A problem which is not often mentioned is that most (all?) "local
caches" (CDN, DNS resolvers, etc) do not have an "offline mode" (or
"disconnected-from-master mode"). During an outage, they continue to
work for some time then break suddenly, in a not-friendly way, serving
various error messages instead of old data and/or useful
messages. (For instance, the DNS resolver may not be able to serve
stale answers.)

The time during which they can continue to work when they are
disconnected from their master is typically undocumented (except for
the DNS), and discovered only when there is a long outage.

Making the Internet work better with sometimes-broken connectivity is
still an area of research.
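
For the DNS piece of this, "serve stale" behaviour along the lines of RFC 8767 
is the relevant mitigation; a toy sketch of the idea in Python (not any 
particular resolver's implementation; where real resolvers support it, it is a 
configuration knob whose limits are exactly what is worth testing before a long 
outage):

import time

class StaleCache:
    def __init__(self, max_stale_seconds=86_400):   # how long to tolerate stale data
        self.max_stale = max_stale_seconds
        self.entries = {}                            # name -> (answer, expiry time)

    def put(self, name, answer, ttl):
        self.entries[name] = (answer, time.time() + ttl)

    def get(self, name, upstream_reachable):
        answer, expiry = self.entries[name]
        past_expiry = time.time() - expiry
        if past_expiry <= 0:
            return answer                            # still fresh
        if not upstream_reachable and past_expiry <= self.max_stale:
            return answer                            # stale, but better than an error
        raise LookupError("expired and refresh failed")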



Re: Shared cache servers on an island's IXP

2024-01-18 Thread Mehmet
Hi Jérôme

VMs are no go for big content companies except Microsoft. You can run
Microsoft CDN on VM but rest of the content will need to be cached. You can
actually install this yourself

Depending on how much traffic you have, you may be able to get
Facebook, YouTube, Netflix caches; I think the minimum bandwidth requirement
changes per region

Good luck

On Thu, Jan 18, 2024 at 06:53 Jérôme Nicolle  wrote:

> Hello,
>
> I'm trying to find out the best way to consolidate connectivity on an
> island.
>
> The current issues are :
> - Low redundancy of old cables (2)
> - Low system capacity of said cables (<=20Gbps)
> - Total service loss when both cables are down because of congestion on
> satellite backups
> - Sheer price of bandwidth
>
> On the plus side, there are over 5 AS on the island, an IXP and
> small-ish collocation capacity (approx 10kW available, could be
> upgraded, second site available later this year).
>
> We'd like to host cache servers and/or VMs on the IXP, with an option to
> anycast many services - without hijacking them, that goes without saying
> - such as quad-whatever DNS resolvers, NTP servers and whatever else
> could be useful for weather-induced disaster recovery and/or to offload
> the cables.
>
> Do you think most CDNs, stream services and CSPs could accommodate a
> scenario where we'd host their gear or provide VMs for them to announce
> on the local route-servers ? If not, what could be a reasonable
> technical arrangement ?
>
> Thanks !
>
> --
> Jérôme Nicolle
> +33 6 19 31 27 14
>


Shared cache servers on an island's IXP

2024-01-18 Thread Jérôme Nicolle

Hello,

I'm trying to find out the best way to consolidate connectivity on an 
island.


The current issues are :
- Low redundancy of old cables (2)
- Low system capacity of said cables (<=20Gbps)
- Total service loss when both cables are down because of congestion on 
satellite backups

- Sheer price of bandwidth

On the plus side, there are over 5 AS on the island, an IXP and 
small-ish collocation capacity (approx 10kW available, could be 
upgraded, second site available later this year).


We'd like to host cache servers and/or VMs on the IXP, with an option to 
anycast many services - without hijacking them, that goes without saying 
- such as quad-whatever DNS resolvers, NTP servers and whatever else 
could be useful for weather-induced disaster recovery and/or to offload the cables.


Do you think most CDNs, stream services and CSPs could accommodate a 
scenario where we'd host their gear or provide VMs for them to announce 
on the local route-servers ? If not, what could be a reasonable 
technical arrangement ?


Thanks !

--
Jérôme Nicolle
+33 6 19 31 27 14