Re: Rack rails on network equipment

2021-09-26 Thread Lady Benjamin Cannon
I can install an entire 384 lb, 21U core router in 30 minutes.

Most of that time is removing every module to lighten the chassis, then 
re-installing every module. 

We can build an entire POP in a day with a crew of 3, so I'm not sure there 
are worthwhile savings to be had here.  Also consider that the network 
engineers babysitting it later cost more than the installers (usually), who 
don't have to be terribly sophisticated at, say, BGP.

Those rapid rails are indeed nice for servers and make quick work of putting 
~30+ 1U pizza boxes in a rack.  We use them on the 2U servers we like a 
lot.

And these days everyone is just buying merchant silicon and throwing a UI 
around it, so there's less of a reason to pick any particular vendor; however, 
there is still differentiation that can dramatically increase the TCO.

I don't think they're needed for switches, and for onesie-twosie installs 
they'll probably slow things down compared with basic rack rails (good ones, 
that is; bad ones do exist).

I write all of this from the perspective of a network engineer, businesswoman, 
and telecom carrier - not necessarily that of a hyperscale cloud compute 
provider, although it seems we are becoming one of those too, so this 
perspective may shift for that unique use case.

-LB


> On Sep 24, 2021, at 11:27 AM, Mauricio Rodriguez via NANOG wrote:
> 
> Andrey, hi.
> 
> The speed rails are nice, and are effective in optimizing the time it takes 
> to rack equipment.  It's pretty much par for the course on servers today 
> (thank goodness!), and not so much on network equipment.  I suppose the 
> reasons are what others have mentioned - longevity of service life, the 
> frequency at which network gear is installed, etc.  As well, a typical 
> server-to-switch ratio, depending on the number of switch ports and the 
> fault-tolerance configuration, could be something like 38:1 in a dense 1U 
> server install.  So taking a few more minutes on the switch installation 
> isn't so impactful - taking a few more minutes on each server installation 
> can really become a problem.
> 
> A 30-minute time to install a regular 1U ToR switch seems a bit excessive.  
> Maybe the very first time a tech installs any specific model switch with a 
> unique rail configuration.  After that one, it should be around 10 minutes 
> for most situations.  I am assuming some level of teamwork where there is an 
> installer at the front of the cabinet and another at the rear, and they work 
> in tandem to install cage nuts, install front/rear rails (depending on 
> switch), position the equipment, and affix it to the cabinet.  I can see 
> the 30 minutes if you have one person and it's a larger/heavier device (like 
> the EX4500), and the installer is forced to do some kind of crazy balancing 
> act with the switch (not recommended) or has to use a server lift to install it.
> 
> Those speed rails are also a bit of a challenge to install if it's not a 
> team effort. So I'm wondering whether, in addition to using speed rails, you 
> may have changed from a one-tech installation process to a two-tech team 
> installation process?
> 
> Best Regards,
> Mauricio Rodriguez
> Founder / Owner
> Fletnet Network Engineering (www.fletnet.com)
> Follow us on LinkedIn 
> 
> mauricio.rodrig...@fletnet.com 
> Office: +1 786-309-1082
> Direct: +1 786-309-5493
> 
> 
> 
> 
> On Fri, Sep 24, 2021 at 12:41 PM Andrey Khomyakov wrote:
> Hi folks,
> Happy Friday!
> 
> Would you, please, share your thoughts on the following matter?
> 
> Back some 5 years ago we pulled the trigger and started phasing Cisco and 
> Juniper switching products out of our data centers (the reasons for that are 
> not quite relevant to the topic). We selected Dell switches in part due to 
> Dell using "quick rails" (sometimes known as speed rails or toolless rails).  
> This is where both the switch-side rail and the rack-side rail just snap in, 
> thus requiring neither a screwdriver nor hands no bigger than a hamster's paw 
> to hold those stupid proprietary screws (looking at you, Cisco) used to 
> attach those rails.
> We went from taking 16 hrs to build a row of compute (from just the network 
> equipment racking point of view) to maybe 1 hr... (we estimated that, on 
> average, racking a switch took 30 min from cutting open the box with Juniper 
> switches versus 5 min with Dell switches).
> An interesting tidbit is that we actually used to manufacture custom rails 
> for our Juniper EX4500 switches so the switch could be inserted from the 
> back of the rack (you know, where most of your server ports are...) and not 
> be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails 
> didn't work for us at all unless we used wider racks, which, in turn, 
> reduced floor capacity.
> 
> As far as I know, Dell is the only switch vendor doing toolless rails so it's 
> a bit of a h

Re: IPv6 woes - RFC

2021-09-26 Thread Jim Young via NANOG
On Saturday, September 25, 2021 21:55 Chris Adams wrote: 

> More than once, I've had to explain why zero-filling octets, like
> 127.000.000.001 (which still works) or 008.008.008.008 (which does not),
> is broken.

Zero-filling IPv4 addresses is just evil. How about this party trick?

> % ping -c 1 010.010.010.010
> PING 010.010.010.010 (8.8.8.8): 56 data bytes
> 64 bytes from 8.8.8.8: icmp_seq=0 ttl=116 time=27.496 ms
> 
> --- 010.010.010.010 ping statistics ---
> 1 packets transmitted, 1 packets received, 0.0% packet loss
> round-trip min/avg/max/stddev = 27.496/27.496/27.496/0.000 ms
> % 
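
For anyone wondering where that comes from: the old BSD-style parser behind
inet_aton() treats an octet with a leading zero as octal, so 010 is 8, while
008 is rejected outright because 8 is not an octal digit. A minimal C sketch
(exact behaviour varies a little between libc implementations):

#include <stdio.h>
#include <arpa/inet.h>

static void try_parse(const char *s)
{
    struct in_addr a;

    if (inet_aton(s, &a))
        printf("%-20s -> %s\n", s, inet_ntoa(a));
    else
        printf("%-20s -> rejected\n", s);
}

int main(void)
{
    try_parse("010.010.010.010");  /* leading 0 = octal: 010 is 8, so 8.8.8.8 */
    try_parse("127.000.000.001");  /* 000 and 001 are valid octal: 127.0.0.1  */
    try_parse("008.008.008.008");  /* 8 is not an octal digit: rejected       */
    return 0;
}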


Re: IPv6 woes - RFC

2021-09-26 Thread Masataka Ohta

Originally, textual IPv4 addresses were maintained centrally
by ISI in the HOSTS.TXT file format, when there was no DNS
and users were required to download the most current HOSTS.TXT
from ISI through ftp.

At that time there could be, because of consistent central
management, just one way to represent them, which is described
in RFC 810 as:

<address> ::= <octet> "." <octet> "." <octet> "." <octet>
<octet> ::= <0 to 255 decimal>

Obviously, the syntax was later extended, at least beyond
decimal, by a BSD implementation written in C.
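
As a rough illustration of how narrow that original grammar is, here is a
minimal sketch in C of a strict checker that accepts only the form above:
four dot-separated decimal numbers, each 0 to 255. This is a hypothetical
helper, not code from the RFC or from any libc:

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

static bool strict_dotted_decimal(const char *s)
{
    for (int part = 0; part < 4; part++) {
        int val = 0, digits = 0;

        while (isdigit((unsigned char)*s)) {
            val = val * 10 + (*s - '0');
            if (val > 255 || ++digits > 3)
                return false;                 /* octet out of range */
            s++;
        }
        if (digits == 0)
            return false;                     /* empty octet */
        if (part < 3 && *s++ != '.')
            return false;                     /* missing "." separator */
    }
    return *s == '\0';                        /* no trailing characters */
}

int main(void)
{
    printf("%d\n", strict_dotted_decimal("8.8.8.8"));     /* 1: accepted */
    printf("%d\n", strict_dotted_decimal("2130706433"));  /* 0: rejected */
    printf("%d\n", strict_dotted_decimal("0x7f000001"));  /* 0: rejected */
    return 0;
}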

Masataka Ohta


Re: IPv6 woes - RFC

2021-09-26 Thread Nick Hilliard

Valdis Klētnieks wrote on 26/09/2021 01:44:

19:17:38  0 [~] ping 2130706433
PING 2130706433 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.082 ms
^C
--- 2130706433 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 84ms
rtt min/avg/max/mdev = 0.063/0.086/0.126/0.025 ms

Works on Fedora Rawhide based on RedHat, Debian 10, and Android 9.


This is a good example of "might work, but don't depend on it".  The fact 
that it works at all is a historical curiosity which happened because 
the text format for IPv4 addresses was never formally specified, so when 
some of the TCP/IP code was originally written for BSD, it accepted 
unusual formats in some places, including integers, octal, hex, binary, 
and assuming zeros when the address is incompletely specified, among 
other things.
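
As a rough illustration (not an exhaustive list, and the exact behaviour
differs between libc implementations), a short C program using inet_aton()
shows a few of the forms such a parser still accepts:

#include <stdio.h>
#include <arpa/inet.h>

static void show(const char *s)
{
    struct in_addr a;

    if (inet_aton(s, &a))
        printf("%-14s -> %s\n", s, inet_ntoa(a));
    else
        printf("%-14s -> rejected\n", s);
}

int main(void)
{
    show("2130706433");   /* plain 32-bit integer          -> 127.0.0.1 */
    show("0x7f000001");   /* hexadecimal                   -> 127.0.0.1 */
    show("0177.0.0.1");   /* octal first octet             -> 127.0.0.1 */
    show("127.1");        /* "a.b": b fills the last 24 bits -> 127.0.0.1 */
    return 0;
}

On a typical glibc or BSD libc, all four of those parse to 127.0.0.1.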


The octal representation was a real problem because RFC 790 specified 
decimal dotted-quad notation with leading zeros, leading to a whole bag 
of pain for parsers: there is no way of knowing what a leading zero 
means in practice, and for 3-digit numbers where each digit is <= 7, 
there is no a priori way of determining whether the representation is 
octal or decimal.


Nick


Re: Rack rails on network equipment

2021-09-26 Thread Alan Buxey
> We operate over 1000 switches in our data centers, and hardware failures that 
> require a switch swap are common enough that the speed of the swap starts to 
> matter to some extent. We probably swap a switch or two a month.

Having operated a network of over 2000 switches, where we would see
maybe one die a year (and let me tell you, some of those switches were
not in nice places... no data-centre air-handled clean rack spaces, etc.),
that failure rate seems very high and would certainly be a factor in
vendor choice. For the initial install, there are quicker ways of dealing
with cage nut installs... but when a switch dies in service, the mounting
isn't the speed factor, it's the cabling (and, as others have said, the
startup time of some modern switches: you can patch every cable back in
before the thing has even booted these days).

alan