Re: NANOG List Maintenance Complete

2020-06-19 Thread Bryan Fields


On 6/20/20 12:26 AM, ge...@nanog.org wrote:
> The archives for each list have been extended with all prior
> posts.  Please see: https://mailman.nanog.org/pipermail/nanog/

If anyone has an archive of messages prior to 1992, please hit up the geeks
with it.  It's my understanding this predates NANOG being a group under
Merit, and I'm unsure of the history, how the lists were handled at that
time, or what software was used.

Thanks,
-- 
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net


NANOG List Maintenance Complete

2020-06-19 Thread geeks
The maintenance was successful, and all mailing lists have been updated to
use VERP.  The archives for each list have been extended with all prior
posts.  Please see: https://mailman.nanog.org/pipermail/nanog/
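For readers unfamiliar with VERP (Variable Envelope Return Path): each outgoing copy of a list message gets a unique envelope sender that encodes the recipient, so a bounce identifies the failing subscriber without parsing the bounce body. A minimal sketch of the general convention follows; the `+` and `=` separators shown are the common qmail/Mailman-style format and are an illustrative assumption, not necessarily what this installation uses:

```python
def verp_encode(bounce_address: str, recipient: str) -> str:
    """Encode the recipient into the envelope sender (VERP)."""
    local, domain = bounce_address.split("@")
    r_local, r_domain = recipient.split("@")
    # The recipient's '@' becomes '=' so the result is still a valid address.
    return f"{local}+{r_local}={r_domain}@{domain}"

def verp_decode(verp_address: str) -> str:
    """Recover the original recipient from a bounced VERP address."""
    local, _ = verp_address.split("@")
    _, encoded = local.split("+", 1)
    return encoded.replace("=", "@")

print(verp_encode("nanog-bounces@nanog.org", "user@example.com"))
# nanog-bounces+user=example.com@nanog.org
```

The practical upside is exactly what this maintenance enabled: automated, per-subscriber bounce processing.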

The list is back in operation, and all queued messages will be processed
over the next few minutes.  Please contact t...@nanog.org if you experience
any delivery problems.

Thank you,
NANOG Tech Committee


NANOG List Maintenance Start

2020-06-19 Thread geeks
Maintenance is starting in accordance with the prior maintenance notice.
The lists will be unavailable until the maintenance concludes, at or around
0600 UTC.

Thank you,
NANOG Tech Committee


RE: [EXT] Xfinity (both ends) - can't ping from users home to office

2020-06-19 Thread Spencer Coplin
Kevin, you are amazing! This absolutely fixed our issue. Thank you so much!

Thank you,
Spencer

From: Kevin Kurpiel 
Sent: Friday, June 19, 2020 5:04 PM
To: Spencer Coplin 
Cc: North American Network Operators' Group 
Subject: Re: [EXT] Xfinity (both ends) - can't ping from users home to office

CAUTION: This email originated from an external source. Verify the sender 
before taking any actions.




On Jun 19, 2020, at 10:18 AM, Spencer Coplin <scop...@f1techgroup.com> wrote:

Thank you, he has the Xfinity-provided router/modem, so no CPE router to make
a change on. I don't think we can change its MAC address. Good thought on
pursuing it through the business side, and I'll proceed with that.

Thank you,

Spencer

Spencer Coplin

We’re facing the same issue at my current job. We have many users at home who 
are unable to access sites in our datacenter. We found that if the user 
disables the "XFi Advanced Security feature" on their Comcast provided router, 
they can reach our sites again. We’ve been trying to work with Comcast to see 
if there is any way to work around this and find out why it suddenly decided to 
block our content, but don’t have a good solution yet. Might be worth having 
your user try to disable that feature to see if it helps.

Kevin


why am i in this handbasket? (was Devil's Advocate - Segment Routing, Why?)

2020-06-19 Thread Randy Bush
< ranting of a curmudgeonly old privileged white male >

>>> MPLS was since day one proposed as enabler for services originally
>>> L3VPNs and RSVP-TE.
>> MPLS day one was mike o'dell wanting to move his city/city traffic
>> matrix from ATM to tag switching and open cascade's hold on tags.
> And IIRC, Tag switching day one was Cisco overreacting to Ipsilon.

i had not thought of it as overreacting; more embrace and devour.  mo
and yakov, aided and abetted by sob and other ietf illuminati, helped
cisco take the ball away from Ipsilon, Force10, ...

but that is water over the dam, and my head is hurting a bit from
thinking on too many levels at once.

there is saku's point of distributing labels in IGP TLVs/LSAs.  i
suspect he is correct, but good luck getting that anywhere in the
internet vendor task force.  and that tells us a lot about whether we
can actually effect useful simplification and change.

is a significant part of the perception that there is a forwarding
problem the result of the vendors, 25 years later, still not
designing for v4/v6 parity?

there is the argument that switching MPLS is faster than IP; yet the
pressure points i see are more at routing (BGP/LDP/RSVP/whatever),
recovery, and convergence.

did we really learn so little from IP routing that we need to
recreate analogous complexity and fragility in the MPLS control
plane?  ( sound of steam emanating from saku's ears :)

and then there is buffering; which seems more serious than simple
forwarding rate.  get it there faster so it can wait in a queue?  my
principal impression of the Stanford/Google workshops was the parable
of the blind men and the elephant.  though maybe Matt had the main
point: given scaling 4x, Moore's law can not save us and it will all
become paced protocols.  will we now have a decade+ of BBR evolution
and tuning?  if so, how do we engineer our networks for that?

and up 10,000m, we watch vendor software engineers hand crafting in
an assembler language with if/then/case/for, and running a chain of
checking software to look for horrors in their assembler programs.
it's the bleeping 21st century.  why are the protocol specs not
formal and verified, and the code formally generated and verified?
and don't give me "too slow" given that the hardware folk seem to be
able to do 10x in the time it takes to run valgrind a few dozen
times.

we're extracting ore with hammers and chisels, and then hammering it
into shiny objects rather than safe and securable network design and
construction tools.

apologies.  i hope you did not read this far.

randy


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Robert Raszuk
> One of the advantages cited for SRv6 over MPLS is that the packet
> contains a record of where it has been.
>
Not really ... packets are not tourists in a bus.

First, there are real studies proving that, for the goal of good TE, most
large production networks only need to pin 1, 2 or 3 hops to traverse
through. The rest is the shortest path between those hops.

Then, even if you place those node SIDs, you have no control over which
interfaces are chosen as outbound. There is often more than one IGP ECMP
path in between. You would need to insert adjacency SIDs, which requires a
pretty fine level of controller capability to start with.
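To make the node-SID discussion concrete, here is a toy sketch of how an SR-MPLS label stack for a sparsely-pinned TE path is derived from SID indices. The SRGB base of 16000 is a common vendor default, and the SID indices are made-up illustrative values, not from any network discussed here:

```python
SRGB_BASE = 16000  # common default Segment Routing Global Block start (assumption)

def node_sid_label(sid_index: int, srgb_base: int = SRGB_BASE) -> int:
    """A node SID's MPLS label is the SRGB base plus the node's SID index."""
    return srgb_base + sid_index

# A TE path that pins only 2 waypoints; the IGP shortest path fills the
# gaps between them. Top of stack first, as imposed on the packet.
label_stack = [node_sid_label(i) for i in (11, 42)]
print(label_stack)  # [16011, 16042]
```

Adjacency SIDs, by contrast, are locally significant per-link labels, which is why steering the exact outbound interface requires the controller to learn and place them.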

I just hope that no one sane proposes that all packets should now get
encapsulated in a new IPv6 header when entering a transit ISP network and
carry a long list of hop-by-hop adjacencies to travel by. Besides, even if
they did, the list would be valid only within a given ASN and have no
visibility outside it.

Thx,
R.


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Anoop Ghanwani
On Wed, Jun 17, 2020 at 11:40 AM Dave Bell  wrote:

>
>
> On Wed, 17 Jun 2020 at 18:42, Saku Ytti  wrote:
>
>> Hey,
>>
>> > Why do we really need SR? Be it SR-MPLS or SRv6 or SRv6+?
>>
>> I don't like this, SR-MPLS and SRv6 are just utterly different things
>> to me, and no answer meaningfully applies to both.
>>
>
> I don't understand the point of SRv6. What equipment can support IPv6
> routing, but can't support MPLS label switching?
>
> I'm a big fan of SR-MPLS however.
>
One of the advantages cited for SRv6 over MPLS is that the packet contains
a record of where it has been.


Re: Microsoft AS8075 contact?

2020-06-19 Thread Bryan Holloway

I'm in good hands ... thanks to all who responded.


On 6/18/20 2:33 PM, Bryan Holloway wrote:

Hello ...

If anyone from Microsoft peering is lurking, I could use an assist.

We have a reachability issue in Chicago.

E-mail to their PeeringDB NOC contact have gone unanswered.

Thank you!


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Dorian Kim


> On Jun 19, 2020, at 11:34 AM, Randy Bush  wrote:
> 
> 
>> 
>> MPLS was since day one proposed as enabler for services originally
>> L3VPNs and RSVP-TE.
> 
> MPLS day one was mike o'dell wanting to move his city/city traffic
> matrix from ATM to tag switching and open cascade's hold on tags.

And IIRC, Tag switching day one was Cisco overreacting to Ipsilon.

-dorian 

Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Robert Raszuk
> But, today, people seem to be using, so called, MPLS, with
> explicitly configured flows, the administration of which does not
> scale and is annoying.
>

I am actually not sure what you are talking about here.

The only per-flow action in any MPLS deployment I have seen was mapping
flow groups to specific TE-LSPs. In all other TDP or LDP cases, flow == IP
destination, so it is based exactly on destination reachability. And such
mapping is based on the LDP FEC to IGP (or BGP) match.

> Even worse, if a router near the destination expected to pop the label
> chain goes down, how can the source know that the router went down
> and choose an alternative router near the destination?
>

In normal MPLS the src does not pick the transit paths. Transit is 100%
driven by the IGP, and if you lose a node, local connectivity restoration
techniques apply (FRR or IGP convergence). If the egress signalled
implicit NULL, it would signal it to any IGP peer.

That is also possible with SR-MPLS too. No change ... no per-flow state at
all beyond per-IP-destination routing. If you want to control your
transit hops you can - but this is an option, not a requirement.

> MPLS with hierarchical routing just does not scale.


While I am not defending MPLS here, and 100% agree that IP as transit is a
much better option today and tomorrow, I would also like to make sure we
communicate true points. So when you say it does not scale, it would be
good to list what exactly does not scale, with a real network operational
example.

Many thx,
R.


Re: Google captcha issue

2020-06-19 Thread Christopher Tyler
Pretty sure we traced it to a service, DiviNetworks, that uses "unused" IP 
space/bandwidth and tracks invalid DNS queries. Thanks for all of the input 
and assistance; special thanks to William Herrin, who pointed this out.

-- 
Christopher Tyler
Senior Network Engineer
MTCRE/MTCNA/MTCTCE/MTCWE

Total Highspeed Internet Solutions
1091 W. Kathryn Street
Nixa, MO 65714
(417) 851-1107 x. 9002
www.totalhighspeed.com

- Original Message -
> From: "Christopher Morrow" 
> To: "Sabri Berisha" 
> Cc: "nanog" 
> Sent: Friday, June 19, 2020 12:58:46 PM
> Subject: Re: Google captcha issue

> On Fri, Jun 19, 2020 at 1:44 PM Sabri Berisha  wrote:
>>
>> - On Jun 19, 2020, at 9:40 AM, William Herrin b...@herrin.us wrote:
>>
>> Hi,
>>
>> > Do you, or perhaps your upstream have such a contract?
>>
>> I'd be pretty unhappy if someone that I'm paying for transit spoofs traffic
>> with my IP space as the source.
> 
> I don't think william's description is 'spoofing', it's perhaps:
>  "Manufacturing hosts on the fly"
> 
> is still skeezy though ;(


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 17:13, Robert Raszuk wrote:

>
> So I think Ohta-san's point is about the scalability of services, not flat
> underlay RIB and FIB sizes. Many years ago we had requests to support
> 5M L3VPN routes while the underlay was just 500K IPv4.

Ah, if the context, then, was l3vpn scaling, yes, that is a known issue.

Apart from the global table vs. VRF parity concerns I've always had (one
of which was illustrated earlier this week, on this list, with RPKI in a
VRF), the other reason I don't do Internet in a VRF is because it was
always a trade-off:

    - More routes per VRF = fewer VRF's.
    - More VRF's  = fewer routes per VRF.

Going forward, I believe the l3vpn pressures (for pure VPN services, not
Internet in a VRF) should begin to subside as businesses move on-prem
workloads to the cloud, bite into the SD-WAN train, and generally, do
more stuff over the public Internet than via inter-branch WAN links
formerly driven by l3vpn.

Time will tell, but in Africa, bar South Africa, l3vpn's were never a
big thing, mostly because Internet connectivity was best served from one
or two major cities, where most businesses had a branch that warranted
connectivity.

But even in South Africa (as the rest of our African market), 98% of our
business is plain IP. The other 2% is mostly l2vpn. l3vpn's don't really
feature, except for some in-house enterprise VoIP carriage + some
high-speed in-band management.

Even with the older South African operators that made a killing off
l3vpn's, these are falling away as their customers either move to the
cloud and/or accept SD-WAN thingies.


>
> Last - when I originally discussed just plain MPLS with customers, with a
> single application of hierarchical routing (no BGP in the core),
> frankly no one was interested. Till L3VPN arrived, which was a game
> changer and a run for new revenue streams ...

The BGP-free core has always sounded like a dark art. More so in the
days when hardware was precious, core routers doubled as inline route
reflectors and the size of the IPv4 DFZ wasn't rapidly exploding like it
is today, and no one was even talking about the IPv6 DFZ.

Might be useful speaking with them again, in 2020 :-).

Mark.



Weekly Routing Table Report

2020-06-19 Thread Routing Analysis Role Account
This is an automated weekly mailing describing the state of the Internet
Routing Table as seen from APNIC's router in Japan.

The posting is sent to APOPS, NANOG, AfNOG, SANOG, PacNOG, SAFNOG
TZNOG, MENOG, BJNOG, SDNOG, CMNOG, LACNOG and the RIPE Routing WG.

Daily listings are sent to bgp-st...@lists.apnic.net

For historical data, please see http://thyme.rand.apnic.net.

If you have any comments please contact Philip Smith .

Routing Table Report   04:00 +10GMT Sat 20 Jun, 2020

Report Website: http://thyme.rand.apnic.net
Detailed Analysis:  http://thyme.rand.apnic.net/current/

Analysis Summary


BGP routing table entries examined:  815154
Prefixes after maximum aggregation (per Origin AS):  309453
Deaggregation factor:  2.63
Unique aggregates announced (without unneeded subnets):  397027
Total ASes present in the Internet Routing Table: 68353
Prefixes per ASN: 11.93
Origin-only ASes present in the Internet Routing Table:   58751
Origin ASes announcing only one prefix:   24470
Transit ASes present in the Internet Routing Table:9602
Transit-only ASes present in the Internet Routing Table:307
Average AS path length visible in the Internet Routing Table:   4.4
Max AS path length visible:  50
Max AS path prepend of ASN ( 22394)  38
Prefixes from unregistered ASNs in the Routing Table:   859
Number of instances of unregistered ASNs:   860
Number of 32-bit ASNs allocated by the RIRs:  32127
Number of 32-bit ASNs visible in the Routing Table:   26466
Prefixes from 32-bit ASNs in the Routing Table:  121955
Number of bogon 32-bit ASNs visible in the Routing Table:16
Special use prefixes present in the Routing Table:1
Prefixes being announced from unallocated address space:543
Number of addresses announced to Internet:   2843369216
Equivalent to 169 /8s, 122 /16s and 95 /24s
Percentage of available address space announced:   76.8
Percentage of allocated address space announced:   76.8
Percentage of available address space allocated:  100.0
Percentage of address space in use by end-sites:   99.5
Total number of prefixes smaller than registry allocations:  274961
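Two of the derived figures above can be sanity-checked with simple arithmetic (this check is an editorial addition, not part of the automated report): the deaggregation factor is the ratio of table entries to maximally aggregated prefixes, and the announced address count decomposes into the /8, /16 and /24 equivalents shown.

```python
entries = 815154
after_max_aggregation = 309453
print(round(entries / after_max_aggregation, 2))  # 2.63

addresses = 2843369216
eights, rem = divmod(addresses, 2**24)       # whole /8 equivalents
sixteens, rem = divmod(rem, 2**16)           # remaining /16 equivalents
twentyfours, _ = divmod(rem, 2**8)           # remaining /24 equivalents
print(eights, sixteens, twentyfours)         # 169 122 95
```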

APNIC Region Analysis Summary
-

Prefixes being announced by APNIC Region ASes:   214358
Total APNIC prefixes after maximum aggregation:   62454
APNIC Deaggregation factor:3.43
Prefixes being announced from the APNIC address blocks:  207950
Unique aggregates announced from the APNIC address blocks:86021
APNIC Region origin ASes present in the Internet Routing Table:   10638
APNIC Prefixes per ASN:   19.55
APNIC Region origin ASes announcing only one prefix:   2979
APNIC Region transit ASes present in the Internet Routing Table:   1588
Average APNIC Region AS path length visible:4.7
Max APNIC Region AS path length visible: 27
Number of APNIC region 32-bit ASNs visible in the Routing Table:   5710
Number of APNIC addresses announced to Internet:  769610624
Equivalent to 45 /8s, 223 /16s and 83 /24s
APNIC AS Blocks4608-4864, 7467-7722, 9216-10239, 17408-18431
(pre-ERX allocations)  23552-24575, 37888-38911, 45056-46079, 55296-56319,
   58368-59391, 63488-64098, 64297-64395, 131072-141625
APNIC Address Blocks 1/8,  14/8,  27/8,  36/8,  39/8,  42/8,  43/8,
49/8,  58/8,  59/8,  60/8,  61/8, 101/8, 103/8,
   106/8, 110/8, 111/8, 112/8, 113/8, 114/8, 115/8,
   116/8, 117/8, 118/8, 119/8, 120/8, 121/8, 122/8,
   123/8, 124/8, 125/8, 126/8, 133/8, 150/8, 153/8,
   163/8, 171/8, 175/8, 180/8, 182/8, 183/8, 202/8,
   203/8, 210/8, 211/8, 218/8, 219/8, 220/8, 221/8,
   222/8, 223/8,

ARIN Region Analysis Summary


Prefixes being announced by ARIN Region ASes:238135
Total ARIN prefixes after maximum aggregation:   109384
ARIN Deaggregation factor: 2.18
Prefixes being announced from the ARIN address blocks:   235455
Unique aggregates announced from the ARIN address blocks:119632
ARIN Region origin ASes present in the Internet Routing Table:18514
ARIN Prefixes per ASN:12.72
ARIN 

Re: Google captcha issue

2020-06-19 Thread Christopher Morrow
On Fri, Jun 19, 2020 at 1:44 PM Sabri Berisha  wrote:
>
> - On Jun 19, 2020, at 9:40 AM, William Herrin b...@herrin.us wrote:
>
> Hi,
>
> > Do you, or perhaps your upstream have such a contract?
>
> I'd be pretty unhappy if someone that I'm paying for transit spoofs traffic
> with my IP space as the source.

I don't think william's description is 'spoofing', it's perhaps:
  "Manufacturing hosts on the fly"

is still skeezy though ;(


Re: Google captcha issue

2020-06-19 Thread Sabri Berisha
- On Jun 19, 2020, at 9:40 AM, William Herrin b...@herrin.us wrote:

Hi,

> Do you, or perhaps your upstream have such a contract?

I'd be pretty unhappy if someone that I'm paying for transit spoofs traffic
with my IP space as the source.

Thanks,

Sabri


Re: Google captcha issue

2020-06-19 Thread William Herrin
On Fri, Jun 19, 2020 at 9:15 AM Christopher Tyler
 wrote:
> We run a smaller ISP of about 7.5k customers and the other day we got an 
> email (excerpt below) from one of Google's automated tools.
>
> We are seeing automated scraping of Google Web Search from a large
> number of your IPs.  Automated scraping violates our /robots.txt file
> and also our Terms of Service.  We request that you terminate this
> traffic immediately.  Failure to do so may cause your network to be
> blocked by our abuse systems.
>
> To allow you to identify the traffic, we are providing a list of
> your IPs they used today (Source field), as well as the most common
> destination (Google) IP and port and a timestamp of a recent request
> (in UTC) to aid in your identification.  Note that this list may not
> be exhaustive, and we request that you terminate all such traffic, not
> just traffic from IPs in this list.
>
> All of the destination ports are either 80 or 443, so they at least appear
> to be legit web traffic on the surface. They are obviously spoofed IP
> addresses, as there are network addresses in the list and the IP belongs to
> a router that doesn't appear to be compromised in any way. The initial
> letter included 700+ IP addresses from our network.

Hi Christopher,

Presumably Google is smart enough to know the difference between
spoofed port scanning and completed TCP connections performing a web
search. If you take Google's report at face value, the addresses
aren't spoofed; something else is happening. The question is how.

There was a company revealed on NANOG earlier this year (or maybe
last year, I'm not great with dates) which contracts with small ISPs and
virtual server providers to use their "spare bandwidth" to
pseudonymously originate web requests. They don't require you to
assign them IP addresses because they overload their activity on all
of your IP addresses. In theory they do this without disturbing your
customers and only access web sites whose owners have contracted them
to do so, generally to test connectivity. In practice, there's a
device inline with your traffic flow that injects TCP connections and
captures the associated return packets across your entire address
space. Including, for example, your routers' IP addresses.

Do you, or perhaps your upstream have such a contract?

Regards,
Bill Herrin

-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


RE: [EXT] Xfinity (both ends) - can't ping from users home to office

2020-06-19 Thread Spencer Coplin
Thank you, he has the Xfinity-provided router/modem, so no CPE router to make
a change on. I don't think we can change its MAC address. Good thought on
pursuing it through the business side, and I'll proceed with that.

Thank you,

Spencer

Spencer Coplin

-Original Message-
From: Chuck Anderson  
Sent: Friday, June 19, 2020 10:13 AM
To: Spencer Coplin 
Cc: North American Network Operators' Group 
Subject: Re: [EXT] Xfinity (both ends) - can't ping from users home to office

CAUTION: This email originated from an external source. Verify the sender 
before taking any actions.



On Thu, Jun 18, 2020 at 11:57:12PM +, Spencer Coplin wrote:
> I have a client that is unable to ping his office Comcast Business connection 
> from his home Xfinity connection. It was working a month ago and we can 
> confirm that his connection works over his iphone's hotspot. I am able to 
> ping from my own xfinity residential (same city) without issues. I suspect 
> something in the routing from his home connection is messed up as 
> tracert/ping can't even resolve his office's IP. The client called Xfinity 
> residential support and they blamed the inability to ping the IP on his 
> office vpn connection.
>
> Has anyone else hit this and if so, how was it resolved?

Sort of.  Try having him change the MAC address of his residential CPE router 
so the BNG sees it as a "new" device.  In my case that resolved connectivity 
issues after Comcast did some "network maintenance" and broke everything for me.

Alternatively, have you tried working this issue from the Business side?  You'd 
probably get more leverage that way.
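For a Linux-based CPE where you control the WAN interface, the MAC change Chuck suggests can be sketched as the config fragment below. The interface name `eth0` and the MAC value are placeholders, and many consumer gateways expose this instead as a "MAC clone" option in their web UI:

```shell
# Bring the WAN interface down, set a new locally-administered MAC, bring
# it back up. The provider's BNG should then treat the CPE as a "new"
# device and issue a fresh lease/session.
ip link set dev eth0 down
ip link set dev eth0 address 02:00:5e:00:53:01
ip link set dev eth0 up
dhclient -r eth0 && dhclient eth0   # release and re-request a DHCP lease
```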


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread steve ulrich


> 
> On Jun 19, 2020, at 08:06, Mark Tinka  wrote:
> 
>> On 19/Jun/20 14:50, Tim Durack wrote:
>> 
>> If y'all can deal with the BU, the Cat9k family is looking
>> half-decent: MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS)
>> etc.
>> UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA
>> license now covers software support...
>> 
>> Of course you do have to deal with a BU that lives in a parallel
>> universe (SDA, LISP, NEAT etc) - but the hardware is the right
>> price-perf, and IOS-XE is tolerable.
>> 
>> No large FIB today, but Cisco appears to be headed towards "Silicon
>> One" for all of their platforms: RTC ASIC strapped over some HBM. The
>> strategy is interesting: sell it as a chip, sell it whitebox, sell it
>> fully packaged.
>> 
>> YMMV
> 
> I'd like to hear what Gert thinks, though. I'm sure he has a special
> place for the word "Catalyst" :-).
> 
> Oddly, if Silicon One is Cisco's future, that means IOS XE may be headed
> for the guillotine, in which case investing any further into an IOS XE
> platform could be dicey at best, egg-face at worst.
> 
> I could be wrong...

never underestimate the desire of product managers and engineering teams to 
have their own petri dishes to swim around in. 

-- 
steve ulrich (sulrich@botwerks.*)



Google captcha issue

2020-06-19 Thread Christopher Tyler
We run a smaller ISP of about 7.5k customers and the other day we got an email 
(excerpt below) from one of Google's automated tools.

We are seeing automated scraping of Google Web Search from a large
number of your IPs.  Automated scraping violates our /robots.txt file
and also our Terms of Service.  We request that you terminate this
traffic immediately.  Failure to do so may cause your network to be
blocked by our abuse systems.

To allow you to identify the traffic, we are providing a list of
your IPs they used today (Source field), as well as the most common
destination (Google) IP and port and a timestamp of a recent request
(in UTC) to aid in your identification.  Note that this list may not
be exhaustive, and we request that you terminate all such traffic, not
just traffic from IPs in this list.

All of the destination ports are either 80 or 443, so they at least appear
to be legit web traffic on the surface. They are obviously spoofed IP
addresses, as there are network addresses in the list and the IP belongs to
a router that doesn't appear to be compromised in any way. The initial
letter included 700+ IP addresses from our network.

It's now affecting our customers, as they are now getting captchas every
couple of Google searches they perform.

Does anyone know of a good way to track the perpetrator(s) down and/or know of 
a way to mitigate this?

-- 
Christopher Tyler
Senior Network Engineer
MTCRE/MTCNA/MTCTCE/MTCWE

Total Highspeed Internet Solutions
1091 W. Kathryn Street
Nixa, MO 65714
(417) 851-1107 x. 9002
www.totalhighspeed.com


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Masataka Ohta

Robert Raszuk wrote:


MPLS was since day one proposed as enabler for services originally L3VPNs
and RSVP-TE.

There seem to be serious confusions between label switching
with explicit flows and MPLS, which was believed to scale
without detecting/configuring flows.

At the time I proposed label switching, there already was RSVP
but RSVP-TE was proposed long after MPLS was proposed.

But, today, people seem to be using, so called, MPLS, with
explicitly configured flows, the administration of which does not
scale and is annoying.

Remember that the original point of MPLS was that it should work
scalably without a lot of configuration, which is not the reality
recognized by people on this thread.


So I think Ohta-san's point is about the scalability of services, not flat
underlay RIB and FIB sizes. Many years ago we had requests to support 5M
L3VPN routes while the underlay was just 500K IPv4.


That is certainly a problem. However, a worse problem is knowing the
label values nested deeply in an MPLS label chain.

Even worse, if a router near the destination expected to pop the label
chain goes down, how can the source know that the router went down
and choose an alternative router near the destination?


Last - when I originally discussed just plain MPLS with customers, with a
single application of hierarchical routing (no BGP in the core), frankly no
one was interested.


MPLS with hierarchical routing just does not scale.

Masataka Ohta


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Masataka Ohta

Mark Tinka wrote:


I wouldn't agree.

MPLS is a purely forwarding paradigm, as is hop-by-hop IP.


As the first person to have proposed the forwarding paradigm of
label switching, I have been fully aware from the beginning that:

   https://tools.ietf.org/html/draft-ohta-ip-over-atm-01

   Conventional Communication over ATM in a Internetwork Layer

   The conventional communication, that is communication that does not
   assume connectivity, is no different from that of the existing IP, of
   course.

special, prioritized forwarding should be done only on special
request from end users (via a properly designed signaling mechanism,
which RSVP failed to be), or administration does not scale.


Even with
hop-by-hop IP, you need the edge to be routing-aware.


For the edge to be routing-aware around itself does scale.

For the edge to be routing-aware at the destinations of all the flows
over it does not scale, which is the problem of MPLS.

Though the lack of equipment scalability went unnoticed by many,
thanks to Moore's law, unscalable administration costs a lot.

As a result, administration of MPLS has been costing a lot.


I wasn't at the table when the MPLS spec. was being dreamed up,


I was there before poor MPLS was dreamed up.


If you can
tell me how NOT running MPLS affords you a "hierarchical, scalable"
routing table, I'm all ears.


Are you saying the inter-domain routing table is not "hierarchical,
scalable" except for the reason of multihoming?

As for multihoming problem, see, for example:

   https://tools.ietf.org/html/draft-ohta-e2e-multihoming-03


Whether you forward in IP or in MPLS, scaling routing is an ever clear &
present concern.


Not. Even without MPLS, fine-tuning of BGP does not scale.

However, just as using plain IP routers costs less than using
MPLS-capable IP routers, BGP-only administration costs less than
BGP-and-MPLS administration.

For a better networking infrastructure, extra cost should be spent
on L1, not on MPLS or the very complicated technologies around it.

Masataka Ohta


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Randy Bush
> MPLS was since day one proposed as enabler for services originally
> L3VPNs and RSVP-TE.

MPLS day one was mike o'dell wanting to move his city/city traffic
matrix from ATM to tag switching and open cascade's hold on tags.

randy


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Randy Bush
> Maybe this is fundamental and unavoidable, maybe some systematic bias
> in human thinking drives us towards simple software and complex
> hardware.

how has software progressed in the last 50-70 years as compared to
hardware?

my watch has how many orders of magnitude of compute vs the computer
which took a large room when i was but a youth?  and we have barely
avoided writing stacks in cobol; but we're close, assembler++.

randy


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Robert Raszuk
Hi Mark,

As actually someone who was at that table you are referring to, I must say
that MPLS was never proposed as a replacement for IP.

MPLS was since day one proposed as an enabler for services, originally L3VPNs
and RSVP-TE. Then a bunch of others jumped on the same encapsulation train.
If at that very time the GSR had been able to do proper GRE encapsulation at
line rate in all of its engines, MPLS for transport would never have taken
off. As service demux - sure, but that is completely separate.

But since at that time shipping hardware could not do the right
encapsulation, and since SPs were looking for more revenue and a new way to
move ATM and FR customers to IP backbones, L3VPN was proposed, which really
required hiding the service addresses from anyone's core. So some form of
encapsulation was a MUST. Hence tag switching, then MPLS switching, was
rolled out.

So I think Ohta-san's point is about the scalability of services, not flat
underlay RIB and FIB sizes. Many years ago we had requests to support 5M
L3VPN routes while the underlay was just 500K IPv4.

Last - when I originally discussed just plain MPLS with customers, with a
single application of hierarchical routing (no BGP in the core), frankly no
one was interested. Till L3VPN arrived, which was a game changer and a run
for new revenue streams ...

Best,
R.


On Fri, Jun 19, 2020 at 5:00 PM Mark Tinka  wrote:

>
>
> On 19/Jun/20 16:45, Masataka Ohta wrote:
>
> > The problem of MPLS, or label switching in general, is that, though
> > it was advertised to be topology driven to scale better than flow
> > driven, it is actually flow driven with poor scalability.
> >
> > Thus, it is impossible to deploy any technology scalably over MPLS.
> >
> > MPLS was considered to scale because it supports nested labels
> > corresponding to a hierarchical, thus scalable, routing table.
> >
> > However, to assign nested labels at the source, the source
> > must know the hierarchical routing table at the destination, even
> > though the source only knows the hierarchical routing table at
> > the source itself.
> >
> > So, the routing table must be flat, which does not scale, or
> > the source must detect flows to somehow request the hierarchical
> > destination routing table on demand, which means MPLS is flow
> > driven.
> >
> > People, including some data center people, avoiding MPLS, know
> > network scalability better than those deploying MPLS.
> >
> > It is true that some performance improvement is possible with
> > label switching in flow-driven ways, if flows are manually
> > detected. But that means extra label-switching-capable equipment
> > and administrative effort to detect flows, neither of which
> > scales, and both of which cost a lot.
> >
> > It costs a lot less to have more plain IP routers than to insist
> > on having slightly fewer MPLS routers.
>
> I wouldn't agree.
>
> MPLS is a purely forwarding paradigm, as is hop-by-hop IP. Even with
> hop-by-hop IP, you need the edge to be routing-aware.
>
> I wasn't at the table when the MPLS spec. was being dreamed up, but I'd
> find it very hard to accept that someone drafting the idea advertised it
> as being a replacement or alternative for end-to-end IP routing and
> forwarding.
>
> Whether you run MPLS or not, you will always have routing table scaling
> concerns. So I'm not quite sure how that is MPLS's problem. If you can
> tell me how NOT running MPLS affords you a "hierarchical, scalable"
> routing table, I'm all ears.
>
> Whether you forward in IP or in MPLS, scaling routing is an ever clear &
> present concern. Where MPLS can directly mitigate that particular
> concern is in the core, where you can remove BGP. But you still need
> routing in the edge, whether you forward in IP or MPLS.
>
> Mark.
>
>


Re: [EXT] Xfinity (both ends) - can't ping from users home to office

2020-06-19 Thread Chuck Anderson
On Thu, Jun 18, 2020 at 11:57:12PM +, Spencer Coplin wrote:
> I have a client that is unable to ping his office Comcast Business connection 
> from his home Xfinity connection. It was working a month ago and we can 
> confirm that his connection works over his iPhone's hotspot. I am able to
> ping from my own Xfinity residential (same city) without issues. I suspect
> something in the routing from his home connection is messed up as 
> tracert/ping can't even resolve his office's IP. The client called Xfinity 
> residential support and they blamed the inability to ping the IP on his 
> office vpn connection.
> 
> Has anyone else hit this and if so, how was it resolved?

Sort of.  Try having him change the MAC address of his residential CPE router 
so the BNG sees it as a "new" device.  In my case that resolved connectivity 
issues after Comcast did some "network maintenance" and broke everything for me.
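One way to sketch the MAC-change step: generate a fresh locally administered, unicast MAC for the CPE to present, so the BNG sees a "new" device. This is a hedged illustration only; whether your CPE lets you set it via `ip link`, a web-UI "MAC clone" field, or not at all varies by model:

```python
import random

def new_local_mac() -> str:
    """Generate a random locally administered, unicast MAC address.
    First octet: set bit 1 (locally administered), clear bit 0 (unicast),
    so it never collides with a vendor-assigned address."""
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join("%02x" % o for o in octets)

mac = new_local_mac()
```

On a Linux-based router you might then apply it with something like `ip link set dev wan0 address <mac>` (interface name assumed here).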

Alternatively, have you tried working this issue from the Business side?  You'd 
probably get more leverage that way.


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 16:45, Masataka Ohta wrote:

> The problem of MPLS, or label switching in general, is that, though
> it was advertised to be topology driven to scale better than flow
> driven, it is actually flow driven with poor scalability.
>
> Thus, it is impossible to deploy any technology scalably over MPLS.
>
> MPLS was considered to scale because it supports nested labels
> corresponding to a hierarchical, thus scalable, routing table.
>
> However, to assign nested labels at the source, the source
> must know the hierarchical routing table at the destination, even
> though the source only knows the hierarchical routing table at
> the source itself.
>
> So, the routing table must be flat, which does not scale, or
> the source must detect flows to somehow request the hierarchical
> destination routing table on demand, which means MPLS is flow
> driven.
>
> People, including some data center people, avoiding MPLS, know
> network scalability better than those deploying MPLS.
>
> It is true that some performance improvement is possible with
> label switching in flow-driven ways, if flows are manually
> detected. But that means extra label-switching-capable equipment
> and administrative effort to detect flows, neither of which
> scales, and both of which cost a lot.
>
> It costs a lot less to have more plain IP routers than to insist
> on having slightly fewer MPLS routers.

I wouldn't agree.

MPLS is a purely forwarding paradigm, as is hop-by-hop IP. Even with
hop-by-hop IP, you need the edge to be routing-aware.

I wasn't at the table when the MPLS spec. was being dreamed up, but I'd
find it very hard to accept that someone drafting the idea advertised it
as being a replacement or alternative for end-to-end IP routing and
forwarding.

Whether you run MPLS or not, you will always have routing table scaling
concerns. So I'm not quite sure how that is MPLS's problem. If you can
tell me how NOT running MPLS affords you a "hierarchical, scalable"
routing table, I'm all ears.

Whether you forward in IP or in MPLS, scaling routing is an ever clear &
present concern. Where MPLS can directly mitigate that particular
concern is in the core, where you can remove BGP. But you still need
routing in the edge, whether you forward in IP or MPLS.

Mark.



Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Masataka Ohta

Mark Tinka wrote:


MPLS has been around far too long, and if you post web site content
still talking about it or show up at conferences still talking about it,
you fear that you can't sell more boxes and line cards on the back of
"just" broadening carriage pipes.


The problem of MPLS, or label switching in general, is that, though
it was advertised to be topology driven to scale better than flow
driven, it is actually flow driven with poor scalability.

Thus, it is impossible to deploy any technology scalably over MPLS.

MPLS was considered to scale because it supports nested labels
corresponding to a hierarchical, thus scalable, routing table.

However, to assign nested labels at the source, the source
must know the hierarchical routing table at the destination, even
though the source only knows the hierarchical routing table at
the source itself.

So, the routing table must be flat, which does not scale, or
the source must detect flows to somehow request the hierarchical
destination routing table on demand, which means MPLS is flow
driven.

People, including some data center people, avoiding MPLS, know
network scalability better than those deploying MPLS.

It is true that some performance improvement is possible with
label switching in flow-driven ways, if flows are manually
detected. But that means extra label-switching-capable equipment
and administrative effort to detect flows, neither of which
scales, and both of which cost a lot.

It costs a lot less to have more plain IP routers than to insist
on having slightly fewer MPLS routers.
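The hierarchical-vs-flat scaling claim above can be made concrete with a toy calculation. The numbers are purely illustrative (they sketch the arithmetic, not anyone's real topology):

```python
# A flat table needs one entry per destination at every node.
def flat_entries(destinations: int) -> int:
    return destinations

# In a k-level hierarchy with a given fanout per level, each node only
# needs one table of `fanout` entries per level it participates in.
def hierarchical_entries(fanout: int, levels: int) -> int:
    return fanout * levels

# 1,000,000 destinations arranged as 3 levels of 100-way fanout:
assert 100 ** 3 == 1_000_000
flat = flat_entries(1_000_000)        # entries per node, flat
hier = hierarchical_entries(100, 3)   # entries per node, hierarchical
```

Ohta-san's objection is that building the nested label stack at the source requires knowing the remote end's hierarchy, which this per-node arithmetic does not capture.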

Masataka Ohta


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
On Fri, Jun 19, 2020 at 10:34 AM Mark Tinka  wrote:

>
>
> On 19/Jun/20 16:09, Tim Durack wrote:
>
> >
> > It could be worse: Nexus ;-(
> >
> > There is another version of the future:
> >
> > 1. SP "Silicon One" IOS-XR
> > 2. Enterprise "Silicon One" IOS-XE
> >
> > Same hardware, different software, features, licensing model etc.
>
> All this forking weakens a vendor's position in some respects, because
> when BUs present one company as 6,000 different ones, buying with any
> consistency becomes difficult.
>
> Options are great, but when options have options, it starts to get ugly,
> quick.
>
> Ah well...
>
>
> >
> > Silicon One looks like an interesting strategy: single ASIC for fixed,
> > modular, fabric. Replace multiple internal and external ASIC family,
> > compete with merchant, whitebox, MSDC etc.
>
> That is the hope. We've been to the cinema for this one before, though.
> Quite a few times. So I'm not holding my breath.
>
>
> >
> > The Cisco 8000/8200 is not branded as NCS, which is BCM.
>
> Not all of it - remember the big pink elephant in the room, the NCS
> 6000? That is/was nPower. Again, sending customers in all sorts of
> directions with that box, where now ASR9000 and the new 8000 seem to be
> the go-to box. Someone can't make up their mind over there.
>
>
> > I asked the NCS5/55k guys why they didn't use UADP. No good answer,
> > although I suspect some big customer(s) were demanding BCM for their
> > own programming needs. Maybe there were some memory bandwidth issues
> > with UADP, which is what Q100 HBM is the answer for.
>
> When you're building boxes for one or two customers, things like this
> tend to happen.
>
> But like I've been saying for some time, the big brands competing with
> the small brands over merchant silicon doesn't make sense. If you want
> merchant silicon to reduce cost, you're better off playing with the new
> brands that will charge less and be more flexible. While I do like IOS
> XR and Junos, paying a premium for them for a chip that will struggle
> the same way across all vendor implementations just doesn't track.
>
> Mark.
>
>
Yes, for sure the NCS6K was a completely different beast, much like the NCS1K
and NCS2K. Not sure why the NCS naming was adopted vs. ASR, and then dropped
for the 8000/8200. Probably lots of battles within the Cisco conglomerate.

Not defending, just observing. Either way, networks got to get built,
debugged, maintained, debugged, upgraded, debugged. All while improving
performance, managing CAPEX, reducing OPEX.

-- 
Tim:>


RE: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread adamv0025
> Saku Ytti
> Sent: Friday, June 19, 2020 8:50 AM
> 
> On Fri, 19 Jun 2020 at 10:24, Radu-Adrian Feurdean  adrian.feurdean.net> wrote:
> 
> 
> > > I don't understand the point of SRv6. What equipment can support
> > > IPv6 routing, but can't support MPLS label switching?
> >
> Maybe these inferior technologies will win, due to the marketing strength of
> DC solutions. Or maybe DC will later figure out the fundamental aspect in
> tunneling cost, and invent even-better-MPLS, which is entirely possible now
> that we have a bit more understanding how we use MPLS.
>
Looking back at history (VXLAN, or Google's Espresso "architecture"), I'm not
holding my breath for anything reasonable coming out of the DC camp.

adam
 



Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 16:09, Tim Durack wrote:

>
> It could be worse: Nexus ;-(
>
> There is another version of the future:
>
> 1. SP "Silicon One" IOS-XR
> 2. Enterprise "Silicon One" IOS-XE
>
> Same hardware, different software, features, licensing model etc.

All this forking weakens a vendor's position in some respects, because
when BUs present one company as 6,000 different ones, buying with any
consistency becomes difficult.

Options are great, but when options have options, it starts to get ugly,
quick.

Ah well...


>
> Silicon One looks like an interesting strategy: single ASIC for fixed,
> modular, fabric. Replace multiple internal and external ASIC family,
> compete with merchant, whitebox, MSDC etc.

That is the hope. We've been to the cinema for this one before, though.
Quite a few times. So I'm not holding my breath.


>
> The Cisco 8000/8200 is not branded as NCS, which is BCM.

Not all of it - remember the big pink elephant in the room, the NCS
6000? That is/was nPower. Again, sending customers in all sorts of
directions with that box, where now ASR9000 and the new 8000 seem to be
the go-to box. Someone can't make up their mind over there.


> I asked the NCS5/55k guys why they didn't use UADP. No good answer,
> although I suspect some big customer(s) were demanding BCM for their
> own programming needs. Maybe there were some memory bandwidth issues
> with UADP, which is what Q100 HBM is the answer for.

When you're building boxes for one or two customers, things like this
tend to happen.

But like I've been saying for some time, the big brands competing with
the small brands over merchant silicon doesn't make sense. If you want
merchant silicon to reduce cost, you're better off playing with the new
brands that will charge less and be more flexible. While I do like IOS
XR and Junos, paying a premium for them for a chip that will struggle
the same way across all vendor implementations just doesn't track.

Mark.



Re: Issues with Verizon FiOS

2020-06-19 Thread Izzy Goldstein - TeleGo
I got word from a Verizon engineer:

"Problem was isolated and fixed yesterday around noon."

On Fri, Jun 19, 2020 at 5:17 AM Dovid Bender  wrote:

> Can someone from Verizon contact me off the list. Our customers as well as
> many others are experiencing packet loss on our Voice products
>
> https://puck.nether.net/pipermail/outages/2020-June/013133.html
> https://puck.nether.net/pipermail/outages/2020-June/013136.html
>
> TIA
>
>
>


-- 

Izzy Goldstein

Chief Technology Officer

Main: (212) 477-1000 x 2085

Direct: (929) 477-2085

Website: www.telego.com 







Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
On Fri, Jun 19, 2020 at 9:05 AM Mark Tinka  wrote:

>
>
> On 19/Jun/20 14:50, Tim Durack wrote:
>
> > If y'all can deal with the BU, the Cat9k family is looking
> > half-decent: MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS)
> > etc.
> > UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA
> > license now covers software support...
> >
> > Of course you do have to deal with a BU that lives in a parallel
> > universe (SDA, LISP, NEAT etc) - but the hardware is the right
> > price-perf, and IOS-XE is tolerable.
> >
> > No large FIB today, but Cisco appears to be headed towards "Silicon
> > One" for all of their platforms: RTC ASIC strapped over some HBM. The
> > strategy is interesting: sell it as a chip, sell it whitebox, sell it
> > fully packaged.
> >
> > YMMV
>
> I'd like to hear what Gert thinks, though. I'm sure he has a special
> place for the word "Catalyst" :-).
>
> Oddly, if Silicon One is Cisco's future, that means IOS XE may be headed
> for the guillotine, in which case investing any further into an IOS XE
> platform could be dicey at best, egg-face at worst.
>
> I could be wrong...
>
> Mark.
>
>
It could be worse: Nexus ;-(

There is another version of the future:

1. SP "Silicon One" IOS-XR
2. Enterprise "Silicon One" IOS-XE

Same hardware, different software, features, licensing model etc.

Silicon One looks like an interesting strategy: single ASIC for fixed,
modular, fabric. Replace multiple internal and external ASIC family,
compete with merchant, whitebox, MSDC etc.

The Cisco 8000/8200 is not branded as NCS, which is BCM. I asked the
NCS5/55k guys why they didn't use UADP. No good answer, although I suspect
some big customer(s) were demanding BCM for their own programming needs.
Maybe there were some memory bandwidth issues with UADP, which is what Q100
HBM is the answer for.

-- 
Tim:>


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 15:24, steve ulrich wrote:

> never underestimate the desire of product managers and engineering teams to 
> have their own petri dishes to swim around in. 

Probably what got us (and keeps getting us) into this mess to begin with.

Mark.


Xfinity (both ends) - can't ping from users home to office

2020-06-19 Thread Spencer Coplin
Hey, is anyone on here able to check routing for Xfinity residential to Xfinity 
business?


I have a client that is unable to ping his office Comcast Business connection 
from his home Xfinity connection. It was working a month ago and we can confirm 
that his connection works over his iPhone's hotspot. I am able to ping from my
own Xfinity residential (same city) without issues. I suspect something in the
routing from his home connection is messed up as tracert/ping can't even 
resolve his office's IP. The client called Xfinity residential support and they 
blamed the inability to ping the IP on his office vpn connection.



Has anyone else hit this and if so, how was it resolved?



Thank you in advance!

Thank you,
Spencer Coplin


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 14:50, Tim Durack wrote:

> If y'all can deal with the BU, the Cat9k family is looking
> half-decent: MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS)
> etc.
> UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA
> license now covers software support...
>
> Of course you do have to deal with a BU that lives in a parallel
> universe (SDA, LISP, NEAT etc) - but the hardware is the right
> price-perf, and IOS-XE is tolerable.
>
> No large FIB today, but Cisco appears to be headed towards "Silicon
> One" for all of their platforms: RTC ASIC strapped over some HBM. The
> strategy is interesting: sell it as a chip, sell it whitebox, sell it
> fully packaged.
>
> YMMV

I'd like to hear what Gert thinks, though. I'm sure he has a special
place for the word "Catalyst" :-).

Oddly, if Silicon One is Cisco's future, that means IOS XE may be headed
for the guillotine, in which case investing any further into an IOS XE
platform could be dicey at best, egg-face at worst.

I could be wrong...

Mark.



Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
If y'all can deal with the BU, the Cat9k family is looking half-decent:
MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS) etc.
UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA license
now covers software support...

Of course you do have to deal with a BU that lives in a parallel universe
(SDA, LISP, NEAT etc) - but the hardware is the right price-perf, and
IOS-XE is tolerable.

No large FIB today, but Cisco appears to be headed towards "Silicon One"
for all of their platforms: RTC ASIC strapped over some HBM. The strategy
is interesting: sell it as a chip, sell it whitebox, sell it fully packaged.

YMMV

On Fri, Jun 19, 2020 at 7:40 AM Mark Tinka  wrote:

> I think it's less about just the forwarding chips and more about an entire
> solution that someone can go and buy without having to fiddle with it.
>
> You remember the saying, "Gone are the days when men were men and wrote
> their own drivers"? Well, running a network is a full-time job, without
> having to learn how to code for hardware and protocols.
>
> There are many start-ups that are working off of commodity chips and
> commodity face plates. Building software for those disparate hardware
> systems, and then developing the software so that it can be used in
> commercial deployments is non-trivial. That is the leverage Cisco, Juniper,
> Nokia... even Huawei, have, and they won't let us forget it.
>
> Then again, if one's vision is bold enough, they could play the long game,
> start now, patiently build, and then come at us in 8 or so years. Because
> the market, surely, can't continue at the rate we are currently going.
> Everything else around us is dropping in price and revenue, and yet
> traditional routing and switching equipment continues to stay the same, if
> not increase. That's broken!
>
> Mark.
>
> On 19/Jun/20 13:25, Robert Raszuk wrote:
>
> But talking about commodity isn't this mainly Broadcom ? And is there
> single chip there which does not support line rate IP ? Or is there any
> chip which supports MPLS and cost less then IP/MPLS one ?
>
> On Fri, Jun 19, 2020 at 1:22 PM Benny Lyne Amorsen via cisco-nsp 
>  wrote:
>
>
> -- Forwarded message --
> From: Benny Lyne Amorsen  
> To: cisco-...@puck.nether.net
> Cc:
> Bcc:
> Date: Fri, 19 Jun 2020 13:12:06 +0200
> Subject: Re: [c-nsp] Devil's Advocate - Segment Routing, Why?
> Saku Ytti   writes:
>
>
> This is simply not fundamentally true, it may be true due to market
> perversion. But give student homework to design label switching chip
> and IPv6 switching chip, and you'll use less silicon for the label
> switching chip. And of course you spend less overhead on the tunnel.
>
> What you say is obviously true.
>
> However, no one AFAIK makes an MPLS switch at prices comparable to basic
> layer 3 IPv6 switches. You can argue that it is a market failure as much
> as you want, but I can only buy what is on the market. According to the
> market, MPLS is strictly Service Provider, with the accompanying Service
> Provider markup (and then ridiculous discounts to make the prices seem
> reasonable). Enterprise and datacenter are not generally using MPLS, and
> I can only look on in envy at the prices of their equipment.
>
> There is room for a startup to rethink the service provider market by
> using commodity enterprise equipment. Right now that means dumping MPLS,
> since that is only available (if at all) at the most expensive license
> level. Meanwhile you can get low-scale BGPv6 and line-speed GRE with
> commodity hardware without extra licenses.
>
> I am not saying that it will be easy to manage such a network, the
> tooling for MPLS is vastly superior. I am merely saying that with just a
> simple IPv6-to-the-edge network you can deliver similar services to an
> MPLS-to-the-edge network at lower cost, if you can figure out how to
> build the tooling.
>
> Per-packet overhead is hefty. Is that a problem today?
>
>
>
>
> ___
> cisco-nsp mailing list  
> cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
> ___
> cisco-nsp mailing list  
> cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> .
>
>
>

-- 
Tim:>


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka
I think it's less about just the forwarding chips and more about an
entire solution that someone can go and buy without having to fiddle
with it.

You remember the saying, "Gone are the days when men were men and wrote
their own drivers"? Well, running a network is a full-time job, without
having to learn how to code for hardware and protocols.

There are many start-ups that are working off of commodity chips and
commodity face plates. Building software for those disparate hardware
systems, and then developing the software so that it can be used in
commercial deployments is non-trivial. That is the leverage Cisco,
Juniper, Nokia... even Huawei, have, and they won't let us forget it.

Then again, if one's vision is bold enough, they could play the long
game, start now, patiently build, and then come at us in 8 or so years.
Because the market, surely, can't continue at the rate we are currently
going. Everything else around us is dropping in price and revenue, and
yet traditional routing and switching equipment continues to stay the
same, if not increase. That's broken!

Mark.

On 19/Jun/20 13:25, Robert Raszuk wrote:
> But talking about commodity isn't this mainly Broadcom ? And is there
> single chip there which does not support line rate IP ? Or is there any
> chip which supports MPLS and cost less then IP/MPLS one ?
>
> On Fri, Jun 19, 2020 at 1:22 PM Benny Lyne Amorsen via cisco-nsp <
> cisco-...@puck.nether.net> wrote:
>
>>
>>
>> -- Forwarded message --
>> From: Benny Lyne Amorsen 
>> To: cisco-...@puck.nether.net
>> Cc:
>> Bcc:
>> Date: Fri, 19 Jun 2020 13:12:06 +0200
>> Subject: Re: [c-nsp] Devil's Advocate - Segment Routing, Why?
>> Saku Ytti  writes:
>>
>>> This is simply not fundamentally true, it may be true due to market
>>> perversion. But give student homework to design label switching chip
>>> and IPv6 switching chip, and you'll use less silicon for the label
>>> switching chip. And of course you spend less overhead on the tunnel.
>> What you say is obviously true.
>>
>> However, no one AFAIK makes an MPLS switch at prices comparable to basic
>> layer 3 IPv6 switches. You can argue that it is a market failure as much
>> as you want, but I can only buy what is on the market. According to the
>> market, MPLS is strictly Service Provider, with the accompanying Service
>> Provider markup (and then ridiculous discounts to make the prices seem
>> reasonable). Enterprise and datacenter are not generally using MPLS, and
>> I can only look on in envy at the prices of their equipment.
>>
>> There is room for a startup to rethink the service provider market by
>> using commodity enterprise equipment. Right now that means dumping MPLS,
>> since that is only available (if at all) at the most expensive license
>> level. Meanwhile you can get low-scale BGPv6 and line-speed GRE with
>> commodity hardware without extra licenses.
>>
>> I am not saying that it will be easy to manage such a network, the
>> tooling for MPLS is vastly superior. I am merely saying that with just a
>> simple IPv6-to-the-edge network you can deliver similar services to an
>> MPLS-to-the-edge network at lower cost, if you can figure out how to
>> build the tooling.
>>
>> Per-packet overhead is hefty. Is that a problem today?
>>
>>
>>
>>
>> ___
>> cisco-nsp mailing list  cisco-...@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>
> ___
> cisco-nsp mailing list  cisco-...@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> .
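For scale on the per-packet overhead question raised in the quoted message, a quick back-of-the-envelope comparison. The header sizes are the textbook minimums (real encapsulations may carry more, e.g. GRE key/sequence fields):

```python
def overhead_pct(extra_bytes: int, payload: int = 512) -> float:
    """Added header bytes as a percentage of a given payload size."""
    return 100.0 * extra_bytes / payload

MPLS_LABEL = 4       # one 32-bit label stack entry
GRE_V4 = 20 + 4      # outer IPv4 header + basic GRE header
GRE_V6 = 40 + 4      # outer IPv6 header + basic GRE header

mpls = overhead_pct(2 * MPLS_LABEL)  # transport + service label on a 512B packet
gre4 = overhead_pct(GRE_V4)
gre6 = overhead_pct(GRE_V6)
```

On a 512-byte packet, a two-label MPLS stack costs well under 2%, while GRE over IPv6 costs several times that; whether that matters on today's links is exactly the question being asked.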



Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Saku Ytti
On Fri, 19 Jun 2020 at 14:26, Mark Tinka  wrote:

> > ASR9k also has low and high scale cards, we buy the low scale, even
> > for edge. But even low scale is pretty high scale in this context.
>
> You're probably referring to the TR vs. SE line cards, yes?

I do, correct.

-- 
  ++ytti


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 13:11, Saku Ytti wrote:


> ASR9k also has low and high scale cards, we buy the low scale, even
> for edge. But even low scale is pretty high scale in this context.

You're probably referring to the TR vs. SE line cards, yes?

We would also buy TR line cards for high-touch use-cases, because, as
you say, that is pretty high scale as well, already :-).

I think the SE line cards are meant for high-capacity BNG terminations
in a single chassis, per line card. But with most IP traffic a network
is carrying being best-effort and capacity available all over the place,
what's the point anymore?


>
> I think there would be a market for on-chip-only LSR/core cards.
> Ultimately, if you design for that day 1, it won't add a lot of cost for
> you. Yes, the BOM difference may not be that great, because ultimately
> silicon is cheap and the costs are in NRE. But the on-chip-only cards
> would have a bit more ports, and they'd have better availability due to
> fewer component failures.
> I would fight very hard for us to buy such cards in the core, if a vendor
> would offer them.

Same here.

For now, we are exploring the PTX1000/10002 machinery. I know fixed form
factor boxes for a core are a bit strange, but looking at how far the
tech has come, I'm feeling a lot more bullish about having a core box
that isn't redundant in control planes, data planes, and all the rest.
Just power supplies :-).

A lot of money to be saved if the idea works.


> Like the trio-in-pci full open, I think this is just marketing
> failure. Vendors are wearing their DC glasses and don't see anything
> else. While the big-name DC are lost cause, AMZN employs more chip
> designers than JNPR, matter of time before JNPR loses amzn edge too to
> internal amzn product.
>
> Someone with bit bolder vision on what the market could be, may have
> real opportunity here.

This is what I've been telling the vendors since about 2017, both IP and
Transport. It's only a matter of time before the level of development
happening in the cloud + content houses far surpasses what the vendors
are doing, if not already. Why spend all your time, energy and resources
on a market that isn't going to find use for your NRE, and can probably
out-code  and out-research you with one hand tied behind their back, any
day?

The only reason they aren't building their own core routers yet is
because that's an area where they can still afford to offload
development time to the vendors. The edge is where they need to badly
optimize, and they will do it a lot more efficiently than the vendors
can, in time, if not already.

When I saw one cloud provider present their idea on collapsing an entire
DWDM line card into an optic that they can code for and stick into a
server, I saw the DWDM's vendor's hearts sink right across the table
where I sat. This was 3 years ago.

The vendors need, as you say, someone with a bold enough vision to pivot
the tradition.

Mark.



Issues with Verizon FiOS

2020-06-19 Thread Dovid Bender
Can someone from Verizon contact me off the list. Our customers as well as
many others are experiencing packet loss on our Voice products

https://puck.nether.net/pipermail/outages/2020-June/013133.html
https://puck.nether.net/pipermail/outages/2020-June/013136.html

TIA


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 10:57, Radu-Adrian Feurdean wrote:

>
> Yes, exactly that one.
> Which also happens to spill outside the DC area, because the main "vertical" 
> allows it to be sold at lower prices.

These days, half the gig is filtering the snake oil.

Mark.


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka


On 19/Jun/20 10:40, Saku Ytti wrote:

> Maybe this is fundamental and unavoidable, maybe some systematic bias
> in human thinking drives us towards simple software and complex
> hardware.
>
> Is there an alternative future, where we went with Itanium? Where we
> have simple hardware and an increasingly complex compiler and
> increasingly complex runtime making sure the program runs fast on that
> simple hardware?
> Instead we have two guys in tel aviv waking up in night terror every
> night over confusion why does x86 run any code at all, how come it
> works. And I'm at home 'hehe new intc make program go fast:)))'
>
> Now that we have comparatively simple compilers and often no runtime
> at all, the hardware has to optimise the shitty program for us, but as
> we don't get to see how the sausage is made, we think it's probably
> something that is well done, robust and correct. If we'd do this in
> software, we'd all have to suffer how fragile the compiler and runtime
> are and how unapproachable they are.

So this brings back a discussion you and I had last year about a
scenario where the market shifts toward open vendor in-house silicon,
sold as a PCI card one can stick in a server.

Trio, ExpressPlus, Lightspeed, Silicon One, Cylon, QFP, etc., with
open specs, so that folk can code for them and see what happens.

At the moment, everyone is coding for x86 as an NPU, and we know that
path is not the cheapest or most efficient for packet forwarding.

Vendors may feel a little skittish about "giving away" their IP, but I
don't think it's an issue because:

  * The target market is folk currently coding for x86 CPUs to run as
NPUs.

  * No one is about to run 100Gbps backbones on a PCI card. But hey, the
world does surprise :-).

  * Writing code for forwarding traffic as well as control plane
protocols is not easy. Buying a finished product from an equipment
vendor will be the low-hanging fruit for most of the landscape.

It potentially also has the positive side effect of getting Broadcom to
raise their game, which would make them a more viable option for
operators with significant high-touch requirements.

As we used to say in Vladivostok, "It could be a win win" :-).

Mark.


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Radu-Adrian Feurdean
On Fri, Jun 19, 2020, at 10:11, Mark Tinka wrote:
> 
> On 19/Jun/20 09:20, Radu-Adrian Feurdean wrote:
> >
> > A whole ocean of "datacenter" hardware, from pretty much every vendor.
> 
> You mean the ones deliberately castrated so that we can create a
> specific "DC vertical", even if they are, pretty much, the same box a
> service provider will buy, just given a darker color so it can glow more
> brightly in the data centre night?

Yes, exactly that one.
Which also happens to spill outside the DC area, because the main "vertical" 
allows it to be sold at lower prices.


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Saku Ytti
On Fri, 19 Jun 2020 at 11:35, Mark Tinka wrote:


> > So instead of making it easy for software to generate MPLS packets, we
> > are making it easy for hardware to generate complex IP packets.
> > Bizarre, but somewhat rational if you start from compute looking out
> > to networks, instead of starting from networks.
>
> Which I totally appreciate and, fundamentally, have nothing against.
>
> My concern is when we, service providers, start to get affected because
> equipment manufacturers need to follow the data centre money hard, often
> at our expense. This is not only in the IP world, but also in the
> Transport world, where service providers are having to buy DWDM gear
> formatted for DCI. Yes, it does work, but it's not without its
> eccentricities.
>
> Cycling, over the past decade, between TRILL, OTV, SPB, FabricPath,
> VXLAN, NV-GRE, ACI... and perhaps even EVPN, there is probably a lesson
> to be learned.

Maybe this is fundamental and unavoidable, maybe some systematic bias
in human thinking drives us towards simple software and complex
hardware.

Is there an alternative future, where we went with Itanium? Where we
have simple hardware and an increasingly complex compiler and
increasingly complex runtime making sure the program runs fast on that
simple hardware?
Instead we have two guys in Tel Aviv waking up in night terror every
night over confusion why does x86 run any code at all, how come it
works. And I'm at home 'hehe new intc make program go fast:)))'

Now that we have comparatively simple compilers and often no runtime
at all, the hardware has to optimise the shitty program for us, but as
we don't get to see how the sausage is made, we think it's probably
something that is well done, robust and correct. If we'd do this in
software, we'd all have to suffer how fragile the compiler and runtime
are and how unapproachable they are.

-- 
  ++ytti


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 10:18, Saku Ytti wrote:

> I need to give a little bit of credit to DC people. If your world is
> compute and you are looking out to networks, MPLS _is hard_; it's
> _harder_ to generate MPLS packets in Linux than an arbitrarily complex IP
> stack. Now instead of fixing that in the OS stack, to have a great
> ecosystem in software to deal with MPLS, which is easy to fix, we are
> fixing it in silicon, which is hard and expensive to fix.
>
> So instead of making it easy for software to generate MPLS packets, we
> are making it easy for hardware to generate complex IP packets.
> Bizarre, but somewhat rational if you start from compute looking out
> to networks, instead of starting from networks.

Which I totally appreciate and, fundamentally, have nothing against.

My concern is when we, service providers, start to get affected because
equipment manufacturers need to follow the data centre money hard, often
at our expense. This is not only in the IP world, but also in the
Transport world, where service providers are having to buy DWDM gear
formatted for DCI. Yes, it does work, but it's not without its
eccentricities.

Cycling, over the past decade, between TRILL, OTV, SPB, FabricPath,
VXLAN, NV-GRE, ACI... and perhaps even EVPN, there is probably a lesson
to be learned.

Mark.



Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Saku Ytti
On Fri, 19 Jun 2020 at 11:03, Mark Tinka wrote:

> MPLS has been around far too long, and if you post web site content
> still talking about it or show up at conferences still talking about it,
> you fear that you can't sell more boxes and line cards on the back of
> "just" broadening carriage pipes.
>
> So we need to invent a new toy around which we can wrap a story about
> "adding value to your network" to "drive new business" and "reduce
> operating costs", to entice money to leave wallets, when all that's
> really still happening is the selling of more boxes and line cards, so
> that we can continue to broaden carriage pipes.

I need to give a little bit of credit to DC people. If your world is
compute and you are looking out to networks, MPLS _is hard_; it's
_harder_ to generate MPLS packets in Linux than an arbitrarily complex IP
stack. Now instead of fixing that in the OS stack, to have a great
ecosystem in software to deal with MPLS, which is easy to fix, we are
fixing it in silicon, which is hard and expensive to fix.

So instead of making it easy for software to generate MPLS packets, we
are making it easy for hardware to generate complex IP packets.
Bizarre, but somewhat rational if you start from compute looking out
to networks, instead of starting from networks.


>
> There are very few things that have been designed well from scratch, and
> stand the test of time regardless of how much wind is thrown at them.
> MPLS is one of those things, IMHO. Nearly 20 years to the day since
> inception, and I still can't find another packet forwarding technology
> that remains as relevant and versatile as it is simple.
>
> Mark.
>


--
  ++ytti
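[Editor's note: the point above about MPLS being "hard" in software is less about the encapsulation itself, which is tiny: one MPLS label stack entry is just four bytes (20-bit label, 3-bit traffic class, bottom-of-stack bit, 8-bit TTL, per RFC 3032). A minimal illustrative sketch in Python, not tied to any particular OS stack:]

```python
import struct

def mpls_lse(label: int, tc: int = 0, bos: bool = True, ttl: int = 64) -> bytes:
    """Pack one MPLS label stack entry (RFC 3032):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < 2**20, "label field is 20 bits"
    word = (label << 12) | (tc << 9) | (int(bos) << 8) | ttl
    return struct.pack(">I", word)

# A two-label stack: transport label 100 (not bottom), service label 200 (bottom).
stack = mpls_lse(100, bos=False) + mpls_lse(200, bos=True)
print(stack.hex())  # → 00064040000c8140
```

The difficulty Saku refers to is the ecosystem around this (kernel routes, sockets, tooling), not the bit-packing.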


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 09:50, Saku Ytti wrote:


> I'm sure such devices exist; I can't name any off the top of my head. But
> this market perversion is caused by DC people who did not understand
> networks and suffer from not-invented-here. Everyone needs a tunnel
> solution, but DC people decided before looking into or understanding
> the topic that MPLS is bad and complex, let's invent something new.
> Then we re-invented solutions that already had _MORE_ efficient
> solutions in MPLS, and a lot of those technologies are now becoming
> established in DC space, creating confusion in SP space.
>
> Maybe these inferior technologies will win, due to the marketing
> strength of DC solutions. Or maybe DC will later figure out the
> fundamental aspect in tunneling cost, and invent even-better-MPLS,
> which is entirely possible now that we have a bit more understanding
> how we use MPLS.

Let me work out how to print all this on a t-shirt.

Mark.


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka




On 19/Jun/20 09:20, Radu-Adrian Feurdean wrote:
>
> A whole ocean of "datacenter" hardware, from pretty much every vendor.

You mean the ones deliberately castrated so that we can create a
specific "DC vertical", even if they are, pretty much, the same box a
service provider will buy, just given a darker color so it can glow more
brightly in the data centre night?

Mark.


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Mark Tinka



On 19/Jun/20 09:32, Saku Ytti wrote:


> We need to decide if we are discussing a specific market situation or
> fundamentals. Ideally we'd drive the market to what is fundamentally
> most efficient, so that we pay the least amount of the kit that we
> use. If we drive SRv6, we drive cost up, if we drive MPLS, we drive
> cost down.
>
> Even today in many cases you can take a cheap L2 chip, and make it an
> MPLS switch, due to them supporting VLAN swap, which has no clue of
> IPv6 or IPv4.

We need a new toy.

MPLS has been around far too long, and if you post web site content
still talking about it or show up at conferences still talking about it,
you fear that you can't sell more boxes and line cards on the back of
"just" broadening carriage pipes.

So we need to invent a new toy around which we can wrap a story about
"adding value to your network" to "drive new business" and "reduce
operating costs", to entice money to leave wallets, when all that's
really still happening is the selling of more boxes and line cards, so
that we can continue to broaden carriage pipes.

There are very few things that have been designed well from scratch, and
stand the test of time regardless of how much wind is thrown at them.
MPLS is one of those things, IMHO. Nearly 20 years to the day since
inception, and I still can't find another packet forwarding technology
that remains as relevant and versatile as it is simple.

Mark.
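[Editor's note: the quoted VLAN-swap observation works because an 802.1Q tag and an MPLS label stack entry are both fixed 4-byte fields sitting right after the Ethernet MAC addresses, so silicon that can rewrite one in place can, in principle, rewrite the other. A hedged sketch of the two layouts:]

```python
import struct

# 802.1Q tag: 16-bit TPID (0x8100) | 3-bit PCP | 1-bit DEI | 12-bit VID.
def vlan_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    return struct.pack(">HH", 0x8100, (pcp << 13) | (dei << 12) | vid)

# MPLS label stack entry: 20-bit label | 3-bit TC | 1-bit S | 8-bit TTL.
def mpls_entry(label: int, tc: int = 0, s: int = 1, ttl: int = 64) -> bytes:
    return struct.pack(">I", (label << 12) | (tc << 9) | (s << 8) | ttl)

# Both are 4-byte rewrites at a fixed offset: mechanically, an MPLS
# label swap is the same operation as a VLAN tag swap.
assert len(vlan_tag(100)) == len(mpls_entry(100)) == 4
```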



Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Saku Ytti
On Fri, 19 Jun 2020 at 10:24, Radu-Adrian Feurdean wrote:


> > I don't understand the point of SRv6. What equipment can support IPv6
> > routing, but can't support MPLS label switching?
>
> A whole ocean of "datacenter" hardware, from pretty much every vendor. Because
> many of them automatically link MPLS to RSVP and IPv4 L3VPN (which may still
> be an interesting feature in a datacenter), many try to stay as far away from
> it as possible. Other than some scared C-level guys imposing this, I don't
> really see a good reason (lack of market demand, which is sometimes invoked,
> doesn't stand).

I'm sure such devices exist; I can't name any off the top of my head. But
this market perversion is caused by DC people who did not understand
networks and suffer from not-invented-here. Everyone needs a tunnel
solution, but DC people decided before looking into or understanding
the topic that MPLS is bad and complex, let's invent something new.
Then we re-invented solutions that already had _MORE_ efficient
solutions in MPLS, and a lot of those technologies are now becoming
established in DC space, creating confusion in SP space.

Maybe these inferior technologies will win, due to the marketing
strength of DC solutions. Or maybe DC will later figure out the
fundamental aspect in tunneling cost, and invent even-better-MPLS,
which is entirely possible now that we have a bit more understanding
how we use MPLS.

-- 
  ++ytti
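[Editor's note: the "fundamental aspect in tunneling cost" is easy to put numbers on. Each MPLS label adds 4 bytes; SRv6 adds a 40-byte outer IPv6 header plus, when a full Segment Routing Header is carried (RFC 8754), 8 fixed bytes and 16 bytes per SID. A back-of-the-envelope comparison, ignoring reduced/compressed SID encodings:]

```python
def mpls_overhead(labels: int) -> int:
    # 4 bytes per MPLS label stack entry.
    return 4 * labels

def srv6_overhead(sids: int) -> int:
    # Outer IPv6 header (40) + fixed SRH (8) + one 128-bit SID each.
    return 40 + 8 + 16 * sids

for n in (1, 3, 5):
    print(f"{n} segment(s): MPLS {mpls_overhead(n)} B, SRv6 {srv6_overhead(n)} B")
# A 3-segment path: 12 bytes of MPLS vs 96 bytes of SRv6 encapsulation.
```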


Re: Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Radu-Adrian Feurdean
On Wed, Jun 17, 2020, at 20:40, Dave Bell wrote:
> 
> I don't understand the point of SRv6. What equipment can support IPv6 
> routing, but can't support MPLS label switching?

A whole ocean of "datacenter" hardware, from pretty much every vendor. Because
many of them automatically link MPLS to RSVP and IPv4 L3VPN (which may still be
an interesting feature in a datacenter), many try to stay as far away from it as
possible. Other than some scared C-level guys imposing this, I don't really see
a good reason (lack of market demand, which is sometimes invoked, doesn't
stand).

> where needed. LDP-SR interworking is pretty simple.

Mapping servers, or something else?