Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread John Osmon
On Sat, Jul 14, 2018 at 08:54:25AM -0600, Miles Fidelman wrote:
[...]
> I find myself driving down Route 66.  On our way through Arizona, I
> was surprised by what look like a lot of old-style microwave links. 
> They pretty much follow the East-West rail line - where I'd expect
> there's a lot of fiber buried.

Not a lot of fiber.  If there was, the following wouldn't be needed:
   
https://uto.asu.edu/sun-corridor-network-project-plans-broadband-access-along-i-40



There *are* a number of fiber builds in the area, but none of them form a
coherent build across the region.


Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Tim Pozar
Most of these horns are for 6GHz.  I have had friends who have
"appropriated" some of them by adding a waveguide-to-N adapter and using
them for the 5.8GHz ISM band with some minor re-aiming. Kick-ass antenna gain.

Tim

On 7/14/18 4:37 PM, Miles Fidelman wrote:
> Looks like it!
> 
>  Original message 
> From: Tim Pozar 
> Date: 7/14/18 11:46 AM (GMT-07:00)
> To: Andy Ringsmuth , North American Network
> Operators' Group 
> Subject: Re: (perhaps off topic, but) Microwave Towers
> 
> Did it follow this route?
> 
> http://long-lines.net/places-routes/maps/MW6003.jpg
> 
> Tim
> 
> On 7/14/18 8:41 AM, Andy Ringsmuth wrote:
>>
>>
>>> On Jul 14, 2018, at 10:19 AM, Brian Kantor  wrote:
>>>
> I find myself driving down Route 66.  On our way through Arizona, I
> was surprised by what look like a lot of old-style microwave links. 
> They pretty much follow the East-West rail line - where I'd expect
> there's a lot of fiber buried.
>>>
>>> Could they be a legacy of the Southern Pacific Railroad Internal
> Network Telecommunications,
>>> now known under the acronym SPRINT?
>>> - Brian
>>>
>>
>> Not along Route 66 in Arizona. That generally parallels BNSF Railway,
> formerly the Santa Fe down there. Southern Pacific followed Interstate
> 10 much further south.
>>
>>
>> 
>> Andy Ringsmuth
>> a...@newslink.com
>> News Link – Manager Technology, Travel & Facilities
>> 2201 Winthrop Rd., Lincoln, NE 68502-4158
>> (402) 475-6397    (402) 304-0083 cellular
>>


Re: Linux BNG

2018-07-14 Thread Jérôme Nicolle
Hi Baldur,

On 14/07/2018 at 14:13, Baldur Norddahl wrote:
> I am investigating Linux as a BNG

As we say in France, it's like you're trying to buttfuck flies (a local
saying for "reinventing the wheel for no practical reason").

The Linux kernel networking stack is not made for this kind of job. 6WIND
or fd.io may be right on the spot, but it's still a lot of dark magic
for something that has been done over and over for the past 20 years by
most vendors.

And it just works.

DHCP (implying straight L2 from the CPE to the BNG) may be an option,
but most codebases are still young. PPP, on the other hand, is
field-tested for extremely large scale deployments with most vendors.

If I were in your shoes, and I don't say I'd want to be (my BNGs are
scaled to less than a few thousand subscribers, with 1-4 concurrent
sessions each), I'd stick to the plain old bitstream (PPP) model, with a
decent subscriber framework on my BNGs (I mostly use Juniper MXs, but I
also like Nokia's and Cisco's for some features).

But let's say we want to go forward and ditch legacy / proprietary
code to surf the NFV bullshit-wave. What would you actually need?

Linux does soft-recirculation at every encapsulation level by memory
copy. You can't scale anything with that. You need to streamline
decapsulation with 6WIND's Turbo Router or the fd.io frameworks. It'll
cost you a few thousand man-hours to implement your first prototype.

Let's say you get a working framework to treat subsequent headers on the
fly (because decapsulation is not really needed; what you want is just
to forward the payload, right?)… Well, you'd then need to address the
provisioning protocols on the same layers. Who would want to rebase a
DHCP server to cope with alien packet formats coming in? I guess no one.

Well, I could hold forth on the topic for hours, because I've already
spent months addressing such design issues in scalable ISP networks, and
the conclusion is:

- PPPoE is simple and proven. Its rigid structure alleviates most of the
dual-stack issues. It is well supported and largely deployed.
- DHCP requires hacks (in the form of undocumented options from several
vendors) to seemingly work on IPv4, but the multicast boundaries for NDP
are a PITA to handle, so no one has implemented that properly yet. It is
best avoided for now.
- Subscriber frameworks, be they Juniper's, Cisco's or Nokia's, are at the
core of the largest residential ISPs out there. It Just Works. Trust them.

That being said, I love the idea of NFV-ing all the things, starting
with BNGs, because those bricks in the wall are the most fragile ones we
have to maintain.

But I clearly won't stand up for an alternative to the traditional
offerings just yet: it's too critical, and it's a PITA to build from
scratch and scale.

Best regards,


-- 
Jérôme Nicolle
+33 6 19 31 27 14


Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Miles Fidelman
Too far North.  BNSF territory.
 Original message 
From: Brian Kantor
Date: 7/14/18 9:19 AM (GMT-07:00)
To: North American Network Operators' Group
Subject: Re: (perhaps off topic, but) Microwave Towers
> > I find myself driving down Route 66.  On our way through Arizona, I was 
> > surprised by what look like a lot of old-style microwave links.  They 
> > pretty much follow the East-West rail line - where I'd expect there's a lot 
> > of fiber buried.

Could they be a legacy of the Southern Pacific Railroad Internal Network 
Telecommunications,
now known under the acronym SPRINT?
- Brian


Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Miles Fidelman
Looks like it!
 Original message 
From: Tim Pozar
Date: 7/14/18 11:46 AM (GMT-07:00)
To: Andy Ringsmuth, North American Network Operators' Group
Subject: Re: (perhaps off topic, but) Microwave Towers
Did it follow this route?

http://long-lines.net/places-routes/maps/MW6003.jpg

Tim

On 7/14/18 8:41 AM, Andy Ringsmuth wrote:
> 
> 
>> On Jul 14, 2018, at 10:19 AM, Brian Kantor  wrote:
>>
 I find myself driving down Route 66.  On our way through Arizona, I was 
 surprised by what look like a lot of old-style microwave links.  They 
 pretty much follow the East-West rail line - where I'd expect there's a 
 lot of fiber buried.
>>
>> Could they be a legacy of the Southern Pacific Railroad Internal Network 
>> Telecommunications,
>> now known under the acronym SPRINT?
>>  - Brian
>>
> 
> Not along Route 66 in Arizona. That generally parallels BNSF Railway, 
> formerly the Santa Fe down there. Southern Pacific followed Interstate 10 
> much further south.
> 
> 
> 
> Andy Ringsmuth
> a...@newslink.com
> News Link – Manager Technology, Travel & Facilities
> 2201 Winthrop Rd., Lincoln, NE 68502-4158
> (402) 475-6397    (402) 304-0083 cellular
> 


Re: deploying RPKI based Origin Validation

2018-07-14 Thread Mark Tinka



On 14/Jul/18 14:04, Job Snijders wrote:

> I actually view it as a competitive advantage to carry a cleaner set of
> routes compared to the providers with a more permissive (or lack of)
> filtering strategy. Sometimes less is more.

Typically, I wouldn't disagree.

In practice, most customers only care about reachability, and not their
contribution to Internet hygiene. A case of "They will look after
it", where "they" is not "me".

Mark.


Re: Linux BNG

2018-07-14 Thread Denys Fedoryshchenko

On 2018-07-14 15:13, Baldur Norddahl wrote:

> Hello
>
> I am investigating Linux as a BNG. The BNG (Broadband Network Gateway)
> being the thing that acts as default gateway for our customers.
>
> The setup is one VLAN per customer. Because 4095 VLANs is not enough,
> we have QinQ with double VLAN tagging on the customers. The customers
> can use DHCP or static configuration. DHCP packets need to be option82
> tagged and forwarded to a DHCP server. Every customer has one or more
> static IP addresses.
>
> IPv4 subnets need to be shared among multiple customers to conserve
> address space. We are currently using /26 IPv4 subnets with 60
> customers sharing the same default gateway and netmask. In Linux terms
> this means 60 VLAN interfaces per bridge interface.
>
> However Linux is not quite ready for the task. The primary problem
> being that the system does not scale to thousands of VLAN interfaces.
>
> We do not want customers to be able to send non routed packets
> directly to each other (needs proxy arp). Also customers should not be
> able to steal another customers IP address. We want to hard code the
> relation between IP address and VLAN tagging. This can be implemented
> using ebtables, but we are unsure that it could scale to thousands of
> customers.
>
> I am considering writing a small program or kernel module. This would
> create two TAP devices (tap0 and tap1). Traffic received on tap0 with
> VLAN tagging, will be stripped of VLAN tagging and delivered on tap1.
> Traffic received on tap1 without VLAN tagging, will be tagged
> according to a lookup table using the destination IP address and then
> delivered on tap0. ARP and DHCP would need some special handling.
>
> This would be completely stateless for the IPv4 implementation. The
> IPv6 implementation would be harder, because Link Local addressing
> needs to be supported and that can not be stateless. The customer CPE
> will make up its own Link Local address based on its MAC address and
> we do not know what that is in advance.
>
> The goal is to support traffic of minimum of 10 Gbit/s per server.
> Ideally I would have a server with 4x 10 Gbit/s interfaces combined
> into two 20 Gbit/s channels using bonding (LACP). One channel each for
> upstream and downstream (customer facing). The upstream would be layer
> 3 untagged and routed traffic to our transit routers.
>
> I am looking for comments, ideas or alternatives. Right now I am
> considering what kind of CPU would be best for this. Unless I take
> steps to mitigate, the workload would probably go to one CPU core only
> and be limited to things like CPU cache and PCI bus bandwidth.

accel-ppp supports IPoE termination for both IPv4 and IPv6, with RADIUS 
and everything.
It is also written in such a way that it will utilize a multicore server 
efficiently (it might need some tuning, depending on the hardware).
It should handle 2x10G easily on a decent server; at 4x10G it depends on 
your hardware and how well the tuning is done.




Re: Linux BNG

2018-07-14 Thread Baldur Norddahl




On 14/07/2018 at 19.09, Raymond Burkholder wrote:

> Where do you have this happening?  Do you have aggregation switches
> doing this?  Are those already in place, or being planned?  Because I
> would make a suggestion for how to do the aggregation.


The POI (Point of Interconnect) with the incumbent telco is one customer 
per vlan using QinQ. This telco owns all the copper and runs the VDSL2 
DSLAMs. They give us a transparent ethernet tunnel to the CPE. We own 
the CPE. Internally the incumbent uses a MPLS network to transport the 
VLANs. We in turn also use MPLS with L2VPN to transport the traffic to 
one of two datacenters.


In addition we have our own FTTH network in the ground. This is GPON on 
Zhone equipment. To make things easier we made the Zhone GPON OLT 
emulate the same one VLAN per customer setup.


The incumbent has telco buildings in each city area. Typically the distance 
between buildings is 10 km. In each such building they have a room where 
alternative telcos, like us, can rent rack space. The only available 
power is -48V DC. We currently only have Zhone MXK GPON switches and ZTE 
MPLS switch equipment in these facilities.


Our current BNG solution is some big iron routers (ZTE M6000). This is a 
device that will do things like 4 million routes in hardware and move 
many Tb/s (not that we have traffic anywhere near that level). It works 
well enough but is not perfect. I think a discussion of the BNG 
limitations of the ZTE M6000 would be a different thread.


One of the problems with ZTE M6000 is the price and that goes double for 
any alternatives mentioned here (Cisco, Juniper etc). Right now I am 
facing the prospect of investing in more line cards for M6000. I can buy 
a few servers for the price of one line card and perhaps get a solution 
that is more "perfect".


As to VXLAN, the Zhone MXK cannot do it, and it is not an option for the 
POI with the incumbent. It would be an alternative to running MPLS, but 
we are happy with the MPLS solution.


I have considered OpenFlow and might do that. We have OpenFlow-capable 
switches and I may be able to offload the work to the switch hardware. 
But I also consider this solution harder to get right than the idea of 
using Linux with tap devices. Also, it appears that Open vSwitch implements 
a different flavour of OpenFlow than the hardware switch (the hardware 
is limited to some fixed tables that Broadcom made up), so I might not 
be able to start with the software and then move on to hardware.


Regards,

Baldur




RE: Linux BNG

2018-07-14 Thread tony


>The setup is one VLAN per customer. Because 4095 VLANs is not enough, we have 
>QinQ with double VLAN tagging on the customers. The customers can use DHCP or 
>static configuration. DHCP packets need to be option82 tagged and forwarded to 
>a DHCP server. Every customer has one or more static IP addresses.

What you are describing is how the national fibre network delivers customers to 
the ISPs in New Zealand (with DHCP and PPP being at the ISP's choice). 
Generally in New Zealand we have a very active Linux community, but I have to 
say I have not seen any of the service providers attempt to use Linux as a BNG 
in this way for production customers. Commonly an MPLS network is used to 
transport these QinQ layer 2 "handovers" to centralised BNGs.

These BNGs in my experience are normally Cisco (ASR1k, ASR9k), Juniper (MX, 
hardware or virtual), Nokia (7750, hardware or virtual) and a small amount of 
Mikrotik (which tends to get swapped out for the previous vendor solutions once 
scale rises past about 2000 subscribers).

As much as I appreciate Linux, I personally still see the value of the vendor 
offerings in this case (think stability and guaranteed performance). My biggest 
issue with the vendor offerings is that they are not making their virtual 
offerings (vMX, VSR) attractive enough pricing-wise at the small scale. We have 
successful virtual Juniper and Nokia BNGs in production, but pricing-wise it 
generally ends up with the service provider thinking that hardware was probably 
a better choice in the long run.



Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Tim Pozar
Did it follow this route?

http://long-lines.net/places-routes/maps/MW6003.jpg

Tim

On 7/14/18 8:41 AM, Andy Ringsmuth wrote:
> 
> 
>> On Jul 14, 2018, at 10:19 AM, Brian Kantor  wrote:
>>
 I find myself driving down Route 66.  On our way through Arizona, I was 
 surprised by what look like a lot of old-style microwave links.  They 
 pretty much follow the East-West rail line - where I'd expect there's a 
 lot of fiber buried.
>>
>> Could they be a legacy of the Southern Pacific Railroad Internal Network 
>> Telecommunications,
>> now known under the acronym SPRINT?
>>  - Brian
>>
> 
> Not along Route 66 in Arizona. That generally parallels BNSF Railway, 
> formerly the Santa Fe down there. Southern Pacific followed Interstate 10 
> much further south.
> 
> 
> 
> Andy Ringsmuth
> a...@newslink.com
> News Link – Manager Technology, Travel & Facilities
> 2201 Winthrop Rd., Lincoln, NE 68502-4158
> (402) 475-6397(402) 304-0083 cellular
> 


Re: Linux BNG

2018-07-14 Thread Grant Taylor via NANOG

I agree with all aspects.

On 07/14/2018 11:09 AM, Raymond Burkholder wrote:
> As mentioned earlier, why make the core boxes do all of the work?  Why
> not distribute the functionality out to the edge?  Rather than using
> traditional switch gear at the edge, use smaller Linux boxes to handle
> all that complicated edge manipulation, and then keep your high
> bandwidth core boxes pushing packets only.

But I do ask:

Do you (the ISP) control the CPE (modem / ONT)?  Could you push the 
VxLAN (or maybe MPLS) functionality all the way into it?


This would have the added advantage of a (presumably) trusted device 
providing the identification back to your core equipment.


Perhaps even minimal L3 routing w/ a DHCP helper such that the customer 
sees the CPE as the default gateway.  (Though this might burn a lot more 
IPs.  That might not be an issue if you're using CGNAT.)




--
Grant. . . .
unix || die





RE: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread frnkblk
Is it possibly AT&T's old network?
https://99percentinvisible.org/article/vintage-skynet-atts-abandoned-long-lines-microwave-tower-network/
http://long-lines.net/places-routes/

This network runs through our service territory, too.  The horns are 
distinctive.  

Frank

-Original Message-
From: NANOG  On Behalf Of Miles Fidelman
Sent: Saturday, July 14, 2018 9:54 AM
To: nanog@nanog.org
Subject: (perhaps off topic, but) Microwave Towers

Hi Folks,

I find myself driving down Route 66.  On our way through Arizona, I was 
surprised by what look like a lot of old-style microwave links.  They 
pretty much follow the East-West rail line - where I'd expect there's a 
lot of fiber buried.

Struck me as somewhat interesting.

It also struck me that folks here might have some comments.

Miles Fidelman

-- 
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra





Re: Linux BNG

2018-07-14 Thread Raymond Burkholder

Interspersed comments below.

On 07/14/2018 06:13 AM, Baldur Norddahl wrote:
> I am investigating Linux as a BNG. The BNG (Broadband Network Gateway)
> being the thing that acts as default gateway for our customers.
>
> The setup is one VLAN per customer. Because 4095 VLANs is not enough, we
> have QinQ with double VLAN tagging on the customers. The customers can
> use DHCP or static configuration. DHCP packets need to be option82
> tagged and forwarded to a DHCP server. Every customer has one or more
> static IP addresses.


Where do you have this happening?  Do you have aggregation switches 
doing this?  Are those already in place, or being planned?  Because I 
would make a suggestion for how to do the aggregation.


> IPv4 subnets need to be shared among multiple customers to conserve
> address space. We are currently using /26 IPv4 subnets with 60 customers
> sharing the same default gateway and netmask. In Linux terms this means
> 60 VLAN interfaces per bridge interface.


I suppose it could be made to work, but forcing a layer 3 boundary over 
a bunch of layer 2 boundaries, seems to be a bunch of work, but I 
suppose that would be the brute force and ignorance approach from the 
mechanisms you would be using.


> However Linux is not quite ready for the task. The primary problem being
> that the system does not scale to thousands of VLAN interfaces.


It probably depends upon which Linux based tooling you wish to use. 
There are some different ways of looking at this which scale better.


> We do not want customers to be able to send non routed packets directly
> to each other (needs proxy arp). Also customers should not be able to
> steal another customers IP address. We want to hard code the relation
> between IP address and VLAN tagging. This can be implemented using
> ebtables, but we are unsure that it could scale to thousands of customers.


I would consider suggesting the concepts of VxLAN (kernel plus FRR 
and/or openvswitch) or OpenFlow.(kernel plus openvswitch)


VxLAN scales to 16 million vlan equivalents.  Which is why I ask about 
your aggregation layers.  Rather than trying to do all the addressing 
across all the QinQ vlans in the core boxes, the vlans/vxlans and 
addressing are best dealt with at the edge.  Then, rather than running a 
bunch of vlans through your aggregation/distribution links, you can keep 
those resilient with a layer 3 only based strategy.
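
For a concrete sense of what that looks like on a Linux edge box, here is a
minimal iproute2 sketch, wrapped in Python for scripting. The VNI, interface
names and the underlay address are made-up examples, and with "nolearning"
the forwarding database would typically be populated by an EVPN control
plane such as FRR rather than by flooding:

    import subprocess

    def ip(*args):
        # Thin wrapper around iproute2; raises if a command fails.
        subprocess.run(["ip", *args], check=True)

    # VXLAN interface: VNI 100, standard UDP port 4789, sourced from this
    # box's underlay address (10.255.0.1 is only an example).
    ip("link", "add", "vxlan100", "type", "vxlan", "id", "100",
       "dstport", "4789", "local", "10.255.0.1", "nolearning")

    # Bridge it with a customer-facing sub-interface (name is an example) so
    # the QinQ VLAN terminates at the edge and only routed traffic crosses
    # the aggregation links.
    ip("link", "add", "br-cust", "type", "bridge")
    for dev in ("vxlan100", "eth0.1000.42"):
        ip("link", "set", dev, "master", "br-cust")
        ip("link", "set", dev, "up")
    ip("link", "set", "br-cust", "up")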


> I am considering writing a small program or kernel module. This would
> create two TAP devices (tap0 and tap1). Traffic received on tap0 with
> VLAN tagging, will be stripped of VLAN tagging and delivered on tap1.
> Traffic received on tap1 without VLAN tagging, will be tagged according
> to a lookup table using the destination IP address and then delivered on
> tap0. ARP and DHCP would need some special handling.


I don't think this would be needed.  I think all the tools are already 
available and are robust from daily use.  Free Range Routing with 
EVPN/(VxLAN|MPLS) for a traditional routing mix, or use OpenFlow tooling 
in Open vSwitch to handle the layer 2 and layer 3 rule definitions you 
have in mind.


Open vSwitch can be programmed via command line rules or can be hooked 
up to a controller of some sort.  So rather than writing your own kernel 
program, you would write rules for a controller or script which drives 
the already kernel resident engines.
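
As a rough sketch of the kind of rules meant here (single-tagged for brevity;
handling the full S-tag/C-tag pair needs VLAN push/pop actions and a newer
OpenFlow version, and the port numbers, VLAN ID and address are placeholders):

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    sh("ovs-vsctl add-br br0")
    sh("ovs-vsctl add-port br0 eth0")   # customer-facing, VLAN-tagged
    sh("ovs-vsctl add-port br0 eth1")   # upstream, untagged/routed

    # Per-customer flow: only the assigned source IP on the assigned VLAN may
    # pass, and the tag is stripped towards the uplink. OpenFlow port numbers
    # 1 (eth0) and 2 (eth1) are assumed here.
    sh("ovs-ofctl add-flow br0 "
       "priority=100,in_port=1,dl_vlan=200,ip,nw_src=100.64.1.10,"
       "actions=strip_vlan,output:2")

    # Return path: push the customer's tag back on, keyed by destination IP.
    sh("ovs-ofctl add-flow br0 "
       "priority=100,in_port=2,ip,nw_dst=100.64.1.10,"
       "actions=mod_vlan_vid:200,output:1")

    # Anything without an explicit per-customer flow is dropped, which also
    # blocks direct customer-to-customer traffic.
    sh("ovs-ofctl add-flow br0 priority=0,actions=drop")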


> This would be completely stateless for the IPv4 implementation. The IPv6
> implementation would be harder, because Link Local addressing needs to
> be supported and that can not be stateless. The customer CPE will make
> up its own Link Local address based on its MAC address and we do not
> know what that is in advance.


FRR and OVS are IPv4 and IPv6 aware. The dynamics of the CPE MAC would 
be handled in various ways, depending upon what tooling you decide upon.


> The goal is to support traffic of minimum of 10 Gbit/s per server.
> Ideally I would have a server with 4x 10 Gbit/s interfaces combined into
> two 20 Gbit/s channels using bonding (LACP). One channel each for
> upstream and downstream (customer facing). The upstream would be layer 3
> untagged and routed traffic to our transit routers.


As mentioned earlier, why make the core boxes do all of the work?  Why 
not distribute the functionality out to the edge?  Rather than using 
traditional switch gear at the edge, use smaller Linux boxes to handle 
all that complicated edge manipulation, and then keep your high 
bandwidth core boxes pushing packets only.


> I am looking for comments, ideas or alternatives. Right now I am
> considering what kind of CPU would be best for this. Unless I take steps
> to mitigate, the workload would probably go to one CPU core only and be
> limited to things like CPU cache and PCI bus bandwidth.


There is much more to write about, but those writings would depend up on 
what you already have in place, what you would like to put in place, and 
how you wish to segment your network.


Hope this helps.


> Baldur


--

Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Andy Ringsmuth



> On Jul 14, 2018, at 10:19 AM, Brian Kantor  wrote:
> 
>>> I find myself driving down Route 66.  On our way through Arizona, I was 
>>> surprised by what look like a lot of old-style microwave links.  They 
>>> pretty much follow the East-West rail line - where I'd expect there's a lot 
>>> of fiber buried.
> 
> Could they be a legacy of the Southern Pacific Railroad Internal Network 
> Telecommunications,
> now known under the acronym SPRINT?
>   - Brian
> 

Not along Route 66 in Arizona. That generally parallels BNSF Railway, formerly 
the Santa Fe down there. Southern Pacific followed Interstate 10 much further 
south.



Andy Ringsmuth
a...@newslink.com
News Link – Manager Technology, Travel & Facilities
2201 Winthrop Rd., Lincoln, NE 68502-4158
(402) 475-6397    (402) 304-0083 cellular

Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Brian Kantor
> > I find myself driving down Route 66.  On our way through Arizona, I was 
> > surprised by what look like a lot of old-style microwave links.  They 
> > pretty much follow the East-West rail line - where I'd expect there's a lot 
> > of fiber buried.

Could they be a legacy of the Southern Pacific Railroad Internal Network 
Telecommunications,
now known under the acronym SPRINT?
- Brian



Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Andy Ringsmuth


> On Jul 14, 2018, at 9:54 AM, Miles Fidelman  
> wrote:
> 
> Hi Folks,
> 
> I find myself driving down Route 66.  On our way through Arizona, I was 
> surprised by what look like a lot of old-style microwave links.  They pretty 
> much follow the East-West rail line - where I'd expect there's a lot of fiber 
> buried.
> 
> Struck me as somewhat interesting.
> 
> It also struck me that folks here might have some comments.
> 
> Miles Fidelman

I’m not 100 percent positive, but from what I recall in my time down that way 
as a contractor for $major_railroad, I believe they are or were used by the 
railroad for their communication links. They may not necessarily be in service 
any longer though. Probably one of those instances where “if it ain’t broke, 
don’t fix it.” In other words, if the tower isn’t falling down or a hazard, why 
spend the money to go remove it?

I know as recently as 2003, BNSF Railway was still using and upgrading 
microwave infrastructure in Chicago.

http://reference.newslink.com/current-pubs/CHIC/CHIC0304.pdf   (see page 2)



Andy Ringsmuth
a...@newslink.com
News Link – Manager Technology, Travel & Facilities
2201 Winthrop Rd., Lincoln, NE 68502-4158
(402) 475-6397    (402) 304-0083 cellular



Re: (perhaps off topic, but) Microwave Towers

2018-07-14 Thread Keith Stokes
There’s a lot less backhoe fade with microwave. ;-)

Kidding aside, I’m sure there are plenty of scenarios where microwave makes 
better sense than fiber especially since it’s a lot easier to clear right of 
way through the air.

Side gig has me maintaining a satellite system. Yes that still makes sense. As 
part of that I have a service that monitors people applying for microwave 
transmitters within a few hundred miles. You’d be surprised how many links are 
applied for every month.

--

Keith Stokes
Neill Technologies


> On Jul 14, 2018, at 9:56 AM, Miles Fidelman  
> wrote:
> 
> Hi Folks,
> 
> I find myself driving down Route 66.  On our way through Arizona, I was 
> surprised by what look like a lot of old-style microwave links.  They pretty 
> much follow the East-West rail line - where I'd expect there's a lot of fiber 
> buried.
> 
> Struck me as somewhat interesting.
> 
> It also struck me that folks here might have some comments.
> 
> Miles Fidelman
> 
> -- 
> In theory, there is no difference between theory and practice.
> In practice, there is.   Yogi Berra
> 


(perhaps off topic, but) Microwave Towers

2018-07-14 Thread Miles Fidelman

Hi Folks,

I find myself driving down Route 66.  On our way through Arizona, I was 
surprised by what look like a lot of old-style microwave links.  They 
pretty much follow the East-West rail line - where I'd expect there's a 
lot of fiber buried.


Struck me as somewhat interesting.

It also struck me that folks here might have some comments.

Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra



Re: deploying RPKI based Origin Validation

2018-07-14 Thread Saku Ytti
On Sat, 14 Jul 2018 at 15:07, Job Snijders  wrote:

> I actually view it as a competitive advantage to carry a cleaner set of
> routes compared to the providers with a more permissive (or lack of)
> filtering strategy. Sometimes less is more.

* When you consider your addressable market 'clueful customers'.

-- 
  ++ytti


Linux BNG

2018-07-14 Thread Baldur Norddahl

Hello

I am investigating Linux as a BNG. The BNG (Broadband Network Gateway) 
being the thing that acts as default gateway for our customers.


The setup is one VLAN per customer. Because 4095 VLANs is not enough, we 
have QinQ with double VLAN tagging on the customers. The customers can 
use DHCP or static configuration. DHCP packets need to be option82 
tagged and forwarded to a DHCP server. Every customer has one or more 
static IP addresses.


IPv4 subnets need to be shared among multiple customers to conserve 
address space. We are currently using /26 IPv4 subnets with 60 customers 
sharing the same default gateway and netmask. In Linux terms this means 
60 VLAN interfaces per bridge interface.
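
For a sense of scale, this is roughly the plumbing per customer in plain
iproute2 terms, wrapped in Python for scripting. Interface names, tags and
the 100.64.0.0/26 addressing are made-up examples:

    import subprocess

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    # Outer S-tag (here 1000); use proto 802.1Q instead if the S-tags are 0x8100.
    ip("link", "add", "link", "eth0", "name", "eth0.1000",
       "type", "vlan", "proto", "802.1ad", "id", "1000")

    # Inner C-tag (here 42) = one customer.
    ip("link", "add", "link", "eth0.1000", "name", "eth0.1000.42",
       "type", "vlan", "proto", "802.1Q", "id", "42")

    # One bridge per shared /26: the gateway address lives on the bridge and
    # up to ~60 customer sub-interfaces are enslaved to it.
    ip("link", "add", "br-g0", "type", "bridge")
    ip("addr", "add", "100.64.0.1/26", "dev", "br-g0")
    ip("link", "set", "eth0.1000.42", "master", "br-g0")
    for dev in ("eth0.1000", "eth0.1000.42", "br-g0"):
        ip("link", "set", dev, "up")

Repeating that for thousands of customers is exactly where the scaling
concern below comes from.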


However Linux is not quite ready for the task. The primary problem being 
that the system does not scale to thousands of VLAN interfaces.


We do not want customers to be able to send non routed packets directly 
to each other (needs proxy arp). Also customers should not be able to 
steal another customer's IP address. We want to hard code the relation 
between IP address and VLAN tagging. This can be implemented using 
ebtables, but we are unsure that it could scale to thousands of customers.
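
A rough sketch of that ebtables pinning, one set of rules per customer port
(interface name and address are placeholders; whether a linear rule list like
this holds up at thousands of customers is exactly the open question, and
isolating customers from each other would still need additional FORWARD-chain
rules between customer ports):

    import subprocess

    def lock(ifname, ipv4):
        # Drop IPv4 and ARP frames from this customer port whose source
        # address is not the one assigned to it. INPUT covers traffic towards
        # the gateway itself, FORWARD covers frames the bridge would forward.
        for chain in ("INPUT", "FORWARD"):
            for proto, match in (("IPv4", "--ip-src"), ("ARP", "--arp-ip-src")):
                subprocess.run(["ebtables", "-A", chain, "-i", ifname,
                                "-p", proto, match, "!", ipv4, "-j", "DROP"],
                               check=True)

    lock("eth0.1000.42", "100.64.0.10")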


I am considering writing a small program or kernel module. This would 
create two TAP devices (tap0 and tap1). Traffic received on tap0 with 
VLAN tagging, will be stripped of VLAN tagging and delivered on tap1. 
Traffic received on tap1 without VLAN tagging, will be tagged according 
to a lookup table using the destination IP address and then delivered on 
tap0. ARP and DHCP would need some special handling.
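
A bare-bones sketch of the tap0 -> tap1 (upstream) half of that idea, just to
show the moving parts; there is no lookup-table maintenance, no polling over
both taps, no ARP/DHCP special-casing, and the reverse direction is only
hinted at in the comments:

    import fcntl, os, struct

    TUNSETIFF = 0x400454ca
    IFF_TAP, IFF_NO_PI = 0x0002, 0x1000

    def open_tap(name):
        # Create/attach a TAP interface and return its file descriptor.
        fd = os.open("/dev/net/tun", os.O_RDWR)
        fcntl.ioctl(fd, TUNSETIFF,
                    struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI))
        return fd

    tap0 = open_tap("tap0")   # double-tagged, customer-facing side
    tap1 = open_tap("tap1")   # untagged side, handed to normal routing

    while True:
        frame = os.read(tap0, 9216)
        # Ethernet header: dst(6) + src(6), then TPID/TCI of the outer tag
        # and, for QinQ, TPID/TCI of the inner tag, then the real EtherType.
        if frame[12:14] in (b"\x88\xa8", b"\x81\x00") and frame[16:18] == b"\x81\x00":
            s_vlan = int.from_bytes(frame[14:16], "big") & 0x0FFF
            c_vlan = int.from_bytes(frame[18:20], "big") & 0x0FFF
            # A real implementation would record (s_vlan, c_vlan) against the
            # customer's IP here, so the tap1 -> tap0 direction can push the
            # same two tags back on, keyed by destination IP.
            os.write(tap1, frame[:12] + frame[20:])   # strip both 802.1Q headers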


This would be completely stateless for the IPv4 implementation. The IPv6 
implementation would be harder, because Link Local addressing needs to 
be supported and that can not be stateless. The customer CPE will make 
up its own Link Local address based on its MAC address and we do not 
know what that is in advance.


The goal is to support traffic of a minimum of 10 Gbit/s per server. 
Ideally I would have a server with 4x 10 Gbit/s interfaces combined into 
two 20 Gbit/s channels using bonding (LACP). One channel each for 
upstream and downstream (customer facing). The upstream would be layer 3 
untagged and routed traffic to our transit routers.
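
The bonding part, at least, is straightforward with iproute2 (device names
are examples; layer3+4 hashing is what spreads flows across both members of
each LAG):

    import subprocess

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    # 2 x 10G LACP bundle for the customer-facing side; the upstream bundle
    # would be built the same way from the other two ports.
    ip("link", "add", "bond0", "type", "bond",
       "mode", "802.3ad", "xmit_hash_policy", "layer3+4", "miimon", "100")
    for member in ("eth0", "eth1"):
        ip("link", "set", member, "down")        # members must be down to enslave
        ip("link", "set", member, "master", "bond0")
    ip("link", "set", "bond0", "up")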


I am looking for comments, ideas or alternatives. Right now I am 
considering what kind of CPU would be best for this. Unless I take steps 
to mitigate, the workload would probably go to one CPU core only and be 
limited to things like CPU cache and PCI bus bandwidth.
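
One common mitigation is to let the NIC's receive-side scaling spread packets
over several queues and cores before the kernel ever sees them; a small
ethtool sketch (queue count and device name are examples, and note that some
NICs hash double-tagged frames poorly):

    import subprocess

    def ethtool(*args):
        subprocess.run(["ethtool", *args], check=True)

    # Use 8 combined hardware queues and spread the RX flow hash evenly over
    # them; IRQ affinity (irqbalance or manual smp_affinity) then decides
    # which CPU cores service those queues.
    ethtool("-L", "eth0", "combined", "8")
    ethtool("-X", "eth0", "equal", "8")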


Regards,

Baldur



Re: deploying RPKI based Origin Validation

2018-07-14 Thread Job Snijders
On Fri, Jul 13, 2018 at 02:53:30PM +0200, Mark Tinka wrote:
> That, though, still leaves the problem where you end up providing a
> partial routing table to your customers, while your competitors in the
> same market aren't.

I actually view it as a competitive advantage to carry a cleaner set of
routes compared to the providers with a more permissive (or lack of)
filtering strategy. Sometimes less is more.

Kind regards,

Job


Re: deploying RPKI based Origin Validation

2018-07-14 Thread Mark Tinka



On 14/Jul/18 09:11, Baldur Norddahl wrote:

> In the RIPE part of the world there is no excuse for not getting RPKI
> correct because RIPE made it so easy. Perhaps the industry could agree on
> enabling RPKI validation on all european circuits for a start?

I think the first step (and what I'd consider to be a quick win) is to
determine all the prefixes that are being designated Invalid, and nail
down how many of those are Invalid because they are more-specifics
announced without a ROA of their own, vs. the parent aggregate which is
ROA'd.

We would then ask the operators of those prefixes to either withdraw
them (easier, but unlikely) or sign them in the RPKI and create ROA's
for them (more work, but more likely). I'd be going for the latter.

Once that is fixed, and even though the entire BGP world is not running
RPKI, those that are and are dropping Invalids would be 100% certain
that those Invalids are either leaks or hijacks.
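
For anyone wanting to reproduce that classification, the distinction is just
RFC 6811 origin validation; a small self-contained sketch, where the ROA and
the announcements are made-up examples:

    import ipaddress

    def origin_validation(prefix, origin_as, roas):
        """RFC 6811: roas is an iterable of (roa_prefix, max_length, asn)."""
        announced = ipaddress.ip_network(prefix)
        covered = False
        for roa_prefix, max_length, asn in roas:
            roa = ipaddress.ip_network(roa_prefix)
            if announced.version == roa.version and announced.subnet_of(roa):
                covered = True
                if asn == origin_as and announced.prefixlen <= max_length:
                    return "valid"
        return "invalid" if covered else "notfound"

    roas = [("192.0.2.0/24", 24, 64500)]                     # aggregate ROA, maxLength /24
    print(origin_validation("192.0.2.0/24", 64500, roas))    # valid
    print(origin_validation("192.0.2.0/25", 64500, roas))    # invalid: more-specific, no ROA of its own
    print(origin_validation("198.51.100.0/24", 64500, roas)) # notfound: not covered at all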

I think that will get us 50% of the way there, with the other 50% being
a matter of growing community participation in RPKI.

Thankfully, I believe all (or most) of the RIR's support a simple "click
of a button" to say "all prefixes up to a /24 or a /48 of the aggregate
should automatically be ROA'd if the aggregate, itself, is ROA'd". So it
shouldn't be a lot of work to get what is currently broken fixed. And
the beauty is, we don't need everyone to participate in the RPKI today
for those that want the benefit right now to enjoy it.

Mark.


Re: deploying RPKI based Origin Validation

2018-07-14 Thread Baldur Norddahl
In the RIPE part of the world there is no excuse for not getting RPKI
correct, because RIPE made it so easy. Perhaps the industry could agree on
enabling RPKI validation on all European circuits for a start?

Regards

Baldur