Re: [j-nsp] Mirroring IPv6 neighbor advertisements

2019-03-22 Thread Jason Healy
On Mar 22, 2019, at 9:25 PM, Crist Clark  wrote:
> 
> Maybe you should be looking at DHCPv6 if you want those kinds of logs.

We did.  ;-)  However, Google seems quite set on not supporting it on Android:

https://issuetracker.google.com/issues/36949085

https://www.techrepublic.com/article/androids-lack-of-dhcpv6-support-poses-security-and-ipv6-deployment-issues/


Thus, we need some kind of measure to deal with SLAAC, as it seems that no 
Android device will do DHCPv6 (we're a school that allows BYOD, so I can't ban 
Android devices).

On another note, since my last post I've found that Junos 17 has a feature for 
MLD snooping, including static subscriptions to a listening group.  I'm going 
to need to update our QFX and see if that might get me more of what I need.
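In case it saves someone else a search, what I'm planning to test is a static
MLD membership on the monitor port, modeled on the igmp-snooping equivalent.
The VLAN and interface names below are made up, and whether the box will
actually honor a static join for a link-local group like ff02::1 is exactly
what I still need to verify:

set protocols mld-snooping vlan students
set protocols mld-snooping vlan students interface xe-0/0/47.0 static group ff02::1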

Thanks,

Jason


Re: [j-nsp] Mirroring IPv6 neighbor advertisements

2019-03-22 Thread Crist Clark
Maybe you should be looking at DHCPv6 if you want those kinds of logs.

On Fri, Mar 22, 2019 at 2:19 PM Jason Healy  wrote:
>
> We're starting to play around more with IPv6, and one thing we're missing is 
> a log of who has which address.  In IPv4 we have DHCP and can check the logs, 
> but we're using SLAAC for v6 so that's not an option.
>
> I set up a quick trunk interface with all our VLANs as members and started 
> sniffing.  While I'm seeing plenty of neighbor discovery traffic, I'm not seeing 
> any(?) neighbor advertisements.  I'm guessing that because the sniffing box 
> doesn't have an address on each VLAN, it's not participating in ND and 
> registering for multicast, so we're getting pruned.  IGMP snooping is on by 
> default on all VLANs.
>
> I'd prefer not to have to add an interface on each VLAN just to grab all this 
> traffic (more to keep in sync, security concerns, etc).  Is there a way to 
> tell the switch to force IPv6 multicast traffic for ff02::1 to go to a 
> specific port?  Our core is a QFX5100; the other switches in the network are 
> a mix of EX3200/4200/3400.
>
> For the moment I've got it to work by setting up firewall filters on each 
> VLAN in our core and port-mirroring just the ICMPv6 (type 136) traffic to a 
> monitoring port.  That works, but it's also a lot of configuration overhead.  
> If there's a better way, I'd love suggestions!
>
> Thanks,
>
> Jason


Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Rob Foehl

On Fri, 22 Mar 2019, Vincent Bernat wrote:


❦ 22 March 2019 13:39 -04, Rob Foehl :


I've got a few really large layer 2 domains that I'm looking to start
breaking up and stitching back together with EVPN+VXLAN in the middle,
on the order of a few thousand VLANs apiece.  Trying to plan around
any likely limitations, but specifics have been hard to come by...


You can find a bit more here:

- 

- 



Noted, thanks.  Raises even more questions, though...  Are these really 
QFX5110 specific, and if so, are there static limitations on the 5100 
chipset?


-Rob


[j-nsp] Mirroring IPv6 neighbor advertisements

2019-03-22 Thread Jason Healy
We're starting to play around more with IPv6, and one thing we're missing is a 
log of who has which address.  In IPv4 we have DHCP and can check the logs, but 
we're using SLAAC for v6 so that's not an option.

I set up a quick trunk interface with all our VLANs as members and started 
sniffing.  While I'm seeing plenty of neighbor discovery traffic, I'm not seeing 
any(?) neighbor advertisements.  I'm guessing that because the sniffing box 
doesn't have an address on each VLAN, it's not participating in ND and 
registering for multicast, so we're getting pruned.  IGMP snooping is on by 
default on all VLANs.

I'd prefer not to have to add an interface on each VLAN just to grab all this 
traffic (more to keep in sync, security concerns, etc).  Is there a way to tell 
the switch to force IPv6 multicast traffic for ff02::1 to go to a specific 
port?  Our core is a QFX5100; the other switches in the network are a mix of 
EX3200/4200/3400.

For the moment I've got it to work by setting up firewall filters on each VLAN 
in our core and port-mirroring just the ICMPv6 (type 136) traffic to a 
monitoring port.  That works, but it's also a lot of configuration overhead.  
If there's a better way, I'd love suggestions!
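For reference, the shape of the per-VLAN config is roughly the sketch below.
The interface, VLAN and filter names are invented, I've shown the filter
attached to per-VLAN irb units (which assumes the core is doing the routing),
and the mirror action keyword is the part to double-check against your own
platform and release:

forwarding-options {
    analyzer {
        NA-MONITOR {
            output {
                interface xe-0/0/46.0;    /* monitoring port */
            }
        }
    }
}
firewall {
    family inet6 {
        filter MIRROR-NA {
            term na {
                from {
                    next-header icmp6;
                    icmp-type neighbor-advertisement;    /* ICMPv6 type 136 */
                }
                then {
                    analyzer NA-MONITOR;    /* assumed action keyword; verify on your release */
                    accept;
                }
            }
            term rest {
                then accept;
            }
        }
    }
}
interfaces {
    irb {
        unit 100 {    /* repeated per VLAN, which is where the overhead comes from */
            family inet6 {
                filter {
                    input MIRROR-NA;
                }
            }
        }
    }
}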

Thanks,

Jason


Re: [j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2019-03-22 Thread Andrey Kostin

Hi Alexandre,

Did it pass frames without C-tag in Junos versions < 18?

Kind regards,
Andrey

Alexandre Snarskii wrote on 2019-03-22 13:03:

Hi!

Looks like JunOS 18.something introduced an incompatibility in native
vlan handling in a QinQ scenario between ELS (qfx, ex2300) and non-ELS
switches: when an ELS switch forwards an untagged frame into the QinQ
tunnel, it now adds two vlan tags (the one specified as native for the
interface plus the S-vlan) instead of just the S-vlan, as both non-ELS
switches and 'older versions' do.

As a result, if the other end of the tunnel is a non-ELS (or third-party)
switch, it strips only the S-vlan and the originally untagged frame is
passed on with a vlan tag :(

Is there any way to disable this additional tag insertion?

PS: when frames are sent in the reverse direction, the non-ELS switch adds
only the S-vlan and the frame is correctly decapsulated and sent untagged.

ELS-side configuration (ex2300, 18.3R1-S1.4. also tested with
qfx5100/5110):

[edit interfaces ge-0/0/0]
flexible-vlan-tagging;
native-vlan-id 1;
mtu 9216;
encapsulation extended-vlan-bridge;
unit 0 {
    vlan-id-list 1-4094;
    input-vlan-map push;
    output-vlan-map pop;
}

(when native-vlan-id is not configured, untagged frames are not
accepted at all).



Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Vincent Bernat
❦ 22 March 2019 13:39 -04, Rob Foehl :

> I've got a few really large layer 2 domains that I'm looking to start
> breaking up and stitching back together with EVPN+VXLAN in the middle,
> on the order of a few thousand VLANs apiece.  Trying to plan around
> any likely limitations, but specifics have been hard to come by...

You can find a bit more here:

 - 

 - 

-- 
I fell asleep reading a dull book, and I dreamt that I was reading on,
so I woke up from sheer boredom.


Re: [j-nsp] EVPN/VXLAN experience (was: EX4600 or QFX5110)

2019-03-22 Thread Rob Foehl

On Fri, 22 Mar 2019, Sebastian Wiesinger wrote:


What did bother us was that you are limited (at least on QFX5100) in
the number of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf, and with that many you can only provision
around 500 VLANs before you get errors; basically you run out of
memory for bridge domains on the switch. This seems to be a
limitation of the chipset used in the QFX5100, at least that's what I
was told when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 

Workaround is to decrease VLANs or trunk config.


Huh, that's potentially bad...  Can you elaborate on the config a bit 
more?  Are you hitting a limit around ~16k bridge domains total?


I've got a few really large layer 2 domains that I'm looking to start 
breaking up and stitching back together with EVPN+VXLAN in the middle, on 
the order of a few thousand VLANs apiece.  Trying to plan around any 
likely limitations, but specifics have been hard to come by...


-Rob


[j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2019-03-22 Thread Alexandre Snarskii


Hi!

Looks like JunOS 18.something introduced an incompatibility in native
vlan handling in a QinQ scenario between ELS (qfx, ex2300) and non-ELS
switches: when an ELS switch forwards an untagged frame into the QinQ
tunnel, it now adds two vlan tags (the one specified as native for the
interface plus the S-vlan) instead of just the S-vlan, as both non-ELS
switches and 'older versions' do.

As a result, if the other end of the tunnel is a non-ELS (or third-party)
switch, it strips only the S-vlan and the originally untagged frame is
passed on with a vlan tag :(

Is there any way to disable this additional tag insertion?

PS: when frames are sent in the reverse direction, the non-ELS switch adds
only the S-vlan and the frame is correctly decapsulated and sent untagged.

ELS-side configuration (ex2300, 18.3R1-S1.4. also tested with 
qfx5100/5110):

[edit interfaces ge-0/0/0]
flexible-vlan-tagging;
native-vlan-id 1;
mtu 9216;
encapsulation extended-vlan-bridge;
unit 0 {
    vlan-id-list 1-4094;
    input-vlan-map push;
    output-vlan-map pop;
}

(when native-vlan-id is not configured, untagged frames are not
accepted at all).
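For reference, the non-ELS side of the tunnel is configured in the classic
dot1q-tunneling style, roughly like this (the S-vlan id and port are just
examples):

[edit]
vlans {
    SVLAN-1000 {
        vlan-id 1000;
        dot1q-tunneling;
    }
}
interfaces {
    ge-0/0/0 {
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members SVLAN-1000;
                }
            }
        }
    }
}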



Re: [j-nsp] EVPN/VXLAN experience (was: EX4600 or QFX5110)

2019-03-22 Thread Richard McGovern via juniper-nsp
Sebastian, a couple of questions.

1.  Is your design pure QFX5100 Leaf/Spine today?  If yes, I assume you probably 
only have one flat VXLAN network, that is, no L3 VXLAN, yes?
2.  You stated you need 17.4 for improved LACP operation.  Which exact 17.4 are 
you using, and what version were you using previously?  I am wondering if you 
were ever on 17.3R3-S3?

Many thanks, Rich

Richard McGovern
Sr Sales Engineer, Juniper Networks 
978-618-3342
 

On 3/22/19, 4:39 AM, "Sebastian Wiesinger"  wrote:

* Andrey Kostin  [2019-03-15 20:50]:
> I'm interested to hear about experience of running EVPN/VXLAN, particularly
> with QFX10k as L3 gateway and QFX5k as spine/leaves. As per docs, it should
> be immune to any single switch downtime, so it might be a candidate for a really
> redundant design.

All right here it goes:

I can't speak for QFX10k as spine but we have QFX5100 Leaf/Spine
setups with EVPN/VXLAN running right now. Switch downtime is no
problem at all: we unplugged a running switch, shut down ports, and
unplugged cables between leaf & spine or leaf & client, all while
there was storage traffic (NFS) active in the setup. The worst thing
that happened was that IOPS went down from 400k/s to 100k/s for 1-3
seconds.

What did bother us was that you are limited (at least on QFX5100) in
the number of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf, and with that many you can only provision
around 500 VLANs before you get errors; basically you run out of
memory for bridge domains on the switch. This seems to be a
limitation of the chipset used in the QFX5100, at least that's what I
was told when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
 IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
 Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 

Workaround is to decrease VLANs or trunk config.

Also you absolutely NEED LACP from servers to the fabric. 17.4 has
enhancements which will put the client ports in LACP standby when the
leaf gets separated from all spines.

> As a downside I see the more complex configuration at least. Adding
> vlan means adding routing instance etc. There are also other
> questions, about convergence, scalability, how stable it is and code
> maturity.

We have it automated with Ansible. Management access happens over OOB
(Mgmt) ports and everything is pushed by Ansible playbooks. Ansible
generates configuration from templates and pushes it to the switches
via netconf. I never would want to do this by hand. This demands a
certain level of structuring by every team (network, people doing the
cabling, server team) but it works out well for structured setups.

Our switch config looks like this:

--
user@sw1-spine-pod1> show configuration
## Last commit: 2019-03-11 03:13:49 CET by user
## Image name: jinstall-host-qfx-5-flex-17.4R2-S2.3-signed.tgz

version 17.4R1-S3.3;
groups {
    /* Created by Ansible */
    evpn-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-1 { /* OMITTED */ };
    /* Created by Ansible - Empty group for maintenance operations */
    client-interfaces;
}
apply-groups [ evpn-defaults evpn-spine-defaults evpn-spine-1 ];
--

So everything Ansible does is contained in apply-groups and is hidden. You can
immediately spot if something is configured by hand.

For code we're currently running on the 17.4 train, which works mostly
fine; we had a few problems with third-party 40G optics, but these
should be fixed in the newest 17.4 service release.

Also we had a problem where new Spine/Leaf links did not come up but
these vanished after rebooting/upgrading the spines.

In daily operations it proves to be quite stable.


Best Regards

Sebastian

-- 
GPG Key: 0x58A2D94A93A0B9CE (F4F6 B1A3 866B 26E9 450A  9D82 58A2 D94A 93A0 
B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE 
SCYTHE.
-- Terry Pratchett, The Fifth Elephant




Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Andrey Kostin
One more question just came to mind: what routing protocol do you use
for the underlay, eBGP/iBGP/IGP? Design guides show examples with eBGP, but
it looks like for a deployment that's not very big, IS-IS could do everything
needed. What are the pros and cons of BGP vs IGP?
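For context, the eBGP flavor in those design guides is basically a private
ASN per device with multipath, something like this (the ASNs, addresses and
export policy name are invented):

[edit protocols bgp group underlay]
type external;
family inet {
    unicast;
}
multipath {
    multiple-as;
}
export UNDERLAY-LOOPBACKS;    /* advertises the lo0 /32s used as overlay/VTEP endpoints */
neighbor 10.0.1.1 {
    peer-as 65101;    /* spine1 fabric link */
}
neighbor 10.0.2.1 {
    peer-as 65102;    /* spine2 fabric link */
}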


Kind regards,
Andrey

Andrey Kostin wrote on 2019-03-22 09:46:

Thank you Sebastian for sharing your very valuable experience.

Kind regards,
Andrey

Sebastian Wiesinger wrote on 2019-03-22 04:39:

* Andrey Kostin  [2019-03-15 20:50]:
I'm interested to hear about experience of running EVPN/VXLAN, particularly
with QFX10k as L3 gateway and QFX5k as spine/leaves. As per docs, it should
be immune to any single switch downtime, so it might be a candidate for a really
redundant design.


All right here it goes:

I can't speak for QFX10k as spine but we have QFX5100 Leaf/Spine
setups with EVPN/VXLAN running right now. Switch downtime is no
problem at all: we unplugged a running switch, shut down ports, and
unplugged cables between leaf & spine or leaf & client, all while
there was storage traffic (NFS) active in the setup. The worst thing
that happened was that IOPS went down from 400k/s to 100k/s for 1-3
seconds.

What did bother us was that you are limited (at least on QFX5100) in
the number of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf, and with that many you can only provision
around 500 VLANs before you get errors; basically you run out of
memory for bridge domains on the switch. This seems to be a
limitation of the chipset used in the QFX5100, at least that's what I
was told when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
 IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
 Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 



Workaround is to decrease VLANs or trunk config.

Also you absolutely NEED LACP from servers to the fabric. 17.4 has
enhancements which will put the client ports in LACP standby when the
leaf gets separated from all spines.


As a downside I see the more complex configuration at least. Adding
vlan means adding routing instance etc. There are also other
questions, about convergence, scalability, how stable it is and code
maturity.


We have it automated with Ansible. Management access happens over OOB
(Mgmt) ports and everything is pushed by Ansible playbooks. Ansible
generates configuration from templates and pushes it to the switches
via netconf. I never would want to do this by hand. This demands a
certain level of structuring by every team (network, people doing the
cabling, server team) but it works out well for structured setups.

Our switch config looks like this:

--
user@sw1-spine-pod1> show configuration
## Last commit: 2019-03-11 03:13:49 CET by user
## Image name: jinstall-host-qfx-5-flex-17.4R2-S2.3-signed.tgz

version 17.4R1-S3.3;
groups {
    /* Created by Ansible */
    evpn-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-1 { /* OMITTED */ };
    /* Created by Ansible - Empty group for maintenance operations */
    client-interfaces;
}
apply-groups [ evpn-defaults evpn-spine-defaults evpn-spine-1 ];
--

So everything Ansible does is contained in apply-groups and is hidden. You can
immediately spot if something is configured by hand.

For code we're currently running on the 17.4 train, which works mostly
fine; we had a few problems with third-party 40G optics, but these
should be fixed in the newest 17.4 service release.

Also we had a problem where new Spine/Leaf links did not come up but
these vanished after rebooting/upgrading the spines.

In daily operations it proves to be quite stable.


Best Regards

Sebastian




Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Andrey Kostin

Thank you Sebastian for sharing your very valuable experience.

Kind regards,
Andrey

Sebastian Wiesinger wrote on 2019-03-22 04:39:

* Andrey Kostin  [2019-03-15 20:50]:
I'm interested to hear about experience of running EVPN/VXLAN, particularly
with QFX10k as L3 gateway and QFX5k as spine/leaves. As per docs, it should
be immune to any single switch downtime, so it might be a candidate for a really
redundant design.


All right here it goes:

I can't speak for QFX10k as spine but we have QFX5100 Leaf/Spine
setups with EVPN/VXLAN running right now. Switch downtime is no
problem at all: we unplugged a running switch, shut down ports, and
unplugged cables between leaf & spine or leaf & client, all while
there was storage traffic (NFS) active in the setup. The worst thing
that happened was that IOPS went down from 400k/s to 100k/s for 1-3
seconds.

What did bother us was that you are limited (at least on QFX5100) in
the number of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf, and with that many you can only provision
around 500 VLANs before you get errors; basically you run out of
memory for bridge domains on the switch. This seems to be a
limitation of the chipset used in the QFX5100, at least that's what I
was told when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
 IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
 Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 



Workaround is to decrease VLANs or trunk config.

Also you absolutely NEED LACP from servers to the fabric. 17.4 has
enhancements which will put the client ports in LACP standby when the
leaf gets separated from all spines.


As a downside I see the more complex configuration at least. Adding
vlan means adding routing instance etc. There are also other
questions, about convergence, scalability, how stable it is and code
maturity.


We have it automated with Ansible. Management access happens over OOB
(Mgmt) ports and everything is pushed by Ansible playbooks. Ansible
generates configuration from templates and pushes it to the switches
via netconf. I never would want to do this by hand. This demands a
certain level of structuring by every team (network, people doing the
cabling, server team) but it works out well for structured setups.

Our switch config looks like this:

--
user@sw1-spine-pod1> show configuration
## Last commit: 2019-03-11 03:13:49 CET by user
## Image name: jinstall-host-qfx-5-flex-17.4R2-S2.3-signed.tgz

version 17.4R1-S3.3;
groups {
    /* Created by Ansible */
    evpn-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-1 { /* OMITTED */ };
    /* Created by Ansible - Empty group for maintenance operations */
    client-interfaces;
}
apply-groups [ evpn-defaults evpn-spine-defaults evpn-spine-1 ];
--

So everything Ansible does is contained in apply-groups and is hidden. You can
immediately spot if something is configured by hand.

For code we're currently running on the 17.4 train, which works mostly
fine; we had a few problems with third-party 40G optics, but these
should be fixed in the newest 17.4 service release.

Also we had a problem where new Spine/Leaf links did not come up but
these vanished after rebooting/upgrading the spines.

In daily operations it proves to be quite stable.


Best Regards

Sebastian




[j-nsp] EVPN/VXLAN experience (was: EX4600 or QFX5110)

2019-03-22 Thread Sebastian Wiesinger
* Andrey Kostin  [2019-03-15 20:50]:
> I'm interested to hear about experience of running EVPN/VXLAN, particularly
> with QFX10k as L3 gateway and QFX5k as spine/leaves. As per docs, it should
> be immune to any single switch downtime, so it might be a candidate for a really
> redundant design.

All right here it goes:

I can't speak for QFX10k as spine but we have QFX5100 Leaf/Spine
setups with EVPN/VXLAN running right now. Switch downtime is no
problem at all: we unplugged a running switch, shut down ports, and
unplugged cables between leaf & spine or leaf & client, all while
there was storage traffic (NFS) active in the setup. The worst thing
that happened was that IOPS went down from 400k/s to 100k/s for 1-3
seconds.

What did bother us was that you are limited (at least on QFX5100) in
the number of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf, and with that many you can only provision
around 500 VLANs before you get errors; basically you run out of
memory for bridge domains on the switch. This seems to be a
limitation of the chipset used in the QFX5100, at least that's what I
was told when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
 IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
 Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 

Workaround is to decrease VLANs or trunk config.
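To make that concrete: each stretched VLAN is just an ordinary VLAN-to-VNI
mapping like the sketch below (names and IDs invented), and it's every
VLAN-on-trunk-port combination that consumes the IFBD tokens counted above:

vlans {
    v100 {
        vlan-id 100;
        vxlan {
            vni 10100;
        }
    }
}
interfaces {
    xe-0/0/10 {
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members [ v100 v101 v102 ];    /* each VLAN x trunk port pair is one IFBD */
                }
            }
        }
    }
}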

Also you absolutely NEED LACP from servers to the fabric. 17.4 has
enhancements which will put the client ports in LACP standby when the
leaf gets separated from all spines.
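The server-facing bundles are EVPN ESI-LAGs, roughly like this (the ESI and
LACP system-id are invented; both have to match on every leaf sharing the
segment):

interfaces {
    ae10 {
        esi {
            00:01:01:01:01:01:01:01:01:01;
            all-active;
        }
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:00:01:01:01:01;
            }
        }
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members all;
                }
            }
        }
    }
}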

> As a downside I see the more complex configuration at least. Adding
> vlan means adding routing instance etc. There are also other
> questions, about convergence, scalability, how stable it is and code
> maturity.

We have it automated with Ansible. Management access happens over OOB
(Mgmt) ports and everything is pushed by Ansible playbooks. Ansible
generates configuration from templates and pushes it to the switches
via netconf. I never would want to do this by hand. This demands a
certain level of structuring by every team (network, people doing the
cabling, server team) but it works out well for structured setups.

Our switch config looks like this:

--
user@sw1-spine-pod1> show configuration
## Last commit: 2019-03-11 03:13:49 CET by user
## Image name: jinstall-host-qfx-5-flex-17.4R2-S2.3-signed.tgz

version 17.4R1-S3.3;
groups {
    /* Created by Ansible */
    evpn-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-defaults { /* OMITTED */ };
    /* Created by Ansible */
    evpn-spine-1 { /* OMITTED */ };
    /* Created by Ansible - Empty group for maintenance operations */
    client-interfaces;
}
apply-groups [ evpn-defaults evpn-spine-defaults evpn-spine-1 ];
--

So everything Ansible does is contained in apply-groups and is hidden. You can
immediately spot if something is configured by hand.
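One handy trick when everything lives in groups: pipe through display
inheritance to see what the box is actually running, e.g. (the interface
here is just an example, any hierarchy works):

user@sw1-spine-pod1> show configuration interfaces xe-0/0/10 | display inheritance no-comments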

For code we're currently running on the 17.4 train, which works mostly
fine; we had a few problems with third-party 40G optics, but these
should be fixed in the newest 17.4 service release.

Also we had a problem where new Spine/Leaf links did not come up but
these vanished after rebooting/upgrading the spines.

In daily operations it proves to be quite stable.


Best Regards

Sebastian

-- 
GPG Key: 0x58A2D94A93A0B9CE (F4F6 B1A3 866B 26E9 450A  9D82 58A2 D94A 93A0 B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
-- Terry Pratchett, The Fifth Elephant