Correction - QFX5110 can now route VLAN/IP to VNI via this configuration:
https://www.juniper.net/documentation/en_US/junos/topics/concept/evpn-vxlan-qfx5110-l2-vxlan-l3-logical.html
I was not aware this information had been put out there. Minimum SW would be
17.3R3, but 18.1R3-S[latest, now 4] would be preferred.
If you are going to try any code for EVPN/VXLAN testing, I would highly suggest
using 18.1R3-S4, at least right now.
Rich
Richard McGovern
Sr Sales Engineer, Juniper Networks
978-618-3342
On 4/16/19, 4:21 PM, "Vincent Bernat" wrote:
❦ 16 April 2019 20:09 +00, Richard McGovern :
> 5110, can NOT route between VLAN/IP and VXLAN, today. This is a
> future (some 19.x?).
It is believed to be able to do that now:
https://www.juniper.net/documentation/en_US/junos/topics/concept/evpn-vxlan-qfx5110-l2-vxlan-l3-logical.html
5110, can NOT route between VLAN/IP and VXLAN, today. This is a future (some
19.x?).
I do believe that the QFX5110 is not really "certified" as an EVPN/VXLAN spine.
Your design is what Juniper refers to as CRB - Centrally Routed Bridging. That
is, VXLAN L3 at the core, versus the edge. The core
On Apr 16, 2019, at 12:46 PM, James Stapley wrote:
>
> This is the most relevant SNMP OID I've found:
> https://apps.juniper.net/mib-explorer/navigate.jsp#object=ipNetToPhysicalTable&product=Junos%20OS&release=17.3R3
>
> That all needs to be regularly slurped into a database of some kind, and
>
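As a rough illustration of the "slurped into a database" step, here is a minimal sketch that parses snmpwalk output for ipNetToPhysicalPhysAddress and loads it into SQLite. The sample output lines, community string, and table schema are all illustrative assumptions, not taken from a real device:

```python
import re
import sqlite3

# Example lines as produced by something like:
#   snmpwalk -v2c -c <community> <router> IP-MIB::ipNetToPhysicalPhysAddress
# (the device output below is illustrative, not from a real box)
SAMPLE = """\
IP-MIB::ipNetToPhysicalPhysAddress.531.ipv4."10.0.0.1" = STRING: 2c:6b:f5:0:aa:1
IP-MIB::ipNetToPhysicalPhysAddress.531.ipv6."fe80::1" = STRING: 2c:6b:f5:0:aa:1
"""

LINE_RE = re.compile(
    r'ipNetToPhysicalPhysAddress\.(?P<ifindex>\d+)\.(?P<afi>ipv4|ipv6)\.'
    r'"(?P<addr>[^"]+)" = STRING: (?P<mac>[0-9a-fA-F:]+)'
)

def parse_walk(text):
    """Yield (ifindex, afi, address, mac) tuples from snmpwalk output."""
    for line in text.splitlines():
        m = LINE_RE.search(line)
        if m:
            yield (int(m["ifindex"]), m["afi"], m["addr"], m["mac"])

def load(rows, db=":memory:"):
    """Slurp parsed rows into a small SQLite table, replacing stale entries."""
    conn = sqlite3.connect(db)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ip_to_mac "
        "(ifindex INTEGER, afi TEXT, addr TEXT PRIMARY KEY, mac TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO ip_to_mac VALUES (?, ?, ?, ?)", rows
    )
    conn.commit()
    return conn

conn = load(parse_walk(SAMPLE))
print(conn.execute("SELECT COUNT(*) FROM ip_to_mac").fetchone()[0])
```

Run on a schedule (cron or similar) against each device, this gives you a queryable IP-to-MAC history.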
❦ 16 April 2019 17:32 +00, Ian :
> Much appreciated reply.
>
> My understanding is EVPN-VXLAN uses anycast on all spines. All spines
> would have the same IP address (that is the gateway IP). Considering
> the limitations of the EX4600 you pointed out (which I assume is due
> to the Broadcom chip
Much appreciated reply.
My understanding is that EVPN-VXLAN uses anycast on all spines: all spines would
have the same IP address (that is, the gateway IP). Given the limitations of
the EX4600 you pointed out (which I assume are due to the Broadcom chipset),
in a case of mixing EX4600 with
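For reference, the anycast gateway model described here is typically expressed in Junos by putting the same virtual gateway address on every device's IRB; a minimal sketch, with illustrative addresses, MAC, and unit numbers (verify the exact knobs against your release's documentation):

```
set interfaces irb unit 100 family inet address 10.0.100.2/24 virtual-gateway-address 10.0.100.1
set interfaces irb unit 100 virtual-gateway-v4-mac 00:00:5e:00:01:64
set vlans v100 vlan-id 100
set vlans v100 l3-interface irb.100
```

Each spine gets a unique physical address (here 10.0.100.2) while hosts point their default gateway at the shared virtual address 10.0.100.1.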
> Raphael Mazelier
> Sent: Tuesday, April 16, 2019 3:51 PM
>
> On 16/04/2019 15:52, Saku Ytti wrote:
> > On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
> >
> >> Can you confirm that rpd is answering ICMP echo requests? I find this
> >> surprising as I would have expected the FreeBSD kernel to do that.
Somewhat late to the party, but I'm thinking about similar things at the
moment. We've not really had too many issues with this in the (many) years
we've been doing production IPv6, but now we're in the process of
considering rolling PI IPv6 out, and more extensively (into student VLANs
[where the
On 16/04/2019 15:52, Saku Ytti wrote:
On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
Can you confirm that rpd is answering ICMP echo requests? I find this
surprising as I would have expected the FreeBSD kernel to do that.
You're probably right. So more likely the LC CPU is busy doing
On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
> Can you confirm that rpd is answering ICMP echo requests? I find this
> surprising as I would have expected the FreeBSD kernel to do that.
You're probably right. So more likely the LC CPU is busy doing the
programming RPD asked it to do, inst
❦ 16 April 2019 11:04 +00, Ian :
> Thank you, Vincent.
>
> That is weird; it was a very simple layout illustration, I am attaching it
> again.
>
> Ultimate goal is to reduce broadcast domain size while having the
> resources be able to participate in L2 without over-sizing it and
> without using
❦ 15 April 2019 15:09 +03, Saku Ytti :
>> I'm afraid this is not a valid test to prove the effect of relative process
>> priorities within the RE, doing this you're slowing down the clock on the
>> complete RE simulation as a whole (all simulated processes slowed down
>> equally).
> ..
>
>> Again