Hi
I'd like to test with LACP slow timers; then we can see if the physical
interface still flaps...
Thanks for your support
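For reference, switching the bundle to slow LACP timers might look like this (a sketch; `ae10` and `Ethernet1/44` are the names that appear elsewhere in the thread, adjust to your setup — on NX-OS "normal" is the slow rate and is the default):

```text
# Juniper (Junos) - send LACP PDUs every 30s instead of every 1s
set interfaces ae10 aggregated-ether-options lacp periodic slow

# Cisco (NX-OS) - configured on the member port; "normal" = slow timers
interface Ethernet1/44
  lacp rate normal
```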
On Sun, Feb 11, 2024 at 18:02, Saku Ytti wrote:
> On Sun, 11 Feb 2024 at 17:52, james list wrote:
>
> > - why physical interface flaps in DC1 if it i
: Interface
Ethernet1/44 is down (Initializing)
On Sun, Feb 11, 2024 at 14:36, Saku Ytti wrote:
> On Sun, 11 Feb 2024 at 15:24, james list wrote:
>
> > While on Juniper when the issue happens I always see:
> >
> > show log messages | last 440 | match LACPD_TIMEOUT
: CURRENT
On Sun, Feb 11, 2024 at 14:10, Saku Ytti wrote:
> Hey James,
>
> You shared this off-list, I think it's sufficiently material to share.
>
> 2024 Feb 9 16:39:36 NEXUS1
> %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface
> port-channel1
Hi
1) the cable has been replaced with a brand new one; they said that checking an
MPO 100 Gb/s cable is not that easy
3) no errors reported on either side
2) here is the output from the Cisco and the Juniper
NEXUS1# sh interface eth1/44 transceiver details
Ethernet1/44
transceiver is present
type is QSFP-
50
et-0/1/5 Partner 32768 b0:8b:cf:83:49:5b 32768
429 100
On Sun, Feb 11, 2024 at 13:07, Gert Doering wrote:
> HI,
>
> On Sun, Feb 11, 2024 at 12:50:32PM +0100, james list wrote:
> > 2024 Feb 9 16:39:36 NEXUS1 %ETHPORT-5-IF_DOWN_PORT_CHANN
f your interfaces on DC1
> links do not go down
>
> On Sun, Feb 11, 2024, 21:16 Igor Sukhomlinov via cisco-nsp <
> cisco-...@puck.nether.net> wrote:
>
>> Hi James,
>>
>> Do you happen to run the same software on all nexuses and all MXes?
>> Do the DC1 and DC2 bgp
Yes, same version.
Currently no traffic exchange is in place, just the BGP peering setup;
no traffic.
On Sun, Feb 11, 2024 at 11:16, Igor Sukhomlinov <dvalinsw...@gmail.com> wrote:
> Hi James,
>
> Do you happen to run the same software on all nexuses and all MXes?
> Do t
Cheers
James
On Sun, Feb 11, 2024 at 11:12, Gert Doering wrote:
> Hi,
>
> On Sun, Feb 11, 2024 at 11:08:29AM +0100, james list via cisco-nsp wrote:
> > we notice BGP flaps
>
> Any particular error message? BGP flaps can happen due to many different
> reasons,
h the interconnection at DC1 without any solution.
SFP we use in both DCs:
Juniper - QSFP-100G-SR4-T2
Cisco - QSFP-100G-SR4
over MPO OM4 cable.
Distance is 70 m at DC1 and 80 m at DC2, so the issue shows up on the shorter run.
Any idea or suggestion on what to check or do?
Thanks in advance
Cheers
I see as well this:
Autonegotiation information:
Negotiation status: Incomplete
Thanks in advance
James
QFX5110A> show interfaces ge-0/0/0 extensive
Physical interface: ge-0/0/0, Enabled, Physical link is Up
Interface index: 652, SNMP ifIndex: 707, Generation: 145
Description:
m ok.
Has anyone ever experienced the same? Any suggestions?
Thanks in advance for any hint
Kind regards
James
JUNIPER *
> show configuration interfaces ae10 | display set
set interfaces ae10 description "to Cisco leaf"
set interfaces ae10 aggregated-ether-
--
+-----+
| James W. Laferriere| SystemTechniques | Give me VMS |
| Network & System Engineer | 3237 Holden Road | Give me Linux |
| j...@system-techniques.com
hat I'm missing, please let me know.
Thanks,
James
On Mon, Apr 18, 2022 at 12:11 AM Crist Clark wrote:
>
> I don't quite understand. Why don't you just export the static into your
> routing protocol? How is the static route a "fallback" if it is really the
&g
n to the spines.
Thanks,
James
On Mon, Apr 18, 2022 at 1:21 AM Nathan Ward wrote:
>
>
> On 18/04/2022, at 3:49 AM, James via juniper-nsp
> wrote:
>
>
> From: James
> Subject: Advertising inactive routes to iBGP neighbors
> Date: 18 April 2022 at 3:49:44 AM NZST
>
ccessful.
Any other suggestions on how I could achieve this?
Thanks,
James
--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Can you please share the output of:
show class-of-service shared-buffer
on your QFX5100 ?
Cheers
James
On Fri, Nov 19, 2021 at 11:58, Thomas Bellman wrote:
> On 2021-11-19 09:49, james list via juniper-nsp wrote:
>
> > I try to rephrase the question you do not unde
change shared buffers for unused queues and add them to the used ones,
correct?
Based on the output provided, what do you suggest changing?
I also understand this kind of change is traffic-affecting.
I also need to understand how shared-buffer queues on the QFX are attached to
CoS queues.
Thanks, cheers
James
4305.23 KB 0.00 KB 4305.23 KB
Egress:
Total Buffer : 12480.00 KB
Dedicated Buffer : 3744.00 KB
Shared Buffer: 8736.00 KB
Lossless : 4368.00 KB
Multicast : 1659.84 KB
Lossy : 2708.16 KB
Cheers
James
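For what it's worth, the egress shared-buffer split shown above (roughly 50% lossless / 19% multicast / 31% lossy of the 8736 KB shared pool) can be repartitioned on the QFX; a sketch of shifting buffer toward lossy best-effort traffic (the percentages are illustrative only and must sum to 100, verify against your traffic mix before committing):

```text
set class-of-service shared-buffer egress buffer-partition lossless percent 25
set class-of-service shared-buffer egress buffer-partition multicast percent 15
set class-of-service shared-buffer egress buffer-partition lossy percent 60
```

As the thread notes, this kind of change can be traffic-affecting, so schedule it accordingly.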
On
interface (class best effort) and some UDP
packets are lost hence I am tuning to find a solution.
Thanks in advance for any hint
Cheers
James
Hi
I have to ask for the VM routing table and then I will share it.
The VM gateway is a load balancer.
Cheers
James
On Thu, Jul 29, 2021 at 18:17, Ryan Rawdon wrote:
>
> > On Jul 29, 2021, at 11:55 AM, james list wrote:
> >
> >
> > Internet - Fire
?
Thanks in advance
James
possibility to use VXLAN to extend
L2...
Also, any recommendation/hint/experience that can be shared is appreciated.
Thanks in advance for your help
Cheers
James
> Does anyone know if that's still possible? I just want a pretend/low
> performance/fake FPC ideally.
Hi Mark,
Are you thinking of this?
"set chassis fpc 0 lite-mode" - requires a reboot to take effect.
Cheers,
James.
Hi Ytti
we're thinking of upgrading the interface to 10 Gb/s; this should help.
Cheers James
On Sun, Dec 13, 2020 at 12:45, Saku Ytti wrote:
> Hey James,
>
> > I see on a customer EX4300 in VC connected to Cisco 3560 huge drop
> errors.
>
> This isn'
Dear experts
I see on a customer EX4300 in VC connected to Cisco 3560 huge drop errors.
On the Cisco side I've mitigated it with hold-queue 4096, but is there any
related command on the EX4300?
Or am I forced to change the buffer size?
Thanks in advance for your help
Cheers
Hi
Can you elaborate on the concept of L3 vs L2?
I was thinking of running L2 towards the servers in A/A (VC or MC-LAG) or A/P
(plain VLAN across the two switches) with VRRP on the two switches.
Are you suggesting L3 (a routed port) on each switch interface towards the
server?
Cheers
James
On Mon, Sep 28
Gbs interfaces to set vc ports.
Do you think it's a good architecture, or what would you set up?
Cheers
James
vlans even if in each vlan
there are very few systems (3 or 4 servers, etc.).
My question is: how did you manage the issue in case you faced it?
Private vlans?
Keep in mind we need a non-stop environment, so any possible
way forward must account for that.
Cheers
James
Hi Ytti
Can you share why the EX4650 would do the job and the EX4300 cannot? Do you
have a reference on juniper.com?
I am not sure what you mean by a flexible filter; can you share an example?
Cheers
James
On Mon, Jun 1, 2020, 08:23 Saku Ytti wrote:
> On Sun, 31 May 2020 at 22:51, james l
).
It seems the EX4300 is not able to match the DSCP field under family MPLS.
Any idea?
Cheers
James
For info, the issue was related to a carrier media converter sending frames
with a private, unknown EtherType that the EX4300 did not decode.
Thanks for your help all
Cheers
James
On Mon, Apr 20, 2020, 01:31 Chuck Anderson wrote:
> Well, that was an easy fix on my MX480s:
>
> set proto
0
> > Fragment frames 0
> > VLAN tagged frames 13196813130
> > Code violations 0
> >
> > Rate is only 24 Mbps, 2200 pps:
> >
> > admin@ex3400-a> show inte
nectivity-association MAC
Dear all, any help is appreciated and welcome; let me thank in
advance anyone who offers a hint.
Cheers
James
t where is the problem and who is experiencing the problem (i.e. a
tier-1 carrier)?
Cheers
James
CMP, however it is kind of the same; The
pseudowire ingress PE has access to the layer 2 / 3 / 4 headers of the
L2VPN payload traffic, so it has the same keys to feed into a CRC32.
Just a 2nd data point for you...
Cheers,
James.
1, SRC 1.1.1.1, DST 1.1.1.1,
SPORT 1, DPORT 1, send the traffic, note which egress port was chosen
from the LAG, increment one value (e.g. DST IP) by 1, send traffic,
note the egress port, repeat; and you can sometimes "get a feel" for
the hashing being used. It's very laborious but it might give some
more insight into what's going on here.
Cheers,
James.
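The sweep described above can also be modelled offline; here is a toy sketch (an assumption: real ASICs hash vendor-specific field mixes with their own polynomials, this just uses CRC32 of the 5-tuple to illustrate how incrementing one field walks a flow across LAG members):

```python
import zlib

def egress_member(src, dst, sport, dport, proto, n_members):
    """Toy LAG hash: CRC32 over the 5-tuple, modulo member count.
    Illustrates the sweep methodology only; not any vendor's real hash."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return zlib.crc32(key) % n_members

# Sweep: hold the tuple fixed, increment DST port by 1 each probe,
# and record which member each synthetic "flow" would take.
members = [egress_member("1.1.1.1", "1.1.1.1", 1, dport, 6, 4)
           for dport in range(1, 33)]
print(members)
```

Plotting the same sweep against the real device's observed egress ports is what lets you "get a feel" for which fields the platform actually feeds into its hash.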
nomials can be
more effective. It's probably a safe bet that most implementations
that use CRC32 for hashing use the same standard poly value but I'm
keen to hear more about this.
Cheers,
James.
ce in this kind of design? Is bursty multicast an
issue?
I am not able to find any tests on the web.
Is there any other way to reach the target without rebuilding the entire
network?
Thanks in advance for any hints/recommendation.
James
As someone who has a P1 case open on the QFX10008 (8.4R1.8) [WMU has retreated
to 8.1R3.3] now sitting at day 15 that produced
three different types of core dumps with issues on the ULC-30Q28, ULC-60S-6Q,
and MSNOOPD, it would be really nice to go fishing
on the bugs myself. I have found th
aviour :)
The transit forwarding path through the router has a much more
strictly bound packet processing loop than the RE so I wouldn't really
be relying on it for anything.
Cheers,
James.
more or less the same time, if
you've not (yet) gotten to thinking about that.
<https://www.ru.ac.za>
James Stapley
Network Architect: I&TS Division
t: +27 (0) 46 603 8849
Struben Building, Artillery Road, Grahamstown, 6139
PO Box 94, Grahamstown, 6140, South Africa
www.ru.ac.za
On F
rtionately distributed such that the failure
of one P node had a much larger impact than other P nodes. As I
mentioned in my previous email, these issues only go away when you
have the kind of luxuries that I, and I expect you, have like your own
dedicated transmission network or enough influence to tell a carrier
where to lay fibre next.
Cheers,
James.
ks well. $dayjob has
a lot of realtime voice and video flying around so we use latency
based metrics and it works well but we also have our own transmission
infrastructure, meaning that bandwidth isn't a factor for us. Not
everyone has that luxury.
Cheers,
James.
it at some point. Setting it to 1Tbps
makes good sense to me and in my experience it works fine (tested on
both Cisco and Juniper).
Cheers,
James.
nterface and the receiving RFC2544 tester? Do they also show the
packets as missing (so they were dropped) or do they show the correct
number of packets received and it's just the MX receiving interface
that isn't accounting for all packets BUT all packets are forwarded?
Cheers
eantime of setting host-name as FQDN, and
leaving domain-name blank.
Best regards,
James.
On Tue, 15 Jan 2019 at 23:17, Phil Shafer wrote:
> James Stapley writes:
> >Not sure if we're the first people to notice this, but there seems to be a
> >change in the way Junos deals w
A dirty hack to move behaviour back to expected is to put the host-name in
as a FQDN and delete domain-name.
Seems not to affect anything else (domain-search helps resolution of naked
hostnames in commands), and both LLDP and SNMP report the same value.
On Mon, 14 Jan 2019 at 12:56, James Stapley
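Concretely, the workaround described might look like the following (the hostname and domain are hypothetical placeholders):

```text
set system host-name router1.example.net
delete system domain-name
set system domain-search example.net
```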
and this seems to be
what used to be the case in the past.
Not sure I'll get anywhere reporting this through our support partner (or
manage for them to pierce Tier 1 in turn...), so I thought this might be a
good place to post this info.
Thank you!
--
James Stapley
Network Architect
Infor
to me you wouldn't want to use something like HT that
can degrade performance.
I'm all ears on this one.
Cheers,
James.
On 30 December 2018 18:40:50 CET, James Bensley wrote:
> Text with lots of typos
^ Sorry about that, on a mobile.
>I often make notes and never get around to publishing them online
>anywhere. Nearly 2 years ago (where did the time go?) I was testing
>CRS1000v performance. Thi
the time go?) I was testing CRS1000v performance.
This link might have some useful info under the HugePages and Virtualization
sections:
https://docs.google.com/document/d/1YUwU3T5GNgmi6e2JwgViFRO_QoyUXiaDGnA-cixAaRY/edit?usp=drivesdk
This page has some notes on NUMA affinity, it's important (IM
ight the exact reasons why one should stay away from
>fusion/fex/satellite - features must explicitly be
>ported/accommodated/tested for them. Not all performance data is
>available, OAM/CFM is a struggle etc.
Agreed.
Cheers,
James.
've had lots of problems with
QFX switches and certain optics not working, either in stacked mode or
on certain code versions. Again, this went to JTAC, they couldn't fix
it, eventually we fixed it by trying various different code versions
and breaking the stack out.
So overall, not imp
L that are all allowed to use their links at 100%, you
can terminate them on a BNG/LNS and the BNG can apply a policer/shaper
based on the sync speed, and you can implement basic QoS (just a 2
class BE and priority/LLQ) on that device which is specialised for the
job (a BNG will support
On Thu, 18 Oct 2018 at 10:36, Benny Lyne Amorsen
wrote:
>
> James Bensley writes:
>
> > If customers have WAN links that are slower than their LAN links -
> > that is where fq-codel was designed to be implemented and that is why
> > it should be implemented on the CPE
On Wed, 17 Oct 2018 at 21:48, Colton Conor wrote:
>
> James,
>
> Thanks for the response. However, if you are just shaping at the CPE, then
> there could be bottlenecks between the CPE and core router causing the
> bufferbloat right? Example, if you have a wireless network,
your
customer to blast traffic into your network and then have to carry those
packets across your core, only to drop them on your expensive subscriber
management box.
One normally wants to drop excess packets as close to the source as possible.
So for that rea
On 4 October 2018 19:34:01 BST, james list wrote:
>Due to the fact that access switch are QFX5100 in virtual chassis, does
>anybody know if IS-IS managing virtual- chassis has something happening
>every 30 minutes which could cause delay?
>
>Cheers
As per my previous message,
It does not.
>
>
>
> Do you know if delay if from QFX5100 or MX or both? Do you know what
> Queue this traffic is going into on each switch/router?
>
>
>
> *From: *james list
> *Date: *Thursday, October 4, 2018 at 2:34 PM
> *To: *Richard McGovern , Juniper
;
>
>
> Sent from my iPhone
>
> > On Oct 2, 2018, at 12:37 PM, james list wrote:
> >
> > Dear experts
> >
> > I’ve a strange issue.
> >
> > Our customer replaced two L2/3 switches (C6500) where a pure L2 and L3
> > (hsrp) environment was set-
communication
failure of the physical link (the fact that it runs in both directions
gives bidirectional failure detection). So we can run BFDv4 and/or
BFDv6 on an interface but only one instance per-interface otherwise
you'd need to negotiate different port numbers for the different
insta
to reduce the time spent transmitting (lower
average utilisation).
Cheers,
James.
On Tue, 2 Oct 2018 at 21:46, Mark Tinka wrote:
> On 2/Oct/18 21:13, James Bensley wrote:
>
> I presume that if one were to run MT-ISIS there would be no impact to IPv4?
>
>
> We already run MT for IS-IS. I consider this as basic a requirement as "Wide
> Metrics"
on the access QFX5100, flow control is disabled
On Wed, Oct 3, 2018 at 06:50, Eldon Koyle <ekoyle+puck.nether@gmail.com> wrote:
> Have you checked flow control counters?
>
> --
> Eldon
>
>
> On Tue, Oct 2, 2018, 16:22 james list wrote:
>
>> I
RPs are expiring and need to be refreshed every 30 mins
> interval. For multicast, check if any prune or joins are happening around
> the time. Any IGMP joins or prunes around the same time.
>
> On Tue, Oct 2, 2018 at 9:38 AM james list wrote:
>
>> Dear experts
>>
>&g
I posted to both lists because we had Cisco and now have Juniper; maybe
this is something known to the Cisco gurus as well.
I don't have an MX960 in the lab, unfortunately; no CPU spikes, no relevant logs.
An upgrade is not on the roadmap since we're on 16.1
Cheers
On Tue, Oct 2, 2018, 21:19 James Bensley wrote:
On Tue, 2 Oct 2018 at 19:59, james list wrote:
>
> Can you elaborate?
> Why just every 30 minutes the issue?
Seeing as you have an all Juniper set up I don't think there is a need
to cross-post to two lists simultaneously. If you feel there is a
need, please post to the two lists
IPv6
addresses for IS-IS adjacencies (although it's a waste of IPs, I'd
still be curious).
Cheers,
James (not currently near the lab).
Can you elaborate?
Why does the issue occur only every 30 minutes?
On Tue, Oct 2, 2018, 20:34 Tom Beecher wrote:
> You have switches with completely different buffer depths than you used
> to. You prob want to look into that.
>
> On Tue, Oct 2, 2018 at 9:39 AM james list wrote:
>
Juniper separates forwarding and control, but I was thinking of OSPF LSA
refresh or something like that, since the frequency is around 30 minutes.
Can anybody help me sort out what the main cause could be here?
Thanks in advance
Cheers,
James
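Before chasing protocol timers, it can help to confirm the ~30-minute periodicity by extracting the event timestamps from the logs and computing the gaps; a rough sketch (the timestamp format is an assumption, adapt the parsing to your syslog):

```python
from datetime import datetime

def flap_intervals(timestamps):
    """Given sorted event timestamps as strings, return the gaps in minutes."""
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in timestamps]
    return [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]

# Hypothetical flap times pulled from a syslog; gaps clustering tightly
# around 30.0 point at a timer-driven cause (e.g. an LSA refresh cycle)
# rather than load- or traffic-driven drops.
events = ["2018-10-02 09:00:05", "2018-10-02 09:30:02", "2018-10-02 10:00:07"]
print(flap_intervals(events))
```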
On Tue, 2 Oct 2018 at 14:03, Tim Cooper wrote:
> The QoS obligations has been pretty much cut/paste from PSN into HSCN
> obligations, if you haven’t come across that yet. So look forward to that...
> ;)
>
> Tim C
Unfortunat
s a best effort service so
they had no issues with the DSCP being scrubbed to 0 in these places.
Also lots of customer didn't like packets coming in from the Internet
and hitting their section of the WAN with a DSCP marking on it. Some
explicitly asked us t
S is filling the pipes) we ended up with 8.
Yay :(
Cheers,
James.
[1] As is customary with any tech savvy government, they've since
sacked off various PSN standards without providing any replacement so
everyone is just sticking to the same expired standards for now
On Tue, 2 Oct 2018 at 10:10, Saku Ytti wrote:
>
> Hey James,
Hi Saku
> > Yeah so not already using RSVP means that we're not going to deploy it
> > just to deploy an IntServ QoS model. We also use DSCP and scrub it off
> > of dirty Internet packets.
>
> Hav
e scenario my experience is that mapping multiple
customer queues down to a fewer core queues helps to protect the
control plane and LLQ traffic in a simply way that covered all stake
holders, and no need for the additional signalling complexity that
RSVP brings.
Cheers,
James.
__
On Fri, 28 Sep 2018 at 11:48, Saku Ytti wrote:
>
> Hey James,
>
> In this failure mode this feature would help to find the rogue ISIS
> speaker, so looks sensible feature to me, even when very partial
> support it will limit limit the domain where the suspect exists. I've
&
scale well.
Communities work great for manipulating eBGP policy but with iBGP
policy manipulation it quickly becomes messy so you need to plan with
care.
Cheers,
James.
-originator-edit-protocols-isis.html
As always in a mixed vendor network it can only add a limited amount
of mileage as not all our other vendors support it but, that is where
the "empty" knob could help.
Cheers,
James.
Thanks Saku/Lukas
The investigation is still ongoing; I will let you know if something is found.
Cheers
On Tue, Sep 11, 2018, 00:20 Saku Ytti wrote:
> Oh I think I misunderstood OP. Yes, sounds like larger packets were
> impacted smaller were not.
>
> On Tue, 11 Sep 2018 at 01:16, Saku Ytti wrote:
models, serialise it as JSON, and send it over
gRPC to ASR9000 devices:
https://null.53bits.co.uk/index.php?page=example-2-public-peering-using-openconfig
You could write a better script than mine but then use your new script
as a stepping stone to automat
e.
I think the carrier had some problem, but I'm not able to prove it.
Have you ever seen this kind of issue?
What can it be related to?
Thanks in advance for any suggestion.
Cheers
James
you, I just wanted to point out you could also join this Slack
channel: https://networktocode.slack.com
There are rooms for Python, Juniper, Cisco, Ansible, Salt and many
others all relating to network coding and automation. It's a good
place to get help with your network coding issues if you
Hi Aaron,
I'm not 100% what you're asking here. Opaque LSAs are used in SR to
advertise the SID for a prefix/node/adj within the IGP:
https://tools.ietf.org/html/draft-ietf-ospf-segment-routing-extensions-25
Cheers,
James.
So
the access node should *only* have exactly the labels it needs with a
single route (when using RFC5283).
Cheers,
James.
On 30 July 2018 at 15:22, Krzysztof Szarkowicz wrote:
> James,
>
> As mentioned in my earlier mail, you can use it even with DU. If ABR has
> 1 /32 LDP FECs, you can configure LDP export policy on ABR to send only
> subset (e. g. 20 /32 FECs) to access.
>
> Saying that,
uld still need
per-LDP neighbour IP routing policies to only advertise the /32
loopback IPs that neighbor needs in the IGP, unless we use RFC5283 and
advertise a summary route (or install a static summary route).
Cheers,
James.
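A sketch of the per-neighbour LDP export-policy idea discussed above (the policy name and prefix range are illustrative, not from the thread):

```text
set policy-options policy-statement LDP-TO-ACCESS term FECS from route-filter 10.0.0.0/24 prefix-length-range /32-/32
set policy-options policy-statement LDP-TO-ACCESS term FECS then accept
set policy-options policy-statement LDP-TO-ACCESS then reject
set protocols ldp export LDP-TO-ACCESS
```

This filters the label bindings advertised to the access node so it carries only the subset of /32 FECs it actually needs.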
erent deployment not standard as far as I know, am I wrong?
Cheers
On Thu, Jul 26, 2018, 23:01 james list wrote:
> Dear experts,
> I have a virtual chassis of ex4300 connected to another vc of ex4300 with
> 2 x 1 Gbs links provided by two carriers.
>
> Lacp aggregation is up wi
.
I sent them the 802.1AE standard and the EtherType to transport, but no luck.
As far as I could understand, they have Huawei devices.
Do you have any suggestion for what I can ask them to verify?
Cheers
James
On 24 July 2018 at 14:35, wrote:
> Hi James
Hi Adam,
> Suppose I have ABR advertising default-route + label down to a stub area,
> And suppose PE-3 in this stub area wants to send packets to PE1 and PE2 in
> area 0 or some other area.
> Now I guess the whole purpose of "L
LDP (and not RSVP/SR/BGP)?
Cheers,
James.
mework" in my inbox from
the IETF WG mailing list and hadn't gotten round to reading it yet. I
might have some feedback on the draft, in which case, I will post back
to the WG mailing list.
I have found PR1278535 with Juniper so I can see that b
On 15 July 2018 at 11:12, wrote:
> @James is on my todo list so maybe we can exchange notes, (I plan on using
> it in RSVP-TE environment so the added complexity will be only marginal).
> Yes I've been waiting for this feature for quite some time in cisco (got
> promises that ma
both advertise the same loopback IP (context ID)
inside the IGP but with PE3 less preferred, so that PE3 can "collect"
the VPN traffic destined for PE2 when PE2 is down, I'd like that
alternate path to PE3 to be pre-installed in my P node FIBs/HW a-la
PIC/FRR.
Cheers,
James.
at the draft it seems Juniper probably implemented it
specially for France Telecom. I can't find any other vendor (searching
on Google) that is supporting it (or a variation of it).
Cheers,
James.
LDP session is detected as dead - but it does work if you have two
ingress PEs and two egress PEs and set up a crisscross topology of
pseudowires/backup-pseudowires.
Cheers,
James.
ntioned in the NANOG thread that you wanted to remove BGP from
your core - are you using 6PE or BGP IPv6-LU on every hop in the path?
I know you are a happy user of BGP-SD so I guess it's Internet in the
GRT for you?
Cheers,
James.
On 5 July 2018 09:56:40 BST, adamv0...@netconsultings.com wrote:
>> Of James Bensley
>> Sent: Thursday, July 05, 2018 9:15 AM
>>
>> - 100% rFLA coverage: TI-LA covers the "black spots" we currently
>have.
>>
>Yeah that's an interesting use c
MP, BGP-LS, BGP-MDT etc, it makes sense to me
to keep capitalising on that single signaling protocol for all services.
Cheers,
James.
P.s. sorry, on a plane so I've got time to kill.
On 4 July 2018 at 18:13, Mark Tinka wrote:
>
>
> On 4/Jul/18 18:28, James Bensley wrote:
>
> Also
>
> Clarence Filsfils from Cisco lists some of their customers who are
> happy to be publicly named as running SR:
>
> https://www.youtube.com/watch?v=NJxtvNs