On Mon, 12 Feb 2024 at 09:44, james list wrote:
> I'd like to test with LACP slow, then can see if physical interface still
> flaps...
I don't think that's a good idea; what would we learn? Would we have
to wait 30 times longer, so one to three months, to hit whatever it is,
before we have
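For context on the "30 times longer" figure, here is a minimal sketch of the standard 802.1AX timer arithmetic (fast rate: 1 s PDUs, 3 s timeout; slow rate: 30 s PDUs, 90 s timeout):

```python
# Standard 802.1AX LACP timers: a partner expires after 3 missed PDUs.
FAST_INTERVAL_S = 1    # fast periodic rate sends one PDU per second
SLOW_INTERVAL_S = 30   # slow periodic rate sends one PDU every 30 seconds
MISSED_PDUS = 3

fast_timeout = FAST_INTERVAL_S * MISSED_PDUS   # 3 s
slow_timeout = SLOW_INTERVAL_S * MISSED_PDUS   # 90 s

# Slow rate tolerates a 30x longer PDU gap before declaring the partner
# expired, so a fault that takes days to hit at fast rate could take
# roughly 30x longer to reproduce at slow rate.
print(fast_timeout, slow_timeout, slow_timeout // fast_timeout)
```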
hi
I'd like to test with LACP slow, then can see if physical interface still
flaps...
Thanks for your support
On Sun, 11 Feb 2024 at 18:02, Saku Ytti
wrote:
> On Sun, 11 Feb 2024 at 17:52, james list wrote:
>
> > - why does the physical interface flap in DC1 if the issue is related to LACP?
On Sun, 11 Feb 2024 at 17:52, james list wrote:
> - why does the physical interface flap in DC1 if the issue is related to LACP?
16:39:35.813 Juniper reports LACP timeout (so the problem started by
16:39:32; was traffic still passing at 32, 33, 34 seconds?)
16:39:36.xxx Cisco reports interface down, long after
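Assuming fast LACP (3 s timeout), the 16:39:35.813 timeout implies the last good PDU arrived no later than about 16:39:32.813; a minimal sketch of that back-calculation, using the timestamps from the log above:

```python
from datetime import datetime, timedelta

# Juniper logged LACPD_TIMEOUT at this instant (from the log above).
timeout_at = datetime(2024, 2, 9, 16, 39, 35, 813000)

# With fast LACP the partner expires after 3 missed 1 s PDUs.
fast_timeout = timedelta(seconds=3)

# So the last good PDU must have arrived before roughly this time,
# which is why the problem is assumed to start around 16:39:32.
last_pdu_before = timeout_at - fast_timeout  # 2024-02-09 16:39:32.813
print(last_pdu_before)
```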
Hi
I have a couple of questions related to your idea:
- why does the physical interface flap in DC1 if the issue is related to LACP?
- why does the same setup in DC2 not report any issues?
NEXUS01# sh logging | in Initia | last 15
2024 Jan 17 22:37:49 NEXUS01 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface
On Sun, 11 Feb 2024 at 15:24, james list wrote:
> While on Juniper when the issue happens I always see:
>
> show log messages | last 440 | match LACPD_TIMEOUT
> Jan 25 21:32:27.948 2024 MX1 lacpd[31632]: LACPD_TIMEOUT: et-0/1/5: lacp
> current while timer expired current Receive State: CURRENT
On Cisco I see the physical interface go down (initializing); what does that mean?
While on Juniper, when the issue happens, I always see:
show log messages | last 440 | match LACPD_TIMEOUT
Jan 25 21:32:27.948 2024 MX1 lacpd[31632]: LACPD_TIMEOUT: et-0/1/5: lacp
current while timer expired current Receive
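If you do want to try the slow-rate test, here is a minimal sketch of the change on both ends. The member interface names are taken from the thread; the Juniper bundle name ae101 is an assumption, and on NX-OS the slow (normal, 30 s) rate is the default, so the change only applies if fast rate is configured today:

```
# Junos (assumed bundle name ae101); Junos defaults to periodic fast
set interfaces ae101 aggregated-ether-options lacp periodic slow

# NX-OS member port: 'lacp rate fast' is the non-default,
# removing it returns the member to the slow/normal rate
interface Ethernet1/44
  no lacp rate fast
```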
Hey James,
You shared this off-list, I think it's sufficiently material to share.
2024 Feb 9 16:39:36 NEXUS1
%ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface
port-channel101 is down (No operational members)
2024 Feb 9 16:39:36 NEXUS1 %ETH_PORT_CHANNEL-5-PORT_DOWN:
port-channel101:
Hi
1) the cable has been replaced with a brand new one; they said that checking an
MPO 100 Gbps cable is not that easy
2) no errors reported on either side
3) here is the output from Cisco and Juniper:
NEXUS1# sh interface eth1/44 transceiver details
Ethernet1/44
transceiver is present
type is
Hi
there are no errors on either interface (Cisco or Juniper).
Here follow the logs of one event from both sides, plus config and LACP stats.
Logs of one event at 16:39:
CISCO
2024 Feb 9 16:39:36 NEXUS1 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN:
Interface port-channel101 is down (No operational
I want to clarify, I meant this in the context of the original question.
That is, if you have a BGP-specific problem and no FCS errors, then
you can't have link problems.
But in this case, the problem is not BGP-specific; in fact it has
nothing to do with BGP, since the problem begins on
I don't think any of these matter. You'd see FCS failure on any
link-related issue causing the BGP packet to drop.
If you're not seeing FCS failures, you can ignore all link related
problems in this case.
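For reference, a quick way to check the error counters on both sides (interface names taken from the thread; the exact counter labels vary by platform and release, so a broad match on "error" is used):

```
Juniper:
show interfaces et-0/1/5 extensive | match "error"

Cisco NX-OS:
show interface ethernet1/44 | egrep "CRC|error"
```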
On Sun, 11 Feb 2024 at 14:13, Havard Eidnes via juniper-nsp
wrote:
>
> > DC technicians
> DC technicians state the cables are the same in both DCs and
> direct, no patch panel
Things I would look at:
* Have all the connectors been verified clean via microscope?
* Optical levels relative to threshold values (may relate to the
first).
* Any end seeing any input errors? (May
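For the optical-level check in the list above, both platforms can report Rx/Tx power alongside the transceiver's alarm and warning thresholds (interface names taken from the thread):

```
Juniper:
show interfaces diagnostics optics et-0/1/5

Cisco NX-OS:
show interface ethernet1/44 transceiver details
```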
On Sun, 11 Feb 2024 at 13:51, james list via juniper-nsp
wrote:
> One thing I've omitted to say is that BGP runs over a LACP bundle with
> currently just one 100 Gbps interface.
>
> I see that the issue is triggered on Cisco when the eth interface seems to
> go into the Initializing state:
Ok, so we can forget BGP
DC technicians state the cables are the same in both DCs and direct, no patch
panel
Cheers
On Sun, 11 Feb 2024 at 11:20, nivalMcNd d
wrote:
> Could it be that DC1 connects its links over an intermediary patch panel and
> you are facing fibre disturbance? That may be eliminated if your
Yes, same version.
Currently no traffic exchange is in place, just the BGP peering setup;
no traffic.
On Sun, 11 Feb 2024 at 11:16, Igor Sukhomlinov <
dvalinsw...@gmail.com> wrote:
> Hi James,
>
> Do you happen to run the same software on all nexuses and all MXes?
> Do the DC1 and DC2 bgp
Hi
One thing I've omitted to say is that BGP runs over a LACP bundle with
currently just one 100 Gbps interface.
I see that the issue is triggered on Cisco when the eth interface seems to go
into the Initializing state:
2024 Feb 9 16:39:36 NEXUS1 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN:
Interface