I am facing the following issue: a VPLS pseudowire (also tested with EoMPLS) shows an up state but does not forward frames for up to 30 seconds after a simulated link failure.
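Just to make the symptom concrete, the outage window is measured roughly like this (only a sketch; the hostname and interface come from the outputs further down, and the continuous ping simply runs between PC1 and PC2):

! PC1 keeps a continuous ping running towards PC2 across the pseudowire,
! while the primary core link is shut down on the 7604:
flamengo#configure terminal
flamengo(config)#interface GigabitEthernet4/0/1
flamengo(config-if)#shutdown
flamengo(config-if)#end
!
! The VC re-signals over the remaining link almost immediately...
flamengo#show mpls l2transport vc 100 detail
!
! ...yet the pings between PC1 and PC2 keep failing for up to ~30 seconds.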
Several setups were tested: plain VPLS over the IGP, EoMPLS over MPLS Traffic Engineering, and EoMPLS over TE protected by FRR. Recovery of the VPLS/AToM VC itself is very fast in all cases, but effectively no frames go through the pseudowire. I am wondering whether this is a SUP/module hardware issue, or whether you have seen this on other platforms. The software tested is 12.2(33)SRB2.

Here are some details of the configuration and of the monitoring during the failure simulation:

PC1-----7604(sup720)--------7604(sup32)-----PC2
              |__________________|

Initially the pseudowire takes interface Gi4/0/1. When a failure of this link is forced, the VC immediately moves to interface Gi4/0/0. The VC status is up, but no frames cross the pseudowire from PC1 to PC2. The time it takes for traffic to flow through the pseudowire again is very long, up to 30 seconds, which reminds me of a Spanning Tree issue.

sh mpls l2transport vc 100 det

Local interface: VFI vlan100 VFI up
  MPLS VC type is VFI, interworking type is Ethernet
  Destination address: 200.222.117.41, VC ID: 100, VC status: up
    Output interface: Gi4/0/1, imposed label stack {16}
    Preferred path: not configured
    Default path: active
    Next hop: 200.164.97.33
  Create time: 16:47:30, last status change time: 00:58:41
  Signaling protocol: LDP, peer 200.222.117.41:0 up
    Targeted Hello: 200.222.117.42(LDP Id) -> 200.222.117.41
    MPLS VC labels: local 16, remote 16
    Group ID: local 0, remote 0
    MTU: local 1500, remote 1500
    Remote interface description:
  Sequencing: receive disabled, send disabled
  VC statistics:
    packet totals: receive 8869, send 422530
    byte totals:   receive 839752, send 29011888
    packet drops:  receive 0, send 0

int gigabitEthernet 4/0/1
flamengo(config-if)#shut

sh mpls l2 vc 100 det

Local interface: VFI vlan100 VFI up
  MPLS VC type is VFI, interworking type is Ethernet
  Destination address: 200.222.117.41, VC ID: 100, VC status: up
    Output interface: Gi4/0/0, imposed label stack {16}
    Preferred path: not configured
    Default path: active
    Next hop: 200.164.178.233
  Create time: 16:50:09, last status change time: 01:01:20
  Signaling protocol: LDP, peer 200.222.117.41:0 up
    Targeted Hello: 200.222.117.42(LDP Id) -> 200.222.117.41
    MPLS VC labels: local 16, remote 16
    Group ID: local 0, remote 0
    MTU: local 1500, remote 1500
    Remote interface description:
  Sequencing: receive disabled, send disabled
  VC statistics:
    packet totals: receive 8902, send 423880
    byte totals:   receive 842842, send 29104224
    packet drops:  receive 0, send 0

The following is the basic config used for the VPLS test:

l2 vfi vlan100 manual
 vpn id 100
 neighbor 200.222.117.41 encapsulation mpls
!
interface Vlan100
 ip address 100.100.100.1 255.255.255.0
 xconnect vfi vlan100

And here is the basic config used for the AToM test with MPLS TE and FRR. The result was the same: up to 30 seconds of no traffic from PC1 to PC2, even though Tunnel1 came back up in about 600 ms because Gi4/0/1 is protected by Tunnel2 through FRR.
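For reference, Tunnel1 and Tunnel2 are built roughly along these lines. This is only a sketch reconstructed from the description: the tunnel destination (assumed to be the xconnect peer), the path options and the explicit path are assumptions; the only detail taken from the test itself is that Tunnel2 acts as the FRR backup for Gi4/0/1.

mpls traffic-eng tunnels
!
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 200.222.117.42
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng path-option 10 dynamic
 ! request protection so FRR can reroute this tunnel when Gi4/0/1 fails
 tunnel mpls traffic-eng fast-reroute
!
interface Tunnel2
 ip unnumbered Loopback0
 tunnel destination 200.222.117.42
 tunnel mode mpls traffic-eng
 ! the backup path must avoid the protected link (here via the Gi4/0/0 side)
 tunnel mpls traffic-eng path-option 10 explicit name AVOID-PRIMARY
!
ip explicit-path name AVOID-PRIMARY enable
 next-address 200.164.178.233
!
! NHOP link protection: Tunnel2 protects traffic that normally leaves Gi4/0/1
interface GigabitEthernet4/0/1
 mpls traffic-eng backup-path Tunnel2

The xconnect and pseudowire-class side of that test is below.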
interface Vlan600
 ip address 160.4.4.2 255.255.255.0
 xconnect 200.222.117.42 600 encapsulation mpls pw-class usetunnel1
!
interface Vlan601
 ip address 161.4.4.2 255.255.255.0
 xconnect 200.222.117.42 601 encapsulation mpls pw-class usetunnel2
!
pseudowire-class usetunnel1
 encapsulation mpls
 preferred-path interface Tunnel1 disable-fallback
!
pseudowire-class usetunnel2
 encapsulation mpls
 preferred-path interface Tunnel2 disable-fallback

sh ip route 20.20.20.0

Routing entry for 20.20.20.0/24
  Known via "ospf 2", distance 110, metric 2, type intra area
  Last update from 160.4.4.2 on Vlan600, 00:07:24 ago
  Routing Descriptor Blocks:
  * 161.4.4.2, from 200.164.178.233, 00:07:24 ago, via Vlan601
      Route metric is 2, traffic share count is 1
    160.4.4.2, from 200.164.178.233, 00:07:24 ago, via Vlan600
      Route metric is 2, traffic share count is 1

From the OSPF point of view there is no issue: it keeps pointing traffic at the extended Vlan 601, since the Vlan 601 VC status is up. But effectively the traffic seems to go into a black hole.

Tks,
Alaerte

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/