Hi Chris,

The traffic is some kind of state replication mechanism between two geographically diverse appliances. My guess is that the appliances are sending layer 3 unicast packets inside layer 2 broadcast frames over the HA VLAN.
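
Just to illustrate what I think they're putting on the wire, here's a rough Scapy sketch - the address, port and interface name are made up for illustration, not taken from the appliances:

from scapy.all import Ether, IP, UDP, sendp

# An IPv4 unicast destination carried in an Ethernet *broadcast* frame,
# which is what I believe the appliances emit on the HA VLAN.
# 192.0.2.10, port 5000 and eth0 are placeholders only.
frame = (Ether(dst="ff:ff:ff:ff:ff:ff") /
         IP(dst="192.0.2.10") /
         UDP(dport=5000) /
         b"replicated-state payload")
sendp(frame, iface="eth0")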

Someone asked about the config - it can't get much simpler. Also, remember it's working fine for IPv6.

Ingress port:

interface GigabitEthernet2/13
 switchport trunk allowed vlan 327
 switchport mode trunk
 switchport nonegotiate
 mtu 9198
 load-interval 30
 flowcontrol receive off
 flowcontrol send off
 no cdp enable
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable

Egress port (same device for testing):

interface TenGigabitEthernet2/7
 switchport access vlan 327
 switchport trunk allowed vlan none
 switchport mode access
 switchport nonegotiate
 mtu 9198
 load-interval 30
 flowcontrol receive off
 flowcontrol send off
 no cdp enable

Also, the counters someone suggested looking at - note that essentially all the ingress traffic is counted as broadcast (InBcastPkts) with zero InUcastPkts:

AKNNR-ISP-SW1#show int counters detail | in 2/13|Port
Port                InBytes    InUcastPkts    InMcastPkts    InBcastPkts
Gi2/13         222183306824              0              0     2114072064
Port               OutBytes   OutUcastPkts   OutMcastPkts   OutBcastPkts
Gi2/13            682063116              0          61300        5592900
Port          InPkts 64        OutPkts 64     InPkts 65-127   OutPkts 65-127
Gi2/13                0                 1        2106943835          5103190
Port          InPkts 128-255   OutPkts 128-255  InPkts 256-511  OutPkts 256-511
Gi2/13              7128226            551009               0                0
Port             InPkts 512-1023  OutPkts 512-1023
Gi2/13                         0                 0
Port          InPkts 1024-1518  OutPkts 1024-1518  InPkts 1519-1548  OutPkts 1519-1548
Gi2/13                       0                  0                 0                  0
Port            InPkts 1549-9216 OutPkts 1549-9216
Gi2/13                         0                 0
Port          Tx-Bytes-Queue-1  Tx-Bytes-Queue-2  Tx-Bytes-Queue-3  Tx-Bytes-Queue-4
Gi2/13                 4413448                 0                 0                 0
Port          Tx-Bytes-Queue-5  Tx-Bytes-Queue-6  Tx-Bytes-Queue-7  Tx-Bytes-Queue-8
Gi2/13                       0                 0                 0         677643104
Port          Tx-Drops-Queue-1  Tx-Drops-Queue-2  Tx-Drops-Queue-3  Tx-Drops-Queue-4
Gi2/13                       0                 0                 0                 0
Port          Tx-Drops-Queue-5  Tx-Drops-Queue-6  Tx-Drops-Queue-7  Tx-Drops-Queue-8
Gi2/13                       0                 0                 0                 0
Port          Dbl-Drops-Queue-1  Dbl-Drops-Queue-2  Dbl-Drops-Queue-3  Dbl-Drops-Queue-4
Gi2/13                        0                  0                  0                  0
Port          Dbl-Drops-Queue-5  Dbl-Drops-Queue-6  Dbl-Drops-Queue-7  Dbl-Drops-Queue-8
Gi2/13                        0                  0                  0                  0
Port            Rx-No-Pkt-Buff   RxPauseFrames   TxPauseFrames   PauseFramesDrop
Gi2/13                       0               0               0                 0
Port            UnsupOpcodePause
Gi2/13                         0
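
For anyone who wants to keep an eye on this, a rough Python sketch that pulls the broadcast share out of that output - the file name is just an example, and the column layout is assumed to match the output above:

import re

def ingress_bcast_ratio(output):
    """Return per-port broadcast share of ingress packets from
    'show interfaces counters detail' text (column names as above)."""
    ratios = {}
    lines = output.splitlines()
    for i, line in enumerate(lines):
        if re.match(r"Port\s+InBytes\s+InUcastPkts\s+InMcastPkts\s+InBcastPkts", line):
            port, _inbytes, ucast, mcast, bcast = lines[i + 1].split()
            total = int(ucast) + int(mcast) + int(bcast)
            if total:
                ratios[port] = int(bcast) / total
    return ratios

# ingress_bcast_ratio(open("counters.txt").read())
# -> {'Gi2/13': 1.0}   i.e. every ingress packet counted as broadcast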

I've logged a support case, so hopefully I can report back with more soon.
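
On your multicast-in-unicast idea: if I've read it right, replaying the cached frames with only the Ethernet destination rewritten to the restarted box's unicast MAC - and the signed payload left completely alone - should be enough. A rough Scapy sketch, with the pcap name, MAC and interface made up:

from scapy.all import rdpcap, Ether, sendp

def replay_to(target_mac, pcap_file="cached_state.pcap", iface="eth0"):
    """Resend cached multicast frames as Ethernet unicast to one host,
    leaving everything from the IP header up (and the signature) untouched."""
    for frame in rdpcap(pcap_file):
        clone = frame.copy()
        clone[Ether].dst = target_mac   # unicast delivery so no other box sees it
        sendp(clone, iface=iface, verbose=False)

# replay_to("00:11:22:33:44:55")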

Thanks

Ivan

On 1/Jul/2014 1:20 a.m., Chris Marget wrote:
Hi Ivan,

Your L2 broadcast / L3 unicast traffic has piqued my curiosity.

Can you share some details about the use case for this unusual traffic?

I have a project in mind where I'll be doing exactly the opposite: IPv4
multicast in Ethernet unicast.

My use case is a multicast application with an un-graceful startup. If
the application restarts mid-day, there's a long delay while it collects
state information from incoming multicast packets. There is no mechanism
for priming this application - the only option right now is to wait
while the infrequent state messages re-build the state database. I plan
to cache incoming state data in an L2 adjacent server, and blast this
traffic at any instances which have recently restarted. I can't mess
with the traffic at all because it's cryptographically signed by the
sender, and I have to do it with unicast frames because the anti-replay
mechanisms mean it's trouble if I deliver these packets to the wrong box.

Thanks!

/chris
_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
