Problem solved with "no ip verify header vlan all". It seems Cisco 4500/6500 (including the 4900M) switches do some verification of layer 3 headers (probably only IPv4, as IPv6 had no issues), and this happens even for layer 2 switched traffic. It is almost certain that the "HA" traffic is a little custom - at the very least I found the IP header checksum was always zero (and it wasn't any fancy NIC offloading).
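For anyone wanting to reproduce the check offline: the header anomalies are easy to spot in software. Below is a minimal stdlib-Python sketch of the RFC 791 header checksum plus the kind of sanity checks the switch presumably applies - the exact ASIC checks are an assumption on my part:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 checksum: one's-complement sum over 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def header_looks_valid(header: bytes) -> bool:
    """Sketch of plausible header-verification rules (assumed, not
    Cisco-documented): version, IHL, Total Length, and checksum."""
    version = header[0] >> 4
    ihl = header[0] & 0x0F
    total_length = struct.unpack("!H", header[2:4])[0]
    return (version == 4 and ihl >= 5
            and total_length >= ihl * 4       # Total Length 0 fails here
            and ipv4_checksum(header) == 0)   # sum over valid header is 0
```

Feeding it a header with the checksum and Total Length zeroed out, like the HA probes apparently have, fails both of the last two checks.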

So, a few interesting commands:

show platform software drop-port (especially the output for InpL2AclDrop)

show platform hardware acl input entries static (especially the output for Ipv4HeaderException)

And it seems this issue has been seen before - Juniper has since fixed it on their side.

"In Junos version 10.0 and below, the fabric link probes are using Juniper proprietary IP datagrams, where IP Total Length field is set to 0" http://kb.juniper.net/InfoCenter/index?page=content&id=KB15141

http://juniper-frac.blogspot.co.nz/2009/09/deploy-srx-cluster-across-layer-2.html
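For reference, here is a rough stdlib-Python sketch of what such a probe might look like on the wire: an Ethernet broadcast on the HA VLAN carrying an IPv4 header with Total Length and checksum both zero, as the Juniper KB describes. All field values (MACs, addresses, protocol number) are illustrative assumptions, not captured from the real appliances:

```python
import struct

def ha_probe_frame(src_mac: bytes, vlan: int) -> bytes:
    """Illustrative HA probe: 802.1Q-tagged Ethernet broadcast carrying
    an IPv4 header whose Total Length and checksum are both 0."""
    # dst = broadcast, then src, 802.1Q TPID, TCI (priority 0), EtherType
    eth = b"\xff" * 6 + src_mac + struct.pack("!HHH", 0x8100, vlan, 0x0800)
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0,   # version/IHL, TOS
                     0,         # Total Length = 0, as in the Juniper KB
                     0, 0,      # ID, flags/fragment offset
                     64, 253,   # TTL, protocol (253 = experimental)
                     0,         # header checksum left at 0
                     b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
    return eth + ip
```

A frame like this is perfectly deliverable as a layer 2 broadcast, but trips any device that insists on a sane IPv4 header.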

Thanks to TAC. I have had some long cases but this one was sorted nice and quick!

Cheers

Ivan

On 1/Jul/2014 1:03 p.m., Chris Marget wrote:
Your case reminds me of something Tim Stevenson said about N7K and IPv4
multicast.

I don't remember the details exactly, but he left me with the impression
that the L2 filtering stuff for multicast frames, which usually doesn't
do *exactly* what you want (subscribe to 239.1.2.3 and you'll get L2
traffic for 239.129.2.3 as well, since both map to the same multicast
MAC) was "fixed" on N7K: it filters/forwards
at L2 using L3 criteria.
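The L2 imprecision described here comes from the RFC 1112 IPv4-to-MAC mapping: only the low 23 bits of the group address are copied into the 01:00:5e MAC prefix, so 32 group addresses (e.g. 224.1.2.3, 239.1.2.3, and 239.129.2.3) all share one MAC. A quick stdlib-Python sketch of the mapping:

```python
import ipaddress

def mcast_mac(group: str) -> str:
    """RFC 1112 mapping: 01:00:5e + low 23 bits of the IPv4 group."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)
```

Any switch that filters multicast purely on destination MAC therefore can't tell those 32 groups apart, which is what L3-aware filtering on the N7K addresses.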

Your problem is almost exactly the other way around. Sorry I don't have
any answers, thanks for filling me in on the application. It makes sense
that these crazy frames are generated by a "magic box" HA setup.

Good luck, and please follow up with the list if TAC gives you anything
helpful.

/chris


On Mon, Jun 30, 2014 at 4:21 PM, Ivan <cisco-...@itpro.co.nz> wrote:

    Hi Chris,

    The traffic is some kind of state replication mechanism between two
    geographically diverse appliances.  My guess is that the appliances
    are sending layer 3 headers inside layer 2 broadcasts over the HA vlan.

    Someone asked about the config - it can't get much simpler.  Also
    remember it is working fine for IPv6.

    Ingress port:

    interface GigabitEthernet2/13
      switchport trunk allowed vlan 327
      switchport mode trunk
      switchport nonegotiate
      mtu 9198
      load-interval 30
      flowcontrol receive off
      flowcontrol send off
      no cdp enable
      spanning-tree portfast trunk
      spanning-tree bpdufilter enable

    Egress port (same device for testing):

    interface TenGigabitEthernet2/7
      switchport access vlan 327
      switchport trunk allowed vlan none
      switchport mode access
      switchport nonegotiate
      mtu 9198
      load-interval 30
      flowcontrol receive off
      flowcontrol send off
      no cdp enable

    Also, the counters someone suggested looking at:

    AKNNR-ISP-SW1#show int counters detail | in 2/13|Port
    Port                     InBytes       InUcastPkts      InMcastPkts      InBcastPkts
    Gi2/13              222183306824                 0                0       2114072064
    Port                    OutBytes      OutUcastPkts     OutMcastPkts     OutBcastPkts
    Gi2/13                 682063116                 0            61300          5592900
    Port                   InPkts 64        OutPkts 64    InPkts 65-127   OutPkts 65-127
    Gi2/13                         0                 1       2106943835          5103190
    Port              InPkts 128-255   OutPkts 128-255   InPkts 256-511  OutPkts 256-511
    Gi2/13                   7128226            551009                0                0
    Port             InPkts 512-1023  OutPkts 512-1023
    Gi2/13                         0                 0
    Port            InPkts 1024-1518 OutPkts 1024-1518 InPkts 1519-1548 OutPkts 1519-1548
    Gi2/13                         0                 0                0                 0
    Port            InPkts 1549-9216 OutPkts 1549-9216
    Gi2/13                         0                 0
    Port            Tx-Bytes-Queue-1  Tx-Bytes-Queue-2 Tx-Bytes-Queue-3 Tx-Bytes-Queue-4
    Gi2/13                   4413448                 0                0                0
    Port            Tx-Bytes-Queue-5  Tx-Bytes-Queue-6 Tx-Bytes-Queue-7 Tx-Bytes-Queue-8
    Gi2/13                         0                 0                0        677643104
    Port            Tx-Drops-Queue-1  Tx-Drops-Queue-2 Tx-Drops-Queue-3 Tx-Drops-Queue-4
    Gi2/13                         0                 0                0                0
    Port            Tx-Drops-Queue-5  Tx-Drops-Queue-6 Tx-Drops-Queue-7 Tx-Drops-Queue-8
    Gi2/13                         0                 0                0                0
    Port            Dbl-Drops-Queue-1 Dbl-Drops-Queue-2 Dbl-Drops-Queue-3 Dbl-Drops-Queue-4
    Gi2/13                          0                 0                 0                 0
    Port            Dbl-Drops-Queue-5 Dbl-Drops-Queue-6 Dbl-Drops-Queue-7 Dbl-Drops-Queue-8
    Gi2/13                          0                 0                 0                 0
    Port              Rx-No-Pkt-Buff     RxPauseFrames    TxPauseFrames  PauseFramesDrop
    Gi2/13                         0                 0                0                0
    Port            UnsupOpcodePause
    Gi2/13                         0

    Have logged a support case, so hopefully I can report back soon.

    Thanks

    Ivan

    On 1/Jul/2014 1:20 a.m., Chris Marget wrote:

        Hi Ivan,

        Your L2 broadcast / L3 unicast traffic has piqued my curiosity.

        Can you share some details about the use case for this unusual
        traffic?

        I have a project in mind where I'll be doing exactly the
        opposite: IPv4
        multicast in Ethernet unicast.

        My use case is a multicast application with an un-graceful
        startup. If
        the application restarts mid-day, there's a long delay while it
        collects
        state information from incoming multicast packets. There is no
        mechanism
        for priming this application - the only option right now is to wait
        while the infrequent state messages re-build the state database.
        I plan
        to cache incoming state data in an L2 adjacent server, and blast
        this
        traffic at any instances which have recently restarted. I can't mess
        with the traffic at all because it's cryptographically signed by the
        sender, and I have to do it with unicast frames because the
        anti-replay
        mechanisms mean it's trouble if I deliver these packets to the
        wrong box.

        Thanks!

        /chris



_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
