Joel,
Just to clarify, my cat6k is a 6500 series with a Sup720.
You've got me beat on the traffic stats. We are averaging 150 Kpps daily
and we see spikes at 250 Kpps, nowhere near your numbers. Don't be
fooled: Cisco makes extremely solid hardware and software, but they are
not the fastest switch/router manufacturer out there. Often their
chassis and modules are NOT non-blocking. What modules do you have on
your box? I would guess that at your traffic rates you're dropping
packets somewhere. I bring this up because I've always wondered whether
NetFlow data includes dropped packets or not (I think it does).
From my experience, I had to enable "ip flow ingress" on all my VLAN or
L3 interfaces to receive NetFlow data for that VLAN. I'm also running
the ipservicesk9 version of IOS, if that makes a difference.
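For reference, a minimal sketch of what that looks like on my box (the
VLAN numbers and collector address below are just placeholders, not my
real config):

  ip flow-export version 9
  ip flow-export destination 192.0.2.10 9996
  !
  interface Vlan10
   ip flow ingress
  !
  interface Vlan20
   ip flow ingress

On my box I also have "mls nde sender" configured so the
hardware-switched flows from the PFC get exported, not just the
software flows from the MSFC.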
Perhaps the article's point about per-interface NetFlow on the PFC is
really about the physical interfaces, i.e. while NDE is enabled
globally, flows will not be forwarded to the collector unless the
traffic hits an L3 interface configured with "ip flow ingress". Thanks
for the articles, I'll read up on them.
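If it helps, these are the commands I use to sanity-check the setup
(exact output varies with the IOS release):

  show ip flow export     <- export destination, version and packet counts
  show mls nde            <- whether hardware NDE is enabled and where it exports
  show ip cache flow      <- the software flow cache on the MSFC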
Timeout settings for active flows are in minutes and for inactive flows
in seconds. Maybe this is your issue?
ip flow-cache timeout inactive 10 (10 seconds of inactivity)
ip flow-cache timeout active 1 (active for 1 minute)
Also, from experience I found that L2 data collection doubled (or more)
the amount of traffic collected. If you haven't tried it yet, you should
disable the L2 data collection, at least for testing:
(no ip flow export layer2-switched vlan 1-4094)
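A quick way to see whether that actually relieves the pressure on the
NetFlow table is to watch the entry count before and after (I believe
the exact command set varies a bit by supervisor and IOS version):

  show mls netflow ip count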
Some time ago I opened a ticket with Cisco concerning the ifindexes
reported in the flows from my cat6k. After a week or two they got back
to me saying that there was no way for the reported flows to contain the
ifindexes of L2 interfaces (switchports). Only ifindexes of L3
interfaces would be reported, and only those with "ip flow ingress"
configured. This is something I would like to have; if anyone has found
a way to accomplish it, let me know.
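For what it's worth, the way I match the exported ifindexes back to
interface names is roughly this (the first command should exist on most
12.2 images, but I haven't checked every release; the community string
and hostname are placeholders):

  show snmp mib ifmib ifindex

or from the collector side:

  snmpwalk -v2c -c public myswitch IF-MIB::ifDescr

Every index that shows up in my flows maps to a Vlan interface, never to
a physical switchport, which matches what TAC told me.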
Shane
On Wed, 2007-05-12 at 12:41 -0800, Joel Krauska wrote:
> Shane:
>
> What sort of traffic loads are you pushing through the cat?
>
> I find that my tables fill up when I reach about 5 Gigs of aggregate traffic.
> (all ports in+out)
>
> For a system that's supposed to support 400 Mpps L3 forwarding (1.5Mpps is
> line rate Gig at 64bytes),
> I find it broken that my tables fill up at 5Gigs (much less than 7Mpps) of
> aggregate throughput.
> (mostly since the NF is not per-interface.. the exports maybe, but the tcam
> is not... yet)
>
> I believe your below setting of timing out active flows after 1 second will
> result in lots of extra flow data.
> Instead of one long lived http 1.1 session resulting in one flow record,
> you'll end up with lots of small burst records.
>
> Here's a good discussion on cisco-nsp about netflow on the cat6k:
> http://www.gossamer-threads.com/lists/cisco/nsp/47459
> (it seems that it comes down to cpu load when doing fast timeouts)
>
> The guy from Cisco in that thread says that per interface NF is in the
> roadmap..
> I think it's in the 12.2.33-SXH releases. Not the 12.2.18-SXF?
>
>
> This thread has some interesting mls table info to look at:
> http://www.gossamer-threads.com/lists/cisco/nsp/36854
>
> Cheers,
>
> Joel
>
>
> Shane Gaumond wrote:
> > I run the following on my cat 6K
> > I have no problems..
> >
> > IOS Version 12.2(18)
> > ip flow-cache timeout inactive 10
> > ip flow-cache timeout active 1
> > no ip flow export layer2-switched vlan 1-4094
> > mls netflow usage notify 80 120
> > mls flow ip interface-full
> > no mls flow ipv6
> > ip flow-export version 9
> >
> >
> > The timeout values keep my graphs less spiky ("comb like").
> >
> > I think the layer 2 flows are filling up your buffers and causing your
> > error. Whenever I enable the layer 2 flows I get a whole lot of data,
> > most of it useless ARP replies, OSPF, HSRP, etc. I think if you
> > disable the layer 2 reporting you should be fine. You can still enable
> > layer 2 for a few VLANs if you really need that info, but layer 2 data
> > for the whole chassis will overrun the buffers. I also run version 9,
> > which is supposed to be more efficient.
> >
> > I disagree with the netflow being automatic or broken on the Cat6k. I
> > had to enable "ip flow ingress" on all my vlan interfaces to have the
> > chassis report flows for the interfaces. Netflow reporting worked
> > exactly like Cisco said it would.
> >
> > My only complaint about NetFlow on the Cat6K is that it will only
> > report ifindexes of interfaces with IP addresses. This rules out switchports.
> > The ifindex reported in flows will always be the ifindex of the vlan
> > interface and never the physical interface. Unless the physical
> > interface has an IP and "IP flow ingress" configured which is not
> > standard.
> >
> >
> > Shane
> >
> >
>
_______________________________________________
Nfsen-discuss mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nfsen-discuss