Hi Brandon,

it could be that ntopng does not react to traffic spikes the same way n2disk does: it performs a different kind of processing, and that can lead to different packet-loss behaviour. If your hardware configuration is good enough, it is probably just a matter of tuning. How did you run the cluster? Please note that you can enlarge the queues to absorb spikes. As for the link speed: since you are using software queues, ntopng is not aware of the physical link speed.
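For reference, this is roughly how the cluster and the two consumers are usually wired up. This is only a sketch: the exact flags (in particular the queue-length option) should be double-checked against `zbalance_ipc -h` for your PF_RING version, and the interface name `eth1` is a placeholder.

```shell
# Fan eth1 out to a ZC cluster (id 1) with 2 egress queues.
# -m 1 hashes on IP so each flow stays on one queue; -g 0 binds the
# balancer to core 0; -q (if supported by your version) enlarges the
# per-queue length to better absorb traffic spikes.
zbalance_ipc -i zc:eth1 -c 1 -n 2 -m 1 -g 0 -q 32768

# Consumers then attach to the per-queue virtual interfaces:
n2disk -i zc:1@0 ...
ntopng -i zc:1@1
```

With larger queues a burst that ntopng cannot drain immediately sits in the queue instead of being dropped, at the cost of some extra memory.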
Alfredo

> Hi,
>
> I have ntopng reading from a PF_RING ZC cluster interface instance zc:1@1 and
> I’m seeing about 3.5% packet loss as reported in the ntop Interface ingress
> statistics. I also notice that ntop is reporting the interface as 1Gbps
> instead of 10Gbps. I have n2disk reading from the zc:1@0 instance, and it
> will occasionally show some dropped packets, but the absolute count is only
> at 0.24%.
>
> So I have two questions:
>
> 1. Is it ok to run ntopng against a cluster interface
> 2. Assuming it’s ok to run as it is, any ideas why the drop rate would
>    be different between the two instances?
>
> Thanks,
>
> Brandon
_______________________________________________
Ntop mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop
