Alfredo,

It occurs to me that we probably don't need symmetric RSS, because our existing architecture consumes packets from multiple devices and then queues them to multiple worker threads using our own hashing algorithm. We can therefore use asymmetric RSS, as long as we configure the application to read from all of the RX queues in the set.

There is clearly room for optimization here, but a useful starting point seems to be testing with just the ixgbe driver, configured as you showed:

insmod ./ixgbe.ko MQ=1,1,1,1 RSS=4,4,4,4
Thanks for your help.

Jim

On Mon, Sep 10, 2012 at 3:29 PM, Alfredo Cardigliano <[email protected]> wrote:

> Jim
> please see inline
>
> On Sep 11, 2012, at 12:02 AM, Jim Lloyd <[email protected]> wrote:
>
> Hi Alfredo. Our 10GigE interfaces are already configured with ixgbe drivers. Are there changes in the ixgbe driver sources that come with the PF_RING distribution that we must use, or can we use the installed drivers?
>
> If you don't want to use the DNA drivers, you can either:
> - use your standard drivers
> - or use the PF_RING-aware drivers you can find in PF_RING/drivers/PF_RING_aware (in this case you can increase the performance by loading pf_ring.ko with transparent_mode=2)
>
> I am inspecting the load_dna_driver.sh script for ixgbe, but I would appreciate it if you could provide more specifics. Assuming our 10GigE NIC is configured as device eth2, and I want to have 4 RX queues, what steps would I take?
>
> According to the latest ixgbe drivers, you should use:
> insmod ./ixgbe.ko MQ=1,1,1,1 RSS=4,4,4,4
>
> From other reading I have done, I think it is clear that I will need to ensure that the hashing used for RSS is flow based, i.e. both sides of a TCP conversation must be directed to the same queue.
>
> Symmetric RSS is supported with DNA drivers only at the moment.
>
> Alfredo
>
> Thanks.
>
> On Mon, Sep 10, 2012 at 1:52 PM, Alfredo Cardigliano <[email protected]> wrote:
>
>> Hi Jim
>> please see inline
>>
>> On Sep 10, 2012, at 10:30 PM, Jim Lloyd <[email protected]> wrote:
>>
>> > Greetings,
>> >
>> > We have an existing libpcap application which analyzes HTTP traffic. The application uses one sniffer thread per device, copies each packet, and then queues those packets for consumption by N worker threads. The architecture has served us well, but we are encountering use cases where one sniffer thread per device is the clear bottleneck.
>> >
>> > It seems clear to us that we would benefit from using PF_RING and may want to rewrite our application to use DNA+libzero. But before we start that effort, we'd like to take the baby step of recompiling our application with libpcap-1.1.1-ring and seeing what performance gains can be obtained.
>> >
>> > From the documentation, it seems that we should be able to configure 4 RX queues, and then configure our application to sniff from those 4 queues as if they were different ethernet devices. And if the 10GigE device is eth2, the device names we would pass to libpcap would be eth2@0 .. eth2@3. Is that correct?
>>
>> Correct. Actually it depends on the drivers you are using: standard, PF_RING-aware, or DNA (in the latter case the device name will be dnaX@Y).
>>
>> > Finally, I haven't seen exactly how one configures these 4 RX queues. Can someone please point me at the documentation for that configuration?
>>
>> This depends on the drivers and card model you have. A "modinfo <driver>" should be enough in most cases; if you are using DNA drivers, you can also have a look at load_dna_driver.sh inside the PF_RING/drivers/DNA/<model>-<version>-DNA/src folder.
>> Once the driver is loaded, you can use ethtool to check the card configuration (e.g. with "ethtool -S eth2" you can see per-queue stats).
>>
>> Best Regards
>> Alfredo
>>
>> >
>> > Thanks.
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
