Hi Steven,
please pay attention to logical vs. physical cores (hyper-threading): you should
avoid binding multiple threads to the same physical core.
Can I see the output of cat /proc/cpuinfo | grep "processor\|model name\|physical id" ?
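
If it helps, here is a minimal sketch (assuming the standard Linux sysfs
topology files) that prints, for each logical CPU, its physical package,
core id and hyper-thread siblings, so you can pick cores that do not share
a physical core:

#!/bin/bash
# List each logical CPU with its package, core id and hyper-thread siblings.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf "%s: package %s core %s siblings %s\n" \
        "$(basename "$cpu")" \
        "$(cat "$cpu/topology/physical_package_id")" \
        "$(cat "$cpu/topology/core_id")" \
        "$(cat "$cpu/topology/thread_siblings_list")"
done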

Best Regards
Alfredo

On Jun 28, 2013, at 10:50 PM, Steven Zarea <[email protected]> wrote:

> Hi,
> 
> Why does the DNA cluster perform poorly in a multithreaded environment (multi-slave ring)?
> 
> For the last few weeks I have been testing my application, which uses PF_RING DNA,
> the DNA cluster, and Libzero. During my testing I observed that the application
> performs poorly in multithreaded mode (more than one slave ring). It also
> performed very inconsistently from run to run.
> 
> Finally I decided to try the sample application provided with PF_RING
> (pfdnacluster_multithread) from the PF_RING/userland/example folder. The sample
> application behaved in the same manner as my application.
> 
> Background information:
> 
> Loading PF-ring & ixgbe kernel modules:
> ==============================
> 
> #!/bin/bash
> # Configure here the network interfaces to activate
> IF[0]=dna0
> IF[1]=dna1
> IF[2]=dna2
> IF[3]=dna3
> 
> #service udev start
> # Remove old modules (if loaded)
> rmmod ixgbe
> rmmod pf_ring
> insmod ../../../../kernel/pf_ring.ko transparent_mode=2 enable_ip_defrag=1 \
>     enable_tx_capture=0 min_num_slots=65536
> insmod ./ixgbe.ko RSS=1,1,1,1 allow_unsupported_sfp=1,1 num_rx_slots=32768 \
>     numa_cpu_affinity=0,0,0,0
> 
> sleep 1
> killall irqbalance 
> 
> for index in 0 1 2 3
> do
>     if [ -z "${IF[index]}" ]; then
>         continue
>     fi
>     printf "Configuring %s\n" "${IF[index]}"
>     ifconfig "${IF[index]}" up
>     sleep 1
> done
> 
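> To double check that irqbalance is really gone and where the card's interrupts
> are currently allowed to run, something like the following can be used (this
> assumes the ixgbe vectors show up under the dna0/dna1 names in
> /proc/interrupts):
> 
> pgrep irqbalance || echo "irqbalance is not running"
> for irq in $(grep -E 'dna[01]' /proc/interrupts | awk -F: '{print $1}'); do
>     printf "IRQ %s -> affinity %s\n" "$irq" "$(cat /proc/irq/$irq/smp_affinity_list)"
> done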
> 
> Network Card:
> ============
> DNA0 - 04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFP+ Network Connection (rev 01)
> DNA1 - 04:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFP+ Network Connection (rev 01)
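> 
> Since numa_cpu_affinity is set to 0 for all ports above, a quick sanity check
> (assuming the standard sysfs layout for the PCI addresses listed here) that
> both ports really sit on NUMA node 0 is:
> 
> for dev in 0000:04:00.0 0000:04:00.1; do
>     printf "%s -> NUMA node %s\n" "$dev" "$(cat /sys/bus/pci/devices/$dev/numa_node)"
> done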
> 
> Hardware:
> ========
> DELL R420 with 16 cores and 128 GB of memory
> 
> 
> Test Case 1 (six slave rings)
> ==========
> ./pfdnacluster_multithread -i dna0,dna1 -c 1 -n 6 -r 0 -t 2 -g 4:6:8:10:12:14 -x 5 -z dna1 -a 1 -u /mnt/hugepages -p
> 
> I'm using 6 threads (cores 4, 6, 8, 10, 12, and 14) for this test. All the cores
> are on NUMA node zero. The application uses both interfaces, DNA0 and DNA1.
> The DNA cluster Tx and Rx cores' affinity is set to core zero and core two,
> respectively.
> Cores 4, 6, 8, 10, 12, and 14 are used by the slave rings.
> 
> The input rate to the test application (pfdnacluster_multithread) is 10 Gbps
> (14.8 million pps).
> Initially I see 50-60% drops, but they gradually decrease to 15-20%.
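> 
> For reference, this sketch (assuming the usual sysfs topology files) shows how
> the cores passed to -g map to physical cores, i.e. whether any two of them are
> hyper-thread siblings of the same physical core:
> 
> for c in 4 6 8 10 12 14; do
>     printf "cpu%s: package %s core %s siblings %s\n" "$c" \
>         "$(cat /sys/devices/system/cpu/cpu$c/topology/physical_package_id)" \
>         "$(cat /sys/devices/system/cpu/cpu$c/topology/core_id)" \
>         "$(cat /sys/devices/system/cpu/cpu$c/topology/thread_siblings_list)"
> done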
> 
> Test Case 2 (one slave ring)
> ==========
> ./pfdnacluster_multithread -i dna0,dna1 -c 1 -n 1 -r 0 -t 2 -g 4 -x 5 -z dna1 -a 1 -u /mnt/hugepages -p
> 
> The input rate to the test application (pfdnacluster_multithread) is again 10 Gbps
> (14.8 million pps).
> No drops.
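> 
> For completeness, the pf_ring side can also be inspected while a test runs
> (sketch; this assumes the kernel module exposes its usual entries under
> /proc/net/pf_ring, with one file per open socket carrying that ring's packet
> and drop counters):
> 
> cat /proc/net/pf_ring/info
> ls /proc/net/pf_ring/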
> 
> 
> Conclusion:
> =========
> Why is this behavior observed? Is there a good explanation that I'm missing?
> Could this be a configuration issue of some sort?
> 
> Any help would be greatly appreciated.
>  
> Thank you,
> Steven

_______________________________________________
Ntop-dev mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-dev
