> On 04 Sep 2015, at 22:06, Nick Allen <[email protected]> wrote:
> 
> Am I totally wrong on running multiple balancers?   Can I launch two separate 
> processes that each use the pfring_zc_run_balancer?

Yes, you can. Just use a different cluster id.

> We chose the x710 because we are using fiber.  The x540 seems to only support 
> copper.  Is there another card that you might recommend that supports OM4 
> fiber?

The X540 uses SFP+, so you can plug in either copper or fiber modules.

Alfredo

> 
> Thanks for all the great information.
> 
> On Fri, Sep 4, 2015 at 3:45 PM, Alfredo Cardigliano <[email protected]> wrote:
> Hi Nick
> first of all, please note that the X710 is not that fast due to limited buffering 
> capacity (fewer than 8K slots per RX ring); the 82599/X540 are much better in 
> terms of buffering, as they are able to handle up to 32K slots.
> 
> You said "I cannot run multiple balancer processes on a single host because 
> the ZC kernel module doesn't like to share (as is documented)". Actually, you 
> can run multiple balancer processes, as long as their packet processing is 
> isolated. What you cannot do is mix buffers from different clusters. If I 
> understood correctly, you are able to scale processing independently on each 
> link, so you should have no problem here.
> 
> The total throughput you can achieve on a single host really depends on the 
> hardware and the available bandwidth. I can tell you that at 12 Gbit line rate 
> you are close to the packet distribution limit per CPU core.
> 
> Alfredo
> 
>> On 04 Sep 2015, at 20:28, Nick Allen <[email protected]> wrote:
>> 
>> I have a Napatech NT40E3-4-PTP that I have used for all of my work so far.  
>> I am going to test the software on an Intel x710 dual port 10G card and see 
>> if that will suffice instead.
>> 
>> I need to handle about 40 Gbps in aggregate.  Based on the upstream hardware 
>> that I have, I can spread that traffic across anywhere from 4 to 32 incoming 
>> 10G links.
>> 
>> With our initial approach, I was trying to ingest all of that traffic on one 
>> host over the 4 ports on the Napatech card.  Based on the time I put in, I 
>> was able to consume 10-12 Gbps.  I would have liked to consume all 40 Gbps, 
>> but I could not get there.  So on that one host I can only handle 1 of the 
>> incoming links.
>> 
>> To spread the load on the balancer thread, I would have liked to run 
>> multiple independent processes on the same host, each consuming from a 
>> single incoming link.  I cannot run multiple balancer processes on a single 
>> host because the ZC kernel module doesn't like to share (as is documented).
>> 
>> I could probably spend much more time improving the performance of that 
>> single process, but it's going to be cheaper to just buy more hardware.  In 
>> addition, spreading the aggregate load across multiple hosts will also help 
>> my downstream processing and overall reliability.
>> 
>> What kinds of throughput do you think is feasible on a single host with N 
>> network interfaces?
>> 
>> On Fri, Sep 4, 2015 at 1:58 PM, Alfredo Cardigliano <[email protected]> wrote:
>> Hi Nick
>> what is the card model/chipset you are using? What is the traffic rate (pps) 
>> you need to handle? How many interfaces are you using in the zc balancer?
>> 
>> BTW, just out of curiosity: you said "I only need to achieve 10 Gbps on a 
>> single host and then I am going to scale horizontally", but then you said "it 
>> does not seem capable of handling additional worker threads to scale beyond 
>> 10-12 Gbps." Did I miss something?
>> 
>> Alfredo
>> 
>>> On 04 Sep 2015, at 19:47, Nick Allen <[email protected]> wrote:
>>> 
>>> Unless I am misunderstanding you, that is exactly what I am doing.  Correct 
>>> me if I am wrong.
>>> 
>>> I am using the 'pfring_zc_run_balancer' to load balance packets across 
>>> multiple worker threads.  It is the master (aka balancer) thread that is my 
>>> bottleneck.
>>> 
>>> On Fri, Sep 4, 2015 at 1:50 AM, Luca Deri <[email protected]> wrote:
>>> Nick,
>>> if you want to scale, you need to enable RSS (or use the zbalancer) and 
>>> have one thread per queue.
>>> 
>>> Luca
>>> 
>>>> On 03 Sep 2015, at 18:24, Nick Allen <[email protected]> wrote:
>>>> 
>>>> I have a similar need.  I need to ingest 40+ Gbps into a Hadoop grid.  
>>>> Kafka is acting as my landing zone/front door for the grid.
>>>> 
>>>> I tried many variations of using tcpdump, Flume, and other concoctions.  I 
>>>> ended up building a custom pcap ingest process in C.  The app uses PF_RING 
>>>> ZC to load balance packets across multiple threads.  I then push the 
>>>> packet data into Kafka using librdkafka.  Both the pull from PF_RING and 
>>>> the push to Kafka batch many packets at a time (trading latency for 
>>>> throughput).
>>>> 
>>>> With the minimal tuning that I have done, it can handle roughly 10-12 
>>>> Gbps.  I only need to achieve 10 Gbps on a single host and then I am going 
>>>> to scale horizontally to manage the aggregate pcap that I need to capture.
>>>> 
>>>> Right now, the bottleneck is the master thread in PF_RING that dispatches 
>>>> packets off to each worker thread.  That thread pegs a single CPU core (a 
>>>> rather beefy core, I might add).  It does not seem capable of handling 
>>>> additional worker threads to scale beyond 10-12 Gbps.
>>>> 
>>>> I wish I had access to the source to review and confirm, but that is how 
>>>> it appears with the information that I have.
>>>> 
>>>> On Thu, Sep 3, 2015 at 11:46 AM, Manny Veloso <[email protected]> wrote:
>>>> Also, when you say 1k flows per second, is that 1k devices reporting their 
>>>> flows every second? We'd need two to three orders of magnitude more 
>>>> performance.
>>>> --
>>>> Manny Veloso
>>>> Sr. Solutions Engineer
>>>> Smartrg.com
>>>> 
>>>> From: <[email protected] 
>>>> <mailto:[email protected]>> on behalf of Luca Deri 
>>>> <[email protected] <mailto:[email protected]>>
>>>> Reply-To: "[email protected] 
>>>> <mailto:[email protected]>" <[email protected] 
>>>> <mailto:[email protected]>>
>>>> Date: Tuesday, September 1, 2015 at 10:52 PM
>>>> To: "[email protected] 
>>>> <mailto:[email protected]>" <[email protected] 
>>>> <mailto:[email protected]>>
>>>> Subject: Re: [Ntop-misc] nprobe and kafka?
>>>> 
>>>> Manny
>>>> we have added Kafka support in one of our development prototypes, so moving 
>>>> it to the official nprobe should not be too difficult. The performance is 
>>>> similar to the ZMQ or Elasticsearch implementations; considering the JSON 
>>>> conversion, expect at least 1k flows/sec.
>>>> 
>>>> Luca
>>>> 
>>>>> On 01 Sep 2015, at 23:20, Manny Veloso <[email protected]> wrote:
>>>>> 
>>>>> Hi!
>>>>> 
>>>>> I’m looking to use nprobe as a bridge into kafka. In the splunk app 
>>>>> nprobe just sends data into splunk. Is that basically the same 
>>>>> configuration as a kafka install?
>>>>> 
>>>>> Also, what kind of throughput can I expect out of nprobe?
>>>>> --
>>>>> Manny Veloso
>>>>> Sr. Solutions Engineer
>>>>> Smartrg.com
>>>>> _______________________________________________
>>>>> Ntop-misc mailing list
>>>>> [email protected]
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>> 
>>>> 
>>>> --
>>>> Nick Allen <[email protected]>
>>> 
>>> --
>>> Nick Allen <[email protected]>
>> 
>> --
>> Nick Allen <[email protected]>
> 
> --
> Nick Allen <[email protected]>

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
