As described in examples_zc/README.kvm, you do not need to use virtio nics:
you run zbalance_ipc on the host to balance traffic into SPSC queues,
and attach your IDSs to those queues (ZC will do the magic).
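For reference, a minimal sketch of that setup (interface name, cluster id and queue count below are placeholders; see README.kvm for the KVM-specific wiring between host and guests):

```shell
# On the KVM host: capture from the physical interface and fan
# traffic out to 3 SPSC queues belonging to ZC cluster 99
# (-i capture device, -c cluster id, -n number of egress queues,
#  -m distribution/hash mode)
zbalance_ipc -i eth1 -c 99 -n 3 -m 1

# Each IDS guest then attaches to one egress queue of the cluster,
# e.g. with the pfcount demo application shipped with PF_RING:
pfcount -i zc:99@0   # guest 1
pfcount -i zc:99@1   # guest 2
pfcount -i zc:99@2   # guest 3
```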

Alfredo

> On 07 Jul 2015, at 15:32, C. L. Martinez <[email protected]> wrote:
> 
> Ok, perfect... I will wait for this new version.
> 
> But I don't see how I can balance traffic with ZC using libvirt/qemu:
> 
> a) Configure two virtio nics in every guest: one nic for management
> and another to sniff traffic.
> b) Start every guest with a monitor socket (two sockets in total on the
> kvm host side, correct?).
> 
> 
> Is this correct? If so, how is traffic balanced across these VMs?
> 
> 
> 
> On Tue, Jul 7, 2015 at 1:24 PM, Alfredo Cardigliano
> <[email protected]> wrote:
>> 
>>> On 07 Jul 2015, at 15:17, C. L. Martinez <[email protected]> wrote:
>>> 
>>> Many thanks Alfredo.
>>> 
>>> Some questions:
>>> 
>>> a) Do I need to patch the qemu released by the vendor? I would like to
>>> avoid this if possible…
>> 
>> The last version we tested required that due to a bug; we will provide a new
>> tested version (which likely fixes it) ASAP.
>> 
>>> b) If I am not wrong, I need to configure the pf_ring module with the
>>> following option: transparent_mode=0. Will ZC be available in this
>>> mode?
>> 
>> ZC simply ignores kernel flags. By the way, transparent_mode is no longer
>> needed in any case.
>> 
>> Alfredo
>> 
>>> 
>>> 
>>> 
>>> 
>>> On Tue, Jul 7, 2015 at 1:09 PM, Alfredo Cardigliano
>>> <[email protected]> wrote:
>>>> Ah, so you need to replicate the same traffic from the same interface to 3
>>>> VMs, got it.
>>>> Please note you cannot use ZC drivers because your card is not supported
>>>> (Intel only at the moment);
>>>> however, you can still use ZC to distribute traffic across VMs. In this
>>>> case you need pf_ring on both
>>>> host and guest systems; please have a look at
>>>> https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.kvm
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 07 Jul 2015, at 14:59, C. L. Martinez <[email protected]> wrote:
>>>>> 
>>>>> Yes, but not via pci-passthrough ... I need to pass the same physical
>>>>> nic to three VMs ...
>>>>> 
>>>>> On Tue, Jul 7, 2015 at 12:56 PM, Alfredo Cardigliano
>>>>> <[email protected]> wrote:
>>>>>> Hi
>>>>>> it depends on your architecture, do you have to bind an interface to a 
>>>>>> single VM?
>>>>>> 
>>>>>> Alfredo
>>>>>> 
>>>>>>> On 07 Jul 2015, at 14:50, C. L. Martinez <[email protected]> wrote:
>>>>>>> 
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> I need to use pf_ring to setup some virtual IDS host sensors in
>>>>>>> CentOS 6.x/7.x KVM hosts.
>>>>>>> 
>>>>>>> With recent PF_RING releases, do I need to install it on both the KVM
>>>>>>> host and the KVM guests, or only in the guests?
>>>>>>> 
>>>>>>> These kvm hosts have broadcom interfaces:
>>>>>>> 
>>>>>>> tg3.c:v3.137 (May 11, 2014)
>>>>>>> tg3 0000:03:04.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
>>>>>>> tg3 0000:03:04.0: eth1: Tigon3 [partno(011276-001) rev 9003]
>>>>>>> (PCIX:133MHz:64-bit) MAC address 80:c1:6e:62:f4:44
>>>>>>> tg3 0000:03:04.0: eth1: attached PHY is 5714 (10/100/1000Base-T
>>>>>>> Ethernet) (WireSpeed[1], EEE[0])
>>>>>>> tg3 0000:03:04.0: eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] 
>>>>>>> TSOcap[1]
>>>>>>> tg3 0000:03:04.0: eth1: dma_rwctrl[76148000] dma_mask[40-bit]
>>>>>>> tg3 0000:03:04.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
>>>>>>> tg3 0000:03:04.1: eth2: Tigon3 [partno(011276-001) rev 9003]
>>>>>>> (PCIX:133MHz:64-bit) MAC address 80:c1:6e:62:f4:45
>>>>>>> tg3 0000:03:04.1: eth2: attached PHY is 5714 (10/100/1000Base-T
>>>>>>> Ethernet) (WireSpeed[1], EEE[0])
>>>>>>> tg3 0000:03:04.1: eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] 
>>>>>>> TSOcap[1]
>>>>>>> tg3 0000:03:04.1: eth2: dma_rwctrl[76148000] dma_mask[40-bit]
>>>>>>> tg3 0000:03:04.1: irq 61 for MSI/MSI-X
>>>>>>> tg3 0000:03:04.1: eth1: Link is up at 1000 Mbps, full duplex
>>>>>>> tg3 0000:03:04.1: eth1: Flow control is on for TX and on for RX
>>>>>>> 
>>>>>>> Thanks.
>>>>>>> _______________________________________________
>>>>>>> Ntop-misc mailing list
>>>>>>> [email protected]
>>>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>> 
>> 


