Re: [dpdk-users] KNI interfaces at bonding ports

2017-06-22 Thread Alex Kiselev
Hi Cody. Thank you, it helped.
The rte_eth_bond_primary_get() function is exactly what I needed.
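For the archive, a minimal sketch of the approach (hypothetical helper name, assuming the DPDK 17.x API where dev_info.pci_dev is still exposed and populated for the primary slave):

```c
#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_kni.h>

/* Hypothetical helper: fill the KNI conf's PCI fields from the bonded
 * port's primary slave, whose dev_info does carry a pci_dev. */
static int
kni_conf_from_bond(uint8_t bond_port_id, struct rte_kni_conf *conf)
{
    int primary = rte_eth_bond_primary_get(bond_port_id);
    if (primary < 0)
        return primary;  /* not a bonded port, or no primary slave */

    struct rte_eth_dev_info dev_info;
    rte_eth_dev_info_get((uint8_t)primary, &dev_info);
    if (dev_info.pci_dev == NULL)
        return -1;       /* primary is not a PCI device either */

    conf->addr = dev_info.pci_dev->addr;
    conf->id = dev_info.pci_dev->id;
    return 0;
}
```

This is only a sketch against the era's API; later DPDK releases removed the pci_dev member from rte_eth_dev_info.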


2017-06-22 19:08 GMT+03:00 Cody Doucette :
> Hi Alex,
>
> I previously ran into the same issue. I just used the PCI device information
> from the primary slave of the bonded interface:
>
> https://github.com/AltraMayor/gatekeeper/blob/master/cps/main.c#L545
>
> Hope that helps,
> Cody
>
> On Thu, Jun 22, 2017 at 5:06 AM, Alex Kiselev  wrote:
>>
>> Hello.
>>
>> Is it possible to create a KNI interface at a bonding port?
>>
>> My code that works perfectly fine with usual dpdk ports fails
>> when I try to use it with bonding ports.
>>
>>   rte_eth_dev_info_get(port_id, &dev_info);
>>   conf.addr = dev_info.pci_dev->addr;
>>   ...
>>   return rte_kni_alloc(pktmbuf_pool, &conf, &ops);
>>
>>
>> The code fails because the pci_dev member of dev_info is NULL.
>> Are there any workarounds for this problem?
>>
>> Thanks.
>>
>> --
>> Alexander Kiselev
>
>



--
Kiselev Alexander


[dpdk-users] Mellanox ConnectX-4, DPDK and extreme latency issues

2017-06-22 Thread Arjun Roy
Greetings all.

I have a weird issue regarding excessive latency using Mellanox ConnectX-4
100Gbe cards, DPDK and packet forwarding. Specifically: running the l3fwd
and basicfwd DPDK example programs yields ping latencies of several (5-8)
milliseconds. I tried the same test using an Intel X-540 AT2 card on the
same systems and the latency was on the order of 4-5 microseconds.

Setup:

I have three systems, SysA, SysB, and SysC. Each runs Ubuntu 16.04 and
kernel 4.4.0-78-generic.
Each system is a dual socket numa machine, where each socket is a 12 core
(+12 with hyperthreading enabled) Xeon E5-2650.
SysA and SysB each have a single Mellanox ConnectX-4 card, connected to
numa node 1, showing up as enp129s0f0 and enp129s0f1.
SysC has two ConnectX-4 cards, connected to node 0 and node 1. Node 0 has
enp4s0f0 and enp4s0f1, while node 1 has enp129s0f0 and enp129s0f1.
All machines also have a single dual port Intel X-540 AT2 10Gbe NIC that
also supports DPDK.


SysC forwards packets between SysA and SysB. SysA is connected to
enp129s0f0 on SysC, while SysB is connected to enp4s0f0 on SysC. (Note: I
tried a variety of configurations; including connecting SysA and SysB to
the same physical cards on SysC, and the same latency issue still
persists). No switches involved; all direct connect.

If it helps, the driver version is OFED 4.0-2 and the card firmware
is 12.18.2000.

Now, with this setup and normal linux kernel forwarding, I get 0.095
msecs ping on average from SysA to SysB (or vice versa).
However, if I run the DPDK forwarding apps, I get about 5-8 msecs.
The ping test I'm using is both regular (1-second gaps between pings) and
flood mode (sending ping packets as fast as possible). In either case the
latency is 5-8 msecs per ping.

I have been running l3fwd with this command line:
sudo ./l3fwd -l 2,3   -n4 -w 81:00.0 -w 04:00.0 --socket-mem=1024,1024 --
-p 0x3 -P  --config="(1,0,2),(0,0,3)"

In this case, I have verified that the cores and numa nodes line up; i.e.,
I'm assigning each port to a core on the local numa node.
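One way to double-check that alignment from sysfs (device addresses as in the command line above):

```shell
# NUMA node each NIC sits on (-1 means no NUMA affinity reported):
cat /sys/bus/pci/devices/0000:81:00.0/numa_node
cat /sys/bus/pci/devices/0000:04:00.0/numa_node

# Which CPU ids belong to which node:
lscpu | grep "NUMA node"
```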


As a sanity check, I tried the same test with the Intel X-540 cards,
wired with the same topology (SysA connects to one port on SysC, SysB
connects to the other port; note this is the same physical card), and for
the same test I get just 4-5 microseconds per ping in flood mode.

Any ideas what might be causing multiple milliseconds of latency on the
Mellanox cards?

Thanks,
-Arjun Roy


Re: [dpdk-users] KNI interfaces at bonding ports

2017-06-22 Thread Cody Doucette
Hi Alex,

I previously ran into the same issue. I just used the PCI device
information from the primary slave of the bonded interface:

https://github.com/AltraMayor/gatekeeper/blob/master/cps/main.c#L545

Hope that helps,
Cody

On Thu, Jun 22, 2017 at 5:06 AM, Alex Kiselev  wrote:

> Hello.
>
> Is it possible to create a KNI interface at a bonding port?
>
> My code that works perfectly fine with usual dpdk ports fails
> when I try to use it with bonding ports.
>
>   rte_eth_dev_info_get(port_id, &dev_info);
>   conf.addr = dev_info.pci_dev->addr;
>   ...
>   return rte_kni_alloc(pktmbuf_pool, &conf, &ops);
>
>
> The code fails because the pci_dev member of dev_info is NULL.
> Are there any workarounds for this problem?
>
> Thanks.
>
> --
> Alexander Kiselev
>


Re: [dpdk-users] Question about range type of DPDK ACL

2017-06-22 Thread Shyam Shrivastav
Yes, you are right. The IPv4() conversion keeps host byte order, and the IP
address in the packet mbuf would appear the other way round for matching.
I got confused because the firewall parser code reads the dotted-decimal
user input as big endian; that's why rte_be_to_cpu_32 is required there.
Hope some help comes your way. You can also try posting on dpdk-dev:
d...@dpdk.org

On Thu, Jun 22, 2017 at 12:57 PM, Doohwan Lee  wrote:

> I really appreciate your help. but I think there's some misunderstanding.
> I also know that DPDK ACL rule expects host order.
>
> IPv4(1,2,3,4) means 0x01020304 and it is little endian(host order) again.
> and the IP address 1.2.3.4 in packet data on memory is 0x04030201 (big
> endian).
> So, I think I didn't have any mistake for using DPDK ACL library.
>
>
>
> 2017-06-22 16:12 GMT+09:00 Shyam Shrivastav :
>
>> No it is not as dotted decimal representation is in big endian that is as
>> they appear in packet, but acl library expects addition in host byte order.
>> I would suggest to try the change, in firewall also we are converting user
>> input in dotted decimal to integer then to host byte order .., anyway it is
>> your choice
>>
>> On Thu, Jun 22, 2017 at 12:28 PM, Doohwan Lee  wrote:
>>
>>> IPv4(1,2,3,4) means 0x01020304, and it is already host order (little
>>> endian).
>>> It need not be converted using rte_be_to_cpu_32() for setting the rule.
>>>
>>>
>>>
>>> 2017-06-22 15:27 GMT+09:00 Shyam Shrivastav 
>>> :
>>>

 Yes Doohwan,  it is there in rte_ip.h just saw. So this conversion
 leaves the address in big endian format. Theoretically, we are supposed to
 add the acl fields in host order, if you see pipeline code even for ip
 address with mask, following conversion is being done line 1096
 examples/ip_pipeline/pipeline-firewall.c

 key.type = PIPELINE_FIREWALL_IPV4_5TUPLE;
 key.key.ipv4_5tuple.src_ip = rte_be_to_cpu_32(sipaddr.s_addr);
 key.key.ipv4_5tuple.src_ip_mask = sipdepth;
 key.key.ipv4_5tuple.dst_ip = rte_be_to_cpu_32(dipaddr.s_addr);

 I would suggest this in your code

 rule->field[2].value.u32 = rte_be_to_cpu_32(IPv4(1,2,3,4));
 rule->field[2].mask_range.u32 = rte_be_to_cpu_32(IPv4(1,10,10,10));






 On Thu, Jun 22, 2017 at 11:28 AM, Doohwan Lee  wrote:

> Ok. The code to set rule for IPv4 address is like below.
>
> 
> #define IPv4(a,b,c,d) ((uint32_t)(((a) & 0xff) << 24) | \
>(((b) & 0xff) << 16) | \
>(((c) & 0xff) << 8)  | \
>((d) & 0xff))
>
> ...
> rule->field[2].value.u32 = IPv4(1,2,3,4);
> rule->field[2].mask_range.u32 = IPv4(1,10,10,10);
> ...
> -
>
> The macro IPv4() is from the DPDK (rte_ip.h)
> The matching data is from the packet. so it is network order. (big
> endian)
>
>
>
> On Thu, Jun 22, 2017 at 1:26 PM, Shyam Shrivastav <
> shrivastav.sh...@gmail.com> wrote:
>
>> Yes if these are the results then might be some issue, but can not be
>> sure unless do this myself, have been using ACL library but not this case
>> till now.
>> Can you share code fragment converting dotted decimal to integer if
>> possible ..
>>
>> On Thu, Jun 22, 2017 at 7:57 AM, Doohwan Lee 
>> wrote:
>>
>>> Thank you Shyam.
>>> Let me explain my situation in detail.
>>> All the cases described below use RTE_ACL_FIELD_TYPE_RANGE type.
>>>
>>> ---
>>> Case 1.
>>> rule: 1.2.3.4 ~ 1.2.3.4
>>> packet: 1.2.3.4
>>> result: match (correct)
>>>
>>> Case 2.
>>> rule: 1.2.3.4 ~ 1.10.10.10
>>> packet: 1.2.10.5
>>> result: match (correct)
>>>
>>> Case 3
>>> rule: 1.2.3.4 ~ 1.10.10.10
>>> packet: 1.10.10.11
>>> result: not match (correct)
>>>
>>> Case 4
>>> rule: 1.2.3.4 ~ 1.10.10.10
>>> packet: 1.2.3.10
>>> result: match (correct)
>>>
>>> Case 5:
>>> rule: 1.2.3.4~1.10.10.10
>>> packet: 1.2.10.11
>>> result: not match (incorrect)
>>>
>>> Case 6:
>>> rule: 1.2.3.4~1.10.10.10
>>> packet: 1.2.10.3
>>> result: not match (incorrect)
>>> ---
>>>
>>>
>>> Considering case 1~4, It shows expected results and there is no
>>> problem with byte ordering.
>>> But, in case 5~6, the result should be 'match' but it was not.
>>> This is why I doubt DPDK ACL library doesn't support 32-bit range
>>> matching.
>>>
>>>
>>> On Wed, Jun 21, 2017 at 9:09 PM, Shyam Shrivastav <
>>> 

Re: [dpdk-users] [Qemu-devel] [dpdk-dev] Will huge page have negative effect on guest vm in qemu enviroment?

2017-06-22 Thread Dr. David Alan Gilbert
* Sam (batmanu...@gmail.com) wrote:
> Thank you~
>
> 1. We ran a comparison test in a qemu-kvm environment with and without
> huge pages. The qemu start process is much longer in the huge-page
> environment. I wrote an email titled '[DPDK-memory] how qemu waste such
> long time under dpdk huge page envriment?'. I could resend it later.
>
> 2. Then I ran another test in a qemu-kvm environment with and without
> huge pages, in which I didn't start ovs-dpdk or a vhostuser port in the
> qemu start process. I found that the qemu start process is also much
> longer in the huge-page environment.
>
> So I think a huge-page environment, with the grub2.cfg settings given in
> '[DPDK-memory] how qemu waste such long time under dpdk huge page
> envriment?', really does have a negative effect on the qemu start-up
> process.
>
> That's why we don't like to use ovs-dpdk. Although ovs-dpdk is faster,
> the start-up process of qemu is much longer than with normal ovs, and
> the reason has nothing to do with ovs but with huge pages. For
> customers, vm start-up time is more important than network speed.

How are you setting up hugepages?  What values are you putting in the
various /proc or cmdline options, and how are you specifying them on
QEMU's command line?

I think one problem is that with hugepages qemu normally allocates them
all at the start;  I think there are cases where that means moving a lot
of memory about, especially if you lock it to particular NUMA nodes.
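As a sketch of the setup in question (assumed sizes; memory-backend-file with prealloc=on is the usual way to back a guest with hugepages, and that up-front preallocation is where the start-up pause tends to go):

```shell
# Reserve hugepages, either on the kernel command line, e.g.
#   default_hugepagesz=1G hugepagesz=1G hugepages=8
# or at runtime, per NUMA node:
echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

# QEMU guest backed by those hugepages, allocated up front:
qemu-system-x86_64 -m 4G \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on,prealloc=on \
  -numa node,memdev=mem0 \
  ...
```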

> BTW, the ovs-dpdk start-up process is also longer than normal ovs. But I
> know the reason: it's the dpdk EAL init process allocating a big
> contiguous memory region and zeroing it. For qemu, I don't know why, as
> there is no log reporting this.

I suspect it's the mmaping and madvising of those hugepages - you should
be able to see it with an strace of a qemu startup, or perhaps a
'perf top'  on the host as it's in that pause.

I'm told that hugepages are supposed to be especially useful with IOMMU
performance for cards passed through to the guest, so it might still
be worth doing.

Dave

> 2017-06-21 14:15 GMT+08:00 Pavel Shirshov :
> 
> > Hi Sam,
> >
> > Below I'm talking about KVM; I don't have experience with vbox and others.
> > 1. I'd suggest not using dpdk inside a VM if you want to see the best
> > performance on the box.
> > 2. Huge pages enabled globally will not have any bad effect on the guest
> > OS, except that you have to enable huge pages inside the VM and back the
> > VM's huge pages with real huge pages from the host system. Otherwise dpdk
> > will use "hugepages" inside the VM, but these "huge pages" will not be
> > real ones; they will be constructed from normal pages outside. Also, when
> > you enable huge pages the OS reserves them from the start and will not be
> > able to use them for other things. Also, you can't swap out huge pages,
> > KSM will not work for them, and so on.
> > 3. You can enable huge pages for just one numa node. It's impossible to
> > enable them for just one core. Usually you reserve some memory for
> > hugepages when the system starts, and you can't use this memory in normal
> > applications unless the application knows how to use them.
> >
> > Also, why didn't it work inside of docker?
> >
> >
> > On Tue, Jun 20, 2017 at 8:35 PM, Sam  wrote:
> > > BTW, we also thought about using ovs-dpdk in a docker environment, but
> > > test results said it's not a good idea; we don't know why.
> > >
> > > 2017-06-21 11:32 GMT+08:00 Sam :
> > >
> > >> Hi all,
> > >>
> > >> We plan to use DPDK on an HP host machine with several cores and big
> > >> memory. We plan to use a qemu-kvm environment. The host will carry 4
> > >> or more guest vms and 1 ovs.
> > >>
> > >> Ovs-dpdk is much faster than normal ovs, but to use ovs-dpdk, we have
> > >> to enable huge pages globally.
> > >>
> > >> My question is, will huge pages enabled globally have a negative
> > >> effect on guest vm memory operations or something? If so, how do we
> > >> prevent this, or could I enable huge pages on some cores or for a
> > >> part of memory?
> > >>
> > >> Thank you~
> > >>
> >
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK


Re: [dpdk-users] Question about range type of DPDK ACL

2017-06-22 Thread Doohwan Lee
I really appreciate your help, but I think there's some misunderstanding.
I also know that the DPDK ACL rule expects host order.

IPv4(1,2,3,4) means 0x01020304, which is the host-order value, and the IP
address 1.2.3.4 in packet data in memory is 0x04030201 (big endian).
So I don't think I made any mistake in using the DPDK ACL library.



2017-06-22 16:12 GMT+09:00 Shyam Shrivastav :

> No it is not as dotted decimal representation is in big endian that is as
> they appear in packet, but acl library expects addition in host byte order.
> I would suggest to try the change, in firewall also we are converting user
> input in dotted decimal to integer then to host byte order .., anyway it is
> your choice
>
> On Thu, Jun 22, 2017 at 12:28 PM, Doohwan Lee  wrote:
>
>> IPv4(1,2,3,4) means 0x01020304, and it is already host order (little
>> endian).
>> It need not be converted using rte_be_to_cpu_32() for setting the rule.
>>
>>
>>
>> 2017-06-22 15:27 GMT+09:00 Shyam Shrivastav :
>>
>>>
>>> Yes Doohwan,  it is there in rte_ip.h just saw. So this conversion
>>> leaves the address in big endian format. Theoretically, we are supposed to
>>> add the acl fields in host order, if you see pipeline code even for ip
>>> address with mask, following conversion is being done line 1096
>>> examples/ip_pipeline/pipeline-firewall.c
>>>
>>> key.type = PIPELINE_FIREWALL_IPV4_5TUPLE;
>>> key.key.ipv4_5tuple.src_ip = rte_be_to_cpu_32(sipaddr.s_addr);
>>> key.key.ipv4_5tuple.src_ip_mask = sipdepth;
>>> key.key.ipv4_5tuple.dst_ip = rte_be_to_cpu_32(dipaddr.s_addr);
>>>
>>> I would suggest this in your code
>>>
>>> rule->field[2].value.u32 = rte_be_to_cpu_32(IPv4(1,2,3,4));
>> rule->field[2].mask_range.u32 = rte_be_to_cpu_32(IPv4(1,10,10,10));
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Jun 22, 2017 at 11:28 AM, Doohwan Lee  wrote:
>>>
 Ok. The code to set rule for IPv4 address is like below.

 
 #define IPv4(a,b,c,d) ((uint32_t)(((a) & 0xff) << 24) | \
(((b) & 0xff) << 16) | \
(((c) & 0xff) << 8)  | \
((d) & 0xff))

 ...
 rule->field[2].value.u32 = IPv4(1,2,3,4);
 rule->field[2].mask_range.u32 = IPv4(1,10,10,10);
 ...
 -

 The macro IPv4() is from the DPDK (rte_ip.h)
 The matching data is from the packet. so it is network order. (big
 endian)



 On Thu, Jun 22, 2017 at 1:26 PM, Shyam Shrivastav <
 shrivastav.sh...@gmail.com> wrote:

> Yes if these are the results then might be some issue, but can not be
> sure unless do this myself, have been using ACL library but not this case
> till now.
> Can you share code fragment converting dotted decimal to integer if
> possible ..
>
> On Thu, Jun 22, 2017 at 7:57 AM, Doohwan Lee  wrote:
>
>> Thank you Shyam.
>> Let me explain my situation in detail.
>> All the cases described below use RTE_ACL_FIELD_TYPE_RANGE type.
>>
>> ---
>> Case 1.
>> rule: 1.2.3.4 ~ 1.2.3.4
>> packet: 1.2.3.4
>> result: match (correct)
>>
>> Case 2.
>> rule: 1.2.3.4 ~ 1.10.10.10
>> packet: 1.2.10.5
>> result: match (correct)
>>
>> Case 3
>> rule: 1.2.3.4 ~ 1.10.10.10
>> packet: 1.10.10.11
>> result: not match (correct)
>>
>> Case 4
>> rule: 1.2.3.4 ~ 1.10.10.10
>> packet: 1.2.3.10
>> result: match (correct)
>>
>> Case 5:
>> rule: 1.2.3.4~1.10.10.10
>> packet: 1.2.10.11
>> result: not match (incorrect)
>>
>> Case 6:
>> rule: 1.2.3.4~1.10.10.10
>> packet: 1.2.10.3
>> result: not match (incorrect)
>> ---
>>
>>
>> Considering case 1~4, It shows expected results and there is no
>> problem with byte ordering.
>> But, in case 5~6, the result should be 'match' but it was not.
>> This is why I doubt DPDK ACL library doesn't support 32-bit range
>> matching.
>>
>>
>> On Wed, Jun 21, 2017 at 9:09 PM, Shyam Shrivastav <
>> shrivastav.sh...@gmail.com> wrote:
>>
>>> I haven't used range type with 32 bit integers yet ...
>>> Just some theory in case if you haven't already taken into account,
>>> if little-endian host 10.10.10.30 actually means 0x1e0a0a0a for acl 
>>> match,
>>> dotted decimal is in big endian so when in little endian host you need 
>>> to
>>> add it other way round as integers for matching. This means if you add
>>> range 0x0a0a0a0a to 0x1e1e1e1e should match 10.10.10.30,  this is my
>>> understanding theoretically ..
>>>
>>> On Wed, Jun 21, 

Re: [dpdk-users] Question about range type of DPDK ACL

2017-06-22 Thread Shyam Shrivastav
No, it is not: the dotted-decimal representation is in big endian, that is,
as the bytes appear in the packet, but the acl library expects additions in
host byte order. I would suggest trying the change; in the firewall example
we also convert the user input from dotted decimal to an integer and then
to host byte order. Anyway, it is your choice.

On Thu, Jun 22, 2017 at 12:28 PM, Doohwan Lee  wrote:

> IPv4(1,2,3,4) means 0x01020304, and it is already host order (little
> endian).
> It need not be converted using rte_be_to_cpu_32() for setting the rule.
>
>
>
> 2017-06-22 15:27 GMT+09:00 Shyam Shrivastav :
>
>>
>> Yes Doohwan,  it is there in rte_ip.h just saw. So this conversion leaves
>> the address in big endian format. Theoretically, we are supposed to add the
>> acl fields in host order, if you see pipeline code even for ip address with
>> mask, following conversion is being done line 1096
>> examples/ip_pipeline/pipeline-firewall.c
>>
>> key.type = PIPELINE_FIREWALL_IPV4_5TUPLE;
>> key.key.ipv4_5tuple.src_ip = rte_be_to_cpu_32(sipaddr.s_addr);
>> key.key.ipv4_5tuple.src_ip_mask = sipdepth;
>> key.key.ipv4_5tuple.dst_ip = rte_be_to_cpu_32(dipaddr.s_addr);
>>
>> I would suggest this in your code
>>
>> rule->field[2].value.u32 = rte_be_to_cpu_32(IPv4(1,2,3,4));
>> rule->field[2].mask_range.u32 = rte_be_to_cpu_32(IPv4(1,10,10,10));
>>
>>
>>
>>
>>
>>
>> On Thu, Jun 22, 2017 at 11:28 AM, Doohwan Lee  wrote:
>>
>>> Ok. The code to set rule for IPv4 address is like below.
>>>
>>> 
>>> #define IPv4(a,b,c,d) ((uint32_t)(((a) & 0xff) << 24) | \
>>>(((b) & 0xff) << 16) | \
>>>(((c) & 0xff) << 8)  | \
>>>((d) & 0xff))
>>>
>>> ...
>>> rule->field[2].value.u32 = IPv4(1,2,3,4);
>>> rule->field[2].mask_range.u32 = IPv4(1,10,10,10);
>>> ...
>>> -
>>>
>>> The macro IPv4() is from the DPDK (rte_ip.h)
>>> The matching data is from the packet. so it is network order. (big
>>> endian)
>>>
>>>
>>>
>>> On Thu, Jun 22, 2017 at 1:26 PM, Shyam Shrivastav <
>>> shrivastav.sh...@gmail.com> wrote:
>>>
 Yes if these are the results then might be some issue, but can not be
 sure unless do this myself, have been using ACL library but not this case
 till now.
 Can you share code fragment converting dotted decimal to integer if
 possible ..

 On Thu, Jun 22, 2017 at 7:57 AM, Doohwan Lee  wrote:

> Thank you Shyam.
> Let me explain my situation in detail.
> All the cases described below use RTE_ACL_FIELD_TYPE_RANGE type.
>
> ---
> Case 1.
> rule: 1.2.3.4 ~ 1.2.3.4
> packet: 1.2.3.4
> result: match (correct)
>
> Case 2.
> rule: 1.2.3.4 ~ 1.10.10.10
> packet: 1.2.10.5
> result: match (correct)
>
> Case 3
> rule: 1.2.3.4 ~ 1.10.10.10
> packet: 1.10.10.11
> result: not match (correct)
>
> Case 4
> rule: 1.2.3.4 ~ 1.10.10.10
> packet: 1.2.3.10
> result: match (correct)
>
> Case 5:
> rule: 1.2.3.4~1.10.10.10
> packet: 1.2.10.11
> result: not match (incorrect)
>
> Case 6:
> rule: 1.2.3.4~1.10.10.10
> packet: 1.2.10.3
> result: not match (incorrect)
> ---
>
>
> Considering case 1~4, It shows expected results and there is no
> problem with byte ordering.
> But, in case 5~6, the result should be 'match' but it was not.
> This is why I doubt DPDK ACL library doesn't support 32-bit range
> matching.
>
>
> On Wed, Jun 21, 2017 at 9:09 PM, Shyam Shrivastav <
> shrivastav.sh...@gmail.com> wrote:
>
>> I haven't used range type with 32 bit integers yet ...
>> Just some theory in case if you haven't already taken into account,
>> if little-endian host 10.10.10.30 actually means 0x1e0a0a0a for acl 
>> match,
>> dotted decimal is in big endian so when in little endian host you need to
>> add it other way round as integers for matching. This means if you add
>> range 0x0a0a0a0a to 0x1e1e1e1e should match 10.10.10.30,  this is my
>> understanding theoretically ..
>>
>> On Wed, Jun 21, 2017 at 4:54 PM, Doohwan Lee 
>> wrote:
>>
>>> Yes. you are right. I also already knew that 32bit match with mask
>>> type works well.
>>> My point is 32bit match with 'range type' doesn't work in some case.
>>>
>>>
>>> On Wed, Jun 21, 2017 at 6:46 PM, Anupam Kapoor <
>>> anupam.kap...@gmail.com> wrote:
>>>

 On Wed, Jun 21, 2017 at 11:36 AM, Doohwan Lee 
 wrote:

> DPDK ACL library uses multi-bit trie with 8-bit stride.
> I guess that 

Re: [dpdk-users] Question about range type of DPDK ACL

2017-06-22 Thread Doohwan Lee
IPv4(1,2,3,4) means 0x01020304, and it is already host order (little
endian).
It need not be converted using rte_be_to_cpu_32() for setting the rule.



2017-06-22 15:27 GMT+09:00 Shyam Shrivastav :

>
> Yes Doohwan,  it is there in rte_ip.h just saw. So this conversion leaves
> the address in big endian format. Theoretically, we are supposed to add the
> acl fields in host order, if you see pipeline code even for ip address with
> mask, following conversion is being done line 1096
> examples/ip_pipeline/pipeline-firewall.c
>
> key.type = PIPELINE_FIREWALL_IPV4_5TUPLE;
> key.key.ipv4_5tuple.src_ip = rte_be_to_cpu_32(sipaddr.s_addr);
> key.key.ipv4_5tuple.src_ip_mask = sipdepth;
> key.key.ipv4_5tuple.dst_ip = rte_be_to_cpu_32(dipaddr.s_addr);
>
> I would suggest this in your code
>
> rule->field[2].value.u32 = rte_be_to_cpu_32(IPv4(1,2,3,4));
> rule->field[2].mask_range.u32 = rte_be_to_cpu_32(IPv4(1,10,10,10));
>
>
>
>
>
>
> On Thu, Jun 22, 2017 at 11:28 AM, Doohwan Lee  wrote:
>
>> Ok. The code to set rule for IPv4 address is like below.
>>
>> 
>> #define IPv4(a,b,c,d) ((uint32_t)(((a) & 0xff) << 24) | \
>>(((b) & 0xff) << 16) | \
>>(((c) & 0xff) << 8)  | \
>>((d) & 0xff))
>>
>> ...
>> rule->field[2].value.u32 = IPv4(1,2,3,4);
>> rule->field[2].mask_range.u32 = IPv4(1,10,10,10);
>> ...
>> -
>>
>> The macro IPv4() is from the DPDK (rte_ip.h)
>> The matching data is from the packet. so it is network order. (big endian)
>>
>>
>>
>> On Thu, Jun 22, 2017 at 1:26 PM, Shyam Shrivastav <
>> shrivastav.sh...@gmail.com> wrote:
>>
>>> Yes if these are the results then might be some issue, but can not be
>>> sure unless do this myself, have been using ACL library but not this case
>>> till now.
>>> Can you share code fragment converting dotted decimal to integer if
>>> possible ..
>>>
>>> On Thu, Jun 22, 2017 at 7:57 AM, Doohwan Lee  wrote:
>>>
 Thank you Shyam.
 Let me explain my situation in detail.
 All the cases described below use RTE_ACL_FIELD_TYPE_RANGE type.

 ---
 Case 1.
 rule: 1.2.3.4 ~ 1.2.3.4
 packet: 1.2.3.4
 result: match (correct)

 Case 2.
 rule: 1.2.3.4 ~ 1.10.10.10
 packet: 1.2.10.5
 result: match (correct)

 Case 3
 rule: 1.2.3.4 ~ 1.10.10.10
 packet: 1.10.10.11
 result: not match (correct)

 Case 4
 rule: 1.2.3.4 ~ 1.10.10.10
 packet: 1.2.3.10
 result: match (correct)

 Case 5:
 rule: 1.2.3.4~1.10.10.10
 packet: 1.2.10.11
 result: not match (incorrect)

 Case 6:
 rule: 1.2.3.4~1.10.10.10
 packet: 1.2.10.3
 result: not match (incorrect)
 ---


 Considering case 1~4, It shows expected results and there is no problem
 with byte ordering.
 But, in case 5~6, the result should be 'match' but it was not.
 This is why I doubt DPDK ACL library doesn't support 32-bit range
 matching.


 On Wed, Jun 21, 2017 at 9:09 PM, Shyam Shrivastav <
 shrivastav.sh...@gmail.com> wrote:

> I haven't used range type with 32 bit integers yet ...
> Just some theory in case if you haven't already taken into account,
> if little-endian host 10.10.10.30 actually means 0x1e0a0a0a for acl match,
> dotted decimal is in big endian so when in little endian host you need to
> add it other way round as integers for matching. This means if you add
> range 0x0a0a0a0a to 0x1e1e1e1e should match 10.10.10.30,  this is my
> understanding theoretically ..
>
> On Wed, Jun 21, 2017 at 4:54 PM, Doohwan Lee  wrote:
>
>> Yes. you are right. I also already knew that 32bit match with mask
>> type works well.
>> My point is 32bit match with 'range type' doesn't work in some case.
>>
>>
>> On Wed, Jun 21, 2017 at 6:46 PM, Anupam Kapoor <
>> anupam.kap...@gmail.com> wrote:
>>
>>>
>>> On Wed, Jun 21, 2017 at 11:36 AM, Doohwan Lee 
>>> wrote:
>>>
 DPDK ACL library uses multi-bit trie with 8-bit stride.
 I guess that implementation of the trie doesn't support 32bit range
 matching.

>>>
>>> ​well, you _can_ have address wildcard matches e.g. an address+mask
>>> combination of 1.2.3.0/24 would match all addresses 1.2.3.[0..255]
>>>
>>> ​--
>>> kind regards
>>> anupam​
>>>
>>> In the beginning was the lambda, and the lambda was with Emacs, and
>>> Emacs was the lambda.
>>>
>>
>>
>

>>>
>>
>


Re: [dpdk-users] Question about range type of DPDK ACL

2017-06-22 Thread Anupam Kapoor
On Thu, Jun 22, 2017 at 11:28 AM, Doohwan Lee  wrote:

> Ok. The code to set rule for IPv4 address is like below.
>

this is the same as setting masks (1.0.0.0/8) :)

fwiw, beware of memory explosion in this case though...

--
kind regards
anupam



In the beginning was the lambda, and the lambda was with Emacs, and Emacs
was the lambda.