[dpdk-dev] AF_PACKET poll mode driver

2016-04-12 Thread Rapelly, Varun
Hi,

We are facing a problem with the DPDK AF_PACKET poll mode driver.

We are able to create a RAW socket and receive RTP signaling packets 
successfully, and our application runs fine.

But if we try doing an scp over the same interface, we hit a crash.

The stack trace:

(gdb) bt
#0  __memcpy_ssse3 () at ../sysdeps/x86_64/multiarch/memcpy-ssse3.S:1644
#1  0x0059053e in eth_af_packet_rx (queue=0x7f6b2936c060, 
bufs=0x7f6b27df16e0, nb_pkts=32)
at 
/software/src/cmn_thirdparty/Intel/DPDK/2.1/blddir/dpdk-2.1.0/drivers/net/af_packet/rte_eth_af_packet.c:186
#2  0x004330aa in rte_eth_rx_burst (queue_id=0, nb_pkts=32, 
rx_pkts=0x7f6b27df16e0, port_id=0 '\000')
at 
/software/src/cmn_thirdparty/Intel/DPDK/2.1/blddir/dpdk-2.1.0//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2629
#3  drain_mgt_pkts (tim=, arg=) at bp.c:1157
#4  0x004a33d4 in rte_timer_manage ()
#5  0x00433953 in handle_timers () at bp.c:1101
#6  packet_capture_loop () at bp.c:1265
#7  0x00434149 in ssbc_pkt_capture_launch_one_lcore (dummy=) at bp.c:1502
#8  0x004d2755 in eal_thread_loop ()
#9  0x7f6ba9c06b50 in start_thread (arg=) at 
pthread_create.c:304
#10 0x7f6ba995095d in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#11 0x in ?? ()
(gdb) bt full
#0  __memcpy_ssse3 () at ../sysdeps/x86_64/multiarch/memcpy-ssse3.S:1644
No locals.
#1  0x0059053e in eth_af_packet_rx (queue=0x7f6b2936c060, 
bufs=0x7f6b27df16e0, nb_pkts=32)
at 
/software/src/cmn_thirdparty/Intel/DPDK/2.1/blddir/dpdk-2.1.0/drivers/net/af_packet/rte_eth_af_packet.c:186
ethVlan = 0x7f6ba3e550c0
i = 7
ppd = 0x7f6ba9410800
mbuf = 0x7f6ba3e55840
pbuf = 0x7f6ba9410842 
pkt_q = 0x7f6b2936c060
num_rx = 7
framecount = 512
framenum = 291
#2  0x004330aa in rte_eth_rx_burst (queue_id=0, nb_pkts=32, 
rx_pkts=0x7f6b27df16e0, port_id=0 '\000')
at 
/software/src/cmn_thirdparty/Intel/DPDK/2.1/blddir/dpdk-2.1.0//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2629
temp = 
internal_af = 
dev = 0x8448ee0
nb_rx = 0
cb = 



Please help us out regarding this issue.

I will provide more information if you require any in reproducing the issue.
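From frame #1, the crash happens while copying a received frame into an mbuf in 
eth_af_packet_rx(). One plausible failure mode, sketched below in pseudocode 
(field and helper names are approximations, not the actual DPDK 2.1 source): a 
frame larger than the mbuf data room, such as a full-MTU scp segment, is copied 
without a bounds check.

```
for each ready frame ppd in the RX ring:
    mbuf = rte_pktmbuf_alloc(pool)
    len  = ppd->tp_snaplen            /* captured frame length */
    if len > tailroom(mbuf):          /* e.g. a full-MTU scp segment */
        drop or truncate the frame    /* without this, the memcpy overruns */
    copy len bytes from the ring slot into the mbuf
    mbuf->data_len = mbuf->pkt_len = len
```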

Thanks,
Ganapati Hegde




[dpdk-dev] ACL trie build incrementally

2016-04-05 Thread Rapelly, Varun
Hi All,

Can we build the ACL trie in the following way [let's say 4000 rules]? [I'm 
using DPDK 2.1.0]

1.   Create context

2.   Add 1000 rules to the context [rte_acl_add_rules]

3.   Then build the trie [rte_acl_build] for those 1000 rules.

4.   Then repeat steps 2-3 for the remaining 3000 rules


Is the above approach OK? Or is there another way to build the trie 
incrementally?
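In pseudocode, the steps above amount to (DPDK 2.1 rte_acl API names; error 
handling omitted):

```
ctx = rte_acl_create(&acl_param)            /* step 1: create context   */
for each batch of 1000 rules:
    rte_acl_add_rules(ctx, batch, 1000)     /* step 2: add next batch   */
    rte_acl_build(ctx, &build_cfg)          /* step 3: rebuild the trie */
```

Note that each rte_acl_build() call rebuilds the runtime structures from all 
rules added so far, so the cost of the loop grows with every batch.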

Regards,
Varun



[dpdk-dev] ACL memory allocation failures

2016-02-29 Thread Rapelly, Varun
Thanks Konstantin. A few more questions inline:

> 
> Previous allocation error was coming with 1024 huge pages of 2 MB size.
> 
> After increasing the huge pages to 2048, I was able to add another
> ~140 rules [IPv4 rule data--> with src, dst IP address & port, next header ] 
> more, ie., 950 rules were added.

That's strange; according to your log, all you need is ~13MB of hugepage memory:
ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1
Wonder what consumed the rest of the 4GB?
>> We are creating mem pools (for 3 DPDK-compatible ports) for packet 
>> processing.
>>> And there are no free huge pages available after our DPDK app 
>>> initialization.

Again, do you re-build your table after every rule you add?
If so, then it seems a bit of a strange approach (and definitely not the 
fastest one).
>>Yes, we are rebuilding the rules every time, for 2 reasons: 
>>1. Our application gives the full list of rules every time a new rule is added. 
>>2. There is no way to delete a specific rule in the trie. Is there any way to 
>>delete a specific ACL rule?


What you can do instead: create context; add all your rules into it; build; 
>>> By following the same approach (what I explained above, rebuilding the ACL 
>>> trie every time), can we fix this memory allocation issue?
>>> If yes, please provide me some pointers to modify the code.

> 
> Logically it did not increase number of rules [expected 2*817, but only 950 
> were added]. Is it really using huge pages memory only?
> 
> From the code it looks like heap memory. [ ret = 
> malloc_heap_alloc(&mcfg->malloc_heaps[i], type, size, 0, align == 0 ?
> 1 : align, 0) ]

As far as I can see from the log, it fails at the GEN phase, when trying to 
allocate hugepages for the RT table, at lib/librte_acl/acl_gen.c:509:

rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
        struct rte_acl_bld_trie *node_bld_trie, uint32_t num_tries,
        uint32_t num_categories, uint32_t data_index_sz, size_t max_size)
{
        ...
        mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE,
                        ctx->socket_id);
        if (mem == NULL) {
                RTE_LOG(ERR, ACL,
                        "allocation of %zu bytes on socket %d for %s failed\n",
                        total_size, ctx->socket_id, ctx->name);
                return -ENOMEM;
        }
>>> Is there any way to reserve some particular amount of huge page memory for 
>>> ACL trie (in eal_init())?
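For what it's worth, hugepage memory is reserved for the whole process at 
rte_eal_init() time, via the EAL -m or --socket-mem options; there is no 
ACL-specific reservation. A command sketch (binary name and amounts are 
illustrative):

```
./your_app -c 0xf -n 4 --socket-mem 1024,0   # grab 1024 MB of hugepages on socket 0
```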

Konstantin

> 
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Rapelly, Varun
> > Sent: Friday, February 26, 2016 10:28 AM
> > To: dev at dpdk.org
> > Subject: Re: [dpdk-dev] ACL memory allocation failures
> >
> > Hi All,
> >
> > When I'm trying to configure some 5000+ ACL rules with different 
> > source IP addresses, getting ACL memory allocation failure. I'm using DPDK 
> > 2.1.
> >
> > [root at ACLISSUE log_2015_10_26_08_19_42]# vim np.log
> > match nodes/bytes used: 816/104448
> > total: 12940832 bytes
> > ACL: Build phase for ACL "ipv4_acl_table2":
> > memory consumed: 947913495
> > ACL: trie 0: number of rules: 816
> > ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1 
> > failed
> > ACL: Build phase for ACL "ipv4_acl_table1":
> > memory consumed: 947913495
> > ACL: trie 0: number of rules: 817
> > EAL: Error - exiting with code: 1
> >   Cause: Failed to build ACL trie
> >
> > I sourced the ACL config file again. After adding around 77 more rules, the 
> > same error came again.
> >
> > total: 14912784 bytes
> > ACL: Build phase for ACL "ipv4_acl_table1":
> > memory consumed: 1040188260
> > ACL: trie 0: number of rules: 893
> > ACL: allocation of 14938480 bytes on socket 0 for ipv4_acl_table2 
> > failed
> 
> You are running out of hugepages memory.
> 
> > ACL: Build phase for ACL "ipv4_acl_table2":
> > memory consumed: 1040188260
> > ACL: trie 0: number of rules: 894
> > EAL: Error - exiting with code: 1
> >   Cause: Failed to build ACL trie
> >
> > Where to increase the memory to avoid this issue?
> 
>  Refer to:
> http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#running-dpdk-applications
> Section 2.3.2
> 
> Konstantin



[dpdk-dev] ACL memory allocation failures

2016-02-29 Thread Rapelly, Varun
> 
> Thanks Konstantin.
> 
> Previous allocation error was coming with 1024 huge pages of 2 MB size.
> 
> After increasing the huge pages to 2048, I was able to add another 
> ~140 rules [IPv4 rule data--> with src, dst IP address & port, next header ] 
> more, ie., 950 rules were added.

That's strange; according to your log, all you need is ~13MB of hugepage memory:
ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1
Wonder what consumed the rest of the 4GB?


>> We are creating mem pools (for 3 DPDK-compatible ports) for packet 
>> processing.

Again, do you re-build your table after every rule you add?
If so, then it seems a bit of a strange approach (and definitely not the 
fastest one).
>>Yes, we are rebuilding the rules every time, for 2 reasons: 
>>1. Our application gives the full list of rules every time a new rule is added. 
>>2. There is no way to delete a specific rule in the trie. Is there any way to 
>>delete a specific ACL rule?
What you can do instead: create context; add all your rules into it; build; 
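Since rte_acl has no per-rule delete, deletion is usually emulated by keeping a 
shadow copy of the rule set outside the context and rebuilding into a fresh 
context; a pseudocode sketch:

```
rules = shadow list of all active rules      /* kept outside the ACL context */

on delete(rule):
    rules.remove(rule)
    new_ctx = rte_acl_create(...)
    rte_acl_add_rules(new_ctx, rules, count(rules))
    rte_acl_build(new_ctx, ...)
    old_ctx = ctx
    ctx = new_ctx                            /* swap the context the datapath uses */
    rte_acl_free(old_ctx)                    /* once no lookups reference it */
```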

> 
> Logically it did not increase number of rules [expected 2*817, but only 950 
> were added]. Is it really using huge pages memory only?
> 
> From the code it looks like heap memory. [ ret = 
> malloc_heap_alloc(&mcfg->malloc_heaps[i], type, size, 0, align == 0 ?
> 1 : align, 0) ]

As far as I can see from the log, it fails at the GEN phase, when trying to 
allocate hugepages for the RT table, at lib/librte_acl/acl_gen.c:509:

rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
        struct rte_acl_bld_trie *node_bld_trie, uint32_t num_tries,
        uint32_t num_categories, uint32_t data_index_sz, size_t max_size)
{
        ...
        mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE,
                        ctx->socket_id);
        if (mem == NULL) {
                RTE_LOG(ERR, ACL,
                        "allocation of %zu bytes on socket %d for %s failed\n",
                        total_size, ctx->socket_id, ctx->name);
                return -ENOMEM;
        }

Konstantin

> 
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Rapelly, Varun
> > Sent: Friday, February 26, 2016 10:28 AM
> > To: dev at dpdk.org
> > Subject: Re: [dpdk-dev] ACL memory allocation failures
> >
> > Hi All,
> >
> > When I'm trying to configure some 5000+ ACL rules with different 
> > source IP addresses, getting ACL memory allocation failure. I'm using DPDK 
> > 2.1.
> >
> > [root at ACLISSUE log_2015_10_26_08_19_42]# vim np.log
> > match nodes/bytes used: 816/104448
> > total: 12940832 bytes
> > ACL: Build phase for ACL "ipv4_acl_table2":
> > memory consumed: 947913495
> > ACL: trie 0: number of rules: 816
> > ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1 
> > failed
> > ACL: Build phase for ACL "ipv4_acl_table1":
> > memory consumed: 947913495
> > ACL: trie 0: number of rules: 817
> > EAL: Error - exiting with code: 1
> >   Cause: Failed to build ACL trie
> >
> > I sourced the ACL config file again. After adding around 77 more rules, the 
> > same error came again.
> >
> > total: 14912784 bytes
> > ACL: Build phase for ACL "ipv4_acl_table1":
> > memory consumed: 1040188260
> > ACL: trie 0: number of rules: 893
> > ACL: allocation of 14938480 bytes on socket 0 for ipv4_acl_table2 
> > failed
> 
> You are running out of hugepages memory.
> 
> > ACL: Build phase for ACL "ipv4_acl_table2":
> > memory consumed: 1040188260
> > ACL: trie 0: number of rules: 894
> > EAL: Error - exiting with code: 1
> >   Cause: Failed to build ACL trie
> >
> > Where to increase the memory to avoid this issue?
> 
>  Refer to:
> http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#running-dpdk-applications
> Section 2.3.2
> 
> Konstantin



[dpdk-dev] ACL memory allocation failures

2016-02-26 Thread Rapelly, Varun
Thanks Konstantin.

The previous allocation error was coming with 1024 huge pages of 2 MB size. 

After increasing the huge pages to 2048, I was able to add another ~140 rules 
[IPv4 rule data --> with src, dst IP address & port, next header], i.e., 950 
rules in total were added.

Logically, doubling the hugepages did not double the number of rules [expected 
2*817, but only 950 were added]. Is it really using huge page memory only? 

From the code it looks like heap memory. [ ret = 
malloc_heap_alloc(&mcfg->malloc_heaps[i], type, size, 0, align == 0 ? 1 : 
align, 0) ]

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Rapelly, Varun
> Sent: Friday, February 26, 2016 10:28 AM
> To: dev at dpdk.org
> Subject: Re: [dpdk-dev] ACL memory allocation failures
> 
> Hi All,
> 
> When I'm trying to configure some 5000+ ACL rules with different 
> source IP addresses, getting ACL memory allocation failure. I'm using DPDK 
> 2.1.
> 
> [root at ACLISSUE log_2015_10_26_08_19_42]# vim np.log
> match nodes/bytes used: 816/104448
> total: 12940832 bytes
> ACL: Build phase for ACL "ipv4_acl_table2":
> memory consumed: 947913495
> ACL: trie 0: number of rules: 816
> ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1 
> failed
> ACL: Build phase for ACL "ipv4_acl_table1":
> memory consumed: 947913495
> ACL: trie 0: number of rules: 817
> EAL: Error - exiting with code: 1
>   Cause: Failed to build ACL trie
> 
> I sourced the ACL config file again. After adding around 77 more rules, the 
> same error came again.
> 
> total: 14912784 bytes
> ACL: Build phase for ACL "ipv4_acl_table1":
> memory consumed: 1040188260
> ACL: trie 0: number of rules: 893
> ACL: allocation of 14938480 bytes on socket 0 for ipv4_acl_table2 
> failed

You are running out of hugepages memory.

> ACL: Build phase for ACL "ipv4_acl_table2":
> memory consumed: 1040188260
> ACL: trie 0: number of rules: 894
> EAL: Error - exiting with code: 1
>   Cause: Failed to build ACL trie
> 
> Where to increase the memory to avoid this issue?

 Refer to:
http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#running-dpdk-applications
Section 2.3.2 
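That section comes down to reserving hugepages (via the hugepages= boot 
parameter or sysfs) and mounting hugetlbfs before launching the app. A quick 
sizing sketch for the failing allocation above (the reservation commands are 
shown as comments since they need root):

```shell
# Reservation, per the Linux GSG (run as root):
#   echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
#   mkdir -p /mnt/huge && mount -t hugetlbfs nodev /mnt/huge
# How many 2 MB pages does the failing 12966528-byte ACL allocation need?
PAGE=$((2 * 1024 * 1024))
NEED=12966528
echo $(( (NEED + PAGE - 1) / PAGE ))
```

The failing allocation itself is only 7 pages; per the thread above, the mem 
pools created at application startup are what consume the rest of the 
reservation.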

Konstantin



[dpdk-dev] ACL memory allocation failures

2016-02-26 Thread Rapelly, Varun
Hi All,

When I'm trying to configure some 5000+ ACL rules with different source IP 
addresses, I'm getting an ACL memory allocation failure. I'm using DPDK 2.1.

[root at ACLISSUE log_2015_10_26_08_19_42]# vim np.log
match nodes/bytes used: 816/104448
total: 12940832 bytes
ACL: Build phase for ACL "ipv4_acl_table2":
memory consumed: 947913495
ACL: trie 0: number of rules: 816
ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1 failed
ACL: Build phase for ACL "ipv4_acl_table1":
memory consumed: 947913495
ACL: trie 0: number of rules: 817
EAL: Error - exiting with code: 1
  Cause: Failed to build ACL trie

I sourced the ACL config file again. After adding around 77 more rules, the 
same error came again.

total: 14912784 bytes
ACL: Build phase for ACL "ipv4_acl_table1":
memory consumed: 1040188260
ACL: trie 0: number of rules: 893
ACL: allocation of 14938480 bytes on socket 0 for ipv4_acl_table2 failed
ACL: Build phase for ACL "ipv4_acl_table2":
memory consumed: 1040188260
ACL: trie 0: number of rules: 894
EAL: Error - exiting with code: 1
  Cause: Failed to build ACL trie

Where do I increase the memory to avoid this issue?

Regards,
Varun



[dpdk-dev] ACL memory allocation failures

2016-02-26 Thread Rapelly, Varun
Hi All,

When I'm trying to configure some 5000+ ACL rules with different source IP 
addresses, I'm getting an ACL memory allocation failure.

[root at ACLISSUE log_2015_10_26_08_19_42]# vim np.log
match nodes/bytes used: 816/104448
total: 12940832 bytes
ACL: Build phase for ACL "ipv4_acl_table2":
memory consumed: 947913495
ACL: trie 0: number of rules: 816
ACL: allocation of 12966528 bytes on socket 0 for ipv4_acl_table1 failed
ACL: Build phase for ACL "ipv4_acl_table1":
memory consumed: 947913495
ACL: trie 0: number of rules: 817
EAL: Error - exiting with code: 1
  Cause: Failed to build ACL trie

I sourced the ACL config file again. After adding around 77 more rules, the 
same error came again.

total: 14912784 bytes
ACL: Build phase for ACL "ipv4_acl_table1":
memory consumed: 1040188260
ACL: trie 0: number of rules: 893
ACL: allocation of 14938480 bytes on socket 0 for ipv4_acl_table2 failed
ACL: Build phase for ACL "ipv4_acl_table2":
memory consumed: 1040188260
ACL: trie 0: number of rules: 894
EAL: Error - exiting with code: 1
  Cause: Failed to build ACL trie

Where do I increase the memory to avoid this issue?

Regards,
Varun



[dpdk-dev] iommu Error on DL380 G8 with RHEL 7.1

2015-07-24 Thread Rapelly, Varun
Hi Monroy,

Thanks for the reply :)

OK, will check with Lee.

-Original Message-
From: Gonzalez Monroy, Sergio [mailto:sergio.gonzalez.mon...@intel.com] 
Sent: Friday, July 24, 2015 2:47 PM
To: Rapelly, Varun
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] iommu Error on DL380 G8 with RHEL 7.1

On 24/07/2015 09:54, Rapelly, Varun wrote:
> Hi All,
>
> I'm not able to send packets through OVS bridge [with dpdk] when iommu is 
> enabled on HP DL380 G8 [RHEL 7.1]. Getting following errors in dmesg.
>
> [0.657681] IOMMU: Setting identity map for device :01:00.2 
> [0xbdff6000 - 0xbdffcfff]
> [0.657686] IOMMU: Prepare 0-16MiB unity mapping for LPC
> [ 4458.091522] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault 
> addr 1a633d000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.164541] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
> 1a633d000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.337565] dmar: DMAR:[DMA Read] Request device [0a:00.1] fault addr 
> 1a63cd000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.637356] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
> 1a637d000 DMAR:[fault reason 06] PTE Read access is not set
>
> Whereas on another machine, a ProLiant DL380 G7 [RHEL 7.1], everything works 
> fine. Is it something to do with the hardware on the Gen8 machine?
>
> The following link shows the similar problem.
>
> http://comments.gmane.org/gmane.comp.networking.dpdk.devel/2281
>
> In the HP link below, they report the same problem. I upgraded the ROM to 
> the latest version, but no luck. :(
>
> http://h20564.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c03323800
>
>
>
> We need this flag to be enabled [SR-IOV] on that machine.
>
>
>
> Please let me know, is there any way to resolve this issue?
>
> Regards,
> Varun
>
Hi Varun,

As I replied in your previous post, the issue is most likely related to HP.
You need to contact HP for a solution to your issues as suggested in the 
following thread by Lee A. Roberts from HP:
http://dpdk.org/ml/archives/dev/2015-March/015504.html

Sergio


[dpdk-dev] iommu Error on DL380 G8 with RHEL 7.1

2015-07-24 Thread Rapelly, Varun
Hi Lee,

In the following link you mentioned BIOS settings to eliminate RMRRs. 

http://dpdk.org/ml/archives/dev/2015-March/015504.html

2) If your application requires the IOMMU, there are BIOS parameters that can be
   configured to eliminate the RMRRs on a slot-by-slot basis.  (I will send
   instructions for this separately, since it is not a DPDK issue.)

Please send the details of the above configuration. Please find attached a 
screenshot of the Gen8 server version details.

Thanks in advance.

-Original Message-
From: Gonzalez Monroy, Sergio [mailto:sergio.gonzalez.mon...@intel.com] 
Sent: Friday, July 24, 2015 2:47 PM
To: Rapelly, Varun
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] iommu Error on DL380 G8 with RHEL 7.1

On 24/07/2015 09:54, Rapelly, Varun wrote:
> Hi All,
>
> I'm not able to send packets through OVS bridge [with dpdk] when iommu is 
> enabled on HP DL380 G8 [RHEL 7.1]. Getting following errors in dmesg.
>
> [0.657681] IOMMU: Setting identity map for device :01:00.2 
> [0xbdff6000 - 0xbdffcfff]
> [0.657686] IOMMU: Prepare 0-16MiB unity mapping for LPC
> [ 4458.091522] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault 
> addr 1a633d000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.164541] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
> 1a633d000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.337565] dmar: DMAR:[DMA Read] Request device [0a:00.1] fault addr 
> 1a63cd000 DMAR:[fault reason 06] PTE Read access is not set [ 
> 4458.637356] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
> 1a637d000 DMAR:[fault reason 06] PTE Read access is not set
>
> Whereas on another machine, a ProLiant DL380 G7 [RHEL 7.1], everything works 
> fine. Is it something to do with the hardware on the Gen8 machine?
>
> The following link shows the similar problem.
>
> http://comments.gmane.org/gmane.comp.networking.dpdk.devel/2281
>
> In the HP link below, they report the same problem. I upgraded the ROM to 
> the latest version, but no luck. :(
>
> http://h20564.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c03323800
>
>
>
> We need this flag to be enabled [SR-IOV] on that machine.
>
>
>
> Please let me know, is there any way to resolve this issue?
>
> Regards,
> Varun
>
Hi Varun,

As I replied in your previous post, the issue is most likely related to HP.
You need to contact HP for a solution to your issues as suggested in the 
following thread by Lee A. Roberts from HP:
http://dpdk.org/ml/archives/dev/2015-March/015504.html

Sergio
-- next part --
A non-text attachment was scrubbed...
Name: iommu_Gen8_version.png
Type: image/png
Size: 114600 bytes
Desc: iommu_Gen8_version.png
URL: 
<http://dpdk.org/ml/archives/dev/attachments/20150724/62a027af/attachment-0001.png>


[dpdk-dev] iommu Error on DL380 G8 with RHEL 7.1

2015-07-24 Thread Rapelly, Varun
Hi All,

I'm not able to send packets through an OVS bridge [with DPDK] when the IOMMU 
is enabled on an HP DL380 G8 [RHEL 7.1]. I'm getting the following errors in 
dmesg.

[0.657681] IOMMU: Setting identity map for device :01:00.2 [0xbdff6000 
- 0xbdffcfff]
[0.657686] IOMMU: Prepare 0-16MiB unity mapping for LPC
[ 4458.091522] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
1a633d000
DMAR:[fault reason 06] PTE Read access is not set
[ 4458.164541] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
1a633d000
DMAR:[fault reason 06] PTE Read access is not set
[ 4458.337565] dmar: DMAR:[DMA Read] Request device [0a:00.1] fault addr 
1a63cd000
DMAR:[fault reason 06] PTE Read access is not set
[ 4458.637356] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
1a637d000
DMAR:[fault reason 06] PTE Read access is not set

Whereas on another machine, a ProLiant DL380 G7 [RHEL 7.1], everything works 
fine. Is it something to do with the hardware on the Gen8 machine?

The following link shows a similar problem.

http://comments.gmane.org/gmane.comp.networking.dpdk.devel/2281

In the HP link below, they report the same problem. I upgraded the ROM to the 
latest version, but no luck. :(

http://h20564.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c03323800



We need this flag to be enabled [SR-IOV] on that machine.



Please let me know, is there any way to resolve this issue?

Regards,
Varun



[dpdk-dev] iommu on DPDK2.0.0 Error

2015-07-23 Thread Rapelly, Varun
Hi,

I'm facing problems with "iommu=pt intel_iommu=on" on a ProLiant DL380p Gen8 
server [RHEL 7.1], but I'm not facing this issue on a ProLiant DL380 G7 
[RHEL 7.1] server.

When I add the above options to the kernel boot line and configure an OVS 
bridge [with the -dpdk option], I'm not able to send packets out from the OVS 
bridge. But when I pass iommu=pt intel_iommu=off, I'm able to send & receive 
packets. Following are the dmesg details for both scenarios on the Gen8 machine.

[root at ARTHA ~]# nic

Network devices using DPDK-compatible driver

:0a:00.0 'I350 Gigabit Network Connection' drv=igb_uio unused=

Network devices using kernel driver
===
:03:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno1 drv=tg3 
unused=igb_uio *Active*
:03:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno2 drv=tg3 
unused=igb_uio
:03:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno3 drv=tg3 
unused=igb_uio
:03:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eno4 drv=tg3 
unused=igb_uio
:0a:00.1 'I350 Gigabit Network Connection' if=ens3f1 drv=igb unused=igb_uio

[root at ARTHA ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 
root=UUID=89019831-4506-451e-8259-68171411ac4b ro crashkernel=auto rhgb quiet 
iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=16 
isolcpus=2-7

[root at ARTHA ~]# dmesg | grep -e DMAR -e IOMMU [deleted a few lines below as 
the list was long]
{
[0.00] ACPI: DMAR bddad200 00450 (v01 HP ProLiant 0001  
 \xffd2? 162E)
[0.00] Intel-IOMMU: enabled
[0.019553] dmar: IOMMU 0: reg_base_addr f8ffe000 ver 1:0 cap d2078c106f0466 
ecap f020de
[0.019652] IOAPIC id 8 under DRHD base  0xf8ffe000 IOMMU 0
[0.019653] IOAPIC id 0 under DRHD base  0xf8ffe000 IOMMU 0
[0.633163] IOMMU 0 0xf8ffe000: using Queued invalidation
[0.633505] IOMMU: Setting identity map for device :0a:00.0 [0xe8000 - 
0xe8fff]
[0.633539] IOMMU: Setting identity map for device :0a:00.1 [0xe8000 - 
0xe8fff]
[0.633713] IOMMU: Prepare 0-16MiB unity mapping for LPC
}

After creating the OVS bridge and adding ports to it:

[  150.845001] SELinux: initialized (dev hugetlbfs, type hugetlbfs), uses 
transition SIDs
[  150.867256] igb_uio: module verification failed: signature and/or required 
key missing - tainting kernel
[  150.867415] igb_uio: Use MSIX interrupt by default
[  150.991486] igb :0a:00.0: removed PHC on ens3f0
[  150.991490] igb :0a:00.0: DCA disabled
[  151.051927] igb_uio :0a:00.0: irq 85 for MSI/MSI-X
[  151.052122] igb_uio :0a:00.0: uio device registered with irq 55
[  151.086101] gre: GRE over IPv4 demultiplexor driver
[  151.095108] openvswitch: Open vSwitch switching datapath
[  153.946783] device ovs-netdev entered promiscuous mode
[  153.956208] device ovs entered promiscuous mode
[  154.132619] dmar: DRHD: handling fault status reg 2
[  154.132625] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
6633d000
DMAR:[fault reason 06] PTE Read access is not set
[  154.211283] dmar: DRHD: handling fault status reg 102
[  154.211287] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
6633d000
DMAR:[fault reason 06] PTE Read access is not set
[  154.394040] dmar: DRHD: handling fault status reg 202
[  154.394046] dmar: DMAR:[DMA Read] Request device [0a:00.0] fault addr 
6637d000
DMAR:[fault reason 06] PTE Read access is not set


[root at ARTHA ~]# ping -I ovs 10.54.218.1
PING 10.54.218.1 (10.54.218.1) from 10.54.218.89 ovs: 56(84) bytes of data.
^C
--- 10.54.218.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms


[root at ARTHA ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 
root=UUID=89019831-4506-451e-8259-68171411ac4b ro crashkernel=auto rhgb quiet 
iommu=pt intel_iommu=off default_hugepagesz=1G hugepagesz=1G hugepages=16 
isolcpus=2-7
[root at ARTHA ~]# dmesg | grep -e DMAR -e IOMMU
[0.00] ACPI: DMAR bddad200 00450 (v01 HP ProLiant 0001  
 \xffd2? 162E)
[0.00] Intel-IOMMU: disabled
[0.019551] dmar: IOMMU 0: reg_base_addr f8ffe000 ver 1:0 cap d2078c106f0466 
ecap f020de
[0.019651] IOAPIC id 8 under DRHD base  0xf8ffe000 IOMMU 0
[0.019652] IOAPIC id 0 under DRHD base  0xf8ffe000 IOMMU 0

[  263.084959] SELinux: initialized (dev hugetlbfs, type hugetlbfs), uses 
transition SIDs
[  263.113272] igb_uio: module verification failed: signature and/or required 
key missing - tainting kernel
[  263.113455] igb_uio: Use MSIX interrupt by default
[  263.240329] igb :0a:00.0: removed PHC on ens3f0
[  263.240333] igb :0a:00.0: DCA disabled
[  263.298807] igb_uio :0a:00.0: irq 85 for MSI/MSI-X
[  263.299001] igb_uio :0a:00.0: uio device registered with irq 55
[  263.332111] gre: GRE over IPv4 demultiplexor driver
[  263.341157] openvswitch: Open vSwitch switching datapath
[  266.172120] 

[dpdk-dev] Testpmd application failing with Cause: No probed ethernet device with DPDK-1.8.0

2015-03-23 Thread Rapelly, Varun
Hi Bruce,

Thanks for your reply.

I used the dpdk_nic_bind.py script to bind the igb_uio driver to the specific 
NIC ports. 

[root at SSBC swe]# ./dpdk_nic_bind.py --status

Network devices using IGB_UIO driver

:03:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=vmxnet3
:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=vmxnet3
:1b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=vmxnet3

Network devices using kernel driver
===
:0b:00.0 'VMXNET3 Ethernet Controller' if=ha0 drv=vmxnet3 unused=igb_uio

When we ported to DPDK 1.7.0, we didn't face this kind of issue.

When I googled, I found a similar problem, but didn't get any solution from it.
http://patchwork.dpdk.org/ml/archives/dev/2014-February/001437.html

Please give me some pointers to resolve this issue.

Regards,
Varun
-Original Message-
From: Bruce Richardson [mailto:bruce.richard...@intel.com] 
Sent: Friday, March 20, 2015 7:33 PM
To: Rapelly, Varun
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] Testpmd application failing with Cause: No probed 
ethernet device with DPDK-1.8.0

On Fri, Mar 20, 2015 at 07:09:00AM +, Rapelly, Varun wrote:
> 
> Hi All,
> 
> I'm facing the following issue when testing testpmd application with 
> DPDK-1.8.0.
> 
> EAL: PCI device :03:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   :03:00.0 not managed by UIO driver, skipping
> EAL: PCI device :0b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   :0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device :13:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   :13:00.0 not managed by UIO driver, skipping
> EAL: PCI device :1b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   :1b:00.0 not managed by UIO driver, skipping
> EAL: Error - exiting with code: 1
>   Cause: No probed ethernet device
> 
> Please let me know, what could be the issue.
> 
> FYI:
> Igb_uio and rte_kni ko modules are inserted successfully.

After you load the uio driver, you still need to bind some NIC devices to use 
it, otherwise DPDK will ignore those devices as being used by the kernel.
The script "dpdk_nic_bind.py" in the tools directory can help with the binding 
and unbinding to the different drivers.
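For example (the PCI address is from your --status output; the module path 
depends on your build target and is illustrative; run as root from the DPDK 
directory):

```
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0
./tools/dpdk_nic_bind.py --status
```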

/Bruce

> 
> Regards,
> Varun
> 


[dpdk-dev] Testpmd application failing with Cause: No probed ethernet device with DPDK-1.8.0

2015-03-20 Thread Rapelly, Varun

Hi All,

I'm facing the following issue when testing the testpmd application with DPDK 
1.8.0.

EAL: PCI device :03:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :03:00.0 not managed by UIO driver, skipping
EAL: PCI device :0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :0b:00.0 not managed by UIO driver, skipping
EAL: PCI device :13:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :13:00.0 not managed by UIO driver, skipping
EAL: PCI device :1b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :1b:00.0 not managed by UIO driver, skipping
EAL: Error - exiting with code: 1
  Cause: No probed ethernet device

Please let me know, what could be the issue.

FYI:
The igb_uio and rte_kni .ko modules are inserted successfully.


Regards,
Varun



[dpdk-dev] Testpmd application failing with Cause: No probed ethernet device

2015-03-20 Thread Rapelly, Varun
Hi All,

I'm facing the following issue when testing the testpmd application.

EAL: PCI device :03:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :03:00.0 not managed by UIO driver, skipping
EAL: PCI device :0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :0b:00.0 not managed by UIO driver, skipping
EAL: PCI device :13:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :13:00.0 not managed by UIO driver, skipping
EAL: PCI device :1b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   :1b:00.0 not managed by UIO driver, skipping
EAL: Error - exiting with code: 1
  Cause: No probed ethernet device

Please let me know, what could be the issue.

FYI:
The igb_uio and rte_kni .ko modules are inserted successfully.


Regards,
Varun

-- next part --
An embedded and charset-unspecified text was scrubbed...
Name: test_pmd_log.txt
URL: 



[dpdk-dev] ACL Issue with single field rule and rest with wild card entry

2015-02-06 Thread Rapelly, Varun
Hi,

struct ipv6_5tuple {
    uint8_t  proto;      /* Protocol, next header. */
    uint32_t src_addr0;  /* Source IPv6 address, word 0. */
    uint32_t src_addr1;  /* Source IPv6 address, word 1. */
    uint32_t src_addr2;  /* Source IPv6 address, word 2. */
    uint32_t src_addr3;  /* Source IPv6 address, word 3. */
};

enum {
    PROTO_FIELD_IPV6,
    SRC_FIELD0_IPV6,
    SRC_FIELD1_IPV6,
    SRC_FIELD2_IPV6,
    SRC_FIELD3_IPV6,
    NUM_FIELDS_IPV6
};


I'm using the above data to insert into the ACL trie.

If I insert rules that differ only in the proto field [expecting the other 
fields to act as wild card entries], then the rules do not match.

But if I insert one rule with dummy entries [in the attached file, line num 
118-125], then the above issue is resolved.

Please let me know:


1.   Can we have rules with only one field set and the others as wild card entries?

2.   Is there any other way to match wild card entries in a rule?
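For reference, in rte_acl a wildcard field is normally expressed explicitly 
rather than left unset; a pseudocode sketch of one rule built over the fields 
above (exact mask semantics depend on the field type in your 
rte_acl_field_def):

```
rule.field[PROTO_FIELD_IPV6] = { value = 17, mask_range = 0xff }  /* match proto 17 only */

/* wildcard the four source-address words: value 0, mask 0 = "match anything" */
for i in SRC_FIELD0_IPV6 .. SRC_FIELD3_IPV6:
    rule.field[i] = { value = 0, mask_range = 0 }
```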

Regards,
Varun

-- next part --
An embedded and charset-unspecified text was scrubbed...
Name: main.c
URL: 



[dpdk-dev] ACL trie insertion and search

2015-01-28 Thread Rapelly, Varun
Hi,

We were converting the ACL rule data from host to network byte order [by 
mistake] while inserting into the trie, but while searching we were not 
converting the search data to network byte order.
Even with the above, rules were matching, except in a few scenarios.

After correcting the above mistake, all rules match perfectly fine.

I believed the above [converting only while inserting] should also work for 
all rules. Please clarify.

Regards,
Varun