[dpdk-dev] two different threads poll the same port, different queues, why is the throughput the same

2016-11-14 Thread


I do not have any lock/critical sections in my code. 
I have logs that print out the core id, src port, dst port and queue id.  Worker 
0 runs on core 1 and runs macswap, which is very light; its throughput is 4.5 Mpps. Worker 1 
runs on core 2 and is a heavy load balancer; its throughput is also 4.5 Mpps. This 
does not make sense at all. 

***thread core_id=1, src_port=0, dst_port=0, rx_queue_id=0, tx_queue_id=0

***thread core_id=2, src_port=0, dst_port=0, rx_queue_id=1, tx_queue_id=1

Core 0: Running stat thread

worker_id=0, core_id=1, pkt_rate=4418972

worker_id=1, core_id=2, pkt_rate=4419808

worker_id=0, core_id=1, pkt_rate=4631684

worker_id=1, core_id=2, pkt_rate=4632928
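
For reference, a minimal sketch of the per-worker receive loop being described: one lcore per RX/TX queue of the same port. The port id, queue ids, burst size and helper names are assumptions for illustration, not the poster's code.

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* hedged sketch: each worker polls only its own queue of port 0 */
static int
worker_loop(void *arg)
{
        const uint16_t queue_id = *(const uint16_t *)arg;  /* 0 or 1 */
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb;

        printf("***thread core_id=%u, rx_queue_id=%u, tx_queue_id=%u\n",
               rte_lcore_id(), queue_id, queue_id);

        for (;;) {
                nb = rte_eth_rx_burst(0, queue_id, bufs, BURST_SIZE);
                /* per-worker work goes here: macswap for worker 0,
                 * load balancing for worker 1 */
                if (nb > 0)
                        rte_eth_tx_burst(0, queue_id, bufs, nb);
        }
        return 0;
}

Each worker would be launched with rte_eal_remote_launch() on its own lcore, matching the core_id/queue_id pairs in the log above.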








At 2016-11-14 14:13:25, "Kyle Larose"  wrote:
>On Mon, Nov 14, 2016 at 2:28 PM, ??  wrote:
>> Hi all,
>>
>>
>> I have two threads that process packets in different ways. Thread A (core 
>> 0) is very heavy, thread B (core 1) is very light. If I run each of 
>> them alone, their throughput is hugely different with small packets. Thread A polls 
>> queue 0 of port 0, thread B polls queue 1 of port 0. If I run them at the 
>> same time, why do thread A and thread B get the same throughput? This makes me very 
>> confused. Does anyone have the same experience or know some possible reasons?
>>
>
>Can you give some examples with numbers? My first thought is that
>maybe the two threads are contending for the same physical core. You
>don't have any locking/critical sections, do you?
>>
>> Thanks,
>> wei


[dpdk-dev] two different threads poll the same port, different queues, why is the throughput the same

2016-11-14 Thread
Hi all, 


I have two threads that process packets in different ways. Thread A (core 0) 
is very heavy, thread B (core 1) is very light. If I run each of them alone, 
their throughput is hugely different with small packets. Thread A polls queue 0 of 
port 0, thread B polls queue 1 of port 0. If I run them at the same time, why do 
thread A and thread B get the same throughput? This makes me very confused. Does 
anyone have the same experience or know some possible reasons?  


Thanks, 
wei


[dpdk-dev] dpdk example qos meter compile error

2016-08-01 Thread

Please ignore this message. It works. I just made a mistake myself. Sorry. 







At 2016-08-01 06:10:24, "??"  wrote:
>Hi, 
>
>
>I want to compile and run the DPDK example qos_meter, but it shows compile errors. 
>
>
>qos_meter/rte_policer.h:34:20: error: #include nested too deeply
>
>qos_meter/rte_policer.h:35:25: error: #include nested too deeply
>
>qos_meter/rte_policer.h:38:43: error: unknown type name 'uint32_t'
>
>...
>
>I am using dpdk-16.04. Does anyone know how to fix these errors? 


[dpdk-dev] dpdk example qos meter compile error

2016-08-01 Thread
Hi, 


I want to compile and run the DPDK example qos_meter, but it shows compile errors. 


qos_meter/rte_policer.h:34:20: error: #include nested too deeply

qos_meter/rte_policer.h:35:25: error: #include nested too deeply

qos_meter/rte_policer.h:38:43: error: unknown type name 'uint32_t'

...

I am using dpdk-16.04. Does anyone know how to fix these errors? 


[dpdk-dev] EAL: memzone_reserve_aligned_thread_unsafe(): No more room in config

2016-05-19 Thread
Hi all, 


When using the DPDK multi-process client-server example, I create many clients. 
Once the number of clients reaches 1239, I hit this error:

EAL: memzone_reserve_aligned_thread_unsafe(): No more room in config

RING: Cannot reserve memory

EAL: Error - exiting with code: 1

  Cause: Cannot create tx ring queue for client 1239

I have 32 GB of hugepage memory. Can anyone give some guidance on how to increase the 
memzone limit? Which parameter should I adjust? 
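
For context, a hedged sketch of where the memzones go in this example: each rte_ring_create() call reserves one memzone, so the number of clients is bounded by the memzone table size rather than by the 32 GB of hugepage memory (the exact build-time limit is an assumption; in DPDK releases of this era it is the RTE_MAX_MEMZONE constant in the build configuration). The ring name and size below are illustrative, not the example's values.

#include <stdio.h>
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

/* hedged sketch: one memzone is consumed per ring, regardless of how much
 * hugepage memory is still free */
static struct rte_ring *
create_client_tx_ring(unsigned client_id)
{
        char name[RTE_RING_NAMESIZE];
        struct rte_ring *r;

        snprintf(name, sizeof(name), "mp_client_tx_%u", client_id);
        r = rte_ring_create(name, 128, rte_socket_id(),
                            RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
                printf("ring %s failed: %s\n", name, rte_strerror(rte_errno));
        return r;
}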


[dpdk-dev] rte_hash_del_key crash in multi-process environment

2016-04-20 Thread
Thanks so much! That fixes my problem. 








At 2016-04-19 15:39:16, "De Lara Guarch, Pablo"  wrote:
>Hi,
>
>> -Original Message-
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of ??
>> Sent: Tuesday, April 19, 2016 5:58 AM
>> To: De Lara Guarch, Pablo
>> Cc: Thomas Monjalon; Gonzalez Monroy, Sergio; dev at dpdk.org; Dhana
>> Eadala; Richardson, Bruce; Qiu, Michael
>> Subject: [dpdk-dev] rte_hash_del_key crash in multi-process environment
>> 
>> Hi all,
>> 
>> 
>> In the multi-process environment, I previously hit a bug when calling
>> rte_hash_lookup_with_hash; Dhana's patch fixed that problem. Now I
>> need to remove flows in the multi-process environment, and the system
>> crashes when calling the rte_hash_del_key function. The following is the gdb
>> trace. Has anybody met this problem, or does anyone know how to fix it?
>
>First of all, another fix for the multi-process support was implemented and 
>merged for 16.04 release,
>so take a look at it, if you can.
>Regarding the rte_hash_del_key() function, you should use 
>rte_hash_del_key_with_hash,
>if you want to use it in a multi-process environment (as you did for the 
>lookup function).
>
>Thanks,
>Pablo
>
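
A rough sketch of what Pablo is suggesting (an illustration, not code from the thread): the caller computes the signature itself, with the same hash function the table was created with, and passes it to the *_with_hash variant, so no function pointer stored inside the shared rte_hash is relied upon. The key/length parameters and the choice of rte_jhash are assumptions.

#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

/* hedged sketch: delete a key from a shared table in a secondary process */
static int32_t
del_flow(const struct rte_hash *h, const void *key, uint32_t key_len)
{
        /* hash computed locally, assuming the table was created with rte_jhash */
        hash_sig_t sig = rte_jhash(key, key_len, 0);

        return rte_hash_del_key_with_hash(h, key, sig);
}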
>> 
>> 
>> 
>> 
>> Program received signal SIGILL, Illegal instruction.
>> 
>> 0x0048a0dd in rte_port_ring_reader_frag_free
>> (port=0x7ffe113d4100) at /home/zhangwei1984/timopenNetVM/dpdk-
>> 2.2.0/lib/librte_port/rte_port_frag.c:266
>> 
>> 266        return -1;
>> 
>> (gdb) bt
>> 
>> #0  0x0048a0dd in rte_port_ring_reader_frag_free
>> (port=0x7ffe113d4100) at /home/zhangwei1984/timopenNetVM/dpdk-
>> 2.2.0/lib/librte_port/rte_port_frag.c:266
>> 
>> #1  0x0049c537 in rte_hash_del_key (h=0x7ffe113d4100,
>> key=0x7ffe092e1000)
>> 
>>at /home/zhangwei1984/timopenNetVM/dpdk-
>> 2.2.0/lib/librte_hash/rte_cuckoo_hash.c:917
>> 
>> #2  0x0043716a in onvm_ft_remove_key (table=0x7ffe113c3e80,
>> key=0x7ffe092e1000) at /home/zhangwei1984/onvm-shared-
>> cpu/onvm/shared/onvm_flow_table.c:160
>> 
>> #3  0x0043767e in onvm_flow_dir_del_and_free_key
>> (key=0x7ffe092e1000) at /home/zhangwei1984/onvm-shared-
>> cpu/onvm/shared/onvm_flow_dir.c:144
>> 
>> #4  0x00437619 in onvm_flow_dir_del_key (key=0x7ffe092e1000) at
>> /home/zhangwei1984/onvm-shared-
>> cpu/onvm/shared/onvm_flow_dir.c:128
>> 
>> #5  0x00423ded in remove_flow_rule (idx=3) at
>> /home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:130
>> 
>> #6  0x00423e44 in clear_stat_remove_flow_rule
>> (nf_info=0x7fff3e652100) at /home/zhangwei1984/onvm-shared-
>> cpu/examples/flow_dir/flow_dir.c:145
>> 
>> #7  0x004247e3 in alloc_nfs_install_flow_rule (services=0xd66e90
>> , pkt=0x7ffe13f56400)
>> 
>>at /home/zhangwei1984/onvm-shared-
>> cpu/examples/flow_dir/flow_dir.c:186
>> 
>> #8  0x00424bdb in packet_handler (pkt=0x7ffe13f56400,
>> meta=0x7ffe13f56440) at /home/zhangwei1984/onvm-shared-
>> cpu/examples/flow_dir/flow_dir.c:294
>> 
>> #9  0x0043001d in onvm_nf_run (info=0x7fff3e651b00,
>> handler=0x424b21 ) at /home/zhangwei1984/onvm-
>> shared-cpu/onvm/onvm_nf/onvm_nflib.c:462
>> 
>> #10 0x00424cc2 in main (argc=3, argv=0x7fffe660) at
>> /home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:323
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> At 2016-03-23 03:53:43, "De Lara Guarch, Pablo"
>>  wrote:
>> >Hi Thomas,
>> >
>> >> -Original Message-
>> >> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
>> >> Sent: Tuesday, March 22, 2016 11:42 AM
>> >> To: De Lara Guarch, Pablo; Gonzalez Monroy, Sergio
>> >> Cc: dev at dpdk.org; Dhana Eadala; Richardson, Bruce; Qiu, Michael
>> >> Subject: Re: [dpdk-dev] [PATCH] hash: fix memcmp function pointer in
>> multi-
>> >> process environment
>> >>
>> >> Hi,
>> >>
>> >> Pablo, Sergio, please could you help with this issue?
>> >
>> >I agree this is not the best way to fix this. I will try to have a fix 
>> >without
>> having to use ifdefs.
>> >
>> >Thanks,
>> >Pablo
>> >>
>> >> 2016-03-13 22:16, Dhana Eadala:
>> >> > We found a problem in dpdk-2.2 when used in a multi-process
>> >> > environment.
>> >> > Here is a brief description of how we are using the DPDK:
>> >> >
>> >> > We have two processes proc1, proc2 using dpdk. These proc1 and proc2
>> >> are
>> >> > two different compiled binaries.
>> >> > proc1 is started as primary process and proc2 as secondary process.
>> >> >
>> >> > proc1:
>> >> > Calls srcHash = rte_hash_create("src_hash_name") to create rte_hash
>> >> structure.
>> >> > As part of this, this API initialized the rte_hash structure and set the
>> >> > srcHash->rte_hash_cmp_eq to the address of memcmp() from proc1
>> >> address space.
>> >> >
>> >> > proc2:
>> >> > calls srcHash =  rte_hash_find_existing("src_hash_name").
>> >> > This function call returns the rte_hash created by proc1.
>> >> > This srcHash->rte_hash_cmp_eq still points to the address of
>> >> > memcmp() from proc1 address space.
>> >> > Later proc2  

[dpdk-dev] rte_hash_del_key crash in multi-process environment

2016-04-19 Thread
Hi all, 


In the multi-process environment, I previously hit a bug when calling 
rte_hash_lookup_with_hash; Dhana's patch fixed that problem. Now I need to 
remove flows in the multi-process environment, and the system crashes when 
calling the rte_hash_del_key function. The following is the gdb trace. Has anybody 
met this problem, or does anyone know how to fix it?




Program received signal SIGILL, Illegal instruction.

0x0048a0dd in rte_port_ring_reader_frag_free (port=0x7ffe113d4100) at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_port/rte_port_frag.c:266

266        return -1;

(gdb) bt

#0  0x0048a0dd in rte_port_ring_reader_frag_free (port=0x7ffe113d4100) 
at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_port/rte_port_frag.c:266

#1  0x0049c537 in rte_hash_del_key (h=0x7ffe113d4100, 
key=0x7ffe092e1000)

   at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_hash/rte_cuckoo_hash.c:917

#2  0x0043716a in onvm_ft_remove_key (table=0x7ffe113c3e80, 
key=0x7ffe092e1000) at 
/home/zhangwei1984/onvm-shared-cpu/onvm/shared/onvm_flow_table.c:160

#3  0x0043767e in onvm_flow_dir_del_and_free_key (key=0x7ffe092e1000) 
at /home/zhangwei1984/onvm-shared-cpu/onvm/shared/onvm_flow_dir.c:144

#4  0x00437619 in onvm_flow_dir_del_key (key=0x7ffe092e1000) at 
/home/zhangwei1984/onvm-shared-cpu/onvm/shared/onvm_flow_dir.c:128

#5  0x00423ded in remove_flow_rule (idx=3) at 
/home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:130

#6  0x00423e44 in clear_stat_remove_flow_rule (nf_info=0x7fff3e652100) 
at /home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:145

#7  0x004247e3 in alloc_nfs_install_flow_rule (services=0xd66e90 
, pkt=0x7ffe13f56400)

   at /home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:186

#8  0x00424bdb in packet_handler (pkt=0x7ffe13f56400, 
meta=0x7ffe13f56440) at 
/home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:294

#9  0x0043001d in onvm_nf_run (info=0x7fff3e651b00, handler=0x424b21 
) at 
/home/zhangwei1984/onvm-shared-cpu/onvm/onvm_nf/onvm_nflib.c:462

#10 0x00424cc2 in main (argc=3, argv=0x7fffe660) at 
/home/zhangwei1984/onvm-shared-cpu/examples/flow_dir/flow_dir.c:323








At 2016-03-23 03:53:43, "De Lara Guarch, Pablo"  wrote:
>Hi Thomas,
>
>> -Original Message-
>> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
>> Sent: Tuesday, March 22, 2016 11:42 AM
>> To: De Lara Guarch, Pablo; Gonzalez Monroy, Sergio
>> Cc: dev at dpdk.org; Dhana Eadala; Richardson, Bruce; Qiu, Michael
>> Subject: Re: [dpdk-dev] [PATCH] hash: fix memcmp function pointer in multi-
>> process environment
>> 
>> Hi,
>> 
>> Pablo, Sergio, please could you help with this issue?
>
>I agree this is not the best way to fix this. I will try to have a fix without 
>having to use ifdefs.
>
>Thanks,
>Pablo
>> 
>> 2016-03-13 22:16, Dhana Eadala:
>> > We found a problem in dpdk-2.2 when used in a multi-process environment.
>> > Here is a brief description of how we are using the DPDK:
>> >
>> > We have two processes proc1, proc2 using dpdk. These proc1 and proc2
>> are
>> > two different compiled binaries.
>> > proc1 is started as primary process and proc2 as secondary process.
>> >
>> > proc1:
>> > Calls srcHash = rte_hash_create("src_hash_name") to create rte_hash
>> structure.
>> > As part of this, this API initialized the rte_hash structure and set the
>> > srcHash->rte_hash_cmp_eq to the address of memcmp() from proc1
>> address space.
>> >
>> > proc2:
>> > calls srcHash =  rte_hash_find_existing("src_hash_name").
>> > This function call returns the rte_hash created by proc1.
>> > This srcHash->rte_hash_cmp_eq still points to the address of
>> > memcmp() from proc1 address space.
>> > Later proc2  calls
>> > rte_hash_lookup_with_hash(srcHash, (const void *) &key, key.sig);
>> > rte_hash_lookup_with_hash() invokes __rte_hash_lookup_with_hash(),
>> > which in turn calls h->rte_hash_cmp_eq(key, k->key, h->key_len).
>> > This leads to a crash as h->rte_hash_cmp_eq is an address
>> > from proc1 address space and is invalid address in proc2 address space.
>> >
>> > We found, from dpdk documentation, that
>> >
>> > "
>> >  The use of function pointers between multiple processes
>> >  running based off different compiled
>> >  binaries is not supported, since the location of a given function
>> >  in one process may be different to
>> >  its location in a second. This prevents the librte_hash library
>> >  from behaving properly as in a  multi-
>> >  threaded instance, since it uses a pointer to the hash function 
>> > internally.
>> >
>> >  To work around this issue, it is recommended that
>> >  multi-process applications perform the hash
>> >  calculations by directly calling the hashing function
>> >  from the code and then using the
>> >  rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions
>> >  instead of the functions which do
>> >  
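
In code, the workaround quoted from the documentation amounts to something like the following sketch (the hash function choice and key-length handling are assumptions, not the thread's code): the application hashes the key itself and then uses only the *_with_hash add/lookup/delete calls, so nothing stored inside the shared table is invoked through a function pointer.

#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

/* hedged sketch of the documented multi-process workaround */
static int32_t
mp_safe_add(const struct rte_hash *h, const void *key, uint32_t key_len)
{
        /* signature computed in this process, not via the table's stored pointer */
        hash_sig_t sig = rte_jhash(key, key_len, 0);

        return rte_hash_add_key_with_hash(h, key, sig);
}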

[dpdk-dev] [PATCH] hash: fix memcmp function pointer in multi-process environment

2016-03-15 Thread
Thanks so much for your patch! It solves my bug exactly. :)


At 2016-03-15 08:57:29, "Dhananjaya Eadala"  wrote:
Hi

I looked at your info from gdb and source code.


[dpdk-dev] dpdk hash lookup function crashed (segmentation fault)

2016-03-15 Thread
Thanks for your reply! I solved my problem with a patch someone posted last 
night on the mailing list.


At 2016-03-14 21:02:13, "Kyle Larose"  wrote:
>Hello,
>
>On Sun, Mar 13, 2016 at 10:38 AM, ??  wrote:
>> Hi all,
>> When I use the DPDK hash lookup function, I hit a segmentation fault. Can
>> anybody help look at why this happens? I will describe the aim of what I want to
>> do, the related pieces of code, and my debug messages.
>>
>>
>> The problem occurs in the DPDK multi-process client-server example,
>> dpdk-2.2.0/examples/multi_process/client_server_mp.
>> My aim is that the server creates a hash table, then shares it with the client. The client
>> will write the hash table, and the server will read it.  I am using the
>> DPDK hash table.  What I did is that the server creates a hash table (table and
>> array entries) and returns the table address.  I use a memzone to pass the table
>> address to the client.  In the client, the second lookup gets a segmentation fault and the
>> system crashes.  I will put some related code here.
>> create hash table function:
>>
>
>Let me see if I understand correctly. You're allocating a hash table
>on huge-page backed memory.
>You pass a pointer to that table over a shared memory structure.
>
>Is that correct?
>
>I don't think something being in a huge-page necessarily means it is
>shared. That is, allocating your hash table using rte_calloc in the
>primary isn't sufficient to make it available in the secondary.
>
>Further, even if it was, I do not think that it would work, because
>there are a bunch of pointers involved (i.e. ft->data). As far as I'm
>aware, each  process has its own "view" of the shared memory. It maps
>it into its own local address space, and gives it an address according
>to what is currently available there.
>
>Most of my IPC with DPDK has involved passing packets around; I'm not
>sure what the strategy is for hash tables. Synchronization issues
>aside, I think you will need to put the hash table in its entirety in
>shared memory, and avoid global pointers: either offset into the
>shared memory, or have a structure with no pointers at all. From that,
>you can probably build up local pointers.
>
>Maybe somebody else can correct me or come up with a better idea.
>
>Hope that helps,
>
>Kyle
>
>
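
A rough sketch of Kyle's "offsets instead of pointers" idea (an illustration, not code from the thread): the shared structure stores a byte offset into the memzone rather than a raw pointer, and each process rebuilds a local pointer from its own mapping of that memzone. (Elsewhere in this archive the poster notes that the memcmp function-pointer patch resolved the crash.)

#include <stddef.h>
#include <rte_memzone.h>

/* hedged sketch: a flow-table header that is safe to share across processes
 * because it contains no absolute pointers */
struct shared_ft {
        int cnt;
        int entry_size;
        size_t data_off;   /* offset of the entry array from the memzone base */
};

static inline char *
shared_ft_data(const struct rte_memzone *mz, const struct shared_ft *ft)
{
        /* each process applies the offset to its own mapping of the memzone */
        return (char *)mz->addr + ft->data_off;
}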
>> struct onvm_ft*
>>
>> onvm_ft_create(int cnt, int entry_size) {
>>
>> struct rte_hash* hash;
>>
>> struct onvm_ft* ft;
>>
>> struct rte_hash_parameters ipv4_hash_params = {
>>
>> .name = NULL,
>>
>> .entries = cnt,
>>
>> .key_len = sizeof(struct onvm_ft_ipv4_5tuple),
>>
>> .hash_func = NULL,
>>
>> .hash_func_init_val = 0,
>>
>> };
>>
>>
>>
>>
>> char s[64];
>>
>> /* create ipv4 hash table. use core number and cycle counter to get 
>> a unique name. */
>>
>> ipv4_hash_params.name = s;
>>
>> ipv4_hash_params.socket_id = rte_socket_id();
>>
>> snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), 
>> rte_get_tsc_cycles());
>>
>> hash = rte_hash_create(&ipv4_hash_params);
>>
>> if (hash == NULL) {
>>
>> return NULL;
>>
>> }
>>
>> ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 
>> 0);
>>
>> if (ft == NULL) {
>>
>> rte_hash_free(hash);
>>
>> return NULL;
>>
>> }
>>
>> ft->hash = hash;
>>
>> ft->cnt = cnt;
>>
>> ft->entry_size = entry_size;
>>
>> /* Create data array for storing values */
>>
>> ft->data = rte_calloc("entry", cnt, entry_size, 0);
>>
>> if (ft->data == NULL) {
>>
>> rte_hash_free(hash);
>>
>> rte_free(ft);
>>
>> return NULL;
>>
>> }
>>
>> return ft;
>>
>> }
>>
>>
>>
>>
>> related structure:
>>
>> struct onvm_ft {
>>
>> struct rte_hash* hash;
>>
>> char* data;
>>
>> int cnt;
>>
>> int entry_size;
>>
>> };
>>
>>
>>
>>
>> on the server side, I will call the create function and use a memzone to share it with the 
>> client. The following is what I do:
>>
>> related variables:
>>
>> struct onvm_ft *sdn_ft;
>>
>> struct onvm_ft **sdn_ft_p;
>>
>> const struct rte_memzone *mz_ftp;
>>
>>
>>
>>
>> sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));
>>
>> if(sdn_ft == NULL) {
>>
>> rte_exit(EXIT_FAILURE, "Unable to create flow table\n");
>>
>> }
>>
>> mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),
>>
>>   rte_socket_id(), NO_FLAGS);
>>
>> if (mz_ftp == NULL) {
>>
>> rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow 
>> table pointer\n");
>>
>> }
>>
>> memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));
>>
>> sdn_ft_p = mz_ftp->addr;
>>
>> *sdn_ft_p = sdn_ft;
>>
>>
>>
>>
>> On the client side:
>>
>> struct onvm_ft *sdn_ft;
>>
>> static void
>>

[dpdk-dev] [PATCH] hash: fix memcmp function pointer in multi-process environment

2016-03-14 Thread
BTW, the following is my backtrace when the system crashes. 

Program received signal SIGSEGV, Segmentation fault.

0x004883ab in rte_hash_reset (h=0x0)

at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_hash/rte_cuckoo_hash.c:444

444        while (rte_ring_dequeue(h->free_slots, ) == 0)

(gdb) bt

#0  0x004883ab in rte_hash_reset (h=0x0)

at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_hash/rte_cuckoo_hash.c:444

#1  0x0048fdfb in rte_hash_lookup_with_hash (h=0x7fff32cce740, 
key=0x7fffe220, sig=403183624)

at 
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_hash/rte_cuckoo_hash.c:771

#2  0x0042b551 in onvm_ft_lookup_with_hash (table=0x7fff32cbe4c0, 
pkt=0x7fff390ea9c0, 

data=0x7fffe298) at 
/home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:104

#3  0x0042b8c3 in onvm_flow_dir_get_with_hash (table=0x7fff32cbe4c0, 
pkt=0x7fff390ea9c0, 

flow_entry=0x7fffe298)

at 
/home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_dir.c:14

#4  0x004251d7 in packet_handler (pkt=0x7fff390ea9c0, 
meta=0x7fff390eaa00)

at 
/home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/flow_table.c:212

#5  0x00429502 in onvm_nf_run ()

#6  0x004253f1 in main (argc=1, argv=0x7fffe648)

at 
/home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/flow_table.c:272

(gdb) 








I met a problem when using the DPDK hash table across multiple processes, one 
started as the primary process and the other as the secondary process.


I based my code on the client-server multi-process example. My aim is that the server 
creates a hash table, then shares it with the client. The client will read and 
write the hash table, and the server will read it. I use rte_calloc to 
allocate the space for the hash table and a memzone to tell the client the hash table 
address.
But once I add an entry into the hash table, calling the "lookup" function gives 
a segmentation fault, even though I pass exactly the same 
parameters as in the first (successful) lookup.
If I create the hash table inside the client, everything works correctly.
I have put the pieces of server and client code related to the hash table below. I 
have spent almost 3 days on this bug, but there is no clue that helps 
to solve it. If any of you can give some suggestions, I will be 
grateful. I posted the question to the mailing list, but have not yet got any 
reply.


The problem occurs in the DPDK multi-process client-server example, 
dpdk-2.2.0/examples/multi_process/client_server_mp.
My aim is that the server creates a hash table, then shares it with the client. The client will 
write the hash table, and the server will read it.  I am using the DPDK hash 
table.  What I did is that the server creates a hash table (table and array 
entries) and returns the table address.  I use a memzone to pass the table address to 
the client.  In the client, the second lookup gets a segmentation fault and the system 
crashes.  I will put some related code here. 
create hash table function:

struct onvm_ft*

onvm_ft_create(int cnt, int entry_size) {

struct rte_hash* hash;

struct onvm_ft* ft;

struct rte_hash_parameters ipv4_hash_params = {

.name = NULL,

.entries = cnt,

.key_len = sizeof(struct onvm_ft_ipv4_5tuple),

.hash_func = NULL,

.hash_func_init_val = 0,

};




char s[64];

/* create ipv4 hash table. use core number and cycle counter to get a 
unique name. */

ipv4_hash_params.name = s;

ipv4_hash_params.socket_id = rte_socket_id();

snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), 
rte_get_tsc_cycles());

hash = rte_hash_create(&ipv4_hash_params);

if (hash == NULL) {

return NULL;

}

ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);

if (ft == NULL) {

rte_hash_free(hash);

return NULL;

}

ft->hash = hash;

ft->cnt = cnt;

ft->entry_size = entry_size;

/* Create data array for storing values */

ft->data = rte_calloc("entry", cnt, entry_size, 0);

if (ft->data == NULL) {

rte_hash_free(hash);

rte_free(ft);

return NULL;

}

return ft;

}




related structure:

struct onvm_ft {

struct rte_hash* hash;

char* data;

int cnt;

int entry_size;

};




on the server side, I will call the create function and use a memzone to share it with the 
client. The following is what I do:

related variables:

struct onvm_ft *sdn_ft;

struct onvm_ft **sdn_ft_p;

const struct rte_memzone *mz_ftp;




sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));

if(sdn_ft == NULL) {

rte_exit(EXIT_FAILURE, "Unable to create 

[dpdk-dev] [PATCH] hash: fix memcmp function pointer in multi-process environment

2016-03-14 Thread
I met a problem when using the DPDK hash table across multiple processes, one 
started as the primary process and the other as the secondary process.


I based my code on the client-server multi-process example. My aim is that the server 
creates a hash table, then shares it with the client. The client will read and 
write the hash table, and the server will read it. I use rte_calloc to 
allocate the space for the hash table and a memzone to tell the client the hash table 
address.
But once I add an entry into the hash table, calling the "lookup" function gives 
a segmentation fault, even though I pass exactly the same 
parameters as in the first (successful) lookup.
If I create the hash table inside the client, everything works correctly.
I have put the pieces of server and client code related to the hash table below. I 
have spent almost 3 days on this bug, but there is no clue that helps 
to solve it. If any of you can give some suggestions, I will be 
grateful. I posted the question to the mailing list, but have not yet got any 
reply.


The problem occurs in the DPDK multi-process client-server example, 
dpdk-2.2.0/examples/multi_process/client_server_mp.
My aim is that the server creates a hash table, then shares it with the client. The client will 
write the hash table, and the server will read it.  I am using the DPDK hash 
table.  What I did is that the server creates a hash table (table and array 
entries) and returns the table address.  I use a memzone to pass the table address to 
the client.  In the client, the second lookup gets a segmentation fault and the system 
crashes.  I will put some related code here. 
create hash table function:

struct onvm_ft*

onvm_ft_create(int cnt, int entry_size) {

struct rte_hash* hash;

struct onvm_ft* ft;

struct rte_hash_parameters ipv4_hash_params = {

.name = NULL,

.entries = cnt,

.key_len = sizeof(struct onvm_ft_ipv4_5tuple),

.hash_func = NULL,

.hash_func_init_val = 0,

};




char s[64];

/* create ipv4 hash table. use core number and cycle counter to get a 
unique name. */

ipv4_hash_params.name = s;

ipv4_hash_params.socket_id = rte_socket_id();

snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), 
rte_get_tsc_cycles());

hash = rte_hash_create(&ipv4_hash_params);

if (hash == NULL) {

return NULL;

}

ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);

if (ft == NULL) {

rte_hash_free(hash);

return NULL;

}

ft->hash = hash;

ft->cnt = cnt;

ft->entry_size = entry_size;

/* Create data array for storing values */

ft->data = rte_calloc("entry", cnt, entry_size, 0);

if (ft->data == NULL) {

rte_hash_free(hash);

rte_free(ft);

return NULL;

}

return ft;

}




related structure:

struct onvm_ft {

struct rte_hash* hash;

char* data;

int cnt;

int entry_size;

};




on the server side, I will call the create function and use a memzone to share it with the 
client. The following is what I do:

related variables:

struct onvm_ft *sdn_ft;

struct onvm_ft **sdn_ft_p;

const struct rte_memzone *mz_ftp;




sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));

if(sdn_ft == NULL) {

rte_exit(EXIT_FAILURE, "Unable to create flow table\n");

}

mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),

  rte_socket_id(), NO_FLAGS);

if (mz_ftp == NULL) {

rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow 
table pointer\n");

}

memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));

sdn_ft_p = mz_ftp->addr;

*sdn_ft_p = sdn_ft;




On the client side:

struct onvm_ft *sdn_ft;

static void

map_flow_table(void) {

const struct rte_memzone *mz_ftp;

struct onvm_ft **ftp;




mz_ftp = rte_memzone_lookup(MZ_FTP_INFO);

if (mz_ftp == NULL)

rte_exit(EXIT_FAILURE, "Cannot get flow table pointer\n");

ftp = mz_ftp->addr;

sdn_ft = *ftp;

}




The following is my debug output. I set a breakpoint at the lookup line. To 
narrow down the problem, I send just one flow, so the packets are the same the 
first and second time.

The first time, it works. I print out the parameters: inside the 
onvm_ft_lookup function, if there is a matching entry, it returns the 
address through flow_entry. 

Breakpoint 1, datapath_handle_read (dp=0x78c0) at 
/home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191

191 ret = onvm_ft_lookup(sdn_ft, fk, 
(char **)&flow_entry);

(gdb) print *sdn_ft 

$1 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 

[dpdk-dev] dpdk hash lookup function crashed (segmentation fault)

2016-03-13 Thread
Hi all, 
When I use the DPDK hash lookup function, I hit a segmentation fault. Can 
anybody help look at why this happens? I will describe the aim of what I want to do, 
the related pieces of code, and my debug messages. 


The problem occurs in the DPDK multi-process client-server example, 
dpdk-2.2.0/examples/multi_process/client_server_mp.
My aim is that the server creates a hash table, then shares it with the client. The client will 
write the hash table, and the server will read it.  I am using the DPDK hash 
table.  What I did is that the server creates a hash table (table and array 
entries) and returns the table address.  I use a memzone to pass the table address to 
the client.  In the client, the second lookup gets a segmentation fault and the system 
crashes.  I will put some related code here. 
create hash table function:

struct onvm_ft*

onvm_ft_create(int cnt, int entry_size) {

struct rte_hash* hash;

struct onvm_ft* ft;

struct rte_hash_parameters ipv4_hash_params = {

.name = NULL,

.entries = cnt,

.key_len = sizeof(struct onvm_ft_ipv4_5tuple),

.hash_func = NULL,

.hash_func_init_val = 0,

};




char s[64];

/* create ipv4 hash table. use core number and cycle counter to get a 
unique name. */

ipv4_hash_params.name = s;

ipv4_hash_params.socket_id = rte_socket_id();

snprintf(s, sizeof(s), "onvm_ft_%d-%"PRIu64, rte_lcore_id(), 
rte_get_tsc_cycles());

hash = rte_hash_create(&ipv4_hash_params);

if (hash == NULL) {

return NULL;

}

ft = (struct onvm_ft*)rte_calloc("table", 1, sizeof(struct onvm_ft), 0);

if (ft == NULL) {

rte_hash_free(hash);

return NULL;

}

ft->hash = hash;

ft->cnt = cnt;

ft->entry_size = entry_size;

/* Create data array for storing values */

ft->data = rte_calloc("entry", cnt, entry_size, 0);

if (ft->data == NULL) {

rte_hash_free(hash);

rte_free(ft);

return NULL;

}

return ft;

}




related structure:

struct onvm_ft {

struct rte_hash* hash;

char* data;

int cnt;

int entry_size;

};




on the server side, I will call the create function and use a memzone to share it with the 
client. The following is what I do:

related variables:

struct onvm_ft *sdn_ft;

struct onvm_ft **sdn_ft_p;

const struct rte_memzone *mz_ftp;




sdn_ft = onvm_ft_create(1024, sizeof(struct onvm_flow_entry));

if(sdn_ft == NULL) {

rte_exit(EXIT_FAILURE, "Unable to create flow table\n");

}

mz_ftp = rte_memzone_reserve(MZ_FTP_INFO, sizeof(struct onvm_ft *),

  rte_socket_id(), NO_FLAGS);

if (mz_ftp == NULL) {

rte_exit(EXIT_FAILURE, "Cannot reserve memory zone for flow 
table pointer\n");

}

memset(mz_ftp->addr, 0, sizeof(struct onvm_ft *));

sdn_ft_p = mz_ftp->addr;

*sdn_ft_p = sdn_ft;




On the client side:

struct onvm_ft *sdn_ft;

static void

map_flow_table(void) {

const struct rte_memzone *mz_ftp;

struct onvm_ft **ftp;




mz_ftp = rte_memzone_lookup(MZ_FTP_INFO);

if (mz_ftp == NULL)

rte_exit(EXIT_FAILURE, "Cannot get flow table pointer\n");

ftp = mz_ftp->addr;

sdn_ft = *ftp;

}




The following is my debug output. I set a breakpoint at the lookup line. To 
narrow down the problem, I send just one flow, so the packets are the same the 
first and second time.

The first time, it works. I print out the parameters: inside the 
onvm_ft_lookup function, if there is a matching entry, it returns the 
address through flow_entry. 

Breakpoint 1, datapath_handle_read (dp=0x78c0) at 
/home/zhangwei1984/openNetVM-master/openNetVM/examples/flow_table/sdn.c:191

191 ret = onvm_ft_lookup(sdn_ft, fk, 
(char **)&flow_entry);

(gdb) print *sdn_ft 

$1 = {hash = 0x7fff32cce740, data = 0x7fff32cb0480 "", cnt = 1024, entry_size = 
56}

(gdb) print *fk

$2 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 
11798, proto = 17 '\021'}

(gdb) s

onvm_ft_lookup (table=0x7fff32cbe4c0, key=0x7fff32b99d00, data=0x768d2b00) 
at 
/home/zhangwei1984/openNetVM-master/openNetVM/onvm/shared/onvm_flow_table.c:151

151 softrss = onvm_softrss(key);

(gdb) n

152 printf("software rss %d\n", softrss);

(gdb) 

software rss 403183624

154 tbl_index = rte_hash_lookup_with_hash(table->hash, (const void 
*)key, softrss);

(gdb) print table->hash

$3 = (struct rte_hash *) 0x7fff32cce740

(gdb) print *key

$4 = {src_addr = 419496202, dst_addr = 453050634, src_port = 53764, dst_port = 
11798, proto = 17 '\021'}

(gdb) print softrss 

$5 = 403183624

(gdb) c




After I hit c, it will do the second lookup,

Breakpoint 

[dpdk-dev] share a table

2016-03-13 Thread
Hi all, 


Now I am using the DPDK multi-process example: client-server. I want the server 
to create a hash table and then share it with the client.  Currently, my 
create_hash_table function returns a pointer to the flow table 
(inside the create_hash_table function, I use rte_calloc to allocate the 
space for the flow table and its entries). I store the table address in a memzone; 
the client then maps the memzone to find the table address. But this method does 
not work: when I access the flow table, I get a segmentation fault.  
Does anybody know a good way to let the server create a 
hash table and share it with the client? Any suggestion will be appreciated. 


[dpdk-dev] dpdk multi process: increasing the number of mbufs, throughput drops

2015-12-17 Thread
Hi all, 


When running the multi-process example, does anybody know why increasing 
the number of mbufs makes the performance drop? 


In the multi-process example, there are two macros related to the number 
of mbufs:


#define MBUFS_PER_CLIENT 1536
#define MBUFS_PER_PORT   1536


If I increase these two numbers by 8 times, the performance drops by about 10%. 
Does anybody know why?

const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT) \
        + (ports->num_ports * MBUFS_PER_PORT);
pktmbuf_pool = rte_mempool_create(PKTMBUF_POOL_NAME, num_mbufs,
        MBUF_SIZE, MBUF_CACHE_SIZE,
        sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
        NULL, rte_pktmbuf_init, NULL, rte_socket_id(), NO_FLAGS);


[dpdk-dev] How can the kernel share the memory from DPDK hugepages?

2015-10-02 Thread
Hi all, 


Does anybody know how the kernel can share the info from DPDK 
hugepages? My project has a requirement that the kernel needs to get some info from a 
DPDK application. E.g., in the multi-process example, every client has a ring 
buffer shared with the server; the shared ring contains some metadata of packets. Is it 
possible for DPDK to share this info with the kernel, so that the kernel can access it? What 
are the key points that would help to achieve this goal? 


[dpdk-dev] dpdk 1.8.0 disable burst problem

2015-09-10 Thread
Hi all, 


I am using the DPDK example dpdk-1.8.0/examples/multi_process/client_server_mp 
on Ubuntu 14.04.  I need to disable batching. At first, I just changed the 
macro in mp_server/main.c and mp_client/client.c from
#define PACKET_READ_SIZE 32 to 1.
After that, the server and the client cannot receive any packets.  Almost all of the packets 
are counted as errors in the port stats:

Port:0, rx:511, rx_err:33011882, rx_nombuf:0, tx:0, tx_err:0

Port:0, rx_rate:0, rx_err_rate:782253,rx_nombuf_rate:0, tx_rate:0, tx_err_rate:0

However, DPDK 1.4.1 works after only changing the batch size from 32 to 1 in the server 
and client. 

What I did in the next step was to 

disable the vector PMD burst on DPDK 1.8.0 

by disabling the macro in the config file: 

CONFIG_RTE_IXGBE_INC_VECTOR=n

However, nothing changed; the port still reports packet errors. 

Can anyone help look at this problem? I would be very grateful. 

BTW, why can DPDK 1.4.1 not be compiled on Ubuntu 14.04?
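
One hedged workaround for the batching question above, assuming the application-level requirement is per-packet processing rather than literally a PMD burst of 1: keep rte_eth_rx_burst() at a larger burst and hand packets to the rest of the pipeline one at a time. The burst size, port/queue ids and handler below are illustrative, not values from the example.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 32

/* hypothetical per-packet handler standing in for the example's logic */
static void
handle_packet(struct rte_mbuf *m)
{
        rte_pktmbuf_free(m);   /* placeholder: real code would process/forward */
}

static void
rx_one_at_a_time(uint8_t port, uint16_t queue)
{
        struct rte_mbuf *bufs[RX_BURST];
        uint16_t nb, i;

        /* the PMD still sees a normal-sized burst ... */
        nb = rte_eth_rx_burst(port, queue, bufs, RX_BURST);

        /* ... while the application consumes packets one by one */
        for (i = 0; i < nb; i++)
                handle_packet(bufs[i]);
}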