[dpdk-dev] Does DPDK i40e driver support 'Toeplitz hash'
Hi,

We want to use 'Toeplitz hash' for the RSS hash on our server equipped with an 'Intel X710-DA4' card. However, it seems we hit the exact same issue as this: Why only rx queue "0" can receive network packet by i40e NIC http://dpdk.org/ml/archives/dev/2015-July/022453.html ...

We are using the v16.11 release and would like to confirm whether the issue above has been addressed. If so, could anyone post here how to configure it?

Appreciated,
Alex Wang,
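No configuration answer was posted in this thread. For reference, a hedged sketch of how RSS is generally enabled with the v16.11 API, using the filter-ctrl structures from rte_eth_ctrl.h to select Toeplitz explicitly on i40e (Toeplitz is normally the default there); whether this alone resolves the single-queue symptom from the linked report is not confirmed here:

```c
#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

/* RSS must be requested in the port configuration, and the port must be
 * configured with more than one rx queue in rte_eth_dev_configure(). */
static struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,  /* NULL: keep the driver's default key */
			.rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
		},
	},
};

/* Explicitly select Toeplitz as the global hash function via the
 * filter-ctrl API (v16.11-era interface). */
static int
set_toeplitz(uint8_t port_id)
{
	struct rte_eth_hash_filter_info info;

	memset(&info, 0, sizeof(info));
	info.info_type = RTE_ETH_HASH_FILTER_GLOBAL_CONFIG;
	info.info.global_conf.hash_func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_HASH,
				       RTE_ETH_FILTER_SET, &info);
}
```

This mirrors what testpmd's "set_hash_global_config" command does; if rte_eth_dev_filter_ctrl() returns -ENOTSUP, the driver in use does not expose the hash filter type.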
[dpdk-dev] [PATCH 2/2 v3] kni: add documentation for the mempool capacity
Just to confirm, should I do anything before it gets merged?

On Thu, Jun 9, 2016 at 5:03 AM, Mcnamara, John wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alex Wang
> > Sent: Saturday, May 21, 2016 8:59 AM
> > To: dev at dpdk.org
> > Cc: Yigit, Ferruh ; Alex Wang
> >
> > Subject: [dpdk-dev] [PATCH 2/2 v3] kni: add documentation for the mempool
> > capacity
> >
> > From: Alex Wang
> >
> > Function like 'rte_kni_rx_burst()' keeps allocating 'MAX_MBUF_BURST_NUM'
> > mbufs to kni fifo queue unless the queue's capacity
> > ('KNI_FIFO_COUNT_MAX') is reached. So, if the mempool is under-
> > provisioned, user may run into "Out of Memory" logs from KNI code.
> > This commit documents the need to provision mempool capacity of more than
> > "2 x KNI_FIFO_COUNT_MAX" for each KNI interface.
> >
> > Signed-off-by: Alex Wang
> > Acked-by: Ferruh Yigit
> > Acked-by: John McNamara
[dpdk-dev] [PATCH 2/2 v2] kni: Add documentation for the mempool capacity
Sht, sorry for missing that, sending V3,

On Mon, May 23, 2016 at 10:10 AM, Ferruh Yigit wrote:
> On 5/23/2016 6:00 PM, Ferruh Yigit wrote:
> > On 5/21/2016 8:25 AM, Alex Wang wrote:
> >> From: Alex Wang
> >>
> >> Function like 'rte_kni_rx_burst()' keeps
> >> allocating 'MAX_MBUF_BURST_NUM' mbufs to
> >> kni fifo queue unless the queue's capacity
> >> ('KNI_FIFO_COUNT_MAX') is reached. So, if
> >> the mempool is under-provisioned, user may
> >> run into "Out of Memory" logs from KNI code.
> >> This commit documents the need to provision
> >> mempool capacity of more than
> >> "2 x KNI_FIFO_COUNT_MAX" for each KNI interface.
> >>
> >> Signed-off-by: Alex Wang
> >
> > Acked-by: Ferruh Yigit
> >
>
> Hi Alex,
>
> This is detail but I just recognized patch subject after tag starts with
> uppercase. Would you mind sending another patch? You can keep my ack
> with it.
>
> Thanks,
> ferruh
[dpdk-dev] [PATCH 1/2] rte_kni: Fix documentation.
Sorry for the delayed reply, just sent V2~

On 18 May 2016 at 03:33, Ferruh Yigit wrote:
> On 5/14/2016 7:22 PM, Alex Wang wrote:
> > From: Alex Wang
> >
> > The 'mbufs' alloc/free descriptions for 'rte_kni_tx_burst()'
> > and 'rte_kni_rx_burst()' should be inverted.
> >
> > Signed-off-by: Alex Wang
> > ---
> >  lib/librte_kni/rte_kni.h | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
> > index ef9faa9..25fa45e 100644
> > --- a/lib/librte_kni/rte_kni.h
> > +++ b/lib/librte_kni/rte_kni.h
> > @@ -161,8 +161,8 @@ extern int rte_kni_handle_request(struct rte_kni *kni);
> >  /**
> >   * Retrieve a burst of packets from a KNI interface. The retrieved packets are
> >   * stored in rte_mbuf structures whose pointers are supplied in the array of
> > - * mbufs, and the maximum number is indicated by num. It handles the freeing of
> > - * the mbufs in the free queue of KNI interface.
> > + * mbufs, and the maximum number is indicated by num. It handles allocating
> > + * the mbufs for KNI interface alloc queue.
> >   *
> >   * @param kni
> >   *   The KNI interface context.
> > @@ -180,8 +180,8 @@ extern unsigned rte_kni_rx_burst(struct rte_kni *kni,
> >  /**
> >   * Send a burst of packets to a KNI interface. The packets to be sent out are
> >   * stored in rte_mbuf structures whose pointers are supplied in the array of
> > - * mbufs, and the maximum number is indicated by num. It handles allocating
> > - * the mbufs for KNI interface alloc queue.
> > + * mbufs, and the maximum number is indicated by num. It handles the freeing of
> > + * the mbufs in the free queue of KNI interface.
> >   *
> >   * @param kni
> >   *   The KNI interface context.
> >
>
> Hi Alex,
>
> Can you please update the patch subject,
> - replace "rte_kni" tag with a "kni",
> - after space start with lowercase,
> - remove the "." at the end of the sentences,
> like:
> "kni: fix documentation"
> (these are defined in
> http://dpdk.org/doc/guides/contributing/patches.html#commit-messages-subject-line
> )
>
> Also can you please add a "Fixes" line, more details on:
> http://dpdk.org/doc/guides/contributing/patches.html#commit-messages-body
>
> Although this information converted into documentation, this is not
> really the documentation, and the patch title gives little information,
> if possible can you please add more information while keeping it around
> 50 chars limit.
>
> finally, patch content is OK.
>
> Thanks,
> ferruh

--
Alex Wang,
Open vSwitch developer
[dpdk-dev] [PATCH 2/2 v3] kni: add documentation for the mempool capacity
From: Alex Wang

Function like 'rte_kni_rx_burst()' keeps allocating 'MAX_MBUF_BURST_NUM' mbufs to kni fifo queue unless the queue's capacity ('KNI_FIFO_COUNT_MAX') is reached. So, if the mempool is under-provisioned, user may run into "Out of Memory" logs from KNI code.

This commit documents the need to provision mempool capacity of more than "2 x KNI_FIFO_COUNT_MAX" for each KNI interface.

Signed-off-by: Alex Wang
Acked-by: Ferruh Yigit
---
 lib/librte_kni/rte_kni.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 25fa45e..ac11148 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -113,6 +113,9 @@ extern void rte_kni_init(unsigned int max_kni_ifaces);
  * The rte_kni_alloc shall not be called before rte_kni_init() has been
  * called. rte_kni_alloc is thread safe.
  *
+ * The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX"
+ * elements for each KNI interface allocated.
+ *
  * @param pktmbuf_pool
  *   The mempool for allocting mbufs for packets.
  * @param conf
--
2.1.4
[dpdk-dev] [PATCH 1/2 v3] kni: fix inverted function comments
From: Alex Wang

The 'mbufs' alloc/free descriptions for 'rte_kni_tx_burst()' and 'rte_kni_rx_burst()' should be inverted.

Fixes: 3fc5ca2 (kni: initial import)

Signed-off-by: Alex Wang
Acked-by: Ferruh Yigit
---
 lib/librte_kni/rte_kni.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index ef9faa9..25fa45e 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -161,8 +161,8 @@ extern int rte_kni_handle_request(struct rte_kni *kni);
 /**
  * Retrieve a burst of packets from a KNI interface. The retrieved packets are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
+ * mbufs, and the maximum number is indicated by num. It handles allocating
+ * the mbufs for KNI interface alloc queue.
  *
  * @param kni
  *   The KNI interface context.
@@ -180,8 +180,8 @@ extern unsigned rte_kni_rx_burst(struct rte_kni *kni,
 /**
  * Send a burst of packets to a KNI interface. The packets to be sent out are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
+ * mbufs, and the maximum number is indicated by num. It handles the freeing of
+ * the mbufs in the free queue of KNI interface.
  *
  * @param kni
  *   The KNI interface context.
--
2.1.4
[dpdk-dev] [PATCH 2/2 v2] kni: Add documentation for the mempool capacity
From: Alex Wang

Function like 'rte_kni_rx_burst()' keeps allocating 'MAX_MBUF_BURST_NUM' mbufs to kni fifo queue unless the queue's capacity ('KNI_FIFO_COUNT_MAX') is reached. So, if the mempool is under-provisioned, user may run into "Out of Memory" logs from KNI code.

This commit documents the need to provision mempool capacity of more than "2 x KNI_FIFO_COUNT_MAX" for each KNI interface.

Signed-off-by: Alex Wang
---
 lib/librte_kni/rte_kni.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 25fa45e..ac11148 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -113,6 +113,9 @@ extern void rte_kni_init(unsigned int max_kni_ifaces);
  * The rte_kni_alloc shall not be called before rte_kni_init() has been
  * called. rte_kni_alloc is thread safe.
  *
+ * The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX"
+ * elements for each KNI interface allocated.
+ *
  * @param pktmbuf_pool
  *   The mempool for allocting mbufs for packets.
  * @param conf
--
2.1.4
[dpdk-dev] [PATCH 1/2 v2] kni: fix inverted function comments
From: Alex Wang

The 'mbufs' alloc/free descriptions for 'rte_kni_tx_burst()' and 'rte_kni_rx_burst()' should be inverted.

Fixes: 3fc5ca2 (kni: initial import)

Signed-off-by: Alex Wang
---
 lib/librte_kni/rte_kni.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index ef9faa9..25fa45e 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -161,8 +161,8 @@ extern int rte_kni_handle_request(struct rte_kni *kni);
 /**
  * Retrieve a burst of packets from a KNI interface. The retrieved packets are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
+ * mbufs, and the maximum number is indicated by num. It handles allocating
+ * the mbufs for KNI interface alloc queue.
  *
  * @param kni
  *   The KNI interface context.
@@ -180,8 +180,8 @@ extern unsigned rte_kni_rx_burst(struct rte_kni *kni,
 /**
  * Send a burst of packets to a KNI interface. The packets to be sent out are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
+ * mbufs, and the maximum number is indicated by num. It handles the freeing of
+ * the mbufs in the free queue of KNI interface.
  *
  * @param kni
  *   The KNI interface context.
--
2.1.4
[dpdk-dev] possible kni bug and proposed fix
On 17 May 2016 at 03:07, Ferruh Yigit wrote:
> On 5/16/2016 4:31 PM, ALeX Wang wrote:
> > Hi Ferruh,
> >
> > Thx for pointing out the 'fill alloc_q with these mbufs _until it gets
> > full_.',
> >
> > I saw the size of all queues are 'KNI_FIFO_COUNT_MAX (1024)'...
> > The corresponding memory required is more than what I specify as
> > 'socket_mem' (since i'm using VM)...
> >
>
> If the mempool is not big enough to fill the ring, this explains the
> error log. Logic is to fill the alloc_q, but if you are missing the
> required mbufs, in each rte_kni_rx_burst() it will complain about memory.
>
> But the required memory for mbufs to fill the ring is not too much. It
> should be ~2Mbytes, are you sure you are missing this much memory?
>

Thx for reminding me about this, by default, i only allocate 2048 mbufs for my pool... once i raise it to 4096, the issue is gone... will `KNI_FIFO_COUNT_MAX` ever change? I want to try adding some documentation,...

> > Also, in my use case, I only `tcpreplay` through the kni interface, and
> > my application only do rx and then free the 'mbufs'. So there is no tx
> > at all.
> >
> > So, in my case, I still think this is a bug/defect, or somewhere i still
> > misunderstand,
> >
> > P.S. The description here seems to be inverted,
> > http://dpdk.org/doc/api/rte__kni_8h.html#a0cdd727cdc227d005fef22c0189f3dfe
> > 'rte_kni_rx_burst' does the 'It handles allocating the mbufs for KNI
> > interface alloc queue.'
> >
>
> You are right. That part of the description for rte_kni_rx_burst and
> rte_kni_tx_burst needs to be replaced. Do you want to send a patch?
>

Sure, will do that,

Thanks,
Alex Wang,

> > Thanks,
> >
> > Alex Wang,
> >
> > On 16 May 2016 at 04:20, Ferruh Yigit <ferruh.yigit at intel.com> wrote:
> >
> >     On 5/15/2016 5:48 AM, ALeX Wang wrote:
> >     > Hi,
> >     >
> >     > When using the kni module to test my application inside
> >     > debian (virtualbox) VM (kernel version 4.4), I get the
> >     >
> >     > "KNI: Out of memory"
> >     >
> >     > from syslog every time I `tcpreplay` packets through
> >     > the kni interface.
> >     >
> >     > After checking source code, I saw that when I call
> >     > 'rte_kni_rx_burst()', no matter how many packets
> >     > are actually retrieved, we always call 'kni_allocate_mbufs()'
> >     > and try allocate 'MAX_MBUF_BURST_NUM' more
> >     > mbufs... I fix the issue via using this patch below,
> >     >
> >     > Could you confirm if this is an actual bug?
> >     >
> >
> >     Hi Alex,
> >
> >     I don't think this is a bug.
> >
> >     kni_allocate_mbufs() will allocate MAX_MBUF_BURST_NUM mbufs as you
> >     mentioned. And will fill alloc_q with these mbufs _until it gets full_.
> >     And if the alloc_q is full and there are remaining mbufs, they will be
> >     freed. So for some cases this isn't the most optimized way, but there
> >     is no defect.
> >
> >     Since you are getting "KNI: Out of memory", somewhere else can be
> >     missing freeing mbufs.
> >
> >     mbufs freeing done in rte_kni_tx_burst(), I can guess two cases that
> >     can cause problem:
> >     a) not calling rte_kni_tx_burst() frequent, so that all free mbufs
> >     consumed.
> >     b) calling rte_kni_tx_burst() with number of mbufs bigger than
> >     MAX_MBUF_BURST_NUM, because this function frees at most
> >     MAX_MBUF_BURST_NUM of mbufs, but if you are calling rte_kni_tx_burst()
> >     with bigger numbers, this will cause mbufs to stuck in free_q
> >
> >     Regards,
> >     ferruh
> >
> > --
> > Alex Wang,
> > Open vSwitch developer

--
Alex Wang,
Open vSwitch developer
[dpdk-dev] possible kni bug and proposed fix
Hi Ferruh,

Thx for pointing out the 'fill alloc_q with these mbufs _until it gets full_.',

I saw the size of all queues are 'KNI_FIFO_COUNT_MAX (1024)'... The corresponding memory required is more than what I specify as 'socket_mem' (since i'm using VM)...

Also, in my use case, I only `tcpreplay` through the kni interface, and my application only do rx and then free the 'mbufs'. So there is no tx at all.

So, in my case, I still think this is a bug/defect, or somewhere i still misunderstand,

P.S. The description here seems to be inverted,
http://dpdk.org/doc/api/rte__kni_8h.html#a0cdd727cdc227d005fef22c0189f3dfe
'rte_kni_rx_burst' does the 'It handles allocating the mbufs for KNI interface alloc queue.'

Thanks,
Alex Wang,

On 16 May 2016 at 04:20, Ferruh Yigit wrote:
> On 5/15/2016 5:48 AM, ALeX Wang wrote:
> > Hi,
> >
> > When using the kni module to test my application inside
> > debian (virtualbox) VM (kernel version 4.4), I get the
> >
> > "KNI: Out of memory"
> >
> > from syslog every time I `tcpreplay` packets through
> > the kni interface.
> >
> > After checking source code, I saw that when I call
> > 'rte_kni_rx_burst()', no matter how many packets
> > are actually retrieved, we always call 'kni_allocate_mbufs()'
> > and try allocate 'MAX_MBUF_BURST_NUM' more
> > mbufs... I fix the issue via using this patch below,
> >
> > Could you confirm if this is an actual bug?
> >
>
> Hi Alex,
>
> I don't think this is a bug.
>
> kni_allocate_mbufs() will allocate MAX_MBUF_BURST_NUM mbufs as you
> mentioned. And will fill alloc_q with these mbufs _until it gets full_.
> And if the alloc_q is full and there are remaining mbufs, they will be
> freed. So for some cases this isn't the most optimized way, but there is
> no defect.
>
> Since you are getting "KNI: Out of memory", somewhere else can be
> missing freeing mbufs.
>
> mbufs freeing done in rte_kni_tx_burst(), I can guess two cases that can
> cause problem:
> a) not calling rte_kni_tx_burst() frequent, so that all free mbufs
> consumed.
> b) calling rte_kni_tx_burst() with number of mbufs bigger than
> MAX_MBUF_BURST_NUM, because this function frees at most
> MAX_MBUF_BURST_NUM of mbufs, but if you are calling rte_kni_tx_burst()
> with bigger numbers, this will cause mbufs to stuck in free_q
>
> Regards,
> ferruh

--
Alex Wang,
Open vSwitch developer
[dpdk-dev] possible kni bug and proposed fix
Hi,

When using the kni module to test my application inside debian (virtualbox) VM (kernel version 4.4), I get the

"KNI: Out of memory"

from syslog every time I `tcpreplay` packets through the kni interface.

After checking source code, I saw that when I call 'rte_kni_rx_burst()', no matter how many packets are actually retrieved, we always call 'kni_allocate_mbufs()' and try allocate 'MAX_MBUF_BURST_NUM' more mbufs... I fix the issue via using this patch below,

Could you confirm if this is an actual bug?

diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index ea9baf4..5d7c1ce 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -129,6 +129,7 @@ struct rte_kni_memzone_pool {

 static void kni_free_mbufs(struct rte_kni *kni);
 static void kni_allocate_mbufs(struct rte_kni *kni);
+static void kni_allocate_n_mbufs(struct rte_kni *kni, int size);

 static volatile int kni_fd = -1;
 static struct rte_kni_memzone_pool kni_memzone_pool = {
@@ -556,7 +557,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)

 	/* If buffers removed, allocate mbufs and then put them into alloc_q */
 	if (ret)
-		kni_allocate_mbufs(kni);
+		kni_allocate_n_mbufs(kni, (int)ret);

 	return ret;
 }
@@ -577,6 +578,12 @@ kni_free_mbufs(struct rte_kni *kni)
 static void
 kni_allocate_mbufs(struct rte_kni *kni)
 {
+	kni_allocate_n_mbufs(kni, MAX_MBUF_BURST_NUM);
+}
+
+static void
+kni_allocate_n_mbufs(struct rte_kni *kni, int size)
+{
 	int i, ret;
 	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];

@@ -595,13 +602,18 @@ kni_allocate_mbufs(struct rte_kni *kni)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
 			 offsetof(struct rte_kni_mbuf, ol_flags));

+	if (size > MAX_MBUF_BURST_NUM) {
+		RTE_LOG(ERR, KNI, "Invalid mbufs size\n");
+		return;
+	}
+
 	/* Check if pktmbuf pool has been configured */
 	if (kni->pktmbuf_pool == NULL) {
 		RTE_LOG(ERR, KNI, "No valid mempool for allocating mbufs\n");
 		return;
 	}

-	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
+	for (i = 0; i < size; i++) {
 		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
 		if (unlikely(pkts[i] == NULL)) {
 			/* Out of memory */

Thanks,

--
Alex Wang,
[dpdk-dev] [PATCH 2/2] rte_kni: Add documentation for the mempool capacity.
From: Alex Wang

Function like 'rte_kni_rx_burst()' keeps allocating 'MAX_MBUF_BURST_NUM' mbufs to kni fifo queue unless the queue's capacity ('KNI_FIFO_COUNT_MAX') is reached. So, if the mempool is under-provisioned, user may run into "Out of Memory" logs from KNI code.

This commit documents the need to provision mempool capacity of couple thousand elements for each KNI interface.

Signed-off-by: Alex Wang
---
 lib/librte_kni/rte_kni.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 25fa45e..05d2d39 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -113,6 +113,9 @@ extern void rte_kni_init(unsigned int max_kni_ifaces);
  * The rte_kni_alloc shall not be called before rte_kni_init() has been
  * called. rte_kni_alloc is thread safe.
  *
+ * The mempool should have capacity of couple thousand elements for each
+ * KNI interface allocated.
+ *
  * @param pktmbuf_pool
  *   The mempool for allocting mbufs for packets.
  * @param conf
--
2.1.4
[dpdk-dev] [PATCH 1/2] rte_kni: Fix documentation.
From: Alex Wang

The 'mbufs' alloc/free descriptions for 'rte_kni_tx_burst()' and 'rte_kni_rx_burst()' should be inverted.

Signed-off-by: Alex Wang
---
 lib/librte_kni/rte_kni.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index ef9faa9..25fa45e 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -161,8 +161,8 @@ extern int rte_kni_handle_request(struct rte_kni *kni);
 /**
  * Retrieve a burst of packets from a KNI interface. The retrieved packets are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
+ * mbufs, and the maximum number is indicated by num. It handles allocating
+ * the mbufs for KNI interface alloc queue.
  *
  * @param kni
  *   The KNI interface context.
@@ -180,8 +180,8 @@ extern unsigned rte_kni_rx_burst(struct rte_kni *kni,
 /**
  * Send a burst of packets to a KNI interface. The packets to be sent out are
  * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
+ * mbufs, and the maximum number is indicated by num. It handles the freeing of
+ * the mbufs in the free queue of KNI interface.
  *
  * @param kni
  *   The KNI interface context.
--
2.1.4
[dpdk-dev] [4.4 kernel] kni lockup, kernel dump
Thx, Ferruh and Thomas for the confirmation and pointer!

On 14 April 2016 at 07:43, Thomas Monjalon wrote:
> 2016-04-14 15:29, Ferruh Yigit:
> > On 4/13/2016 11:26 PM, ALeX Wang wrote:
> > > Did more experiment, found that it has nothing to do with the kernel
> > > version,
> > >
> > > It only happens when using kni module with '--no-huge' eal flag...
> > >
> > > Is that expected?
> > >
> >
> > Yes.
> > KNI kernel module expects mempool is physically continuous, with
> > '--no-huge' flag this is no more true, and as a result KNI module can
> > access to incorrect address.
>
> This is a bug.
> The memory API should allow to explicit this restriction and return
> an error if it cannot offer the requested continuous memory.
> See this thread for discussion:
> http://dpdk.org/ml/archives/dev/2016-April/037444.html

--
Alex Wang,
Open vSwitch developer
[dpdk-dev] [4.4 kernel] kni lockup, kernel dump
Did more experiment, found that it has nothing to do with the kernel version, It only happens when using kni module with '--no-huge' eal flag... Is that expected? Thanks, Alex Wang, On 12 April 2016 at 17:28, ALeX Wang wrote: > I tried compiling both dpdk-2.2 and dpdk-16.04, all have the issue, > > Thanks, > > On 12 April 2016 at 17:19, ALeX Wang wrote: > >> Hi, >> >> I recently upgraded my debian/jessie to 4.4 kernel, and my application >> uses kni to >> create test interface, >> >> However, when doing 'rte_kni_alloc()', i observed the following log in >> syslog... >> If i go on trying setting interface via ifconfig, the execution locks >> up... >> >> [ 888.051427] BUG: unable to handle kernel paging request at >> 073b010bcb88 >> [ 888.051972] IP: [] kni_net_rx_normal+0x3a/0x290 >> [rte_kni] >> [ 888.052371] PGD 0 >> [ 888.052638] Oops: [#1] SMP >> [ 888.053041] Modules linked in: rte_kni(O) xt_conntrack ipt_MASQUERADE >> nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 >> nf_nat_ipv4 xt_addrtype iptable_filter ip_tables x_tables nf_nat >> nf_conntrack br_netfilter bridge stp llc dm_thin_pool dm_persistent_data >> dm_bio_prison dm_bufio libcrc32c crc32c_generic loop dm_mod nfsd >> auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc >> crct10dif_pclmul crc32_pclmul jitterentropy_rng sha256_ssse3 sha256_generic >> hmac drbg ansi_cprng ppdev parport_pc i2c_piix4 aesni_intel aes_x86_64 >> parport acpi_cpufreq tpm_tis tpm lrw gf128mul glue_helper ablk_helper evdev >> cryptd 8250_fintek psmouse processor serio_raw video battery button ac >> pcspkr autofs4 ext4 crc16 mbcache jbd2 sg sd_mod ata_generic ahci >> crc32c_intel libahci ata_piix fjes libata >> [ 888.065118] e1000 scsi_mod >> [ 888.065476] CPU: 1 PID: 2226 Comm: kni_single Tainted: G O >> 4.4.0-0.bpo.1-amd64 #1 Debian 4.4.6-1~bpo8+1 >> [ 888.065861] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS >> VirtualBox 12/01/2006 >> [ 888.066183] task: 88011ab9e0c0 ti: 8800b2fb8000 
task.ti: >> 8800b2fb8000 >> [ 888.06] RIP: 0010:[] [] >> kni_net_rx_normal+0x3a/0x290 [rte_kni] >> [ 888.094268] RSP: 0018:8800b2fbbd40 EFLAGS: 00010246 >> [ 888.094879] RAX: 8800da073000 RBX: 8800d88bc2f8 RCX: >> >> [ 888.095392] RDX: 073b010bcb80 RSI: 0282 RDI: >> 8800da073840 >> [ 888.095929] RBP: 03e8 R08: 8800b2fb8000 R09: >> 00cec3fd1e2c >> [ 888.096425] R10: 0002 R11: R12: >> 8800d88bc2c0 >> [ 888.096913] R13: 8800d88bc2d0 R14: 8800da073840 R15: >> 8800da073840 >> [ 888.097382] FS: () GS:88011fc8() >> knlGS: >> [ 888.098052] CS: 0010 DS: ES: CR0: 8005003b >> [ 888.098490] CR2: 073b010bcb88 CR3: 00044000 CR4: >> 000406e0 >> [ 888.098958] Stack: >> [ 888.099294] 8800b2fb8000 88011fc95d80 88011ab9e0c0 >> >> [ 888.100428] 810bc311 0002 >> 00cec3fd1e2c >> [ 888.101629] 8800b2fb8000 88011fc8df80 0282 >> 0001 >> [ 888.103134] Call Trace: >> [ 888.103607] [] ? >> __raw_callee_save___pv_queued_spin_unlock+0x11/0x20 >> [ 888.104553] [] ? try_to_del_timer_sync+0x59/0x80 >> [ 888.105290] [] ? del_timer_sync+0x44/0x50 >> [ 888.105722] [] ? schedule_timeout+0x169/0x2d0 >> [ 888.106159] [] ? >> trace_event_raw_event_tick_stop+0x100/0x100 >> [ 888.106816] [] ? kni_thread_single+0x4c/0xa0 >> [rte_kni] >> [ 888.107279] [] ? kni_init_net+0x50/0x50 [rte_kni] >> [ 888.107754] [] ? kthread+0xdf/0x100 >> [ 888.108177] [] ? kthread_park+0x50/0x50 >> [ 888.108600] [] ? ret_from_fork+0x3f/0x70 >> [ 888.109026] [] ? kthread_park+0x50/0x50 >> [ 888.109448] Code: 54 55 53 48 81 ec 28 01 00 00 48 8b 97 78 01 00 00 >> 65 48 8b 04 25 28 00 00 00 48 89 84 24 20 01 00 00 31 c0 48 8b 87 48 01 00 >> 00 <44> 8b 72 08 48 89 04 24 8b 42 04 8b 0a 41 83 ee 01 83 e8 01 29 >> [ 888.145305] RIP [] kni_net_rx_normal+0x3a/0x290 >> [rte_kni] >> [ 888.145904] RSP >> [ 888.146337] CR2: 073b010bcb88 >> [ 888.146705] ---[ end trace ef3848430517129b ]--- >> >> -- >> Alex Wang, >> >> > > > -- > Alex Wang, > Open vSwitch developer > -- Alex Wang, Open vSwitch developer
[dpdk-dev] [4.4 kernel] kni lockup, kernel dump
I tried compiling both dpdk-2.2 and dpdk-16.04, all have the issue, Thanks, On 12 April 2016 at 17:19, ALeX Wang wrote: > Hi, > > I recently upgraded my debian/jessie to 4.4 kernel, and my application > uses kni to > create test interface, > > However, when doing 'rte_kni_alloc()', i observed the following log in > syslog... > If i go on trying setting interface via ifconfig, the execution locks up... > > [ 888.051427] BUG: unable to handle kernel paging request at > 073b010bcb88 > [ 888.051972] IP: [] kni_net_rx_normal+0x3a/0x290 > [rte_kni] > [ 888.052371] PGD 0 > [ 888.052638] Oops: [#1] SMP > [ 888.053041] Modules linked in: rte_kni(O) xt_conntrack ipt_MASQUERADE > nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 > nf_nat_ipv4 xt_addrtype iptable_filter ip_tables x_tables nf_nat > nf_conntrack br_netfilter bridge stp llc dm_thin_pool dm_persistent_data > dm_bio_prison dm_bufio libcrc32c crc32c_generic loop dm_mod nfsd > auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc > crct10dif_pclmul crc32_pclmul jitterentropy_rng sha256_ssse3 sha256_generic > hmac drbg ansi_cprng ppdev parport_pc i2c_piix4 aesni_intel aes_x86_64 > parport acpi_cpufreq tpm_tis tpm lrw gf128mul glue_helper ablk_helper evdev > cryptd 8250_fintek psmouse processor serio_raw video battery button ac > pcspkr autofs4 ext4 crc16 mbcache jbd2 sg sd_mod ata_generic ahci > crc32c_intel libahci ata_piix fjes libata > [ 888.065118] e1000 scsi_mod > [ 888.065476] CPU: 1 PID: 2226 Comm: kni_single Tainted: G O > 4.4.0-0.bpo.1-amd64 #1 Debian 4.4.6-1~bpo8+1 > [ 888.065861] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS > VirtualBox 12/01/2006 > [ 888.066183] task: 88011ab9e0c0 ti: 8800b2fb8000 task.ti: > 8800b2fb8000 > [ 888.06] RIP: 0010:[] [] > kni_net_rx_normal+0x3a/0x290 [rte_kni] > [ 888.094268] RSP: 0018:8800b2fbbd40 EFLAGS: 00010246 > [ 888.094879] RAX: 8800da073000 RBX: 8800d88bc2f8 RCX: > > [ 888.095392] RDX: 073b010bcb80 RSI: 0282 RDI: > 8800da073840 > [ 
888.095929] RBP: 03e8 R08: 8800b2fb8000 R09: > 00cec3fd1e2c > [ 888.096425] R10: 0002 R11: R12: > 8800d88bc2c0 > [ 888.096913] R13: 8800d88bc2d0 R14: 8800da073840 R15: > 8800da073840 > [ 888.097382] FS: () GS:88011fc8() > knlGS: > [ 888.098052] CS: 0010 DS: ES: CR0: 8005003b > [ 888.098490] CR2: 073b010bcb88 CR3: 00044000 CR4: > 000406e0 > [ 888.098958] Stack: > [ 888.099294] 8800b2fb8000 88011fc95d80 88011ab9e0c0 > > [ 888.100428] 810bc311 0002 > 00cec3fd1e2c > [ 888.101629] 8800b2fb8000 88011fc8df80 0282 > 0001 > [ 888.103134] Call Trace: > [ 888.103607] [] ? > __raw_callee_save___pv_queued_spin_unlock+0x11/0x20 > [ 888.104553] [] ? try_to_del_timer_sync+0x59/0x80 > [ 888.105290] [] ? del_timer_sync+0x44/0x50 > [ 888.105722] [] ? schedule_timeout+0x169/0x2d0 > [ 888.106159] [] ? > trace_event_raw_event_tick_stop+0x100/0x100 > [ 888.106816] [] ? kni_thread_single+0x4c/0xa0 > [rte_kni] > [ 888.107279] [] ? kni_init_net+0x50/0x50 [rte_kni] > [ 888.107754] [] ? kthread+0xdf/0x100 > [ 888.108177] [] ? kthread_park+0x50/0x50 > [ 888.108600] [] ? ret_from_fork+0x3f/0x70 > [ 888.109026] [] ? kthread_park+0x50/0x50 > [ 888.109448] Code: 54 55 53 48 81 ec 28 01 00 00 48 8b 97 78 01 00 00 65 > 48 8b 04 25 28 00 00 00 48 89 84 24 20 01 00 00 31 c0 48 8b 87 48 01 00 00 > <44> 8b 72 08 48 89 04 24 8b 42 04 8b 0a 41 83 ee 01 83 e8 01 29 > [ 888.145305] RIP [] kni_net_rx_normal+0x3a/0x290 > [rte_kni] > [ 888.145904] RSP > [ 888.146337] CR2: 073b010bcb88 > [ 888.146705] ---[ end trace ef3848430517129b ]--- > > -- > Alex Wang, > > -- Alex Wang, Open vSwitch developer
[dpdk-dev] [4.4 kernel] kni lockup, kernel dump
Hi,

I recently upgraded my debian/jessie to 4.4 kernel, and my application uses kni to create test interface,

However, when doing 'rte_kni_alloc()', i observed the following log in syslog... If i go on trying setting interface via ifconfig, the execution locks up...

[  888.051427] BUG: unable to handle kernel paging request at 073b010bcb88
[  888.051972] IP: [] kni_net_rx_normal+0x3a/0x290 [rte_kni]
[  888.052371] PGD 0
[  888.052638] Oops: [#1] SMP
[  888.053041] Modules linked in: rte_kni(O) xt_conntrack ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables x_tables nf_nat nf_conntrack br_netfilter bridge stp llc dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c crc32c_generic loop dm_mod nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc crct10dif_pclmul crc32_pclmul jitterentropy_rng sha256_ssse3 sha256_generic hmac drbg ansi_cprng ppdev parport_pc i2c_piix4 aesni_intel aes_x86_64 parport acpi_cpufreq tpm_tis tpm lrw gf128mul glue_helper ablk_helper evdev cryptd 8250_fintek psmouse processor serio_raw video battery button ac pcspkr autofs4 ext4 crc16 mbcache jbd2 sg sd_mod ata_generic ahci crc32c_intel libahci ata_piix fjes libata
[  888.065118]  e1000 scsi_mod
[  888.065476] CPU: 1 PID: 2226 Comm: kni_single Tainted: G O 4.4.0-0.bpo.1-amd64 #1 Debian 4.4.6-1~bpo8+1
[  888.065861] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[  888.066183] task: 88011ab9e0c0 ti: 8800b2fb8000 task.ti: 8800b2fb8000
[  888.06] RIP: 0010:[] [] kni_net_rx_normal+0x3a/0x290 [rte_kni]
[  888.094268] RSP: 0018:8800b2fbbd40 EFLAGS: 00010246
[  888.094879] RAX: 8800da073000 RBX: 8800d88bc2f8 RCX:
[  888.095392] RDX: 073b010bcb80 RSI: 0282 RDI: 8800da073840
[  888.095929] RBP: 03e8 R08: 8800b2fb8000 R09: 00cec3fd1e2c
[  888.096425] R10: 0002 R11: R12: 8800d88bc2c0
[  888.096913] R13: 8800d88bc2d0 R14: 8800da073840 R15: 8800da073840
[  888.097382] FS: () GS:88011fc8() knlGS:
[  888.098052] CS: 0010 DS: ES: CR0: 8005003b
[  888.098490] CR2: 073b010bcb88 CR3: 00044000 CR4: 000406e0
[  888.098958] Stack:
[  888.099294] 8800b2fb8000 88011fc95d80 88011ab9e0c0
[  888.100428] 810bc311 0002 00cec3fd1e2c
[  888.101629] 8800b2fb8000 88011fc8df80 0282 0001
[  888.103134] Call Trace:
[  888.103607] [] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
[  888.104553] [] ? try_to_del_timer_sync+0x59/0x80
[  888.105290] [] ? del_timer_sync+0x44/0x50
[  888.105722] [] ? schedule_timeout+0x169/0x2d0
[  888.106159] [] ? trace_event_raw_event_tick_stop+0x100/0x100
[  888.106816] [] ? kni_thread_single+0x4c/0xa0 [rte_kni]
[  888.107279] [] ? kni_init_net+0x50/0x50 [rte_kni]
[  888.107754] [] ? kthread+0xdf/0x100
[  888.108177] [] ? kthread_park+0x50/0x50
[  888.108600] [] ? ret_from_fork+0x3f/0x70
[  888.109026] [] ? kthread_park+0x50/0x50
[  888.109448] Code: 54 55 53 48 81 ec 28 01 00 00 48 8b 97 78 01 00 00 65 48 8b 04 25 28 00 00 00 48 89 84 24 20 01 00 00 31 c0 48 8b 87 48 01 00 00 <44> 8b 72 08 48 89 04 24 8b 42 04 8b 0a 41 83 ee 01 83 e8 01 29
[  888.145305] RIP [] kni_net_rx_normal+0x3a/0x290 [rte_kni]
[  888.145904] RSP
[  888.146337] CR2: 073b010bcb88
[  888.146705] ---[ end trace ef3848430517129b ]---

--
Alex Wang,
[dpdk-dev] Must kni be associated with a dpdk port?
Thx a lot for the answer! Exactly what I'm looking for~

On 31 March 2016 at 02:55, Ferruh Yigit wrote:
> On 3/30/2016 6:20 PM, ALeX Wang wrote:
> > Hi,
> >
> > I want to use 'rte_kni_alloc()' to create a kernel iface and
> > use it to test application rx. From the api and example in
> > 'examples/kni/main.c', i saw the 'conf' argument is assigned
> > with pci info of a dpdk port.
> >
> > Want to ask if this is compulsory... Must kni always be
> > used together with a dpdk port?
> >
> > Thanks,
> >
>
> Hi Alex,
>
> You don't have to associate kni with dpdk port.
>
> pci info is required for ethtool support, if you are only interested in
> data transfer, you don't have to provide pci information.
>
> Regards,
> ferruh

--
Alex Wang,
Open vSwitch developer
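A minimal sketch of what Ferruh describes, using only the documented rte_kni API: leave the PCI fields of struct rte_kni_conf zeroed and pass NULL ops (the interface name and mbuf size below are arbitrary; rte_kni_init() must have been called first):

```c
#include <stdio.h>
#include <string.h>
#include <rte_kni.h>

/* Create a KNI interface that is NOT backed by any DPDK port. */
static struct rte_kni *
make_test_kni(struct rte_mempool *pktmbuf_pool)
{
	struct rte_kni_conf conf;

	memset(&conf, 0, sizeof(conf));
	snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
	conf.mbuf_size = 2048;
	/* conf.addr / conf.id (PCI info) stay zeroed: they are only
	 * needed for ethtool support, not for plain data transfer. */

	/* NULL ops: no MTU-change / config callbacks are registered. */
	return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
}
```

After this, rte_kni_rx_burst()/rte_kni_tx_burst() move packets between the application and the kernel netdev "vEth0" as usual; per the mempool discussion elsewhere on this list, size pktmbuf_pool generously.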
[dpdk-dev] Must kni be associated with a dpdk port?
Hi,

I want to use 'rte_kni_alloc()' to create a kernel iface and use it to test application rx. From the api and example in 'examples/kni/main.c', i saw the 'conf' argument is assigned with pci info of a dpdk port.

Want to ask if this is compulsory... Must kni always be used together with a dpdk port?

Thanks,

--
Alex Wang,
[dpdk-dev] [ovs-discuss] does vswitchd runs multiple threads when i added dpdk devices
Hey Srinivas,

Right now, ovs has only one dpdk polling thread. We are working on creating multiple polling threads and pinning polling threads to the same cpu socket as the dpdk interface.

Thanks,
Alex Wang,

On Mon, Jul 28, 2014 at 9:15 AM, Ben Pfaff wrote:
> On Mon, Jul 28, 2014 at 07:33:35AM +, Srinivas Reddi wrote:
> > As per my understanding each dpdk device is polled on a different thread.
> > But in my case vswitchd is running in only a single thread [on core 0],
> > I expected it to run on 3 cores ..
> >
> > One thing I want to clarify .. does ovs-vswitchd run on a single core
> > only .. or multiple threads .. when I add dpdk devices.
> > If vswitchd runs on multiple threads when I add dpdk devices .. pls
> > let me know how I can run it.
>
> It looks like right now OVS has only one dpdk polling thread.
> ___
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss