From: Ryan Wilson wr...@nicira.com
Since the introduction of xcache, the netdev struct can be freed by the
revalidator threads. This fact also makes the following race possible:
1. Consider a gre tunnel, and datapath flows that go through
the tunnel. Now, assume the user deletes the tunnel
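The race described above is the classic use-after-free pattern, and the usual remedy is reference counting: the device is only freed when the last user drops its reference. The sketch below is a deliberately simplified illustration, not the actual OVS code; the real netdev struct and its locking are more involved, and a correct fix must take the reference under the lock that protects the device table.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified, hypothetical netdev with a reference count. */
struct netdev {
    int ref_cnt;
};

/* Take a reference before stashing the pointer (e.g. in an xcache). */
static struct netdev *
netdev_ref(struct netdev *dev)
{
    dev->ref_cnt++;
    return dev;
}

/* Drop a reference; the last user frees the device, so a revalidator
 * dropping its reference cannot pull the struct out from under a
 * datapath flow that still holds one. */
static void
netdev_unref(struct netdev *dev)
{
    if (--dev->ref_cnt == 0) {
        free(dev);
    }
}
```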
I removed the call to rte_mempool_put_bulk() from dpdk_queue_flush__(),
and also removed the unnecessary dpdk_buf pointer from ofpbuf. Pushed to
master.
Thanks.
On Thu, Jul 17, 2014 at 2:29 PM, Daniele Di Proietto
ddiproie...@vmware.com wrote:
DPDK mempools rely on rte_lcore_id() to implement a
Thanks for removing the extra rte_mempool_put_bulk().
About the dpdk_buf pointer in ofpbuf: I felt that it might come in handy for
the patch 'dpif-netdev: Polling threads directly call ofproto upcall functions',
but we might change that interface as well.
Thanks again,
Daniele
On 19 July 2014 06:13, Jesse Gross je...@nicira.com wrote:
It's true, the field ordering is not significant if you understand all
of the fields. The concern is that userspace may want to keep track of
flows even if it doesn't understand all of the fields (for example, it
can determine that
Typically, kernel datapath threads send upcalls to userspace where
handler threads process the upcalls. For TAP and DPDK devices, the
datapath threads operate in userspace, so there is no need for
separate handler threads.
This patch allows userspace datapath threads to directly call the
ofproto
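The dispatch split the patch describes can be sketched as follows. This is a minimal illustration, not the patch itself: `struct upcall`, `process_upcall`, `queue_for_handler_thread`, and `dispatch_upcall` are hypothetical stand-ins for the real OVS types and functions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical upcall descriptor. */
struct upcall {
    const void *packet;
    size_t len;
};

static int upcalls_handled;     /* for illustration only */

/* Stand-in for the ofproto upcall handler. */
static void
process_upcall(const struct upcall *u)
{
    (void) u;
    upcalls_handled++;
}

/* Kernel-datapath path: the upcall is queued and a dedicated handler
 * thread later calls process_upcall().  For this sketch we invoke it
 * synchronously instead of modeling the queue. */
static void
queue_for_handler_thread(const struct upcall *u)
{
    process_upcall(u);
}

/* A userspace datapath thread (TAP, DPDK) already runs in the same
 * address space as ofproto, so it can skip the handler threads and
 * call the upcall function directly. */
static void
dispatch_upcall(const struct upcall *u, bool datapath_in_userspace)
{
    if (datapath_in_userspace) {
        process_upcall(u);
    } else {
        queue_for_handler_thread(u);
    }
}
```

The design point is simply that the queue and the extra threads only pay for themselves when the datapath runs in the kernel; in userspace they add latency and context switches for nothing.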
Converting the flow key and mask back into netlink format during each
flow dump is fairly expensive. By caching the netlink versions of the key
and mask the first time a flow is dumped, and copying that memory directly
during subsequent dumps, we are able to support up to 15% more flows in the
datapath.
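The caching scheme amounts to a lazily built serialized copy per flow. The sketch below illustrates the idea with made-up names (`flow_entry`, `serialize_key`, `dump_flow`); a plain byte buffer stands in for the netlink-formatted key and mask, and a counter shows that the expensive conversion runs only once.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical flow with a lazily cached serialized key. */
struct flow_entry {
    int in_port;                /* example match field */
    unsigned char *cached_key;  /* serialized form, built on first dump */
    size_t cached_len;
    int serialize_calls;        /* for illustration only */
};

/* Stand-in for the expensive key -> netlink conversion. */
static void
serialize_key(struct flow_entry *f)
{
    f->cached_len = sizeof f->in_port;
    f->cached_key = malloc(f->cached_len);
    memcpy(f->cached_key, &f->in_port, f->cached_len);
    f->serialize_calls++;
}

/* Dumping a flow copies the cached bytes; only the first dump pays
 * the serialization cost. */
static size_t
dump_flow(struct flow_entry *f, unsigned char *buf)
{
    if (!f->cached_key) {
        serialize_key(f);
    }
    memcpy(buf, f->cached_key, f->cached_len);
    return f->cached_len;
}
```

The trade-off is memory for CPU: each flow carries an extra buffer, but repeated dumps become a memcpy instead of a full re-encode.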
Perf
On 21 July 2014 10:28, Joe Stringer joestrin...@nicira.com wrote:
On 19 July 2014 06:13, Jesse Gross je...@nicira.com wrote:
It's true, the field ordering is not significant if you understand all
of the fields. The concern is that userspace may want to keep track of
flows even if it doesn't
On 16 July 2014 12:15, Joe Stringer joestrin...@nicira.com wrote:
Signed-off-by: Joe Stringer joestrin...@nicira.com
This patch was previously required for the series 'datapath: Cache netlink
flow key, mask on flow_put.', but is no longer required for newer versions
of that series, as I've changed the