[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-09-09 Thread Ferruh Yigit
Hi Bernard,

This is an old patch, sorry for commenting after so long.

On 6/12/2016 6:11 PM, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
> 
> Signed-off-by: Bernard Iremonger 
> Acked-by: Konstantin Ananyev 
> ---

...

>  static uint16_t
> @@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
> uint8_t i, j, k;
> 
> rte_eth_macaddr_get(internals->port_id, &bond_mac);
> -   /* Copy slave list to protect against slave up/down changes during tx
> -* bursting */

This piece,

...

> for (i = 0; i < num_of_slaves; i++) {
> -   struct port *port = &mode_8023ad_ports[slaves[i]];
> +   struct port *port;
> +
> +   port = &mode_8023ad_ports[internals->active_slaves[i]];

And this piece seems to need to be moved into the next patch in the patchset.

...

And if you send a new version of the patchset, please note there are a few
warnings from check-git-log.sh:

Wrong headline prefix:
bonding: remove memcpy from burst functions
bonding: take queue spinlock in rx/tx burst functions
bonding: grab queue spinlocks in slave add and remove
bonding: add spinlock to rx and tx queues
Wrong headline lowercase:
bonding: take queue spinlock in rx/tx burst functions
bonding: add spinlock to rx and tx queues
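
Most likely the script wants the driver directory as the headline prefix and
"Rx"/"Tx" capitalized (my assumption, based on the usual DPDK commit
conventions), i.e. something like:

net/bonding: remove memcpy from burst functions
net/bonding: take queue spinlock in Rx/Tx burst functions
net/bonding: grab queue spinlocks in slave add and remove
net/bonding: add spinlock to Rx and Tx queues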

Thanks,
ferruh



[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-16 Thread Thomas Monjalon
2016-06-16 16:41, Iremonger, Bernard:
> Hi Thomas,
> 
> > 2016-06-16 15:32, Bruce Richardson:
> > > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > > while other device types do not? How is adding/removing devices
> > > > > from a bonded device different to other control operations that
> > > > > can be done on physical PMDs? Is this not similar to, say, bringing
> > > > > down or hotplugging out a physical port just before an RX or TX
> > operation takes place?
> > > > > For all other PMDs we rely on the app to synchronise control and
> > > > > data plane operation - why not here?
> > > > >
> > > > > /Bruce
> > > >
> > > > This issue arose during VM live migration testing.
> > > > For VM live migration it is necessary (while traffic is running) to be 
> > > > able to
> > remove a bonded slave device, stop it, close it and detach it.
> > > > If a slave device is removed from a bonded device while traffic is 
> > > > running,
> > a segmentation fault may occur in the rx/tx burst function. The spinlock has
> > been added to prevent this from occurring.
> > > >
> > > > The bonding device already uses a spinlock to synchronise between the
> > add and remove functionality and the slave_link_status_change_monitor
> > code.
> > > >
> > > > Previously testpmd did not allow stop, close or detach of a PMD while
> > > > traffic was running. Testpmd has been modified with the following
> > > > patchset
> > > >
> > > > http://dpdk.org/dev/patchwork/patch/13472/
> > > >
> > > > It now allows stop, close and detach of a PMD provided it is not
> > forwarding and is not a slave of a bonded PMD.
> > > >
> > > I will admit to not being fully convinced, but if nobody else has any
> > > serious objections, and since this patch has been reviewed and acked,
> > > I'm ok to merge it in. I'll do so shortly.
> > 
> > Please hold on.
> > Seeing locks introduced in the Rx/Tx path is an alert.
> > We clearly need a design document to explain where locks can be used and
> > what the responsibilities of the control plane are.
> > If everybody agrees in this document that DPDK can have some locks in the
> > fast path, then OK to merge it.
> > 
> > So I would say NACK for 16.07 and maybe postpone to 16.11.
> 
> Looking at the documentation for the bonding PMD.
> 
> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
> 
> In section 10.2 it states the following:
> 
> Bonded devices support the dynamic addition and removal of slave devices 
> using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
> 
> If a slave device is added or removed while traffic is running, there is the 
> possibility of a segmentation fault in the rx/tx burst functions. This is 
> most likely to occur in the round robin bonding mode.
> 
> This patch set fixes what appears to be a bug in the bonding PMD.

It can be fixed by removing this statement in the doc.

One of the design principles of DPDK is to avoid locks.

> Performance measurements have been made with this patch set applied and 
> without the patches applied using 64 byte packets. 
> 
> With the patches applied the following drop in performance was observed:
> 
> % drop for fwd+io:  0.16%
> % drop for fwd+mac: 0.39%
> 
> This patch set has been reviewed and ack'ed, so I think it should be applied 
> in 16.07.

I understand your point of view and I gave mine.
Now we need more opinions from others.


[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-16 Thread Thomas Monjalon
2016-06-16 15:32, Bruce Richardson:
> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > Why does this particular PMD need spinlocks when doing RX and TX, while
> > > other device types do not? How is adding/removing devices from a bonded
> > > device different to other control operations that can be done on physical
> > > PMDs? Is this not similar to, say, bringing down or hotplugging out a 
> > > physical
> > > port just before an RX or TX operation takes place?
> > > For all other PMDs we rely on the app to synchronise control and data 
> > > plane
> > > operation - why not here?
> > > 
> > > /Bruce
> > 
> > This issue arose during VM live migration testing. 
> > For VM live migration it is necessary (while traffic is running) to be able 
> > to remove a bonded slave device, stop it, close it and detach it.
> > If a slave device is removed from a bonded device while traffic is running, 
> > a segmentation fault may occur in the rx/tx burst function. The spinlock 
> > has been added to prevent this from occurring.
> > 
> > The bonding device already uses a spinlock to synchronise between the add 
> > and remove functionality and the slave_link_status_change_monitor code. 
> > 
> > Previously testpmd did not allow stop, close or detach of a PMD while 
> > traffic was running. Testpmd has been modified with the following patchset 
> > 
> > http://dpdk.org/dev/patchwork/patch/13472/
> > 
> > It now allows stop, close and detach of a PMD provided it is not 
> > forwarding and is not a slave of a bonded PMD.
> > 
> I will admit to not being fully convinced, but if nobody else has any serious
> objections, and since this patch has been reviewed and acked, I'm ok to merge 
> it
> in. I'll do so shortly.

Please hold on.
Seeing locks introduced in the Rx/Tx path is an alert.
We clearly need a design document to explain where locks can be used
and what the responsibilities of the control plane are.
If everybody agrees in this document that DPDK can have some locks
in the fast path, then OK to merge it.

So I would say NACK for 16.07 and maybe postpone to 16.11.



[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-16 Thread Iremonger, Bernard
Hi Thomas,

> 2016-06-16 15:32, Bruce Richardson:
> > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > while other device types do not? How is adding/removing devices
> > > > from a bonded device different to other control operations that
> > > > can be done on physical PMDs? Is this not similar to, say, bringing
> > > > down or hotplugging out a physical port just before an RX or TX
> operation takes place?
> > > > For all other PMDs we rely on the app to synchronise control and
> > > > data plane operation - why not here?
> > > >
> > > > /Bruce
> > >
> > > This issue arose during VM live migration testing.
> > > For VM live migration it is necessary (while traffic is running) to be 
> > > able to
> remove a bonded slave device, stop it, close it and detach it.
> > > If a slave device is removed from a bonded device while traffic is running,
> a segmentation fault may occur in the rx/tx burst function. The spinlock has
> been added to prevent this from occurring.
> > >
> > > The bonding device already uses a spinlock to synchronise between the
> add and remove functionality and the slave_link_status_change_monitor
> code.
> > >
> > > Previously testpmd did not allow stop, close or detach of a PMD while
> > > traffic was running. Testpmd has been modified with the following
> > > patchset
> > >
> > > http://dpdk.org/dev/patchwork/patch/13472/
> > >
> > > It now allows stop, close and detach of a PMD provided it is not
> forwarding and is not a slave of a bonded PMD.
> > >
> > I will admit to not being fully convinced, but if nobody else has any
> > serious objections, and since this patch has been reviewed and acked,
> > I'm ok to merge it in. I'll do so shortly.
> 
> Please hold on.
> Seeing locks introduced in the Rx/Tx path is an alert.
> We clearly need a design document to explain where locks can be used and
> what the responsibilities of the control plane are.
> If everybody agrees in this document that DPDK can have some locks in the
> fast path, then OK to merge it.
> 
> So I would say NACK for 16.07 and maybe postpone to 16.11.

Looking at the documentation for the bonding PMD.

http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html

In section 10.2 it states the following:

Bonded devices support the dynamic addition and removal of slave devices 
using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.

If a slave device is added or removed while traffic is running, there is the 
possibility of a segmentation fault in the rx/tx burst functions. This is most 
likely to occur in the round robin bonding mode.

This patch set fixes what appears to be a bug in the bonding PMD.
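
For reference, a minimal sketch of that hot add/remove flow (the live
migration case described above); the helper name, port ids and error handling
are illustrative assumptions, not code from the patch set:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Remove a slave from a running bonded device, then stop, close and
 * detach it. The slave_remove call may race an in-flight rx/tx burst;
 * this patch set is what makes that race safe. */
static int
hot_remove_slave(uint8_t bond_port_id, uint8_t slave_port_id)
{
	char devname[RTE_ETH_NAME_MAX_LEN];

	if (rte_eth_bond_slave_remove(bond_port_id, slave_port_id) != 0)
		return -1;	/* not a slave of this bonded device */

	rte_eth_dev_stop(slave_port_id);
	rte_eth_dev_close(slave_port_id);
	return rte_eth_dev_detach(slave_port_id, devname);
}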

Performance measurements have been made with this patch set applied and without 
the patches applied using 64 byte packets. 

With the patches applied the following drop in performance was observed:

% drop for fwd+io:  0.16%
% drop for fwd+mac: 0.39%

This patch set has been reviewed and ack'ed, so I think it should be applied in 
16.07.

Regards,

Bernard.




[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-16 Thread Bruce Richardson
On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> Hi Bruce,
> 
> 
> 
> > Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> > burst functions
> > 
> > On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > > queue spinlock.
> > >
> > > Signed-off-by: Bernard Iremonger 
> > > Acked-by: Konstantin Ananyev 
> > > ---
> > 
> > Why does this particular PMD need spinlocks when doing RX and TX, while
> > other device types do not? How is adding/removing devices from a bonded
> > device different to other control operations that can be done on physical
> > PMDs? Is this not similar to, say, bringing down or hotplugging out a physical
> > port just before an RX or TX operation takes place?
> > For all other PMDs we rely on the app to synchronise control and data plane
> > operation - why not here?
> > 
> > /Bruce
> 
> This issue arose during VM live migration testing. 
> For VM live migration it is necessary (while traffic is running) to be able 
> to remove a bonded slave device, stop it, close it and detach it.
> If a slave device is removed from a bonded device while traffic is running, a 
> segmentation fault may occur in the rx/tx burst function. The spinlock has 
> been added to prevent this from occurring.
> 
> The bonding device already uses a spinlock to synchronise between the add and 
> remove functionality and the slave_link_status_change_monitor code. 
> 
> Previously testpmd did not allow stop, close or detach of a PMD while traffic 
> was running. Testpmd has been modified with the following patchset 
> 
> http://dpdk.org/dev/patchwork/patch/13472/
> 
> It now allows stop, close and detach of a PMD provided it is not 
> forwarding and is not a slave of a bonded PMD.
> 
I will admit to not being fully convinced, but if nobody else has any serious
objections, and since this patch has been reviewed and acked, I'm ok to merge it
in. I'll do so shortly.

/Bruce


[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-13 Thread Iremonger, Bernard
Hi Bruce,



> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> burst functions
> 
> On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > queue spinlock.
> >
> > Signed-off-by: Bernard Iremonger 
> > Acked-by: Konstantin Ananyev 
> > ---
> 
> Why does this particular PMD need spinlocks when doing RX and TX, while
> other device types do not? How is adding/removing devices from a bonded
> device different to other control operations that can be done on physical
> PMDs? Is this not similar to, say, bringing down or hotplugging out a physical
> port just before an RX or TX operation takes place?
> For all other PMDs we rely on the app to synchronise control and data plane
> operation - why not here?
> 
> /Bruce

This issue arose during VM live migration testing. 
For VM live migration it is necessary (while traffic is running) to be able to 
remove a bonded slave device, stop it, close it and detach it.
If a slave device is removed from a bonded device while traffic is running, a 
segmentation fault may occur in the rx/tx burst function. The spinlock has been 
added to prevent this from occurring.

The bonding device already uses a spinlock to synchronise between the add and 
remove functionality and the slave_link_status_change_monitor code. 

Previously testpmd did not allow stop, close or detach of a PMD while traffic 
was running. Testpmd has been modified with the following patchset 

http://dpdk.org/dev/patchwork/patch/13472/

It now allows stop, close and detach of a PMD provided it is not forwarding 
and is not a slave of a bonded PMD.
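
For completeness, the companion patch in this set ("bonding: grab queue
spinlocks in slave add and remove") takes the same per-queue locks on the
control path. A minimal sketch of the idea, not the exact patch (struct and
field names assumed from the queue-spinlock patch):

#include <rte_ethdev.h>
#include <rte_spinlock.h>
#include "rte_eth_bond_private.h"	/* bond_rx_queue / bond_tx_queue */

/* Hold every rx/tx queue lock while the slave lists are modified, so a
 * burst in progress either finishes first or sees the updated list. */
static void
bond_lock_all_queues(struct rte_eth_dev *dev)
{
	uint16_t i;

	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		struct bond_rx_queue *bd_rx_q = dev->data->rx_queues[i];

		rte_spinlock_lock(&bd_rx_q->lock);
	}
	for (i = 0; i < dev->data->nb_tx_queues; i++) {
		struct bond_tx_queue *bd_tx_q = dev->data->tx_queues[i];

		rte_spinlock_lock(&bd_tx_q->lock);
	}
}

Because the burst functions only ever trylock, the control path can spin here
without deadlock: a burst that loses the race simply returns 0 packets for
that call.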

 Regards,

Bernard.



[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-13 Thread Bruce Richardson
On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
> 
> Signed-off-by: Bernard Iremonger 
> Acked-by: Konstantin Ananyev 
> ---

Why does this particular PMD need spinlocks when doing RX and TX, while other
device types do not? How is adding/removing devices from a bonded device different
to other control operations that can be done on physical PMDs? Is this not
similar to, say, bringing down or hotplugging out a physical port just before an
RX or TX operation takes place?
For all other PMDs we rely on the app to synchronise control and data plane
operation - why not here? 

/Bruce



[dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions

2016-06-12 Thread Bernard Iremonger
Use rte_spinlock_trylock() in the rx/tx burst functions to
take the queue spinlock.

Signed-off-by: Bernard Iremonger 
Acked-by: Konstantin Ananyev 
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 116 -
 1 file changed, 84 insertions(+), 32 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e624bb..93043ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -92,16 +92,22 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)

internals = bd_rx_q->dev_private;

-
-   for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
-   /* Offset of pointer to *bufs increases as packets are received
-* from other slaves */
-   num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
-   bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
-   if (num_rx_slave) {
-   num_rx_total += num_rx_slave;
-   nb_pkts -= num_rx_slave;
+   if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+   for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
+   /* Offset of pointer to *bufs increases as packets
+* are received from other slaves
+*/
+   num_rx_slave = rte_eth_rx_burst(
+   internals->active_slaves[i],
+   bd_rx_q->queue_id,
+   bufs + num_rx_total,
+   nb_pkts);
+   if (num_rx_slave) {
+   num_rx_total += num_rx_slave;
+   nb_pkts -= num_rx_slave;
+   }
}
+   rte_spinlock_unlock(&bd_rx_q->lock);
}

return num_rx_total;
@@ -112,14 +118,19 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
uint16_t nb_pkts)
 {
struct bond_dev_private *internals;
+   uint16_t ret = 0;

/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;

internals = bd_rx_q->dev_private;

-   return rte_eth_rx_burst(internals->current_primary_port,
-   bd_rx_q->queue_id, bufs, nb_pkts);
+   if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+   ret = rte_eth_rx_burst(internals->current_primary_port,
+   bd_rx_q->queue_id, bufs, nb_pkts);
+   rte_spinlock_unlock(&bd_rx_q->lock);
+   }
+   return ret;
 }

 static uint16_t
@@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
uint8_t i, j, k;

rte_eth_macaddr_get(internals->port_id, &bond_mac);
-   /* Copy slave list to protect against slave up/down changes during tx
-* bursting */
+
+   if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
+   return num_rx_total;
+
slave_count = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * slave_count);
@@ -190,7 +203,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
j++;
}
}
-
+   rte_spinlock_unlock(&bd_rx_q->lock);
return num_rx_total;
 }

@@ -406,14 +419,19 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;

+   if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+   return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
 * bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);

-   if (num_of_slaves < 1)
+   if (num_of_slaves < 1) {
+   rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+   }

/* Populate slaves mbuf with which packets are to be sent on it  */
for (i = 0; i < nb_pkts; i++) {
@@ -444,7 +462,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
num_tx_total += num_tx_slave;
}
}
-
+   rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;