Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-18 Thread Kieran Mansley
On Fri, 2007-06-15 at 14:03 -0400, Zhu Han wrote:
> On 6/15/07, Kieran Mansley <[EMAIL PROTECTED]> wrote:
> >
> > The lock protects the use_count variable.  The use_count variable
> > prevents the plugin module unloading while it is being used.  I couldn't
> > just use the lock to prevent the module unloading as the hook function
> > (i) might block (and holding a spin_lock would be rather antisocial)
> > (ii) might call back into netfront and try to take the lock again, which
> > would deadlock.
> >
> 
> If the hook routine blocks on some other code path instead of the
> tx/rx path, why not use a simple atomic reference count?  When the
> reference count reaches zero, free it.  Since you can synchronize the
> tx/rx path, the free will not happen on the critical code path, so the
> uninitialisation work could be done inside the free routine even if it
> blocks.

Switching to atomics could be of benefit.  This would make
hooks_usecount a kref, and because of the third rule of krefs (from the
kref documentation) we'd still need synchronisation around most of the
kref_get calls; but since in some of those cases we already hold this
lock for the list access, I expect that would be OK.  I can prepare
another version of the patches with this change, as I'm currently
making a number of other changes suggested by Keir.
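
To make that concrete, here is a rough sketch of what the kref version
might look like.  The names and the lookup logic are illustrative only,
not lifted from the patch; the point is just that kref_get() still
happens under accelerators_lock (rule 3 from Documentation/kref.txt),
while the release function is free to block because it is never
reached from the tx/rx path.

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(accelerators_lock);

struct netfront_accelerator {
	struct list_head link;
	struct kref usecount;		/* would replace hooks_usecount */
	/* ... hooks, vif_states, etc. ... */
};

static void accelerator_release(struct kref *ref)
{
	struct netfront_accelerator *acc =
		container_of(ref, struct netfront_accelerator, usecount);

	/* Uninitialisation here may block: never on the tx/rx path. */
	kfree(acc);
}

/* Taking a reference via the list must serialise against the last put. */
static struct netfront_accelerator *accelerator_get(struct list_head *list)
{
	struct netfront_accelerator *acc = NULL;

	spin_lock(&accelerators_lock);
	if (!list_empty(list)) {
		acc = list_entry(list->next, struct netfront_accelerator, link);
		kref_get(&acc->usecount);	/* kref_init()ed at register time */
	}
	spin_unlock(&accelerators_lock);
	return acc;
}

static void accelerator_put(struct netfront_accelerator *acc)
{
	kref_put(&acc->usecount, accelerator_release);
}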

I suspect that some would still prefer a full switch to using RCU,
however.  I hope you don't mind me replying to all the locking-related
questions in this one email.  The case for RCU seems to centre on
whether or not the hook functions can (or indeed should) block, and
whether it would result in a useful increase in performance.  I've
taken another look at RCU today to see if I could make it work.

What the hooks might need to block on is not well defined, as there is
currently only one accelerator implementation, so I'm not sure what
other accelerator modules might do.  However, the one we've written
uses xenbus during a number of the callbacks, and I suspect this is
likely to be pretty common.  Many xenbus calls can block.  For example,
during the probe hook call it accesses xenstore to gather information
from the backend, and during a suspend or resume hook call it may need
to unregister or register a xenbus watch.  These are just examples
rather than a definitive list.
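
For illustration only, a hypothetical probe hook of roughly this shape
shows where the blocking happens.  The hook signature and the
"accel-frontend" key are made up for the example; they are not taken
from our plugin:

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/slab.h>
#include <xen/xenbus.h>

static int example_accel_probe(struct net_device *net_dev,
			       struct xenbus_device *dev)
{
	char *val;

	/* Goes to xenstored and may sleep waiting for the reply. */
	val = xenbus_read(XBT_NIL, dev->otherend, "accel-frontend", NULL);
	if (IS_ERR(val))
		return PTR_ERR(val);

	/* ... parse val and set up acceleration state ... */
	kfree(val);
	return 0;
}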

If RCU is the way to go, these calls would all have to be made non-
blocking, for example by using a work queue to perform the blocking
work later.  That would be workable, but would need a few additional
functions on the interface between netfront and the accelerator, and
would complicate netfront a little more.  For example, the
accelerator's suspend hook returning would no longer signify that the
plugin had completed its suspend-related work, so netfront would have
to wait for a "suspend_done" call from the plugin before it could
itself return.  In these cases the total locking overhead is likely to
be similar to the current case, while the code would have become more
complex.
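
A sketch of what that plumbing might look like on the plugin side.  All
of the names here, including netfront_accelerator_suspend_done(), are
invented for the example, and it uses the work_struct conventions of
kernels from 2.6.20 onwards:

#include <linux/netdevice.h>
#include <linux/workqueue.h>
#include <xen/xenbus.h>

/* Hypothetical new netfront entry point; does not exist today. */
extern void netfront_accelerator_suspend_done(struct xenbus_device *dev);

struct example_accel_state {
	struct work_struct suspend_work;	/* INIT_WORK()ed at probe time */
	struct xenbus_device *dev;
};

static struct example_accel_state accel_state;

static void example_suspend_worker(struct work_struct *work)
{
	struct example_accel_state *state =
		container_of(work, struct example_accel_state, suspend_work);

	/* Blocking work (e.g. unregister_xenbus_watch()) is safe here. */

	netfront_accelerator_suspend_done(state->dev);
}

/* The suspend hook itself must now return without blocking ... */
static int example_accel_suspend(struct net_device *net_dev,
				 struct xenbus_device *dev)
{
	accel_state.dev = dev;
	schedule_work(&accel_state.suspend_work);
	return 0;
}

/* ... and netfront must wait for suspend_done before it can return. */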

None of this would affect the data path locking overhead (which is
already zero by design).  One thing I note from looking into RCU is that
the call_rcu callback function may be invoked from bottom-half context.
To give us the zero-additional-locking-overhead-on-the-data-path
property that the current approach has, we call the following when
trying to disable the hooks:

    netif_poll_disable(vif_state->np->netdev);
    netif_tx_lock_bh(vif_state->np->netdev);

As both of these can block, and so are not suitable for bottom-half
execution, we'd either have to find an alternative (e.g. use RCU on the
data path, which would clearly increase the locking overhead) or defer
this work to another context, which would again complicate matters.
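
To illustrate the shape of that deferral: the RCU callback could do
nothing more than punt the real teardown to a workqueue.  Sketch only;
the structure and names are invented, and the two-argument INIT_WORK()
assumes a 2.6.20+ kernel:

#include <linux/rcupdate.h>
#include <linux/workqueue.h>

struct accel_teardown {
	struct rcu_head rcu;
	struct work_struct work;
};

/* Process context: netif_poll_disable()/netif_tx_lock_bh() and the
 * rest of the hook removal would be safe to do from here. */
static void accel_teardown_worker(struct work_struct *work)
{
	/* ... actual hook removal ... */
}

/* Runs in bottom-half context, so it can only defer the work. */
static void accel_teardown_rcu(struct rcu_head *rcu)
{
	struct accel_teardown *t =
		container_of(rcu, struct accel_teardown, rcu);

	schedule_work(&t->work);
}

static void accel_schedule_teardown(struct accel_teardown *t)
{
	INIT_WORK(&t->work, accel_teardown_worker);
	call_rcu(&t->rcu, accel_teardown_rcu);
}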

In all, I feel that RCU is not the right solution here: I can see it
resulting in more code and harder-to-write plugins, with little benefit,
as the data path performance would (at best) be unaffected.

Perhaps a good way forward is for me to produce another iteration of
the patches using atomics/kref in place of hooks_usecount, with fewer
spin_locks as described at the start, and then see if that is
acceptable.

Thanks for taking the time to look at the patches,

Kieran



Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-15 Thread Zhu Han

On 6/15/07, Kieran Mansley <[EMAIL PROTECTED]> wrote:


The lock protects the use_count variable.  The use_count variable
prevents the plugin module unloading while it is being used.  I couldn't
just use the lock to prevent the module unloading as the hook function
(i) might block (and holding a spin_lock would be rather antisocial)
(ii) might call back into netfront and try to take the lock again, which
would deadlock.



If the hook routine blocks on some other code path instead of the
tx/rx path, why not use a simple atomic reference count?  When the
reference count reaches zero, free it.  Since you can synchronize the
tx/rx path, the free will not happen on the critical code path, so the
uninitialisation work could be done inside the free routine even if it
blocks.


I think that RCU would only work in this situation if the hook
functions didn't block.


I agree.



Kieran

--
best regards,
hanzhu


Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-15 Thread Keir Fraser
On 15/6/07 17:22, "Kieran Mansley" <[EMAIL PROTECTED]> wrote:

> The lock protects the use_count variable.

Yes, that's one thing I noticed -- can you use atomic_t for reference counts
and hence reduce the number of times you need to lock/unlock? At least the
open-coded lock-decrement-test-maybe-free-unlock sequences could be
abstracted into a put_foo() function.
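
Something along these lines (illustrative names only):

#include <asm/atomic.h>
#include <linux/slab.h>

struct foo {
	atomic_t usecount;
	/* ... */
};

/* Replaces the open-coded lock/decrement/test/maybe-free/unlock. */
static void put_foo(struct foo *foo)
{
	if (atomic_dec_and_test(&foo->usecount))
		kfree(foo);	/* any teardown here is off the tx/rx path */
}

/* get_foo() would be atomic_inc(&foo->usecount), taken while the
   caller can guarantee the count cannot concurrently reach zero,
   e.g. under the list lock. */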

 -- Keir




Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-15 Thread Kieran Mansley
On Fri, 2007-06-15 at 11:59 -0400, Zhu Han wrote:
> Hi, Kieran,
> 
> I'm just wondering why you acquire the lock and increase the
> hooks_usecount each time you use a hook routine.  Is there any generic
> way to synchronize the code paths that use the hook routines with
> netfront_accelerator_unloaded, considering you can synchronize the
> tx/rx data path easily?

The lock protects the use_count variable.  The use_count variable
prevents the plugin module unloading while it is being used.  I couldn't
just use the lock to prevent the module unloading as the hook function
(i) might block (and holding a spin_lock would be rather antisocial)
(ii) might call back into netfront and try to take the lock again, which
would deadlock.

The data path hooks do not block, and are already protected by locks, so
these are also taken when trying to unload the plugin module.  For this
reason it's not necessary to use the hooks_usecount on the data path.
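
To show the shape of it, a hook call on one of those non-data paths
currently looks roughly like this.  This is a paraphrase rather than a
verbatim extract from the patch; the hook member name is illustrative,
and it assumes the declarations the patch adds to netfront
(accelerators_lock, hooks_usecount, struct netfront_accelerator):

	spin_lock(&accelerators_lock);
	accelerator->hooks_usecount++;		/* pin the plugin module */
	spin_unlock(&accelerators_lock);

	/* Lock not held: the hook may block or call back into netfront. */
	accelerator->hooks->new_device(np->netdev, dev);

	spin_lock(&accelerators_lock);
	accelerator->hooks_usecount--;		/* unload waits for this to drain */
	spin_unlock(&accelerators_lock);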

I think that RCU would only work in this situation if the hook functions
didn't block, and wouldn't affect the data path locking overhead as it
wouldn't be necessary there.  

Kieran







Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-15 Thread Stephen Hemminger
On Fri, 15 Jun 2007 11:59:43 -0400
"Zhu Han" <[EMAIL PROTECTED]> wrote:

> Hi, Kieran,
> 
> I'm just wondering why you acquire the lock and increase the
> hooks_usecount each time you use a hook routine.  Is there any generic
> way to synchronize the code paths that use the hook routines with
> netfront_accelerator_unloaded, considering you can synchronize the
> 

Learn to use RCU for this. It would reduce the lock overhead.
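
Roughly (structure layouts made up, just to show where the read and
write sides go):

#include <linux/rcupdate.h>
#include <linux/skbuff.h>

struct accel_hooks {
	int (*start_xmit)(struct sk_buff *skb);	/* must not block */
};

struct accel_vif {
	struct accel_hooks *hooks;	/* accessed under RCU */
};

/* Reader, e.g. on the tx path: no lock taken at all. */
static int accel_try_xmit(struct accel_vif *vif, struct sk_buff *skb)
{
	struct accel_hooks *hooks;
	int handled = 0;

	rcu_read_lock();
	hooks = rcu_dereference(vif->hooks);
	if (hooks)
		handled = hooks->start_xmit(skb);
	rcu_read_unlock();
	return handled;
}

/* Writer, e.g. on module unload: clear the pointer, wait for readers. */
static void accel_remove_hooks(struct accel_vif *vif)
{
	rcu_assign_pointer(vif->hooks, NULL);
	synchronize_rcu();	/* all in-flight readers have finished */
}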


Re: [Xen-devel] [PATCH 4/4] [Net] Support accelerated network plugin modules

2007-06-15 Thread Zhu Han

Hi, Kieran,

I'm just wondering why you acquire the lock and increase the
hooks_usecount each time you use a hook routine.  Is there any generic
way to synchronize the code paths that use the hook routines with
netfront_accelerator_unloaded, considering you can synchronize the
tx/rx data path easily?

On 6/15/07, Kieran Mansley <[EMAIL PROTECTED]> wrote:

Frontend net driver acceleration

diff -r cd3ade350f3f drivers/xen/netfront/netfront.c
--- a/drivers/xen/netfront/netfront.c   Thu Jun 14 15:04:32 2007 +0100
+++ b/drivers/xen/netfront/netfront.c   Fri Jun 15 09:34:41 2007 +0100
@@ -3,6 +3,7 @@
  *
  * Copyright (c) 2002-2005, K A Fraser
  * Copyright (c) 2005, XenSource Ltd
+ * Copyright (C) 2007 Solarflare Communications, Inc.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License version 2
@@ -47,6 +48,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -73,6 +75,8 @@ struct netfront_cb {
 };

 #define NETFRONT_SKB_CB(skb)   ((struct netfront_cb *)((skb)->cb))
+
+#include "netfront.h"

 /*
  * Mutually-exclusive module options to select receive data path:
@@ -144,57 +148,6 @@ static inline int netif_needs_gso(struct

 #define GRANT_INVALID_REF  0

-#define NET_TX_RING_SIZE __RING_SIZE((struct netif_tx_sring *)0, PAGE_SIZE)
-#define NET_RX_RING_SIZE __RING_SIZE((struct netif_rx_sring *)0, PAGE_SIZE)
-
-struct netfront_info {
-   struct list_head list;
-   struct net_device *netdev;
-
-   struct net_device_stats stats;
-
-   struct netif_tx_front_ring tx;
-   struct netif_rx_front_ring rx;
-
-   spinlock_t   tx_lock;
-   spinlock_t   rx_lock;
-
-   unsigned int irq;
-   unsigned int copying_receiver;
-   unsigned int carrier;
-
-   /* Receive-ring batched refills. */
-#define RX_MIN_TARGET 8
-#define RX_DFL_MIN_TARGET 64
-#define RX_MAX_TARGET min_t(int, NET_RX_RING_SIZE, 256)
-   unsigned rx_min_target, rx_max_target, rx_target;
-   struct sk_buff_head rx_batch;
-
-   struct timer_list rx_refill_timer;
-
-   /*
-* {tx,rx}_skbs store outstanding skbuffs. The first entry in tx_skbs
-* is an index into a chain of free entries.
-*/
-   struct sk_buff *tx_skbs[NET_TX_RING_SIZE+1];
-   struct sk_buff *rx_skbs[NET_RX_RING_SIZE];
-
-#define TX_MAX_TARGET min_t(int, NET_RX_RING_SIZE, 256)
-   grant_ref_t gref_tx_head;
-   grant_ref_t grant_tx_ref[NET_TX_RING_SIZE + 1];
-   grant_ref_t gref_rx_head;
-   grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
-
-   struct xenbus_device *xbdev;
-   int tx_ring_ref;
-   int rx_ring_ref;
-   u8 mac[ETH_ALEN];
-
-   unsigned long rx_pfn_array[NET_RX_RING_SIZE];
-   struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
-   struct mmu_update rx_mmu[NET_RX_RING_SIZE];
-};
-
 struct netfront_rx_info {
struct netif_rx_response rx;
struct netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX - 1];
@@ -278,6 +231,369 @@ static void xennet_sysfs_delif(struct ne
 #define xennet_sysfs_delif(dev) do { } while(0)
 #endif

+/*
+ * List of all netfront accelerator plugin modules available.  Each
+ * list entry is of type struct netfront_accelerator.
+ */
+static struct list_head accelerators_list;
+/*
+ * Lock to protect access to accelerators_list, and also used to
+ * protect the hooks_usecount field in struct netfront_accelerator
+ * against concurrent access
+ */
+static spinlock_t accelerators_lock;
+
+/*
+ * Safely remove the accelerator function hooks from a netfront state.
+ * Must only be called when there are no current users of the hooks.
+ */
+static void accelerator_remove_hooks(struct netfront_accelerator *accelerator)
+{
+	struct netfront_accel_vif_state *vif_state;
+
+	list_for_each_entry(vif_state,
+			    &accelerator->vif_states,
+			    link) {
+		/* Make sure there are no data path operations going on */
+		netif_poll_disable(vif_state->np->netdev);
+		netif_tx_lock_bh(vif_state->np->netdev);
+
+		/*
+		 * Remove the hooks, but leave the vif_state on the
+		 * accelerator's list as that signifies this vif is
+		 * interested in using that accelerator if it becomes
+		 * available again
+		 */
+		vif_state->hooks = NULL;
+
+		netif_tx_unlock_bh(vif_state->np->netdev);
+		netif_poll_enable(vif_state->np->netdev);
+	}
+
+	accelerator->hooks = NULL;
+
+	/* Signal that all users of hooks are done */
+	up(&accelerator->exit_semaphore);
+}
+
+
+/*
+ * Compare a frontend description string against an accelerator to see
+ * if they match.  Would ultimately be nice to replace the string with
+ * a unique numeric identifier for each accelerator.
+ */
+static int match_acceler