Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 06:51:54PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 12, 2009 at 08:26:39AM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
> > > On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
> > > > On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
> > > > > On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> > > > > > I think I understand what your comment above meant:  You don't need to
> > > > > > do synchronize_rcu() because you can flush the workqueue instead to
> > > > > > ensure that all readers have completed.
> > > > > 
> > > > > Yes.
> > > > > 
> > > > > >  But if that's true, to me, the
> > > > > > rcu_dereference itself is gratuitous,
> > > > > 
> > > > > Here's a thesis on what rcu_dereference does (besides documentation):
> > > > > 
> > > > > reader does this
> > > > > 
> > > > >   A: sock = n->sock
> > > > >   B: use *sock
> > > > > 
> > > > > Say writer does this:
> > > > > 
> > > > >   C: newsock = allocate socket
> > > > >   D: initialize(newsock)
> > > > >   E: n->sock = newsock
> > > > >   F: flush
> > > > > 
> > > > > 
> > > > > On Alpha, reads could be reordered.  So, on SMP, command A could get
> > > > > data from point F, and command B from point D (uninitialized, from
> > > > > cache).  IOW, you get a fresh pointer but stale data.
> > > > > So we need to stick a barrier in there.
> > > > > 
> > > > > > and that pointer is *not* actually
> > > > > > RCU protected (nor does it need to be).
> > > > > 
> > > > > Heh, if readers are lockless and writer does init/update/sync,
> > > > > this to me spells rcu.
> > > > 
> > > > If you are using call_rcu(), synchronize_rcu(), or one of the
> > > > similar primitives, then you absolutely need rcu_read_lock() and
> > > > rcu_read_unlock(), or one of the similar pairs of primitives.
> > > 
> > > Right. I don't use any of these though.
> > > 
> > > > If you -don't- use rcu_read_lock(), then you are pretty much restricted
> > > > to adding data, but never removing it.
> > > > 
> > > > Make sense?  ;-)
> > > 
> > > Since I only access data from a workqueue, I replaced synchronize_rcu
> > > with workqueue flush. That's why I don't need rcu_read_lock.
> > 
> > Well, you -do- need -something- that takes on the role of rcu_read_lock(),
> > and in your case you in fact actually do.  Your equivalent of
> > rcu_read_lock() is the beginning of execution of a workqueue item, and
> > the equivalent of rcu_read_unlock() is the end of execution of that same
> > workqueue item.  Implicit, but no less real.
> 
> Well put. I'll add this to the comments in my code.

Very good, thank you!!!

> > If a couple more uses like this show up, I might need to add this to
> > Documentation/RCU.  ;-)

And I idly wonder if this approach could replace SRCU.  Probably not
for protecting the CPU-hotplug notifier chains, but worth some thought.

Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 08:26:39AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
> > On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
> > > On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
> > > > On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> > > > > I think I understand what your comment above meant:  You don't need to
> > > > > do synchronize_rcu() because you can flush the workqueue instead to
> > > > > ensure that all readers have completed.
> > > > 
> > > > Yes.
> > > > 
> > > > >  But if that's true, to me, the
> > > > > rcu_dereference itself is gratuitous,
> > > > 
> > > > Here's a thesis on what rcu_dereference does (besides documentation):
> > > > 
> > > > reader does this
> > > > 
> > > > A: sock = n->sock
> > > > B: use *sock
> > > > 
> > > > Say writer does this:
> > > > 
> > > > C: newsock = allocate socket
> > > > D: initialize(newsock)
> > > > E: n->sock = newsock
> > > > F: flush
> > > > 
> > > > 
> > > > On Alpha, reads could be reordered.  So, on SMP, command A could get
> > > > data from point F, and command B from point D (uninitialized, from
> > > > cache).  IOW, you get a fresh pointer but stale data.
> > > > So we need to stick a barrier in there.
> > > > 
> > > > > and that pointer is *not* actually
> > > > > RCU protected (nor does it need to be).
> > > > 
> > > > Heh, if readers are lockless and writer does init/update/sync,
> > > > this to me spells rcu.
> > > 
> > > If you are using call_rcu(), synchronize_rcu(), or one of the
> > > similar primitives, then you absolutely need rcu_read_lock() and
> > > rcu_read_unlock(), or one of the similar pairs of primitives.
> > 
> > Right. I don't use any of these though.
> > 
> > > If you -don't- use rcu_read_lock(), then you are pretty much restricted
> > > to adding data, but never removing it.
> > > 
> > > Make sense?  ;-)
> > 
> > Since I only access data from a workqueue, I replaced synchronize_rcu
> > with workqueue flush. That's why I don't need rcu_read_lock.
> 
> Well, you -do- need -something- that takes on the role of rcu_read_lock(),
> and in your case you in fact actually do.  Your equivalent of
> rcu_read_lock() is the beginning of execution of a workqueue item, and
> the equivalent of rcu_read_unlock() is the end of execution of that same
> workqueue item.  Implicit, but no less real.

Well put. I'll add this to the comments in my code.

> If a couple more uses like this show up, I might need to add this to
> Documentation/RCU.  ;-)
> 
>   Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> > I think I understand what your comment above meant:  You don't need to
> > do synchronize_rcu() because you can flush the workqueue instead to
> > ensure that all readers have completed.
> 
> Yes.
> 
> >  But if that's true, to me, the
> > rcu_dereference itself is gratuitous,
> 
> Here's a thesis on what rcu_dereference does (besides documentation):
> 
> reader does this
> 
>   A: sock = n->sock
>   B: use *sock
> 
> Say writer does this:
> 
>   C: newsock = allocate socket
>   D: initialize(newsock)
>   E: n->sock = newsock
>   F: flush
> 
> 
> On Alpha, reads could be reordered.  So, on SMP, command A could get
> data from point F, and command B from point D (uninitialized, from
> cache).  IOW, you get a fresh pointer but stale data.
> So we need to stick a barrier in there.
> 
> > and that pointer is *not* actually
> > RCU protected (nor does it need to be).
> 
> Heh, if readers are lockless and writer does init/update/sync,
> this to me spells rcu.

If you are using call_rcu(), synchronize_rcu(), or one of the
similar primitives, then you absolutely need rcu_read_lock() and
rcu_read_unlock(), or one of the similar pairs of primitives.

If you -don't- use rcu_read_lock(), then you are pretty much restricted
to adding data, but never removing it.

Make sense?  ;-)

Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
> > > On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> > > > I think I understand what your comment above meant:  You don't need to
> > > > do synchronize_rcu() because you can flush the workqueue instead to
> > > > ensure that all readers have completed.
> > > 
> > > Yes.
> > > 
> > > >  But if that's true, to me, the
> > > > rcu_dereference itself is gratuitous,
> > > 
> > > Here's a thesis on what rcu_dereference does (besides documentation):
> > > 
> > > reader does this
> > > 
> > >   A: sock = n->sock
> > >   B: use *sock
> > > 
> > > Say writer does this:
> > > 
> > >   C: newsock = allocate socket
> > >   D: initialize(newsock)
> > >   E: n->sock = newsock
> > >   F: flush
> > > 
> > > 
> > > On Alpha, reads could be reordered.  So, on SMP, command A could get
> > > data from point F, and command B from point D (uninitialized, from
> > > cache).  IOW, you get a fresh pointer but stale data.
> > > So we need to stick a barrier in there.
> > > 
> > > > and that pointer is *not* actually
> > > > RCU protected (nor does it need to be).
> > > 
> > > Heh, if readers are lockless and writer does init/update/sync,
> > > this to me spells rcu.
> > 
> > If you are using call_rcu(), synchronize_rcu(), or one of the
> > similar primitives, then you absolutely need rcu_read_lock() and
> > rcu_read_unlock(), or one of the similar pairs of primitives.
> 
> Right. I don't use any of these though.
> 
> > If you -don't- use rcu_read_lock(), then you are pretty much restricted
> > to adding data, but never removing it.
> > 
> > Make sense?  ;-)
> 
> Since I only access data from a workqueue, I replaced synchronize_rcu
> with workqueue flush. That's why I don't need rcu_read_lock.

Well, you -do- need -something- that takes on the role of rcu_read_lock(),
and in your case you in fact actually do.  Your equivalent of
rcu_read_lock() is the beginning of execution of a workqueue item, and
the equivalent of rcu_read_unlock() is the end of execution of that same
workqueue item.  Implicit, but no less real.

If a couple more uses like this show up, I might need to add this to
Documentation/RCU.  ;-)
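
For concreteness, a minimal sketch of the scheme in rough kernel C;
vhost_net, handle_tx, n->wq, and the use()/release() helpers below are
illustrative placeholders, not the actual patch code:

/* Reader: each work item is one implicit read-side critical section.
 * Its start plays the role of rcu_read_lock(), its end the role of
 * rcu_read_unlock(). */
static void handle_tx(struct work_struct *work)
{
        struct vhost_net *n = container_of(work, struct vhost_net, work);
        struct socket *sock = rcu_dereference(n->sock);

        if (sock)
                use(sock);        /* stand-in for the actual TX path */
}

/* Writer: flush_workqueue() waits for all queued work items to finish,
 * so it plays the role of synchronize_rcu() here. */
static void set_sock(struct vhost_net *n, struct socket *newsock)
{
        struct socket *old = n->sock;

        rcu_assign_pointer(n->sock, newsock);
        flush_workqueue(n->wq);   /* no work item can still see "old" */
        release(old);             /* stand-in for the actual cleanup */
}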

Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
> > On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> > > I think I understand what your comment above meant:  You don't need to
> > > do synchronize_rcu() because you can flush the workqueue instead to
> > > ensure that all readers have completed.
> > 
> > Yes.
> > 
> > >  But if that's true, to me, the
> > > rcu_dereference itself is gratuitous,
> > 
> > Here's a thesis on what rcu_dereference does (besides documentation):
> > 
> > reader does this
> > 
> > A: sock = n->sock
> > B: use *sock
> > 
> > Say writer does this:
> > 
> > C: newsock = allocate socket
> > D: initialize(newsock)
> > E: n->sock = newsock
> > F: flush
> > 
> > 
> > On Alpha, reads could be reordered.  So, on SMP, command A could get
> > data from point F, and command B from point D (uninitialized, from
> > cache).  IOW, you get a fresh pointer but stale data.
> > So we need to stick a barrier in there.
> > 
> > > and that pointer is *not* actually
> > > RCU protected (nor does it need to be).
> > 
> > Heh, if readers are lockless and writer does init/update/sync,
> > this to me spells rcu.
> 
> If you are using call_rcu(), synchronize_rcu(), or one of the
> similar primitives, then you absolutely need rcu_read_lock() and
> rcu_read_unlock(), or one of the similar pairs of primitives.

Right. I don't use any of these though.

> If you -don't- use rcu_read_lock(), then you are pretty much restricted
> to adding data, but never removing it.
> 
> Make sense?  ;-)
> 
>   Thanx, Paul

Since I only access data from a workqueue, I replaced synchronize_rcu
with workqueue flush. That's why I don't need rcu_read_lock.

-- 
MST


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 09:41:35AM -0400, Gregory Haskins wrote:
> Michael S. Tsirkin wrote:
> > On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> >> I think I understand what your comment above meant:  You don't need to
> >> do synchronize_rcu() because you can flush the workqueue instead to
> >> ensure that all readers have completed.
> > 
> > Yes.
> > 
> >>  But if that's true, to me, the
> >> rcu_dereference itself is gratuitous,
> > 
> > Here's a thesis on what rcu_dereference does (besides documentation):
> > 
> > reader does this
> > 
> > A: sock = n->sock
> > B: use *sock
> > 
> > Say writer does this:
> > 
> > C: newsock = allocate socket
> > D: initialize(newsock)
> > E: n->sock = newsock
> > F: flush
> > 
> > 
> > On Alpha, reads could be reordered.  So, on SMP, command A could get
> > data from point F, and command B from point D (uninitialized, from
> > cache).  IOW, you get a fresh pointer but stale data.
> > So we need to stick a barrier in there.
> 
> Yes, that is understood.  Perhaps you should just use a normal barrier,
> however.  (Or at least a comment that says "I am just using this for its
> barrier").
> 
> > 
> >> and that pointer is *not* actually
> >> RCU protected (nor does it need to be).
> > 
> > Heh, if readers are lockless and writer does init/update/sync,
> > this to me spells rcu.
> 
> More correctly: it "smells like" RCU, but it's not. ;)  It's RCU-like,
> but you are not really using the RCU facilities.  I think anyone who
> knows RCU and reads your code will likely be scratching their heads as well.
> 
> It's probably not a big deal, as I understand your code now.  Just a
> suggestion to help clarify it.
> 
> Regards,
> -Greg
> 

OK, I'll add some comments about that.
Thanks for the review!

-- 
MST


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
> On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
>> I think I understand what your comment above meant:  You don't need to
>> do synchronize_rcu() because you can flush the workqueue instead to
>> ensure that all readers have completed.
> 
> Yes.
> 
>>  But if that's true, to me, the
>> rcu_dereference itself is gratuitous,
> 
> Here's a thesis on what rcu_dereference does (besides documentation):
> 
> reader does this
> 
>   A: sock = n->sock
>   B: use *sock
> 
> Say writer does this:
> 
>   C: newsock = allocate socket
>   D: initialize(newsock)
>   E: n->sock = newsock
>   F: flush
> 
> 
> On Alpha, reads could be reordered.  So, on SMP, command A could get
> data from point F, and command B from point D (uninitialized, from
> cache).  IOW, you get a fresh pointer but stale data.
> So we need to stick a barrier in there.

Yes, that is understood.  Perhaps you should just use a normal barrier,
however.  (Or at least a comment that says "I am just using this for its
barrier").

> 
>> and that pointer is *not* actually
>> RCU protected (nor does it need to be).
> 
> Heh, if readers are lockless and writer does init/update/sync,
> this to me spells rcu.

More correctly: it "smells like" RCU, but it's not. ;)  It's RCU-like,
but you are not really using the RCU facilities.  I think anyone who
knows RCU and reads your code will likely be scratching their heads as well.

It's probably not a big deal, as I understand your code now.  Just a
suggestion to help clarify it.

Regards,
-Greg




Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
> I think I understand what your comment above meant:  You don't need to
> do synchronize_rcu() because you can flush the workqueue instead to
> ensure that all readers have completed.

Yes.

>  But if that's true, to me, the
> rcu_dereference itself is gratuitous,

Here's a thesis on what rcu_dereference does (besides documentation):

reader does this

A: sock = n->sock
B: use *sock

Say writer does this:

C: newsock = allocate socket
D: initialize(newsock)
E: n->sock = newsock
F: flush


On Alpha, reads could be reordered.  So, on SMP, command A could get
data from point F, and command B from point D (uninitialized, from
cache).  IOW, you get a fresh pointer but stale data.
So we need to stick a barrier in there.
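
In rough C, with placeholder helpers (allocate(), initialize(), use(),
flush()), the barrier pairing looks like this:

  /* Reader: rcu_dereference() adds the data-dependency barrier
   * (smp_read_barrier_depends() on Alpha) between the two loads. */
  A: sock = rcu_dereference(n->sock);     /* pointer load, with barrier */
  B: use(*sock);                          /* cannot see pre-D stale data */

  /* Writer: rcu_assign_pointer() adds the matching write barrier
   * (smp_wmb()) before publishing the pointer. */
  C: newsock = allocate();
  D: initialize(newsock);
  E: rcu_assign_pointer(n->sock, newsock);
  F: flush();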

> and that pointer is *not* actually
> RCU protected (nor does it need to be).

Heh, if readers are lockless and writer does init/update/sync,
this to me spells rcu.

-- 
MST


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
> On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
>> Michael S. Tsirkin wrote:
>>> What it is: vhost net is a character device that can be used to reduce
>>> the number of system calls involved in virtio networking.
>>> Existing virtio net code is used in the guest without modification.
>>>
>>> There's similarity with vringfd, with some differences and reduced scope:
>>> - uses eventfd for signalling
>>> - structures can be moved around in memory at any time (good for migration)
>>> - supports a memory table and not just an offset (needed for kvm)
>>>
>>> Common virtio-related code has been put in a separate file vhost.c and
>>> can be made into a separate module if/when more backends appear.  I used
>>> Rusty's lguest.c as the source for developing this part: this supplied
>>> me with witty comments I wouldn't be able to write myself.
>>>
>>> What it is not: vhost net is not a bus, and not a generic new system
>>> call. No assumptions are made on how the guest performs hypercalls.
>>> Userspace hypervisors are supported as well as kvm.
>>>
>>> How it works: Basically, we connect the virtio frontend (configured by
>>> userspace) to a backend. The backend could be a network device, or a
>>> tun-like device. In this version I only support a raw socket as a backend,
>>> which can be bound to e.g. an SR-IOV device, or to a macvlan device.  The
>>> backend is also configured by userspace, including vlan/mac, etc.
>>>
>>> Status:
>>> This works for me, and I haven't seen any crashes.
>>> I have not run any benchmarks yet; compared to userspace, I expect to
>>> see improved latency (as I save up to 4 system calls per packet) but not
>>> bandwidth/CPU (as TSO and interrupt mitigation are not supported).
>>>
>>> Features that I plan to look at in the future:
>>> - TSO
>>> - interrupt mitigation
>>> - zero copy
>> Only a quick review for now.  Will look closer later.
>>
>> (see inline)
>>
>>> Signed-off-by: Michael S. Tsirkin 
>>>
>>> v2
>>> ---
>>>  MAINTAINERS                |   10 +
>>>  arch/x86/kvm/Kconfig       |    1 +
>>>  drivers/Makefile           |    1 +
>>>  drivers/vhost/Kconfig      |   11 +
>>>  drivers/vhost/Makefile     |    2 +
>>>  drivers/vhost/net.c        |  411 +++
>>>  drivers/vhost/vhost.c      |  663 +++
>>>  drivers/vhost/vhost.h      |  108 +++
>>>  include/linux/Kbuild       |    1 +
>>>  include/linux/miscdevice.h |    1 +
>>>  include/linux/vhost.h      |  100 +++
>>>  11 files changed, 1309 insertions(+), 0 deletions(-)
>>>  create mode 100644 drivers/vhost/Kconfig
>>>  create mode 100644 drivers/vhost/Makefile
>>>  create mode 100644 drivers/vhost/net.c
>>>  create mode 100644 drivers/vhost/vhost.c
>>>  create mode 100644 drivers/vhost/vhost.h
>>>  create mode 100644 include/linux/vhost.h
>>>
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index ebc2691..eb0c1da 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -6312,6 +6312,16 @@ S:   Maintained
>>>  F: Documentation/filesystems/vfat.txt
>>>  F: fs/fat/
>>>  
>>> +VIRTIO HOST (VHOST)
>>> +P: Michael S. Tsirkin
>>> +M: m...@redhat.com
>>> +L: k...@vger.kernel.org
>>> +L: virtualizat...@lists.osdl.org
>>> +L: net...@vger.kernel.org
>>> +S: Maintained
>>> +F: drivers/vhost/
>>> +F: include/linux/vhost.h
>>> +
>>>  VIA RHINE NETWORK DRIVER
>>>  P: Roger Luethi
>>>  M: r...@hellgate.ch
>>> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
>>> index b84e571..94f44d9 100644
>>> --- a/arch/x86/kvm/Kconfig
>>> +++ b/arch/x86/kvm/Kconfig
>>> @@ -64,6 +64,7 @@ config KVM_AMD
>>>  
>>> # OK, it's a little counter-intuitive to do this, but it puts it neatly under
>>>  # the virtualization menu.
>>> +source drivers/vhost/Kconfig
>>>  source drivers/lguest/Kconfig
>>>  source drivers/virtio/Kconfig
>>>  
>>> diff --git a/drivers/Makefile b/drivers/Makefile
>>> index bc4205d..1551ae1 100644
>>> --- a/drivers/Makefile
>>> +++ b/drivers/Makefile
>>> @@ -105,6 +105,7 @@ obj-$(CONFIG_HID)   += hid/
>>>  obj-$(CONFIG_PPC_PS3)  += ps3/
>>>  obj-$(CONFIG_OF)   += of/
>>>  obj-$(CONFIG_SSB)  += ssb/
>>> +obj-$(CONFIG_VHOST_NET)   += vhost/
>>>  obj-$(CONFIG_VIRTIO)   += virtio/
>>>  obj-$(CONFIG_VLYNQ)+= vlynq/
>>>  obj-$(CONFIG_STAGING)  += staging/
>>> diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
>>> new file mode 100644
>>> index 000..d955406
>>> --- /dev/null
>>> +++ b/drivers/vhost/Kconfig
>>> @@ -0,0 +1,11 @@
>>> +config VHOST_NET
>>> +   tristate "Host kernel accelerator for virtio net"
>>> +   depends on NET && EVENTFD
>>> +   ---help---
>>> + This kernel module can be loaded in host kernel to accelerate
>>> + guest networking with virtio_net. Not to be confused with virtio_net
>>> + module itself which needs to be loaded in guest kernel.
>>> +
>>> + To compile this driver as a module, choose M here: the module will
>>> + be called vhost_net.

Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
> > diff --git a/include/linux/miscdevice.h b/include/linux/miscdevice.h
> > index 0521177..781a8bb 100644
> > --- a/include/linux/miscdevice.h
> > +++ b/include/linux/miscdevice.h
> > @@ -30,6 +30,7 @@
> >  #define HPET_MINOR 228
> >  #define FUSE_MINOR 229
> >  #define KVM_MINOR  232
> > +#define VHOST_NET_MINOR   233
> 
> Would recommend using DYNAMIC-MINOR.

Good idea. Thanks!
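
For reference, roughly what the dynamic-minor version looks like;
vhost_net_fops here stands in for the patch's file_operations:

#include <linux/miscdevice.h>
#include <linux/module.h>

static struct miscdevice vhost_net_misc = {
        .minor = MISC_DYNAMIC_MINOR,  /* let the kernel pick a free minor */
        .name  = "vhost-net",
        .fops  = &vhost_net_fops,
};

static int __init vhost_net_init(void)
{
        return misc_register(&vhost_net_misc);
}
module_init(vhost_net_init);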

-- 
MST


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
> Michael S. Tsirkin wrote:
> > What it is: vhost net is a character device that can be used to reduce
> > the number of system calls involved in virtio networking.
> > Existing virtio net code is used in the guest without modification.
> > 
> > There's similarity with vringfd, with some differences and reduced scope:
> > - uses eventfd for signalling
> > - structures can be moved around in memory at any time (good for migration)
> > - supports a memory table and not just an offset (needed for kvm)
> > 
> > Common virtio-related code has been put in a separate file vhost.c and
> > can be made into a separate module if/when more backends appear.  I used
> > Rusty's lguest.c as the source for developing this part: this supplied
> > me with witty comments I wouldn't be able to write myself.
> > 
> > What it is not: vhost net is not a bus, and not a generic new system
> > call. No assumptions are made on how the guest performs hypercalls.
> > Userspace hypervisors are supported as well as kvm.
> > 
> > How it works: Basically, we connect the virtio frontend (configured by
> > userspace) to a backend. The backend could be a network device, or a
> > tun-like device. In this version I only support a raw socket as a backend,
> > which can be bound to e.g. an SR-IOV device, or to a macvlan device.  The
> > backend is also configured by userspace, including vlan/mac, etc.
> > 
> > Status:
> > This works for me, and I haven't seen any crashes.
> > I have not run any benchmarks yet; compared to userspace, I expect to
> > see improved latency (as I save up to 4 system calls per packet) but not
> > bandwidth/CPU (as TSO and interrupt mitigation are not supported).
> > 
> > Features that I plan to look at in the future:
> > - TSO
> > - interrupt mitigation
> > - zero copy
> 
> Only a quick review for now.  Will look closer later.
> 
> (see inline)
> 
> > 
> > Signed-off-by: Michael S. Tsirkin 
> > 
> > v2
> > ---
> >  MAINTAINERS                |   10 +
> >  arch/x86/kvm/Kconfig       |    1 +
> >  drivers/Makefile           |    1 +
> >  drivers/vhost/Kconfig      |   11 +
> >  drivers/vhost/Makefile     |    2 +
> >  drivers/vhost/net.c        |  411 +++
> >  drivers/vhost/vhost.c      |  663 +++
> >  drivers/vhost/vhost.h      |  108 +++
> >  include/linux/Kbuild       |    1 +
> >  include/linux/miscdevice.h |    1 +
> >  include/linux/vhost.h      |  100 +++
> >  11 files changed, 1309 insertions(+), 0 deletions(-)
> >  create mode 100644 drivers/vhost/Kconfig
> >  create mode 100644 drivers/vhost/Makefile
> >  create mode 100644 drivers/vhost/net.c
> >  create mode 100644 drivers/vhost/vhost.c
> >  create mode 100644 drivers/vhost/vhost.h
> >  create mode 100644 include/linux/vhost.h
> > 
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index ebc2691..eb0c1da 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -6312,6 +6312,16 @@ S:   Maintained
> >  F: Documentation/filesystems/vfat.txt
> >  F: fs/fat/
> >  
> > +VIRTIO HOST (VHOST)
> > +P: Michael S. Tsirkin
> > +M: m...@redhat.com
> > +L: k...@vger.kernel.org
> > +L: virtualizat...@lists.osdl.org
> > +L: net...@vger.kernel.org
> > +S: Maintained
> > +F: drivers/vhost/
> > +F: include/linux/vhost.h
> > +
> >  VIA RHINE NETWORK DRIVER
> >  P: Roger Luethi
> >  M: r...@hellgate.ch
> > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > index b84e571..94f44d9 100644
> > --- a/arch/x86/kvm/Kconfig
> > +++ b/arch/x86/kvm/Kconfig
> > @@ -64,6 +64,7 @@ config KVM_AMD
> >  
> > # OK, it's a little counter-intuitive to do this, but it puts it neatly under
> >  # the virtualization menu.
> > +source drivers/vhost/Kconfig
> >  source drivers/lguest/Kconfig
> >  source drivers/virtio/Kconfig
> >  
> > diff --git a/drivers/Makefile b/drivers/Makefile
> > index bc4205d..1551ae1 100644
> > --- a/drivers/Makefile
> > +++ b/drivers/Makefile
> > @@ -105,6 +105,7 @@ obj-$(CONFIG_HID)   += hid/
> >  obj-$(CONFIG_PPC_PS3)  += ps3/
> >  obj-$(CONFIG_OF)   += of/
> >  obj-$(CONFIG_SSB)  += ssb/
> > +obj-$(CONFIG_VHOST_NET)   += vhost/
> >  obj-$(CONFIG_VIRTIO)   += virtio/
> >  obj-$(CONFIG_VLYNQ)+= vlynq/
> >  obj-$(CONFIG_STAGING)  += staging/
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > new file mode 100644
> > index 000..d955406
> > --- /dev/null
> > +++ b/drivers/vhost/Kconfig
> > @@ -0,0 +1,11 @@
> > +config VHOST_NET
> > +   tristate "Host kernel accelerator for virtio net"
> > +   depends on NET && EVENTFD
> > +   ---help---
> > + This kernel module can be loaded in host kernel to accelerate
> > + guest networking with virtio_net. Not to be confused with virtio_net
> > + module itself which needs to be loaded in guest kernel.
> > +
> > + To compile this driver as a module, choose M here: the module will
> > + be called vhost_net.

Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-11 Thread Gregory Haskins
Michael S. Tsirkin wrote:
> What it is: vhost net is a character device that can be used to reduce
> the number of system calls involved in virtio networking.
> Existing virtio net code is used in the guest without modification.
> 
> There's similarity with vringfd, with some differences and reduced scope:
> - uses eventfd for signalling
> - structures can be moved around in memory at any time (good for migration)
> - supports a memory table and not just an offset (needed for kvm)
> 
> Common virtio-related code has been put in a separate file vhost.c and
> can be made into a separate module if/when more backends appear.  I used
> Rusty's lguest.c as the source for developing this part: this supplied
> me with witty comments I wouldn't be able to write myself.
> 
> What it is not: vhost net is not a bus, and not a generic new system
> call. No assumptions are made on how the guest performs hypercalls.
> Userspace hypervisors are supported as well as kvm.
> 
> How it works: Basically, we connect the virtio frontend (configured by
> userspace) to a backend. The backend could be a network device, or a
> tun-like device. In this version I only support a raw socket as a backend,
> which can be bound to e.g. an SR-IOV device, or to a macvlan device.  The
> backend is also configured by userspace, including vlan/mac, etc.
> 
> Status:
> This works for me, and I haven't seen any crashes.
> I have not run any benchmarks yet; compared to userspace, I expect to
> see improved latency (as I save up to 4 system calls per packet) but not
> bandwidth/CPU (as TSO and interrupt mitigation are not supported).
> 
> Features that I plan to look at in the future:
> - TSO
> - interrupt mitigation
> - zero copy

Only a quick review for now.  Will look closer later.

(see inline)

> 
> Signed-off-by: Michael S. Tsirkin 
> 
> v2
> ---
>  MAINTAINERS                |   10 +
>  arch/x86/kvm/Kconfig       |    1 +
>  drivers/Makefile           |    1 +
>  drivers/vhost/Kconfig      |   11 +
>  drivers/vhost/Makefile     |    2 +
>  drivers/vhost/net.c        |  411 +++
>  drivers/vhost/vhost.c      |  663 +++
>  drivers/vhost/vhost.h      |  108 +++
>  include/linux/Kbuild       |    1 +
>  include/linux/miscdevice.h |    1 +
>  include/linux/vhost.h      |  100 +++
>  11 files changed, 1309 insertions(+), 0 deletions(-)
>  create mode 100644 drivers/vhost/Kconfig
>  create mode 100644 drivers/vhost/Makefile
>  create mode 100644 drivers/vhost/net.c
>  create mode 100644 drivers/vhost/vhost.c
>  create mode 100644 drivers/vhost/vhost.h
>  create mode 100644 include/linux/vhost.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ebc2691..eb0c1da 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6312,6 +6312,16 @@ S: Maintained
>  F:   Documentation/filesystems/vfat.txt
>  F:   fs/fat/
>  
> +VIRTIO HOST (VHOST)
> +P:   Michael S. Tsirkin
> +M:   m...@redhat.com
> +L:   k...@vger.kernel.org
> +L:   virtualizat...@lists.osdl.org
> +L:   net...@vger.kernel.org
> +S:   Maintained
> +F:   drivers/vhost/
> +F:   include/linux/vhost.h
> +
>  VIA RHINE NETWORK DRIVER
>  P:   Roger Luethi
>  M:   r...@hellgate.ch
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index b84e571..94f44d9 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -64,6 +64,7 @@ config KVM_AMD
>  
>  # OK, it's a little counter-intuitive to do this, but it puts it neatly under
>  # the virtualization menu.
> +source drivers/vhost/Kconfig
>  source drivers/lguest/Kconfig
>  source drivers/virtio/Kconfig
>  
> diff --git a/drivers/Makefile b/drivers/Makefile
> index bc4205d..1551ae1 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -105,6 +105,7 @@ obj-$(CONFIG_HID) += hid/
>  obj-$(CONFIG_PPC_PS3)+= ps3/
>  obj-$(CONFIG_OF) += of/
>  obj-$(CONFIG_SSB)+= ssb/
> +obj-$(CONFIG_VHOST_NET)  += vhost/
>  obj-$(CONFIG_VIRTIO) += virtio/
>  obj-$(CONFIG_VLYNQ)  += vlynq/
>  obj-$(CONFIG_STAGING)+= staging/
> diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> new file mode 100644
> index 000..d955406
> --- /dev/null
> +++ b/drivers/vhost/Kconfig
> @@ -0,0 +1,11 @@
> +config VHOST_NET
> + tristate "Host kernel accelerator for virtio net"
> + depends on NET && EVENTFD
> + ---help---
> +   This kernel module can be loaded in host kernel to accelerate
> +   guest networking with virtio_net. Not to be confused with virtio_net
> +   module itself which needs to be loaded in guest kernel.
> +
> +   To compile this driver as a module, choose M here: the module will
> +   be called vhost_net.
> +
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> new file mode 100644
> index 000..72dd020
> --- /dev/null
> +++ b/drivers/vhost/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_VHOST_NET) += vhost_net.o
> +vhost_net-y := vhost.o net.o
> diff --git a/drive

[PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-11 Thread Michael S. Tsirkin
What it is: vhost net is a character device that can be used to reduce
the number of system calls involved in virtio networking.
Existing virtio net code is used in the guest without modification.

There's similarity with vringfd, with some differences and reduced scope:
- uses eventfd for signalling
- structures can be moved around in memory at any time (good for migration)
- supports a memory table and not just an offset (needed for kvm)

Common virtio-related code has been put in a separate file vhost.c and
can be made into a separate module if/when more backends appear.  I used
Rusty's lguest.c as the source for developing this part: this supplied
me with witty comments I wouldn't be able to write myself.

What it is not: vhost net is not a bus, and not a generic new system
call. No assumptions are made on how the guest performs hypercalls.
Userspace hypervisors are supported as well as kvm.

How it works: Basically, we connect the virtio frontend (configured by
userspace) to a backend. The backend could be a network device, or a
tun-like device. In this version I only support a raw socket as a backend,
which can be bound to e.g. an SR-IOV device, or to a macvlan device.  The
backend is also configured by userspace, including vlan/mac, etc.
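
To make the flow concrete, here is a rough sketch of the userspace side
(ioctl names are illustrative, not necessarily this patch's exact
interface; see include/linux/vhost.h in the diff below for the real one):

#include <fcntl.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Rough sketch only: error handling omitted, one vring shown. */
void setup_vhost_net(struct vhost_memory *mem, int raw_socket_fd)
{
        int vhost = open("/dev/vhost-net", O_RDWR);
        struct vhost_vring_file kick = { .index = 0, .fd = eventfd(0, 0) };
        struct vhost_vring_file call = { .index = 0, .fd = eventfd(0, 0) };
        struct vhost_vring_file backend = { .index = 0, .fd = raw_socket_fd };

        ioctl(vhost, VHOST_SET_OWNER);                 /* bind device to this process */
        ioctl(vhost, VHOST_SET_MEM_TABLE, mem);        /* describe guest memory layout */
        ioctl(vhost, VHOST_SET_VRING_KICK, &kick);     /* guest->host notification eventfd */
        ioctl(vhost, VHOST_SET_VRING_CALL, &call);     /* host->guest signalling eventfd */
        ioctl(vhost, VHOST_NET_SET_BACKEND, &backend); /* attach the raw socket backend */
}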

Status:
This works for me, and I haven't seen any crashes.
I have not run any benchmarks yet; compared to userspace, I expect to
see improved latency (as I save up to 4 system calls per packet) but not
bandwidth/CPU (as TSO and interrupt mitigation are not supported).

Features that I plan to look at in the future:
- TSO
- interrupt mitigation
- zero copy

Signed-off-by: Michael S. Tsirkin 

v2
---
 MAINTAINERS                |   10 +
 arch/x86/kvm/Kconfig       |    1 +
 drivers/Makefile           |    1 +
 drivers/vhost/Kconfig      |   11 +
 drivers/vhost/Makefile     |    2 +
 drivers/vhost/net.c        |  411 +++
 drivers/vhost/vhost.c      |  663 +++
 drivers/vhost/vhost.h      |  108 +++
 include/linux/Kbuild       |    1 +
 include/linux/miscdevice.h |    1 +
 include/linux/vhost.h      |  100 +++
 11 files changed, 1309 insertions(+), 0 deletions(-)
 create mode 100644 drivers/vhost/Kconfig
 create mode 100644 drivers/vhost/Makefile
 create mode 100644 drivers/vhost/net.c
 create mode 100644 drivers/vhost/vhost.c
 create mode 100644 drivers/vhost/vhost.h
 create mode 100644 include/linux/vhost.h

diff --git a/MAINTAINERS b/MAINTAINERS
index ebc2691..eb0c1da 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6312,6 +6312,16 @@ S:   Maintained
 F: Documentation/filesystems/vfat.txt
 F: fs/fat/
 
+VIRTIO HOST (VHOST)
+P: Michael S. Tsirkin
+M: m...@redhat.com
+L: k...@vger.kernel.org
+L: virtualizat...@lists.osdl.org
+L: net...@vger.kernel.org
+S: Maintained
+F: drivers/vhost/
+F: include/linux/vhost.h
+
 VIA RHINE NETWORK DRIVER
 P: Roger Luethi
 M: r...@hellgate.ch
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index b84e571..94f44d9 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -64,6 +64,7 @@ config KVM_AMD
 
 # OK, it's a little counter-intuitive to do this, but it puts it neatly under
 # the virtualization menu.
+source drivers/vhost/Kconfig
 source drivers/lguest/Kconfig
 source drivers/virtio/Kconfig
 
diff --git a/drivers/Makefile b/drivers/Makefile
index bc4205d..1551ae1 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -105,6 +105,7 @@ obj-$(CONFIG_HID)   += hid/
 obj-$(CONFIG_PPC_PS3)  += ps3/
 obj-$(CONFIG_OF)   += of/
 obj-$(CONFIG_SSB)  += ssb/
+obj-$(CONFIG_VHOST_NET)   += vhost/
 obj-$(CONFIG_VIRTIO)   += virtio/
 obj-$(CONFIG_VLYNQ)+= vlynq/
 obj-$(CONFIG_STAGING)  += staging/
diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
new file mode 100644
index 000..d955406
--- /dev/null
+++ b/drivers/vhost/Kconfig
@@ -0,0 +1,11 @@
+config VHOST_NET
+   tristate "Host kernel accelerator for virtio net"
+   depends on NET && EVENTFD
+   ---help---
+ This kernel module can be loaded in host kernel to accelerate
+ guest networking with virtio_net. Not to be confused with virtio_net
+ module itself which needs to be loaded in guest kernel.
+
+ To compile this driver as a module, choose M here: the module will
+ be called vhost_net.
+
diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
new file mode 100644
index 000..72dd020
--- /dev/null
+++ b/drivers/vhost/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_VHOST_NET) += vhost_net.o
+vhost_net-y := vhost.o net.o
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
new file mode 100644
index 000..a04ffd0
--- /dev/null
+++ b/drivers/vhost/net.c
@@ -0,0 +1,411 @@
+/* Copyright (C) 2009 Red Hat, Inc.
+ * Author: Michael S. Tsirkin 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ *
+ * virtio-net server in host