Re: [ovs-dev] Flow Control and Port Mirroring

2010-10-29 Thread Simon Horman
[ CCed VHOST contacts ]

On Thu, Oct 28, 2010 at 01:22:02PM -0700, Jesse Gross wrote:
> On Thu, Oct 28, 2010 at 4:54 AM, Simon Horman  wrote:
> > My reasoning is that in the non-mirroring case the guest is
> > limited by the external interface through which the packets
> > eventually flow - that is, 1Gbit/s. But in the mirrored case either
> > there is no flow control or the flow control is acting on the
> > rate of dummy0, which is essentially infinite.
> >
> > Before investigating this any further I wanted to ask if
> > this behaviour is intentional.
> 
> It's not intentional but I can take a guess at what is happening.
> 
> When we send the packet to a mirror, the skb is cloned but only the
> original skb is charged to the sender.  If the original packet is
> delivered to localhost then it will be freed quickly and no longer
> accounted for, despite the fact that the "real" packet is still
> sitting in the transmit queue on the NIC.  The UDP stack will then
> send the next packet, limited only by the speed of the CPU.

That would explain what I have observed.

> Normally, this would be tracked by accounting for the memory charged
> to the socket.  However, I know that Xen tracks whether the actual
> pages of memory have been freed, which should avoid this problem since
> the memory won't be released until the last packet has been sent.  I
> don't know what KVM virtio does, but I'm guessing that it's similar to
> the former, since this problem is occurring.

I am also familiar with how Xen tracks pages but less sure of the
virtio side of things.

> While it would be easy to charge the socket for all clones, I also
> want to be careful about over accounting of the same data, leading to
> a very small effective socket buffer.

Agreed, we don't want to see over-charging.
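
For discussion's sake, here is a rough sketch (mine, not a proposed
patch) of what charging each mirror clone back to the originating
socket could look like, assuming the usual skb_clone()/skb_set_owner_w()
semantics:

#include <linux/skbuff.h>
#include <net/sock.h>

/* Illustrative only: charge a mirror clone to the sending socket so
 * that the sender sees backpressure until the clone itself is freed,
 * not just the original skb.  Assumes the original skb still carries
 * its owning socket in skb->sk.
 */
static struct sk_buff *mirror_clone_charged(struct sk_buff *skb, gfp_t gfp)
{
	struct sk_buff *clone = skb_clone(skb, gfp);

	if (clone && skb->sk)
		/* Adds clone->truesize to the socket's write allocation
		 * and sets the destructor to sock_wfree(). */
		skb_set_owner_w(clone, skb->sk);

	return clone;
}

As you say, this counts skb->truesize once per mirror, so the effective
send buffer shrinks with the number of mirror ports.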



Re: [PATCH] xen: xenfs: privcmd: check put_user() return code

2010-10-29 Thread Jeremy Fitzhardinge
 On 10/29/2010 10:44 AM, Ian Campbell wrote:
> On Fri, 2010-10-29 at 18:18 +0100, Jeremy Fitzhardinge wrote:
>> On 10/28/2010 04:39 AM, Vasiliy Kulikov wrote:
>>> put_user() may fail.  In this case propagate error code from
>>> privcmd_ioctl_mmap_batch().
>> Thanks for looking at this.  I'm in two minds about this; the existing
>> logic is such that these put_users can only fail if something else has
>> already failed and it's returning an error.  I guess it would be useful
>> to get an EFAULT if you've got a problem writing back the results.
>>
>> IanC, any opinion?
> Not a strong one.
>
> Perhaps what we really want in this case is for traverse_pages to return
> the total number of callback failures it encountered rather than
> aborting after the first failure?
>
> On the other hand, you are correct that gather_array() has already
> touched all the pages which we are going to be touching here, so how
> likely is a new failure at this point anyway?

I could think of two cases: the array is mapped RO, so only the
writeback fails, or someone changes the mapping under our feet from
another thread.
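
(Quick user-space illustration of that first case, in case it is useful
to anyone: the kernel can read from a read-only mapping without trouble,
which is why gather_array() succeeds, but a syscall that has to write
back into the same mapping fails with EFAULT, which is what put_user()
would hit.  Standalone demo, nothing xen-specific:)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	void *buf = mmap(NULL, 4096, PROT_READ,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int fd = open("/dev/zero", O_RDONLY);

	if (buf == MAP_FAILED || fd < 0)
		return 1;

	/* The kernel reading our read-only buffer: fine
	 * (writes 16 zero bytes to stdout). */
	if (write(STDOUT_FILENO, buf, 16) < 0)
		perror("write from RO mapping");

	/* The kernel writing into our read-only buffer: fails,
	 * errno is EFAULT. */
	if (read(fd, buf, 16) < 0)
		perror("read into RO mapping");

	close(fd);
	munmap(buf, 4096);
	return 0;
}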

J

> Ian.
>
>> Thanks,
>> J
>>
>>> Signed-off-by: Vasiliy Kulikov 
>>> ---
>>>  Compile tested.
>>>
>>>  drivers/xen/xenfs/privcmd.c |8 ++--
>>>  1 files changed, 2 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
>>> index f80be7f..2eb04c8 100644
>>> --- a/drivers/xen/xenfs/privcmd.c
>>> +++ b/drivers/xen/xenfs/privcmd.c
>>> @@ -266,9 +266,7 @@ static int mmap_return_errors(void *data, void *state)
>>> xen_pfn_t *mfnp = data;
>>> struct mmap_batch_state *st = state;
>>>  
>>> -   put_user(*mfnp, st->user++);
>>> -
>>> -   return 0;
>>> +   return put_user(*mfnp, st->user++);
>>>  }
>>>  
>>>  static struct vm_operations_struct privcmd_vm_ops;
>>> @@ -323,10 +321,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>>> up_write(&mm->mmap_sem);
>>>  
>>> if (state.err > 0) {
>>> -   ret = 0;
>>> -
>>> state.user = m.arr;
>>> -   traverse_pages(m.num, sizeof(xen_pfn_t),
>>> +   ret = traverse_pages(m.num, sizeof(xen_pfn_t),
>>>&pagelist,
>>>mmap_return_errors, &state);
>>> }
>



Re: [PATCH] xen: xenfs: privcmd: check put_user() return code

2010-10-29 Thread Jeremy Fitzhardinge
 On 10/28/2010 04:39 AM, Vasiliy Kulikov wrote:
> put_user() may fail.  In this case propagate error code from
> privcmd_ioctl_mmap_batch().

Thanks for looking at this.  I'm in two minds about this; the existing
logic is such that these put_users can only fail if something else has
already failed and it's returning an error.  I guess it would be useful
to get an EFAULT if you've got a problem writing back the results.

IanC, any opinion?

Thanks,
J

> Signed-off-by: Vasiliy Kulikov 
> ---
>  Compile tested.
>
>  drivers/xen/xenfs/privcmd.c |8 ++--
>  1 files changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
> index f80be7f..2eb04c8 100644
> --- a/drivers/xen/xenfs/privcmd.c
> +++ b/drivers/xen/xenfs/privcmd.c
> @@ -266,9 +266,7 @@ static int mmap_return_errors(void *data, void *state)
>   xen_pfn_t *mfnp = data;
>   struct mmap_batch_state *st = state;
>  
> - put_user(*mfnp, st->user++);
> -
> - return 0;
> + return put_user(*mfnp, st->user++);
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops;
> @@ -323,10 +321,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>   up_write(&mm->mmap_sem);
>  
>   if (state.err > 0) {
> - ret = 0;
> -
>   state.user = m.arr;
> - traverse_pages(m.num, sizeof(xen_pfn_t),
> + ret = traverse_pages(m.num, sizeof(xen_pfn_t),
>  &pagelist,
>  mmap_return_errors, &state);
>   }



[PATCH] xen: xenfs: privcmd: check put_user() return code

2010-10-29 Thread Vasiliy Kulikov
put_user() may fail.  In this case propagate error code from
privcmd_ioctl_mmap_batch().

Signed-off-by: Vasiliy Kulikov 
---
 Compile tested.

 drivers/xen/xenfs/privcmd.c |8 ++--
 1 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
index f80be7f..2eb04c8 100644
--- a/drivers/xen/xenfs/privcmd.c
+++ b/drivers/xen/xenfs/privcmd.c
@@ -266,9 +266,7 @@ static int mmap_return_errors(void *data, void *state)
xen_pfn_t *mfnp = data;
struct mmap_batch_state *st = state;
 
-   put_user(*mfnp, st->user++);
-
-   return 0;
+   return put_user(*mfnp, st->user++);
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
@@ -323,10 +321,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
up_write(&mm->mmap_sem);
 
if (state.err > 0) {
-   ret = 0;
-
state.user = m.arr;
-   traverse_pages(m.num, sizeof(xen_pfn_t),
+   ret = traverse_pages(m.num, sizeof(xen_pfn_t),
   &pagelist,
   mmap_return_errors, &state);
}
-- 
1.7.0.4



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-29 Thread Ed Tomlinson
On Friday 29 October 2010 07:18:00 Rusty Russell wrote:
> On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
> > On 19/10/10 11:39, Avi Kivity wrote:
> > > On 10/19/2010 12:31 PM, Ian Molton wrote:
> > 
> > >>> 2. should start with a patch to the virtio-pci spec to document what
> > >>> you're doing
> > >>
> > >> Where can I find that spec?
> > >
> > > http://ozlabs.org/~rusty/virtio-spec/
> > 
> > Ok, but I'm not patching that until there's been some review.
> 
> Fair enough; it's a bit of a PITA to patch, so it makes sense to get the
> details nailed down first.
> 
> > There are links to the associated qemu and guest OS changes in my 
> > original email.
> > 
> > >> It doesn't, at present... It could be changed fairly easily without
> > >> breaking anything if that happens though.
> > >
> > > The hypervisor and the guest can be changed independently. The driver
> > > should be coded so that it doesn't depend on hypervisor implementation
> > > details.
> > 
> > Fixed - updated patch tested and attached.
> 
> OK. FWIW, I think this is an awesome idea.  I understand others are skeptical,
> but this seems simple and if it works and you're happy to maintain it I'm
> happy to let you do it :)

From a kvm user's perspective this is a GREAT idea.  Looks like Ian and
friends have thought about better solutions down the line and have made
their code easy to supersede.  If this is really the case I vote to get
this in qemu/kvm asap - software opengl can be a real pain.

Thanks
Ed Tomlinson


Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-29 Thread Anthony Liguori
On 10/29/2010 06:18 AM, Rusty Russell wrote:
>> Fixed - updated patch tested and attached.
>>  
> OK. FWIW, I think this is an awesome idea.

Paravirtual OpenGL or the actual proposed implementation?  Have you 
looked at the actual code?

If I understand correctly, the implementation is an RPC of opengl 
commands across virtio that are then rendered on the host to an 
offscreen buffer.  The buffer is then sent back to the guest in rendered 
form.
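
To make that concrete, here is a purely hypothetical framing (not the
wire format the patch actually uses) of what such a command stream
entry might look like: the guest marshals each GL call into a tagged
command on a virtqueue, and the host unmarshals it, replays it against
an offscreen surface, and hands the rendered pixels back.

#include <stdint.h>

/* Hypothetical GL-over-virtio command header.  NOT the proposed
 * patch's format; it only illustrates the "RPC the GL calls, render
 * host-side, return the pixels" model being discussed.
 */
struct virtio_gl_cmd_hdr {
	uint32_t opcode;	/* which GL entry point to replay on the host */
	uint32_t payload_len;	/* bytes of marshalled arguments that follow */
};
/* The host reads payload_len bytes of arguments after the header,
 * issues the real GL call against an offscreen buffer, and returns
 * the rendered result in a guest-supplied response buffer. */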

But it's possible to mess around with other guests because the opengl 
commands are not scrubbed. There have been other proposals in the past 
that include a mini JIT that would translate opengl commands into a safe 
form.

Exposing just an opengl RPC transport doesn't seem that useful either.  
It ought to be part of a VGA card in order for it to be integrated with 
the rest of the graphics system without a round trip.

The alternative proposal is Spice, which so far no one has mentioned.
Right now, Spice seems to be taking the right approach to guest 3D support.

Regards,

Anthony Liguori

>I understand others are skeptical,
> but this seems simple and if it works and you're happy to maintain it I'm
> happy to let you do it :)
>
> Cheers,
> Rusty.
>
>



Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

2010-10-29 Thread Rusty Russell
On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
> On 19/10/10 11:39, Avi Kivity wrote:
> > On 10/19/2010 12:31 PM, Ian Molton wrote:
> 
> >>> 2. should start with a patch to the virtio-pci spec to document what
> >>> you're doing
> >>
> >> Where can I find that spec?
> >
> > http://ozlabs.org/~rusty/virtio-spec/
> 
> Ok, but I'm not patching that until there's been some review.

Fair enough; it's a bit of a PITA to patch, so it makes sense to get the
details nailed down first.

> There are links to the associated qemu and guest OS changes in my 
> original email.
> 
> >> It doesn't, at present... It could be changed fairly easily without
> >> breaking anything if that happens though.
> >
> > The hypervisor and the guest can be changed independently. The driver
> > should be coded so that it doesn't depend on hypervisor implementation
> > details.
> 
> Fixed - updated patch tested and attached.

OK. FWIW, I think this is an awesome idea.  I understand others are skeptical,
but this seems simple and if it works and you're happy to maintain it I'm
happy to let you do it :)

Cheers,
Rusty.