Hello Xiaohui,
On Tue, 2009-11-03 at 09:06 +0800, Xin, Xiaohui wrote:
Hi, Michael,
What's the deferring skb allocation patch mentioned here? Could you
elaborate on it in a bit more detail?
That's my patch. It was submitted a few months ago. Here is the link to
this RFC patch:
On Thu, Oct 29, 2009 at 12:11:33AM -0700, Shirley Ma wrote:
Hello Michael,
I am able to get 63xxMb/s throughput with 10% less CPU utilization when
I apply the deferring skb patch on top of your most recent vhost patch. The
userspace TCP_STREAM BW used to be 3xxxMb/s from the upstream git tree.
To: Shirley Ma
Cc: Sridhar Samudrala; Shirley Ma; David Stevens; kvm@vger.kernel.org;
s...@linux.vnet.ibm.com; mashi...@linux.vnet.ibm.com
Subject: Re: vhost-net patches
After applying your recent vhost patch, it goes up to 53xxMb/s.
On Tue, Oct 27, 2009 at 09:36:18AM -0700, Shirley Ma wrote:
Hello Michael,
On Tue, 2009-10-27 at 17:27 +0200, Michael S. Tsirkin wrote:
Possibly GFP_ATOMIC allocations in vring_add_indirect are failing?
Is there a chance you are tight on guest memory for some reason?
with vhost, virtio
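A quick way to answer the "tight on guest memory" question above is to look at the standard counters inside the guest; a minimal sketch (nothing virtio-specific, just /proc and the kernel log):

```shell
# Inside the guest: report free memory, then check the kernel log for
# atomic page-allocation failures, which is how failing GFP_ATOMIC
# allocations typically show up.
grep -E '^(MemTotal|MemFree)' /proc/meminfo
dmesg 2>/dev/null | grep -i 'page allocation failure' \
    || echo "no allocation failures logged"
```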
On Tue, 2009-10-27 at 22:58 +0200, Michael S. Tsirkin wrote:
How large is large here? I usually allocate 1G.
I used to have 512, for this run I allocated 1G.
I do see performance improves to 3xxxMb/s, and occasionally
reaches 40xxMb/s.
This is the same as userspace, isn't it?
A little bit
Hello Michael,
On Wed, 2009-10-28 at 17:39 +0200, Michael S. Tsirkin wrote:
Here's another hack to try. It will break raw sockets,
but just as a test:
This patch looks better than the previous one for guest-to-host TCP_STREAM
performance. The transmit-queue-full condition still exists, but TCP_STREAM
On Monday 26 October 2009, Shirley Ma wrote:
On Sun, 2009-10-25 at 11:11 +0200, Michael S. Tsirkin wrote:
What is vnet0?
That's a tap interface. I am binding a raw socket to a tap interface and
it doesn't work. Is that supported?
Is the tap device connected to a bridge as you'd normally do
Hello Michael,
On Wed, 2009-10-28 at 18:53 +0200, Michael S. Tsirkin wrote:
what exactly do you mean by transmission queue size?
tx_queue_len?
I think what should help with transmission queue full is
actually sndbuf parameter for tap in qemu.
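The sndbuf knob mentioned above is passed as part of the tap options on the qemu command line; a hedged sketch (the image name and the 1 MB value are illustrations, not values from this thread):

```shell
# Hypothetical invocation: limit the tap send buffer so the guest sees
# backpressure instead of a perpetually full transmit queue.
qemu-system-x86_64 -m 1024 \
    -drive file=guest.img,if=virtio,boot=on \
    -net nic,model=virtio \
    -net tap,ifname=vnet0,script=no,downscript=no,sndbuf=1048576
```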
I didn't see my email go out, so I am resending the response.
Hello Arnd,
On Wed, 2009-10-28 at 18:46 +0100, Arnd Bergmann wrote:
You can probably connect it like this:
qemu - vhost_net - vnet0 == /dev/tun - qemu
To connect two guests.
I've also used a bidirectional pipe before, to connect two tap
interfaces to each other. However, if you want to
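As noted earlier in the thread, the conventional way to connect tap interfaces is through a bridge; a rough sketch with bridge-utils (tap0/tap1 and br0 are placeholder names, and this needs root):

```shell
# Put both guests' tap devices on one software bridge.
brctl addbr br0
brctl addif br0 tap0
brctl addif br0 tap1
ifconfig br0 up
```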
Hello Michael,
When I was testing the deferring skb allocation patch, I found this problem:
simply removing and reloading the guest virtio_net module causes the guest
to exit with errors. It is easy to reproduce:
[r...@localhost ~]# rmmod virtio_net
[r...@localhost ~]# modprobe virtio_net
Hello Michael,
On Tue, 2009-10-27 at 08:43 +0200, Michael S. Tsirkin wrote:
At some point my guest had a runaway nash-hotplug process
consuming 100% CPU. Could you please verify this
does not happen to you?
What I have found is that start_xmit stopped and restarted too often.
There is no
On Tue, 2009-10-27 at 08:38 +0200, Michael S. Tsirkin wrote:
Yes but you need to make host send packets out to tap as well,
somehow. One way to do this is to assign IP address in
a separate subnet to tap in host and to eth device in guest.
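The separate-subnet setup suggested above might look like this (the addresses are placeholders; vnet0 is the host tap device named elsewhere in the thread):

```shell
# On the host: address the tap device in its own subnet.
ifconfig vnet0 10.0.5.1 netmask 255.255.255.0 up
# In the guest: address the virtio eth device in the same subnet.
ifconfig eth0 10.0.5.2 netmask 255.255.255.0 up
```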
Thanks for the hint, I will make a try.
Shirley
--
On Fri, Oct 23, 2009 at 09:23:40AM -0700, Shirley Ma wrote:
I also hit guest skb_xmit panic.
If these are the same panics I have seen myself,
they are probably fixed with recent virtio patches
I sent to Rusty. I put them on my vhost.git tree to make
it easier for you to test.
If you see any more
Hello Michael,
On Mon, 2009-10-26 at 22:05 +0200, Michael S. Tsirkin wrote:
Shirley, could you please test the following patch?
With this patch, the performance has gone from 1xxx to 2xxx Mb/s, but there
is still a gap compared to running without vhost: it was 3xxxMb/s before
from guest to host
Pulled your git tree, didn't see the panic.
Thanks
Shirley
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Fri, Oct 23, 2009 at 09:23:40AM -0700, Shirley Ma wrote:
I also hit guest skb_xmit panic.
OK, I have fixed a couple of reasons for panic. Will push and
look at host to guest performance soon.
--
MST
On Thu, Oct 22, 2009 at 11:00:20AM -0700, Sridhar Samudrala wrote:
On Thu, 2009-10-22 at 19:43 +0200, Michael S. Tsirkin wrote:
Possibly we'll have to debug this in vhost in host kernel.
I would debug this directly, it's just that my setup is somehow
different and I do not see this
On Fri, 2009-10-23 at 13:04 +0200, Michael S. Tsirkin wrote:
Sridhar, Shirley,
Could you please test the following patch?
It should fix a bug on 32 bit hosts - is this what you have?
Yes, it's 32 bit host. I checked out your recent git tree. Looks like
the patch is already there, but vhost
Hello Michael,
Tested raw packet, it didn't work; switching to tap device, it is
working. Qemu command is:
x86_64-softmmu/qemu-system-x86_64 -s /home/xma/images/fedora10-2-vm -m
512 -drive file=/home/xma/images/fedora10-2-vm,if=virtio,index=0,boot=on
-net tap,ifname=vnet0,script=no,downscript=no
Hello Michael,
Some initial vhost test netperf results on my T61 laptop from the
working tap device are here. Latency has significantly decreased, but
throughput from guest to host shows a huge regression. I also hit a guest
skb_xmit panic.
netperf TCP_STREAM, default setup, 60 secs run
guest-host
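For reference, a default-setup 60-second TCP_STREAM run like the one quoted would be driven roughly as follows (the host address is a placeholder):

```shell
# On the host (receiving side):
netserver
# In the guest: 60-second TCP_STREAM test against the host.
netperf -H 10.0.5.1 -t TCP_STREAM -l 60
```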
Hello Michael,
Some update,
On Fri, 2009-10-23 at 08:12 -0700, Shirley Ma wrote:
Tested raw packet, it didn't work;
Tested the option -net raw,ifname=eth0 attached to a real device; raw works
to a remote node. I was expecting raw to work to the local host.
Does the option -net raw,ifname=vnet0
On Wed, Oct 21, 2009 at 04:59:20PM -0700, Shirley Ma wrote:
Hello Michael,
There was a recent bugfix in qemu-kvm I pushed.
Could you please verify that you have
cec75e39151e49cc90c849eab5d0d729667c9e68 ?
Yes, I cloned your qemu-kvm and kernel git.
I am posting the errors from
On Wed, Oct 21, 2009 at 04:59:20PM -0700, Shirley Ma wrote:
Hello Michael,
There was a recent bugfix in qemu-kvm I pushed.
Could you please verify that you have
cec75e39151e49cc90c849eab5d0d729667c9e68 ?
Yes, I cloned your qemu-kvm and kernel git.
It seems that the errors you observe
On Thu, 2009-10-22 at 15:13 +0200, Michael S. Tsirkin wrote:
OK, I sent a patch that should fix the errors for you.
Could you please confirm, preferably on-list, whether
the patch makes the errors go away for you with
userspace virtio?
Confirmed, your patch has fixed the irq handler mismatch
On Thu, 2009-10-22 at 10:23 -0700, Shirley Ma wrote:
Yes, agreed. One observation is that when I enable PCI MSI in the guest
kernel, I found that even without vhost support in the host kernel, the
network doesn't work either. So I think this is not related to vhost.
I need to find why PCI MSI doesn't
On Thu, 2009-10-22 at 19:36 +0200, Michael S. Tsirkin wrote:
Upstream is Avi's qemu-kvm.git?
So, for a moment taking vhost out of the equation, it seems that MSI
was
broken in Avi's tree again, after I forked my tree?
The upstream qemu git tree never worked for me with MSI; the boot hung
On Thu, 2009-10-22 at 19:47 +0200, Michael S. Tsirkin wrote:
What happens if you reset my tree to commit
47e465f031fc43c53ea8f08fa55cc3482c6435c8?
I am going to clean up my upstream git tree and retest first. Then I will
try backing up to this commit.
Looks like there are 2 issues:
- upstream
On Thu, 2009-10-22 at 19:43 +0200, Michael S. Tsirkin wrote:
Possibly we'll have to debug this in vhost in host kernel.
I would debug this directly, it's just that my setup is somehow
different and I do not see this issue, otherwise I would not
waste your time.
Can we add some printks?
On Wed, Oct 21, 2009 at 12:59:50PM -0700, Shirley Ma wrote:
Hello Michael,
I have set up guest kernel 2.6.32-rc5 with MSI configured. Here are the
errors I got:
1. First, qemu complained that extboot.bin was not found; I copied the file
from the optionrom/ dir to the pc-bios/ dir, and this problem is
We are trying out your vhost-net patches from your git trees on
kernel.org.
I am using mst/vhost.git as host kernel and mst/qemu-kvm.git for qemu.
I am using the following qemu script to start the guest using userspace
tap backend.
home/sridhar/git/mst/qemu-kvm/x86_64
On Sun, 2009-10-18 at 19:32 +0200, Michael S. Tsirkin wrote:
On Sun, Oct 18, 2009 at 12:53:56PM +0200, Michael S. Tsirkin wrote:
On Fri, Oct 16, 2009 at 12:29:29PM -0700, Sridhar Samudrala wrote:
Hi Michael,
We are trying out your vhost-net patches from your git trees