Re: [net-next RFC V4 PATCH 0/4] Multiqueue virtio-net

2012-06-25 Thread Shirley Ma
Hello Jason,

Good work. Do you have local guest-to-guest results?

Thanks
Shirley

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH RFC net-next] virtio_net: refill buffer right after being used

2011-07-30 Thread Shirley Ma
On Fri, 2011-07-29 at 16:58 -0700, Mike Waychison wrote:
> On Fri, Jul 29, 2011 at 3:55 PM, Shirley Ma 
> wrote:
> > Resubmit it with a typo fix.
> >
> > Signed-off-by: Shirley Ma 
> > ---
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 0c7321c..c8201d4 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -429,6 +429,22 @@ static int add_recvbuf_mergeable(struct
> virtnet_info *vi, gfp_t gfp)
> >return err;
> >  }
> >
> > +static int fill_one(struct virtnet_info *vi, gfp_t gfp)
> > +{
> > +   int err;
> > +
> > +   if (vi->mergeable_rx_bufs)
> > +   err = add_recvbuf_mergeable(vi, gfp);
> > +   else if (vi->big_packets)
> > +   err = add_recvbuf_big(vi, gfp);
> > +   else
> > +   err = add_recvbuf_small(vi, gfp);
> > +
> > +   if (err >= 0)
> > +   ++vi->num;
> > +   return err;
> > +}
> > +
> >  /* Returns false if we couldn't fill entirely (OOM). */
> >  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
> >  {
> > @@ -436,17 +452,10 @@ static bool try_fill_recv(struct virtnet_info
> *vi, gfp_t gfp)
> >bool oom;
> >
> >do {
> > -   if (vi->mergeable_rx_bufs)
> > -   err = add_recvbuf_mergeable(vi, gfp);
> > -   else if (vi->big_packets)
> > -   err = add_recvbuf_big(vi, gfp);
> > -   else
> > -   err = add_recvbuf_small(vi, gfp);
> > -
> > +   err = fill_one(vi, gfp);
> >oom = err == -ENOMEM;
> >if (err < 0)
> >break;
> > -   ++vi->num;
> >} while (err > 0);
> >if (unlikely(vi->num > vi->max))
> >vi->max = vi->num;
> > @@ -506,13 +515,13 @@ again:
> >receive_buf(vi->dev, buf, len);
> >--vi->num;
> >received++;
> > -   }
> > -
> > -   if (vi->num < vi->max / 2) {
> > -   if (!try_fill_recv(vi, GFP_ATOMIC))
> > +   if (fill_one(vi, GFP_ATOMIC) < 0)
> >schedule_delayed_work(&vi->refill, 0);
> >}
> >
> > +   /* notify buffers are refilled */
> > +   virtqueue_kick(vi->rvq);
> > +
> 
> How does this reduce latency?   We are doing the same amount of work
> in both cases, and in both cases the newly available buffers are not
> visible to the device until the virtqueue_kick..

It evens out the latency across receives: each receive refills exactly
one buffer, instead of refilling either nothing or up to half a ring's
worth of buffers between receives.

> 
> >/* Out of packets? */
> >if (received < budget) {
> >napi_complete(napi);
> >
> >
> > -- 



Re: [PATCH RFC net-next] virtio_net: refill buffer right after being used

2011-07-29 Thread Shirley Ma
Resubmit it with a typo fix.

Signed-off-by: Shirley Ma 
---

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0c7321c..c8201d4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -429,6 +429,22 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, 
gfp_t gfp)
return err;
 }
 
+static int fill_one(struct virtnet_info *vi, gfp_t gfp)
+{
+   int err;
+
+   if (vi->mergeable_rx_bufs)
+   err = add_recvbuf_mergeable(vi, gfp);
+   else if (vi->big_packets)
+   err = add_recvbuf_big(vi, gfp);
+   else
+   err = add_recvbuf_small(vi, gfp);
+
+   if (err >= 0)
+   ++vi->num;
+   return err;
+}
+
 /* Returns false if we couldn't fill entirely (OOM). */
 static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 {
@@ -436,17 +452,10 @@ static bool try_fill_recv(struct virtnet_info *vi, gfp_t 
gfp)
bool oom;
 
do {
-   if (vi->mergeable_rx_bufs)
-   err = add_recvbuf_mergeable(vi, gfp);
-   else if (vi->big_packets)
-   err = add_recvbuf_big(vi, gfp);
-   else
-   err = add_recvbuf_small(vi, gfp);
-
+   err = fill_one(vi, gfp);
oom = err == -ENOMEM;
if (err < 0)
break;
-   ++vi->num;
} while (err > 0);
if (unlikely(vi->num > vi->max))
vi->max = vi->num;
@@ -506,13 +515,13 @@ again:
receive_buf(vi->dev, buf, len);
--vi->num;
received++;
-   }
-
-   if (vi->num < vi->max / 2) {
-   if (!try_fill_recv(vi, GFP_ATOMIC))
+   if (fill_one(vi, GFP_ATOMIC) < 0)
schedule_delayed_work(&vi->refill, 0);
}
 
+   /* notify buffers are refilled */
+   virtqueue_kick(vi->rvq);
+
/* Out of packets? */
if (received < budget) {
napi_complete(napi);




[PATCH RFC net-next] virtio_net: refill buffer right after being used

2011-07-29 Thread Shirley Ma
To even out the latency, refill a buffer right after one is used.

Sign-off-by: Shirley Ma 
---

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0c7321c..c8201d4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -429,6 +429,22 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, 
gfp_t gfp)
return err;
 }
 
+static bool fill_one(struct virtio_net *vi, gfp_t gfp)
+{
+   int err;
+
+   if (vi->mergeable_rx_bufs)
+   err = add_recvbuf_mergeable(vi, gfp);
+   else if (vi->big_packets)
+   err = add_recvbuf_big(vi, gfp);
+   else
+   err = add_recvbuf_small(vi, gfp);
+
+   if (err >= 0)
+   ++vi->num;
+   return err;
+}
+
 /* Returns false if we couldn't fill entirely (OOM). */
 static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 {
@@ -436,17 +452,10 @@ static bool try_fill_recv(struct virtnet_info *vi, gfp_t 
gfp)
bool oom;
 
do {
-   if (vi->mergeable_rx_bufs)
-   err = add_recvbuf_mergeable(vi, gfp);
-   else if (vi->big_packets)
-   err = add_recvbuf_big(vi, gfp);
-   else
-   err = add_recvbuf_small(vi, gfp);
-
+   err = fill_one(vi, gfp);
oom = err == -ENOMEM;
if (err < 0)
break;
-   ++vi->num;
} while (err > 0);
if (unlikely(vi->num > vi->max))
vi->max = vi->num;
@@ -506,13 +515,13 @@ again:
receive_buf(vi->dev, buf, len);
--vi->num;
received++;
-   }
-
-   if (vi->num < vi->max / 2) {
-   if (!try_fill_recv(vi, GFP_ATOMIC))
+   if (fill_one(vi, GFP_ATOMIC) < 0)
schedule_delayed_work(&vi->refill, 0);
}
 
+   /* notify buffers are refilled */
+   virtqueue_kick(vi->rvq);
+
/* Out of packets? */
if (received < budget) {
napi_complete(napi);




Re: [PERF RESULTS] virtio and vhost-net performance enhancements

2011-05-27 Thread Shirley Ma

Hello KK,

Could you please try TCP_RRs as well?
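For reference, a typical netperf TCP_RR invocation looks like the following; the peer address and test length below are placeholders, not values from this thread:

```shell
# Transaction (request/response) latency test: 1-byte round trips for 60s.
# 192.168.122.10 stands in for the remote node's address; adjust as needed.
netperf -H 192.168.122.10 -t TCP_RR -l 60 -- -r 1,1
```

TCP_RR reports transactions per second, which tracks round-trip latency rather than the bulk bandwidth that TCP_STREAM measures.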

Thanks
Shirley


   
From: Krishna Kumar2
Date: 05/26/2011 08:32 AM
To: "Michael S. Tsirkin"
Cc: Christian Borntraeger, Carsten Otte, haban...@linux.vnet.ibm.com,
    Heiko Carstens, k...@vger.kernel.org, lgu...@lists.ozlabs.org,
    linux-ker...@vger.kernel.org, linux-s...@vger.kernel.org,
    linux...@de.ibm.com, net...@vger.kernel.org, Rusty Russell,
    Martin Schwidefsky, Steve Dobbelstein/Austin/IBM@IBMUS,
    Tom Lendacky, virtualization@lists.linux-foundation.org,
    Shirley Ma/Beaverton/IBM@IBMUS
Subject: [PERF RESULTS] virtio and vhost-net performance enhancements



"Michael S. Tsirkin"  wrote on 05/20/2011 04:40:07 AM:

> OK, here is the large patchset that implements the virtio spec update
> that I sent earlier (the spec itself needs a minor update, will send
> that out too next week, but I think we are on the same page here
> already). It supercedes the PUBLISH_USED_IDX patches I sent
> out earlier.

I was able to get this tested by applying the v2 patches
to git-next tree (somehow MST's git tree hung on my guest
which never got resolved). Testing was from Guest -> Remote
node, using an ixgbe 10g card. The test results are
*excellent* (table: #netperf sessions, BW% improvement,
SD% improvement, CPU% improvement):

___
               512 byte I/O
#       BW%     SD%     CPU%

1       151.6   -65.1   -10.7
2       180.6   -66.6   -6.4
4       15.5    -35.8   -26.1
8       1.8     -28.4   -26.7
16      3.1     -29.0   -26.5
32      1.1     -27.4   -27.5
64      3.8     -30.9   -26.7
96      5.4     -21.7   -24.2
128     5.7     -24.4   -25.5

BW: 16.6%   SD: -24.6%   CPU: -25.5%



               1K I/O
#       BW%     SD%     CPU%

1       233.9   -76.5   -18.0
2       112.2   -64.0   -23.2
4       9.2     -31.6   -26.1
8       -1.7    -26.8   -30.3
16      3.5     -31.5   -30.6
32      4.8     -25.2   -30.5
64      5.7     -31.0   -28.9
96      5.3     -32.2   -31.7
128     4.6     -38.2   -33.6

BW: 16.4%   SD: -35.%   CPU: -31.5%



               16K I/O
#       BW%     SD%     CPU%

1       18.8    -27.2   -18.3
2       14.8    -36.7   -27.7
4       12.7    -45.2   -38.1
8       4.4     -56.4   -54.4
16      4.8     -38.3   -36.1
32      0       78.0    79.2
64      3.8     -38.1   -37.5
96      7.3     -35.2   -31.1
128     3.4     -31.1   -32.1

BW: 7.6%   SD: -30.1%   CPU: -23.7%


I plan to run some more tests tomorrow. Please let
me know if any other scenario would help.

Thanks,

- KK


Re: [PATCHv9 3/3] vhost_net: a kernel-level virtio server

2009-12-11 Thread Shirley Ma

On Sun, 2009-11-22 at 12:35 +0200, Michael S. Tsirkin wrote:
> These results were sent by Shirley Ma (Cc'd).
> I think they were with tap, host-to-guest/guest-to-host

Yes, you are right.

Shirley



Re: INFO: task kjournal:337 blocked for more than 120 seconds

2009-10-01 Thread Shirley Ma
Switching to a different scheduler doesn't make the problem go away.

Shirley



Re: INFO: task kjournal:337 blocked for more than 120 seconds

2009-10-01 Thread Shirley Ma
On Thu, 2009-10-01 at 16:03 -0500, Javier Guerra wrote:
> deadline is the most recommended one for virtualization hosts.  some
> distros set it as default if you select Xen or KVM at installation
> time.  (and noop for the guests)

I spoke too soon earlier: after a while the noop scheduler hit the same
issue. I am switching to deadline to test again.

Thanks
Shirley



Re: INFO: task kjournal:337 blocked for more than 120 seconds

2009-10-01 Thread Shirley Ma
I talked to Mingming, and she suggested using a different IO scheduler.
The default scheduler is cfq; after I switched to noop, the problem was
gone.

So there seems to be a bug in the cfq scheduler. It's easily reproduced
when running the guest kernel; so far I haven't hit this problem on the
host side.
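For anyone reproducing this, the IO scheduler can be checked and switched per device through sysfs; `vda` below is just an example device name:

```shell
# List the available schedulers; the active one is shown in brackets.
cat /sys/block/vda/queue/scheduler
# Switch the running scheduler (takes effect immediately, this device only).
echo noop > /sys/block/vda/queue/scheduler
# To make it the boot-time default, add elevator=noop to the kernel command line.
```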

If I need to file a bug for someone to look at, please let me know.

Thanks
Shirley



Re: INFO: task journal:337 blocked for more than 120 seconds

2009-10-01 Thread Shirley Ma
On Thu, 2009-10-01 at 10:20 -0300, Marcelo Tosatti wrote:
> I've hit this in the past with ext3, mounting with data=writeback made
> it
> disappear.

Thanks. I will give it a try. Someone should fix this.
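For reference, a sketch of Marcelo's suggestion; the device and mount point are examples, and note that ext3 refuses to change the data mode on a live remount, so an unmount (or a boot-time option for the root filesystem) is needed:

```shell
# Non-root filesystem: unmount, then mount with writeback journaling.
umount /data
mount -o data=writeback /dev/vdb1 /data
# Root filesystem: add rootflags=data=writeback to the kernel command line.
# Persistent form for /etc/fstab:
#   /dev/vdb1  /data  ext3  defaults,data=writeback  0  2
```

data=writeback drops the ordering guarantee between data and metadata writes, so it trades crash-consistency of file contents for the latency win.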

Shirley



INFO: task journal:337 blocked for more than 120 seconds

2009-09-30 Thread Shirley Ma
Hello all,

Has anybody seen this problem before? I keep hitting this issue with a
2.6.31 guest kernel, even with a simple network test.

INFO: task kjournald:337 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

kjournald   D 0041  0   337 2 0x

My test is completely blocked.
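As the message itself says, the watchdog can be silenced, or more usefully its window widened, while debugging; these are standard sysctl knobs:

```shell
# Disable the hung-task watchdog entirely (0 = off) ...
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
# ... or just widen the window while investigating:
sysctl -w kernel.hung_task_timeout_secs=600
```

Silencing the warning does not unblock the task, of course; it only stops the 120-second reports while the underlying stall is investigated.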

Thanks
Shirley
