On 05/29/2013 10:36 PM, Nicholas A. Bellinger wrote:
On Wed, 2013-05-29 at 21:29 -0700, Nicholas A. Bellinger wrote:
On Thu, 2013-05-30 at 06:17 +0800, Asias He wrote:
On Wed, May 29, 2013 at 08:10:44AM -0700, Badari Pulavarty wrote:
On 05/29/2013 02:05 AM, Wenchao Xia wrote:
On 2013-5-28 17:00, Wenchao Xia wrote:
On 2013-5-28 16:33, Asias He wrote:
On Tue, May 28, 2013 at 10:01:14AM +0200, Paolo Bonzini wrote:
On 28/05/2013 09:13, Wenchao Xia wrote:
From: Nicholas Bellinger
The WWPN specified in configfs is passed to "-de
On 05/23/2013 09:19 AM, Paolo Bonzini wrote:
On 23/05/2013 18:11, Badari Pulavarty wrote:
On 05/23/2013 08:30 AM, Paolo Bonzini wrote:
On 23/05/2013 17:27, Asias He wrote:
On Thu, May 23, 2013 at 04:58:05PM +0200, Paolo Bonzini wrote:
On 23/05/2013 16:48, Badari Pulavarty wrote:
On 05/23/2013 07:58 AM, Paolo Bonzini wrote:
On 23/05/2013 16:48, Badari Pulavarty wrote:
The common virtio-scsi code in QEMU should guard against this. In
virtio-blk data plane I hit a similar case and ended up starting the
data plane thread (equivalent to vhost here) *before* the status
On 05/23/2013 06:32 AM, Stefan Hajnoczi wrote:
On Thu, May 23, 2013 at 11:48 AM, Gleb Natapov wrote:
On Thu, May 23, 2013 at 08:53:55AM +0800, Asias He wrote:
On Wed, May 22, 2013 at 05:36:08PM -0700, Badari wrote:
Hi,
While testing vhost-scsi in the current qemu git, ran into an earlier issue
On 4/15/2011 4:00 PM, Anthony Liguori wrote:
On 04/15/2011 05:21 PM, pbad...@linux.vnet.ibm.com wrote:
On 4/15/2011 10:29 AM, Christoph Hellwig wrote:
On Fri, Apr 15, 2011 at 09:23:54AM -0700, Badari Pulavarty wrote:
True. That brings up a different question - whether we are doing
enough testing on mainline QEMU :(
It seems you're clearly not doing enough testing on any qemu.
On Fri, 2011-04-15 at 13:09 -0500, Anthony Liguori wrote:
> On 04/15/2011 11:23 AM, Badari Pulavarty wrote:
On Fri, 2011-04-15 at 17:34 +0200, Christoph Hellwig wrote:
> On Fri, Apr 15, 2011 at 04:26:41PM +0100, Stefan Hajnoczi wrote:
> > On Fri, Apr 15, 2011 at 4:05 PM, Christoph Hellwig wrote:
> > > NAK.  Just wait for the bloody NFS client fix to get in instead of
> > > adding crap like that.
Hi All,
Here is the latest version of vhost-blk implementation.
The major difference from my previous implementation is that I
now merge all contiguous requests (both reads and writes) before
submitting them. This significantly improved IO performance.
I am still collecting performance numbers, I will
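The merging of contiguous requests described above can be sketched roughly as follows. This is an illustrative host-side model, not the actual vhost-blk kernel code; the `(offset, length, is_write)` tuple layout and the helper name `merge_contiguous` are assumptions for the sake of the example.

```python
# Illustrative sketch (not the vhost-blk implementation): coalesce runs of
# requests that are contiguous on disk and go in the same direction, so the
# backend submits fewer, larger I/Os.
def merge_contiguous(reqs):
    """reqs: list of (offset, length, is_write) tuples in arrival order.
    Returns a new list where each run of contiguous, same-direction
    requests has been merged into a single larger request."""
    merged = []
    for off, length, is_write in reqs:
        if merged:
            m_off, m_len, m_write = merged[-1]
            # Same direction and starts exactly where the previous one ends?
            if m_write == is_write and m_off + m_len == off:
                merged[-1] = (m_off, m_len + length, m_write)
                continue
        merged.append((off, length, is_write))
    return merged

# Four 4 KiB requests: two contiguous writes, then two contiguous reads.
reqs = [(0, 4096, True), (4096, 4096, True),
        (8192, 4096, False), (12288, 4096, False)]
print(merge_contiguous(reqs))  # collapses to one write and one read
```

A real implementation would also cap the merged size at the device's maximum segment/transfer limits before submission; that detail is omitted here.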
Michael S. Tsirkin wrote:
On Tue, Mar 23, 2010 at 12:55:07PM -0700, Badari Pulavarty wrote:
Michael S. Tsirkin wrote:
On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
Michael S. Tsirkin wrote:
On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
Write Results:
==
I see degraded IO performance when doing sequential IO write
tests with vhost-blk compared to virtio-blk.
# time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
I get
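For reference on the benchmark above: dd prints bytes copied, elapsed time, and a throughput figure when it finishes, and that MB/s number can be recomputed from the first two. The helper below is a hypothetical convenience, not something from this thread; the sample numbers are made up.

```python
# Hypothetical helper: reproduce the MB/s figure a run like the dd command
# above reports, from total bytes written and elapsed seconds.
def throughput_mb_s(bytes_written: int, seconds: float) -> float:
    """Return throughput in MiB/s (1 MiB = 2**20 bytes, matching dd's 'M')."""
    return bytes_written / (1024 * 1024) / seconds

# e.g. 1024 blocks of bs=2M written in 16 s (illustrative numbers only)
print(throughput_mb_s(2 * 1024**2 * 1024, 16.0))  # → 128.0
```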
At this time, this is a prototype based on virtio-net.
Lots of error handling and cleanup still need to be done.
Read performance is pretty good compared to QEMU virtio-blk, but
write performance is not anywhere close to QEMU virtio-blk.
Why?
Signed-off-by: Badari Pulavarty
---
drivers/vhost/bl