Re: Shouldn't cache=none be the default for drives?

2010-04-08 Thread Michael Tokarev

08.04.2010 09:07, Thomas Mueller wrote:
[]

This helped a lot:

I enabled the deadline block scheduler instead of the default cfq on the
host system. Tested with: host Debian with scheduler deadline, guest
Win2008 with virtio and cache=none (26MB/s to 50MB/s boost measured).
Maybe this is also true for Linux/Linux.

I expect that the noop scheduler would be a good choice for Linux guests.


Hmm.   I wonder why it helped.  In theory, the host scheduler should not
change anything in the cache=none case, at least for raw partitions or
LVM volumes.  This is because with cache=none, the virtual disk
image is opened with the O_DIRECT flag, which means all I/O bypasses the
host scheduler and buffer cache.

I tried a few quick tests here -- with LVM volumes it makes no
measurable difference.  But if the guest disk images are on
plain files (also raw), the scheduler makes some difference, and
indeed deadline works better.  Maybe you were testing with
plain files instead of block devices?
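For anyone wanting to repeat the comparison: the host scheduler can be inspected and switched per block device at runtime via sysfs (the device name `sda` below is just an example; adjust it to the device backing your guest images):

```shell
# Show the available schedulers; the active one appears in brackets,
# e.g.: noop anticipatory deadline [cfq]
cat /sys/block/sda/queue/scheduler

# Switch the device to the deadline scheduler (run as root);
# the change takes effect immediately and lasts until reboot
echo deadline > /sys/block/sda/queue/scheduler
```

This is a runtime setting only; to make it persistent you would typically add `elevator=deadline` to the host kernel command line.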

Thanks!

/mjt
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Shouldn't cache=none be the default for drives?

2010-04-08 Thread Thomas Mueller
On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:

 08.04.2010 09:07, Thomas Mueller wrote: []
 This helped a lot:

 I enabled the deadline block scheduler instead of the default cfq on
 the host system. Tested with: host Debian with scheduler deadline,
 guest Win2008 with virtio and cache=none (26MB/s to 50MB/s boost
 measured). Maybe this is also true for Linux/Linux.

 I expect that the noop scheduler would be a good choice for Linux guests.
 
 Hmm.   I wonder why it helped.  In theory, the host scheduler should not
 change anything in the cache=none case, at least for raw partitions or LVM
 volumes.  This is because with cache=none, the virtual disk image is
 opened with the O_DIRECT flag, which means all I/O bypasses the host
 scheduler and buffer cache.
 
 I tried a few quick tests here -- with LVM volumes it makes no
 measurable difference.  But if the guest disk images are on plain files
 (also raw), the scheduler makes some difference, and indeed deadline
 works better.  Maybe you were testing with plain files instead of block
 devices?

Ah yes, qcow2 images.

- Thomas



Re: Shouldn't cache=none be the default for drives?

2010-04-08 Thread Thomas Mueller
On Thu, 08 Apr 2010 06:09:05 +0000, Thomas Mueller wrote:

 On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:
 
 08.04.2010 09:07, Thomas Mueller wrote: []
 This helped a lot:

 I enabled the deadline block scheduler instead of the default cfq on
 the host system. Tested with: host Debian with scheduler deadline,
 guest Win2008 with virtio and cache=none (26MB/s to 50MB/s boost
 measured). Maybe this is also true for Linux/Linux.

 I expect that the noop scheduler would be a good choice for Linux guests.
 
 Hmm.   I wonder why it helped.  In theory, the host scheduler should not
 change anything in the cache=none case, at least for raw partitions or LVM
 volumes.  This is because with cache=none, the virtual disk image is
 opened with the O_DIRECT flag, which means all I/O bypasses the host
 scheduler and buffer cache.
 
 I tried a few quick tests here -- with LVM volumes it makes no
 measurable difference.  But if the guest disk images are on plain files
 (also raw), the scheduler makes some difference, and indeed deadline
 works better.  Maybe you were testing with plain files instead of block
 devices?
 
 Ah yes, qcow2 images.

... but does the scheduler really know about O_DIRECT? Isn't O_DIRECT
meant to bypass only buffers (i.e., a write does not return before it has
really hit the disk)? My understanding is that the scheduler is a layer
further down the stack. But I'm only guessing - I'm not a kernel hacker. :)

- Thomas  




Re: Shouldn't cache=none be the default for drives?

2010-04-08 Thread Christoph Hellwig
On Thu, Apr 08, 2010 at 10:05:09AM +0400, Michael Tokarev wrote:
 LVM volumes.  This is because with cache=none, the virtual disk
 image is opened with O_DIRECT flag, which means all I/O bypasses
 host scheduler and buffer cache.

O_DIRECT does not bypass the I/O scheduler, only the page cache.
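To make the distinction concrete, here is a minimal Python sketch of an O_DIRECT write on Linux (the file name and 4096-byte block size are illustrative assumptions): the flag changes how the data is buffered, but the resulting request still travels through the block layer and whatever I/O scheduler is active there.

```python
import mmap
import os

BLOCK = 4096  # typical logical block size; O_DIRECT I/O must be aligned to it


def write_direct(path):
    """Write one aligned block with O_DIRECT, bypassing the page cache."""
    # mmap returns page-aligned memory, which satisfies O_DIRECT's
    # buffer-alignment requirement on common hardware.
    buf = mmap.mmap(-1, BLOCK)
    buf.write(b"x" * BLOCK)
    try:
        # os.O_DIRECT is Linux-specific.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_DIRECT)
    except OSError as exc:
        # Some filesystems (e.g. tmpfs) do not support O_DIRECT at all.
        return "O_DIRECT unsupported here: %s" % exc
    try:
        # This write skips the host page cache, but the request is still
        # queued through the block layer and its I/O scheduler.
        return os.write(fd, buf)
    except OSError as exc:
        return "direct write failed: %s" % exc
    finally:
        os.close(fd)
        os.unlink(path)
```

On a filesystem that supports O_DIRECT, `write_direct` returns the number of bytes written (4096 here); on one that does not, it reports the failure instead of raising.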



Shouldn't cache=none be the default for drives?

2010-04-07 Thread Troels Arvin

Hello,

I'm conducting some performance tests with KVM-virtualized CentOSes. One
thing I noticed is that guest I/O performance seems to be significantly
better for virtio-based block devices (drives) if the cache=none
argument is used. (This was with a rather powerful storage system
backend which is hard to saturate.)


So: why isn't cache=none the default for drives?
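For reference, the cache mode is chosen per drive on the qemu/kvm command line; the image path and other options below are illustrative:

```shell
# cache=none opens the backing file or device with O_DIRECT,
# bypassing the host page cache
kvm -m 1024 \
    -drive file=/dev/vg0/guest-disk,if=virtio,cache=none
```

If no cache= option is given, qemu picks its built-in default (writethrough in this era), which is what the question above is about.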

--
Troels


Re: Shouldn't cache=none be the default for drives?

2010-04-07 Thread Gordan Bobic

Troels Arvin wrote:

Hello,

I'm conducting some performance tests with KVM-virtualized CentOSes. One
thing I noticed is that guest I/O performance seems to be significantly
better for virtio-based block devices (drives) if the cache=none
argument is used. (This was with a rather powerful storage system
backend which is hard to saturate.)

So: why isn't cache=none the default for drives?


Is that the right question? Or is the right question "Why is cache=none
faster?"


What did you use for measuring the performance? I have found in the past
that the virtio block device was slower than IDE block device emulation.


Gordan


Re: Shouldn't cache=none be the default for drives?

2010-04-07 Thread Thomas Mueller
On Wed, 07 Apr 2010 16:39:41 +0200, Troels Arvin wrote:

 Hello,
 
 I'm conducting some performance tests with KVM-virtualized CentOSes. One
 thing I noticed is that guest I/O performance seems to be significantly
 better for virtio-based block devices (drives) if the cache=none
 argument is used. (This was with a rather powerful storage system
 backend which is hard to saturate.)
 
 So: why isn't cache=none the default for drives?

A while ago I suffered poor performance with virtio and Win2008.

This helped a lot:

I enabled the deadline block scheduler instead of the default cfq on the
host system. Tested with: host Debian with scheduler deadline, guest
Win2008 with virtio and cache=none (26MB/s to 50MB/s boost measured).
Maybe this is also true for Linux/Linux.

I expect that the noop scheduler would be a good choice for Linux guests.
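A sketch of how that guest-side suggestion could be applied (the virtio device name `vda` is illustrative):

```shell
# At runtime, per device, inside the guest:
echo noop > /sys/block/vda/queue/scheduler

# Or for all guest devices at boot, via the guest's kernel command line
# (e.g. appended in its grub config):
#   elevator=noop
```

The reasoning would be that the host scheduler already reorders requests, so a second round of reordering inside the guest buys little.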

- Thomas

