On 01/15/2014 10:12 PM, 叶绍琛 wrote:
Hi Josh
# strings /usr/bin/qemu-system-x86_64 | grep rbd_aio
rbd_aio_write
rbd_aio_flush
rbd_aio_read
rbd_aio_create_completion
rbd_aio_release
rbd_aio_discard
rbd_aio_get_return_value
So, librbd's asynchronous flush is being used.
I set log settings, fetc
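For reference, librbd client-side logging is usually enabled in the [client] section of ceph.conf; the levels and log path below are a sketch of typical settings, not the exact configuration used in this thread:

```ini
[client]
    # verbose client-side logging for librbd and librados
    debug rbd = 20
    debug rados = 20
    # $name/$pid expand per client process
    log file = /var/log/ceph/client.$name.$pid.log
```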
On 01/15/2014 01:40 AM, 叶绍琛 wrote:
Hi Josh
there are some issues:
1. with cache mode 'none' in the xml, and 'rbd cache=true' unset in
ceph.conf, the network latency issue does not show.
2. with cache mode 'writethrough' in the xml, and 'rbd cache=true' unset in
ceph.conf, the network latency issue does not show.
x.x.x.x:6789;x.x.x.x:6789
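For context, the cache mode referred to above is set on the libvirt disk element for the rbd volume; a sketch of such a definition (pool/image name is a placeholder, monitor addresses kept redacted as in the thread) might look like:

```xml
<disk type='network' device='disk'>
  <!-- cache='none' was one of the modes that avoided the latency issue -->
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='x.x.x.x' port='6789'/>
    <host name='x.x.x.x' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```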
Regards
Alan Ye
--
叶绍琛
Alan Ye
------------------ Original Message ------------------
*From:* "Josh Durgin";
*Date:* Tue, Jan 14, 2014, 2:24 AM
*To:* "Stefan Hajnoczi"; "叶绍琛";
*Cc:* "qemu-devel";
*Subject:*
On Mon, Jan 06, 2014 at 02:55:54PM +0800, 叶绍琛 wrote:
> hi, all:
>
> There is a problem when I use ceph rbd for qemu storage. I launch 4 virtual
> machines, and start a 5G random write test at the same time. Under such
> heavy I/O, the network to the virtual machines is almost unusable; the
> network latency is extremely big.
> I had tested another situation, when I use