On 08/23/2014 01:56 AM, Benoît Canet wrote:
The Friday 22 Aug 2014 à 18:59:38 (-0600), Chris Friesen wrote :
On 07/21/2014 10:10 AM, Benoît Canet wrote:
The Monday 21 Jul 2014 à 09:35:29 (-0600), Chris Friesen wrote :
On 07/21/2014 09:15 AM, Benoît Canet wrote:
The Monday 21 Jul 2014 à 08:59:45 (-0600), Chris Friesen wrote :
On 07/19/2014 02:45 AM, Benoît Canet wrote:
I think in the throttling case the number of in-flight operations is limited by
the emulated hardware queue. Otherwise requests would pile up and throttling would be
ineffective.
So this number should be around #define VIRTIO_PCI_QUEUE_MAX 64, or something
like that.
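If you want to sanity-check the queued-request limit from inside the guest, the block layer exposes its own request queue depth in sysfs (not exactly the virtio ring size, but a related bound). Something along these lines, with vda as a stand-in for the actual device name:
benoit@Laure:~$ cat /sys/block/vda/queue/nr_requests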
Okay, that makes sense. Do you know how much data can be written as part of
a single operation? We're using 2MB hugepages for the guest memory, and we
saw the qemu RSS numbers jump from 25-30MB during normal operation up to
120-180MB when running dbench. I'd like to know what the worst case would be.
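As a rough back-of-the-envelope guess (not a measurement), the guest's per-request size cap can be read from sysfs, e.g.
$ cat /sys/block/vda/queue/max_sectors_kb
If that is the common default of 512 KB and the queue allows ~64 requests, the data in flight would be on the order of 64 x 512KB = 32MB, which doesn't obviously account for the whole RSS jump.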
Sorry, I didn't understand this part on first read.
In the linux guest can you monitor:
benoit@Laure:~$ cat /sys/class/block/xyz/inflight ?
This would give us a fairly precise number of the requests actually in flight
between the guest and qemu.
After a bit of a break I'm looking at this again.
Strange.
I would use dd with the flag oflag=nocache to make sure the write requests
do not land in the guest cache, though.
I set up another test, checking the inflight value every second.
Running just "dd if=/dev/zero of=testfile2 bs=1M count=700
oflag=nocache&" gave a bit over 100 inflight requests.
If I simultaneously run "dd if=testfile of=/dev/null bs=1M count=700
oflag=nocache&" then the number of inflight write requests peaks at 176.
I should point out that the above numbers are with qemu 1.7.0, with a
ceph storage backend. qemu is started with
-drive file=rbd:cinder-volumes/.........
Chris