On 10/30/2013 10:50 AM, Stefan Hajnoczi wrote:
> On Fri, Oct 25, 2013 at 05:01:54PM +0200, Jack Wang wrote:
>> We've seen guest block I/O lost in a VM. Any response would be helpful.
>>
>> The environment is:
>> guest OS: Ubuntu 13.04
>> running a busy database workload with xfs on a disk exported with
>> virtio-blk
>>
>> The exported vdb has very high in-flight I/O, over 300. Some time
>> later a lot of I/O processes are in D state; it looks like a lot of
>> requests are lost in the storage stack below.
>
> Is the image file on a local file system or are you using a network
> storage system (e.g. NFS, Gluster, Ceph, Sheepdog)?
>
> If you run "vmstat 5" inside the guest, do you see "bi"/"bo" block I/O
> activity? If that number is very low or zero then there may be a
> starvation problem. If that number is reasonable then the workload is
> simply bottlenecked on disk I/O.
>
> virtio-blk only has 128 descriptors available so it's not possible to
> have 300 requests pending at the virtio-blk layer.
>
> If you suspect QEMU, try building qemu.git/master from source in case
> the bug has already been fixed.
>
> If you want to trace I/O requests, you might find this blog post on
> writing trace analysis scripts useful:
> http://blog.vmsplice.net/2011/03/how-to-write-trace-analysis-scripts-for.html
>
> Stefan

Thanks, Stefan, for your valuable input.
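As a quick check for the starvation case you describe, something along
these lines could run inside the guest. It is only a rough sketch of what
"vmstat 5" already reports, assuming /proc/vmstat's pgpgin/pgpgout
counters (which the bi/bo columns are derived from):

import time

def read_pgpg():
    # /proc/vmstat lines are "name value"; pgpgin/pgpgout are in KB.
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            stats[name] = int(value)
    return stats["pgpgin"], stats["pgpgout"]

prev_in, prev_out = read_pgpg()
while True:
    time.sleep(5)
    cur_in, cur_out = read_pgpg()
    print("bi: %d KB/5s  bo: %d KB/5s" % (cur_in - prev_in, cur_out - prev_out))
    prev_in, prev_out = cur_in, cur_out

If the deltas stay at zero while the database workload sits in D state,
that would point at starvation rather than a plain disk bottleneck.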
The image is on a device exported over InfiniBand SRP/SRPT. I will follow
your suggestions and investigate further.

The 300 in-flight I/Os I mentioned come from /proc/diskstats field 9
("# of I/Os currently in progress").
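For reference, a minimal sketch of how that counter can be read, assuming
the standard /proc/diskstats layout of major, minor, device name followed
by 11 stat fields (the device name "vdb" is just our guest's disk):

import sys

def inflight(dev):
    # Field 9 of the stats ("# of I/Os currently in progress") is token
    # index 11 when a /proc/diskstats line is split on whitespace.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[11])
    raise ValueError("device %s not found in /proc/diskstats" % dev)

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "vdb"
    print("%s: %d I/Os currently in progress" % (dev, inflight(dev)))

As I understand it, that counter includes requests still queued in the
guest block layer, not only those dispatched to the driver, which would
explain how it can exceed the 128 virtio-blk descriptors you mention.

Jack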