I have attached a graph that shows the results of my benchmarking. The setup is:
- Xeon X5650
- 32GB of RAM
- Xen 4.2
- Linux 3.5.0 for dom0 and the domUs
- Dom0 has 24 CPUs.
- Each guest has a (separate) xvdb backed by a 1GB ramdisk (/dev/ramX) in dom0. No LVM on these.
- Initially 1 guest does an fio sequential read of its ramdisk, then 2 guests, then 3, and so on (example fio job below).
- The y-axis is the sum of the IOPS, as reported by the guests' fio output.
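Each guest runs an fio job along these lines; the exact parameters aren't in this mail, so treat the block size, queue depth and runtime below as illustrative rather than the precise values used:

[seqread-xvdb]
filename=/dev/xvdb
rw=read
direct=1
ioengine=libaio
iodepth=32
bs=4k
runtime=60
time_based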
On Wed, 2012-09-19 at 14:16 +0100, Pasi Kärkkäinen wrote:
> On Wed, Sep 19, 2012 at 11:51:27AM +0100, Oliver Chick wrote:
> > This patch implements persistent grants for the xen-blk{front,back}
> > mechanism. The effect of this change is to reduce the number of unmap
> > operations performed, since they cause a (costly) TLB shootdown. This
> > allows the I/O performance to scale better when a large number of VMs
> > are performing I/O.
> >
> > Previously, the blkfront driver was supplied a bvec[] from the request
> > queue. This was granted to dom0; dom0 performed the I/O and wrote
> > directly into the grant-mapped memory and unmapped it; blkfront then
> > removed foreign access for that grant. The cost of unmapping scales
> > badly with the number of CPUs in Dom0. An experiment showed that when
> > Dom0 has 24 VCPUs, and guests are performing parallel I/O to a
> > ramdisk, the IPIs from performing unmap's is a bottleneck at 5 guests
> > (at which point 650,000 IOPS are being performed in total). If more
> > than 5 guests are used, the performance declines. By 10 guests, only
> > 400,000 IOPS are being performed.
> >
> > This patch improves performance by only unmapping when the connection
> > between blkfront and back is broken.
> >
>
> So how many IOPS can you get with this patch / persistent grants ?
>
> -- Pasi
>
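For anyone joining the thread here, the quoted description boils down to caching grant mappings instead of tearing them down per request. A minimal user-space sketch of that reuse pattern follows; it is only an illustration of the caching idea, not the actual blkfront/blkback code. The names pool_init/get_segment/put_segment are invented for this sketch, and malloc/free stand in for the grant map/unmap steps:

/*
 * Illustrative sketch only: models the persistent-grant idea in user space.
 * The expensive step avoided in the real patch is the per-request unmap
 * (a TLB shootdown across dom0's VCPUs); here it only happens at teardown.
 */
#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE 32          /* segments kept persistently "mapped" */
#define SEG_SIZE  4096        /* one grant = one 4k page */

struct segment {
    void *buf;                /* stand-in for the grant-mapped page */
    int   in_use;
};

static struct segment pool[POOL_SIZE];

/* One-time setup: "grant and map" every segment up front. */
static void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].buf = malloc(SEG_SIZE);   /* stand-in for the map step */
        pool[i].in_use = 0;
    }
}

/* Per request: reuse an already-mapped segment; no map/unmap on this path. */
static struct segment *get_segment(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            return &pool[i];
        }
    }
    return NULL;              /* pool exhausted; real code would fall back */
}

static void put_segment(struct segment *seg)
{
    seg->in_use = 0;          /* data copied out; the mapping stays in place */
}

/* Teardown only when the front/back connection is broken. */
static void pool_destroy(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        free(pool[i].buf);    /* stand-in for the single unmap */
}

int main(void)
{
    pool_init();
    for (int req = 0; req < 1000; req++) {
        struct segment *seg = get_segment();
        if (!seg)
            continue;
        /* ... "I/O" into seg->buf would happen here ... */
        put_segment(seg);
    }
    pool_destroy();
    printf("served 1000 requests with %d persistent segments\n", POOL_SIZE);
    return 0;
}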
pers-vs-nonpers.pdf
Description: Adobe PDF document