> This patch implements persistent grants for the xen-blk{front,back}
> mechanism. The effect of this change is to reduce the number of unmap
> operations performed, since they cause a (costly) TLB shootdown. This
> allows the I/O performance to scale better when a large number of VMs
> are performing I/O.
>
> Previously, the blkfront driver was supplied a bvec[] from the request
> queue. This was granted to dom0; dom0 performed the I/O, wrote directly
> into the grant-mapped memory, and unmapped it; blkfront then removed
> foreign access for that grant. The cost of unmapping scales badly with
> the number of CPUs in Dom0. An experiment showed that when Dom0 has
> 24 VCPUs, and guests are performing parallel I/O to a ramdisk, the IPIs
> from performing unmaps become a bottleneck at 5 guests (at which point
> 650,000 IOPS are being performed in total). Beyond 5 guests the
> performance declines; by 10 guests, only 400,000 IOPS are being
> performed.
>
> This patch improves performance by only unmapping when the connection
> between blkfront and back is broken.
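In outline, the change trades a grant/unmap cycle per request for a pool
of pages that stay granted to the backend for the lifetime of the
connection, with request data copied through them. Below is a minimal
sketch of the frontend side of that idea -- not the actual patch: the
struct layout and pool bookkeeping are heavily simplified, error handling
is elided, and the grant-table call signatures vary between kernel
versions.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <xen/grant_table.h>
#include <asm/xen/page.h>	/* pfn_to_mfn(), PV-specific */

/* Reduced, hypothetical view of the frontend state; the real
 * struct blkfront_info carries far more than this. */
struct blkfront_info {
	domid_t backend_id;
	struct list_head persistent_gnts;
};

struct persistent_gnt {
	struct page *page;	/* page shared with the backend          */
	grant_ref_t gref;	/* grant, valid for the whole connection */
	struct list_head node;
};

/* Set up the pool once, when the frontend connects.  Each page is
 * granted exactly once; dom0 maps it once and keeps the mapping. */
static int blkfront_setup_persistent(struct blkfront_info *info, int nr)
{
	while (nr--) {
		struct persistent_gnt *gnt = kzalloc(sizeof(*gnt), GFP_KERNEL);

		if (!gnt)
			return -ENOMEM;
		gnt->page = alloc_page(GFP_KERNEL);
		if (!gnt->page) {
			kfree(gnt);
			return -ENOMEM;
		}
		gnt->gref = gnttab_grant_foreign_access(info->backend_id,
				pfn_to_mfn(page_to_pfn(gnt->page)), 0);
		list_add(&gnt->node, &info->persistent_gnts);
	}
	return 0;
}

/* Per request: reuse an already-granted page, so no new grant and,
 * crucially, no unmap in dom0.  Write path shown; the read path copies
 * out of the page on completion.  Real code would track free vs.
 * in-flight entries; here we assume the pool is non-empty. */
static grant_ref_t blkfront_map_segment(struct blkfront_info *info,
					void *data, size_t len)
{
	struct persistent_gnt *gnt =
		list_first_entry(&info->persistent_gnts,
				 struct persistent_gnt, node);

	memcpy(page_address(gnt->page), data, len);
	return gnt->gref;	/* backend already has this page mapped */
}

/* Only here, at disconnect, are the grants revoked, so the costly
 * unmap/TLB shootdown in dom0 happens once per connection rather than
 * once per I/O.  Passing the address lets the grant code free the page. */
static void blkfront_free_persistent(struct blkfront_info *info)
{
	struct persistent_gnt *gnt, *n;

	list_for_each_entry_safe(gnt, n, &info->persistent_gnts, node) {
		gnttab_end_foreign_access(gnt->gref, 0,
				(unsigned long)page_address(gnt->page));
		list_del(&gnt->node);
		kfree(gnt);
	}
}

The trade-off is an extra memcpy per segment in the frontend, which the
numbers quoted above suggest is far cheaper than the IPI-driven TLB
shootdown triggered by each unmap in a many-VCPU dom0.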
I assume network drivers would suffer from the same affliction... Would a
more general persistent-map solution be worth considering (or be
possible)? That is, a common interface to this persistent mapping,
allowing the persistent pool to be shared between all the drivers in the
DomU?

James
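For concreteness, a DomU-wide pool of the sort suggested here might
expose an interface along these lines. Every name below is hypothetical;
no such common API exists, and the patch under discussion does not
propose one.

#include <linux/gfp.h>
#include <xen/interface/xen.h>		/* domid_t     */
#include <xen/interface/grant_table.h>	/* grant_ref_t */

/* Purely illustrative: a shared pool of persistently granted pages
 * that blkfront, netfront and other frontends could draw from. */
struct xen_pgnt;			/* opaque pooled page */

/* Borrow a page that is (or becomes) persistently granted to the given
 * backend domain; it is granted at most once over the pool's lifetime. */
struct xen_pgnt *xen_pgnt_get(domid_t backend_domid, gfp_t gfp);

/* The grant reference to place on a ring; remains valid until the pool
 * itself is torn down, so the backend never needs to unmap per request. */
grant_ref_t xen_pgnt_ref(struct xen_pgnt *pgnt);

/* Kernel-virtual address for copying request data in and out. */
void *xen_pgnt_address(struct xen_pgnt *pgnt);

/* Return the page to the pool; the grant is deliberately NOT revoked. */
void xen_pgnt_put(struct xen_pgnt *pgnt);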