On Thu, Aug 28, 2014 at 11:33:08AM +0100, Stefan Hajnoczi wrote:
> On Mon, Aug 18, 2014 at 05:41:03PM +0800, Liu Yuan wrote:
> > v6:
> >  - fix an unused warning introduced by the last version
> >
> > v5:
> >  - simplify a for loop in quorum_aio_finalize()
> >
> > v4:
> >  - swap the patch order
> >  - update comment for fifo pattern in qapi
> >  - use qapi enumeration in quorum driver instead of manual parsing
> >
> > v3:
> >  - separate patch into two, one for quorum and one for qapi, for
> >    easier review
> >  - add enumeration for quorum read pattern
> >  - remove unrelated blank line fix from this patch set
> >
> > v2:
> >  - rename single as 'fifo'
> >  - rename read_all_children as read_quorum_children
> >  - fix quorum_aio_finalize() for fifo pattern
> >
> > This patch adds a single read pattern to the quorum driver; quorum
> > vote remains the default pattern.
> >
> > For now we do a quorum vote on all reads. This is designed for
> > unreliable underlying storage, such as non-redundant NFS, to ensure
> > data integrity at the cost of read performance.
> >
> > Consider the following use case:
> >
> >          VM
> >     ----------
> >     |        |
> >     v        v
> >     A        B
> >
> > Both A and B have hardware RAID storage to guarantee data integrity
> > on their own, so it would help performance to do a single read
> > instead of reading from all the nodes. Further, if we run the VM on
> > either of the storage nodes, we can make a local read request for
> > better performance.
> >
> > This patch generalizes the above 2-node case to N nodes. That is:
> >
> > vm -> write to all N nodes, read from just one of them. If the
> > single read fails, we try the next node in the FIFO order specified
> > by the startup command.
> >
> > The 2-node case is very similar to DRBD [1], though it lacks
> > automatic resynchronization after a single device/node failure for
> > now.
> > But compared with DRBD we still have some advantages:
> >
> > - Suppose we have 20 VMs running on one node (say A) of a 2-node
> >   DRBD-backed storage. If A crashes, we need to restart all 20 VMs
> >   on node B. In practice we may not be able to, because B might not
> >   have enough resources to set up 20 VMs at once. If we instead run
> >   our 20 VMs with the quorum driver and scatter the replicated
> >   images over the data center, we can very likely restart the 20
> >   VMs without any resource problem.
> >
> > After all, I think we can build a more powerful replicated image
> > functionality on top of quorum and block jobs (block mirror) to
> > meet various High Availability needs.
> >
> > E.g., enable the single read pattern on 2 children:
> >
> >   -drive driver=quorum,children.0.file.filename=0.qcow2,\
> >   children.1.file.filename=1.qcow2,read-pattern=fifo,vote-threshold=1
> >
> > [1] http://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device
> >
> > Cc: Benoit Canet <ben...@irqsave.net>
> > Cc: Eric Blake <ebl...@redhat.com>
> > Cc: Kevin Wolf <kw...@redhat.com>
> > Cc: Stefan Hajnoczi <stefa...@redhat.com>
> >
> > Liu Yuan (2):
> >   qapi: add read-pattern enum for quorum
> >   block/quorum: add simple read pattern support
> >
> >  block/quorum.c       | 177 +++++++++++++++++++++++++++++++++++++--------------
> >  qapi/block-core.json |  20 +++++-
> >  2 files changed, 148 insertions(+), 49 deletions(-)
>
> I dropped the \n from the error_setg() error message while merging.
> Please do not use \n with error_setg().
Thanks for your fix.

> Please extend the quorum qemu-iotests to cover the new fifo read
> pattern. You can send the tests as a separate patch series.

Okay, will do later.

Yuan