On 03/27/2015 11:21 AM, Richard W.M. Jones wrote:
>
> AIUI:
>
> We'd issue a drive-backup monitor command with an nbd:... target.
> The custom NBD server receives a stream of blocks (as writes).
>
> On the other side of this, libguestfs is also talking to the custom
> NBD server.  Libguestfs (which is really a qemu process) is issuing
> random reads.  There's no way for the NBD server or anything else to
> predict what blocks libguestfs will want to read in advance.
>
> In the middle of this is our custom NBD server, probably implemented
> using nbdkit.  It has to save all the writes from qemu.  It has to
> satisfy the reads from libguestfs, probably by blocking libguestfs
> unless we've seen the corresponding write.
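The core of that intermediary could be sketched roughly like this (illustrative Python, not nbdkit's actual plugin API): writes from qemu land in an in-memory store, and a read blocks until the corresponding write has been seen.

```python
import threading

class BlockingBlockStore:
    """Toy model of the custom NBD server's core logic: qemu's
    drive-backup writes populate a dict keyed by offset; a read
    from libguestfs blocks until that offset has been written."""

    def __init__(self):
        self._blocks = {}
        self._cond = threading.Condition()

    def write(self, offset, data):
        with self._cond:
            self._blocks[offset] = data
            self._cond.notify_all()  # wake any reader waiting on this offset

    def read(self, offset, timeout=None):
        with self._cond:
            # Block the reader until the write for this offset arrives.
            if not self._cond.wait_for(lambda: offset in self._blocks,
                                       timeout=timeout):
                raise TimeoutError("block at offset %d never arrived" % offset)
            return self._blocks[offset]
```

This makes the problem visible: readers can stall for an arbitrarily long time, and every write has to be retained whether or not anyone ever reads it.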
Well, it only has to store the sectors touched by the guest in the meantime, not the entire disk. But yes, a busy guest can cause a lot of sectors to be written in the meantime.

> The NBD server is going to be (a) storing huge quantities of temporary
> data which we'll mostly not use, and (b) blocking libguestfs for
> arbitrary periods of time.  This doesn't sound very lightweight to me.

Hmm. Sounds a bit like we want to take advantage of postcopy migration smarts, where the destination receives the full stream of writes at low priority, but can interject and request out-of-order reads at high priority to satisfy page faults. All reads are guaranteed to resolve to the correct data, even if it means blocking the read until the out-of-order page fault is read in, but the out-of-order processing means that you don't have to wait for the full stream to complete before you get the information you need at the moment.

Is NBD bi-directional, in that the target can receive write requests at the same time it is sending read requests? It sounds like that is what we need. Or are we stuck with NBD being uni-directional, where the target can receive read and write commands, but can't send any commands back to the client in charge of the data being written?

If that's the case, maybe the work in qemu 2.4 towards persistent dirty bitmaps can help: set up a bitmap before starting the NBD server, to track what the guest is dirtying. With the bitmap in place, then for every sector you want, you first read it directly from the source image, THEN check the persistent dirty bitmap to see if the sector has been marked for transfer to NBD. If so, then you'll have to wait for it to show up on the NBD target side; if not, then the guest hasn't touched it yet, so you know what you read is correct.
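That bitmap-assisted read strategy could be sketched like this (hypothetical names; the real pieces would be qemu's bitmap and the NBD target):

```python
def read_sector(offset, read_source, is_dirty, read_nbd):
    """Sketch of the dirty-bitmap read strategy described above.

    read_source(offset) -- read the sector from the static source image
    is_dirty(offset)    -- query the persistent dirty bitmap
    read_nbd(offset)    -- read (possibly blocking) from the NBD target

    Read the source copy first, THEN check the bitmap: if the guest
    dirtied the sector, the source copy is stale and we must wait for
    the rewritten data on the NBD side.  Checking the bitmap after the
    read closes the race where the guest dirties the sector between
    the two steps."""
    data = read_source(offset)
    if is_dirty(offset):
        # Stale: block until the rewritten sector shows up via NBD.
        data = read_nbd(offset)
    return data
```

The read-then-check ordering is the important design choice: a sector marked clean at check time was clean at read time too, so the source copy is safe to return.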
That still doesn't help optimize away the writes to the NBD target for sectors you don't care about, and it doesn't quite address the desire to prioritize random reads over the linear streaming of dirty blocks, but it might help.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
_______________________________________________
Libguestfs mailing list
Libguestfs@redhat.com
https://www.redhat.com/mailman/listinfo/libguestfs