On 27.02.2012 18:02, Jeff Cody wrote:
>>> +
>>> +    /* keep the same entry in bdrv_states */
>>> +    pstrcpy(tmp.device_name, sizeof(tmp.device_name), bs_top->device_name);
>>> +    tmp.list = bs_top->list;
>>> +
>>> +    /* swap contents of the fixed new bs and the current top */
>>> +    *bs_new = *bs_top;
>>> +    *bs_top = tmp;
>>> +
>>> +    bdrv_detach_dev(bs_new, bs_new->dev);
>>> +}
>>
>> The last line would actually deserve a comment /* clear the copied
>> fields in the new backing file */, which makes clear that there are
>> probably some more fields missing in this section.
>
> OK, added.
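As an aside, the swap-then-clear pattern under discussion can be sketched in isolation. This is only an illustration with a hypothetical, heavily simplified stand-in struct (the real BlockDriverState has many more fields, which is exactly the concern raised above):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical simplified stand-in for BlockDriverState; only the
 * fields relevant to the swap are modeled. */
typedef struct BDS {
    char device_name[32];
    struct BDS *next;   /* stand-in for the bdrv_states list entry */
    void *dev;          /* attached guest device, if any */
    int copy_on_read;   /* example of a field that must be cleared */
} BDS;

/* Sketch of the append: bs_top keeps its pointer identity, device
 * name, list membership and attached device, but receives the new
 * image's contents; bs_new ends up holding the old top's contents
 * and becomes the (anonymous) backing file. */
static void append_sketch(BDS *bs_new, BDS *bs_top)
{
    BDS tmp = *bs_new;

    /* keep the same entry in bdrv_states */
    strncpy(tmp.device_name, bs_top->device_name,
            sizeof(tmp.device_name) - 1);
    tmp.device_name[sizeof(tmp.device_name) - 1] = '\0';
    tmp.next = bs_top->next;

    /* the attached guest device stays with the device name */
    tmp.dev = bs_top->dev;

    /* swap contents of the new bs and the current top */
    *bs_new = *bs_top;
    *bs_top = tmp;

    /* clear the copied fields in the new backing file */
    bs_new->dev = NULL;
    bs_new->copy_on_read = 0;
}
```

The point of the review comment is that every field copied by `*bs_new = *bs_top` that belongs to the device rather than to the image (attached device, copy-on-read, I/O throttling state, ...) has to be cleared here, which is easy to miss without an explicit comment marking the section.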
Only the comment or also clearing other fields? For some of them it's
very obvious that they need to be cleared (copy on read, I/O
throttling).

>>> +    }
>>> +
>>> +    /*
>>> +     * Now we will drain, flush, & pivot everything - we are committed at this
>>> +     * point.
>>> +     */
>>> +    bdrv_drain_all();
>>
>> I would feel more comfortable if we could do the bdrv_drain_all() at the
>> very beginning of the function. Not that I know of a specific scenario
>> that would go wrong, but you have a nested main loop here that could do
>> more or less anything.
>
> I can move it up to the beginning if desired... My thought was that it
> was best to drain closer to the point of commit.

As long as we don't create new AIO requests here, drained is drained.
But anyway, I'm not requesting a change here, it was just a feeling.

>>
>>> +    QSIMPLEQ_FOREACH(states, &snap_bdrv_states, entry) {
>>> +        bdrv_flush(states->old_bs);
>>
>> This can return an error which must be checked. And of course, we must
>> do it before committing to the snapshot (but after bdrv_drain_all).
>
> Hmm. If the flush returns error, do we abandon at this point? Perhaps it
> would be best to loop through and flush each device first, and if no
> flushes fail, _then_ loop through and perform the bdrv_append(). Once we
> are calling bdrv_append(), we want no possible failure points.

Yes, this is what I meant. Sorry for the somewhat vague wording.

Kevin
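For reference, the two-pass structure agreed on above can be sketched as follows. SnapState and commit_snapshots are hypothetical simplified names for illustration, not the real QEMU code; the flush result is modeled as a plain field instead of a real bdrv_flush() call:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-device snapshot state, loosely modeled on the
 * patch under review. */
typedef struct SnapState {
    int flush_ret;   /* result a real bdrv_flush() would return */
    int pivoted;     /* set once the bdrv_append() pivot has run */
} SnapState;

/* First pass: check every flush result and abort on any error, before
 * anything is committed.  Second pass: with no failure points left,
 * perform all the pivots.  Returns 0 on success, -1 on flush error;
 * on error nothing has been pivoted. */
static int commit_snapshots(SnapState *states, size_t n)
{
    size_t i;

    /* pass 1: flush everything, bail out before committing on error */
    for (i = 0; i < n; i++) {
        if (states[i].flush_ret < 0) {
            return -1;
        }
    }

    /* pass 2: committed now, perform the pivots */
    for (i = 0; i < n; i++) {
        states[i].pivoted = 1;
    }
    return 0;
}
```

The design point is atomicity of the group snapshot: every operation that can fail happens before the first bdrv_append(), so the guest never sees some devices pivoted and others not.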