Re: [Qemu-block] backup bug or question
12.08.2019 19:49, Kevin Wolf wrote:
> Am 12.08.2019 um 18:09 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 12.08.2019 16:23, Kevin Wolf wrote:
>>> Am 09.08.2019 um 15:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
>>>> Hi!
>>>>
>>>> Hmm, hacking around backup I have a question:
>>>>
>>>> What prevents guest write request after job_start but before setting
>>>> write notifier?
>>>>
>>>> code path:
>>>>
>>>> qmp_drive_backup or transaction with backup
>>>>
>>>> job_start
>>>>    aio_co_enter(job_co_entry) /* may only schedule execution, isn't it? */
>>>>
>>>> job_co_entry
>>>>    job_pause_point() /* it definitely yields, isn't it bad? */
>>>>    job->driver->run() /* backup_run */
>>>>
>>>> backup_run()
>>>>    bdrv_add_before_write_notifier()
>>>>
>>>> ...
>>>>
>>>> And what guarantees we give to the user? Is it guaranteed that write
>>>> notifier is set when qmp command returns?
>>>>
>>>> And I guess, if we start several backups in a transaction it should be
>>>> guaranteed that the set of backups is consistent and correspond to one
>>>> point in time...
>>>
>>> Do the patches to switch backup to a filter node solve this
>>> automatically because that node would be inserted in
>>> backup_job_create()?
>>>
>>
>> Hmm, great, looks like they should. At least it moves scope of the
>> problem to do_drive_backup and do_blockdev_backup functions..
>>
>> Am I right that aio_context_acquire/aio_context_release guarantees no
>> new request created during the section? Or should we add
>> drained_begin/drained_end pair, or at least drain() at start of
>> qmp_blockdev_backup and qmp_drive_backup?
>
> Holding the AioContext lock should be enough for this.
>
> But note that it doesn't make a difference if new requests are actually
> incoming. The timing of the QMP command to start a backup job versus the
> timing of guest requests is essentially random. QEMU doesn't know what
> guest requests you mean to be included in the backup and which you don't
> unless you stop sending new requests well ahead of time.
>
> If you send a QMP request to start a backup, the backup will be
> consistent for some arbitrary point in time between the time that you
> sent the QMP request and the time that you received the reply to it.
>
> Draining in the QMP command handler wouldn't change any of this, because
> even the drain section starts at some arbitrary point in time.

Hmm, and it doesn't even guarantee that requests started before the qmp
command are taken into the backup, as they may have been started from the
guest's point of view, but not yet have reached QEMU..

>
>> Assume scenario like the this,
>>
>> 1. fsfreeze
>> 2. qmp backup
>> 3. fsthaw
>>
>> to make sure that backup starting point is consistent. So in our qmp
>> command we should:
>> 1. complete all current requests to make drives corresponding to fsfreeze
>> point
>> 2. initialize write-notifiers or filter before any new guest request,
>> i.e. before fsthaw, i.e. before qmp command return.
>
> If I understand correctly, fsfreeze only returns success after it has
> made sure that the guest has quiesced the device. So at any point
> between receiving the successful return of the fsfreeze and calling
> fsthaw, the state should be consistent.
>
>> Transactions should be OK, as they use drained_begin/drained_end
>> pairs, and additional aio_context_acquire/aio_context_release pairs.
>
> Here, draining is actually important because you don't synchronise
> against something external that you don't control anyway, but you just
> make sure that you start the backup of all disks at the same point in
> time (which is still an arbitrary point between the time that you send
> the transaction QMP command and the time that you receive success), even
> if no fsfreeze/fsthaw was used.
>
> Kevin

OK, thanks for the explanation!

--
Best regards,
Vladimir
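The transaction property described above (all backups starting at one point
in time thanks to draining, even without fsfreeze/fsthaw) can be pictured
with a schematic sketch. This is not the qmp_transaction() code; the helper
start_backups_at_one_point() and its arguments are invented for
illustration, and it assumes the filter or notifier is installed at
job-creation time:

/* Illustrative only; qmp_transaction() is structured differently. */
static void start_backups_at_one_point(BlockDriverState **bs_list,
                                       DriveBackup **backups, int n,
                                       Error **errp)
{
    int i;

    /* quiesce every disk: in-flight requests complete, new ones wait */
    for (i = 0; i < n; i++) {
        bdrv_drained_begin(bs_list[i]);
    }

    /* create all backup jobs; they all see the same, single point in time */
    for (i = 0; i < n; i++) {
        do_drive_backup(backups[i], NULL, errp);
    }

    /* let guest I/O continue; every backup's filter/notifier is in place */
    for (i = 0; i < n; i++) {
        bdrv_drained_end(bs_list[i]);
    }
}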
Re: [Qemu-block] backup bug or question
Am 12.08.2019 um 18:09 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 12.08.2019 16:23, Kevin Wolf wrote:
> > Am 09.08.2019 um 15:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >> Hi!
> >>
> >> Hmm, hacking around backup I have a question:
> >>
> >> What prevents guest write request after job_start but before setting
> >> write notifier?
> >>
> >> code path:
> >>
> >> qmp_drive_backup or transaction with backup
> >>
> >> job_start
> >>    aio_co_enter(job_co_entry) /* may only schedule execution, isn't it? */
> >>
> >> job_co_entry
> >>    job_pause_point() /* it definitely yields, isn't it bad? */
> >>    job->driver->run() /* backup_run */
> >>
> >> backup_run()
> >>    bdrv_add_before_write_notifier()
> >>
> >> ...
> >>
> >> And what guarantees we give to the user? Is it guaranteed that write
> >> notifier is set when qmp command returns?
> >>
> >> And I guess, if we start several backups in a transaction it should be
> >> guaranteed that the set of backups is consistent and correspond to one
> >> point in time...
> >
> > Do the patches to switch backup to a filter node solve this
> > automatically because that node would be inserted in
> > backup_job_create()?
> >
>
> Hmm, great, looks like they should. At least it moves scope of the
> problem to do_drive_backup and do_blockdev_backup functions..
>
> Am I right that aio_context_acquire/aio_context_release guarantees no
> new request created during the section? Or should we add
> drained_begin/drained_end pair, or at least drain() at start of
> qmp_blockdev_backup and qmp_drive_backup?

Holding the AioContext lock should be enough for this.

But note that it doesn't make a difference if new requests are actually
incoming. The timing of the QMP command to start a backup job versus the
timing of guest requests is essentially random. QEMU doesn't know what
guest requests you mean to be included in the backup and which you don't
unless you stop sending new requests well ahead of time.

If you send a QMP request to start a backup, the backup will be
consistent for some arbitrary point in time between the time that you
sent the QMP request and the time that you received the reply to it.

Draining in the QMP command handler wouldn't change any of this, because
even the drain section starts at some arbitrary point in time.

> Assume scenario like the this,
>
> 1. fsfreeze
> 2. qmp backup
> 3. fsthaw
>
> to make sure that backup starting point is consistent. So in our qmp
> command we should:
> 1. complete all current requests to make drives corresponding to fsfreeze
> point
> 2. initialize write-notifiers or filter before any new guest request,
> i.e. before fsthaw, i.e. before qmp command return.

If I understand correctly, fsfreeze only returns success after it has
made sure that the guest has quiesced the device. So at any point
between receiving the successful return of the fsfreeze and calling
fsthaw, the state should be consistent.

> Transactions should be OK, as they use drained_begin/drained_end
> pairs, and additional aio_context_acquire/aio_context_release pairs.

Here, draining is actually important because you don't synchronise
against something external that you don't control anyway, but you just
make sure that you start the backup of all disks at the same point in
time (which is still an arbitrary point between the time that you send
the transaction QMP command and the time that you receive success), even
if no fsfreeze/fsthaw was used.

Kevin
Re: [Qemu-block] backup bug or question
12.08.2019 16:23, Kevin Wolf wrote:
> Am 09.08.2019 um 15:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> Hi!
>>
>> Hmm, hacking around backup I have a question:
>>
>> What prevents guest write request after job_start but before setting
>> write notifier?
>>
>> code path:
>>
>> qmp_drive_backup or transaction with backup
>>
>> job_start
>>    aio_co_enter(job_co_entry) /* may only schedule execution, isn't it? */
>>
>> job_co_entry
>>    job_pause_point() /* it definitely yields, isn't it bad? */
>>    job->driver->run() /* backup_run */
>>
>> backup_run()
>>    bdrv_add_before_write_notifier()
>>
>> ...
>>
>> And what guarantees we give to the user? Is it guaranteed that write
>> notifier is set when qmp command returns?
>>
>> And I guess, if we start several backups in a transaction it should be
>> guaranteed that the set of backups is consistent and correspond to one
>> point in time...
>
> Do the patches to switch backup to a filter node solve this
> automatically because that node would be inserted in
> backup_job_create()?
>

Hmm, great, looks like they should. At least it moves the scope of the
problem to the do_drive_backup and do_blockdev_backup functions..

Am I right that aio_context_acquire/aio_context_release guarantees that no
new request is created during the section? Or should we add a
drained_begin/drained_end pair, or at least drain(), at the start of
qmp_blockdev_backup and qmp_drive_backup?

Assume a scenario like this:

1. fsfreeze
2. qmp backup
3. fsthaw

to make sure that the backup starting point is consistent. So in our qmp
command we should:
1. complete all current requests, so that the drives correspond to the
   fsfreeze point
2. initialize the write-notifiers or the filter before any new guest
   request, i.e. before fsthaw, i.e. before the qmp command returns.

Transactions should be OK, as they use drained_begin/drained_end
pairs, and additional aio_context_acquire/aio_context_release pairs.

--
Best regards,
Vladimir
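A minimal sketch of the drain proposed here, assuming (as with the
filter-node patches mentioned above) that the filter or notifier is
installed inside do_drive_backup()/backup_job_create(). The handler body
is hypothetical and compressed; only the aio_context_acquire()/
bdrv_drained_begin() bracketing is the point:

/* Hypothetical sketch, not the actual qmp_drive_backup() body. */
void qmp_drive_backup(DriveBackup *backup, Error **errp)
{
    BlockDriverState *bs;
    AioContext *aio_context;
    BlockJob *job;

    bs = bdrv_lookup_bs(backup->device, backup->device, errp);
    if (!bs) {
        return;
    }

    aio_context = bdrv_get_aio_context(bs);
    aio_context_acquire(aio_context);   /* serialises against the iothread */
    bdrv_drained_begin(bs);             /* in-flight guest requests complete */

    /*
     * Assumption: with the filter-node patches the filter (or notifier)
     * is inserted here, inside backup_job_create(), i.e. still inside
     * the drained section.
     */
    job = do_drive_backup(backup, NULL, errp);
    if (job) {
        job_start(&job->job);           /* may only schedule the coroutine */
    }

    bdrv_drained_end(bs);               /* new guest writes hit the filter */
    aio_context_release(aio_context);
}

With plain write notifiers the bracketing alone would not help, because the
notifier is only installed later in backup_run(), which is exactly the
window described in the original message.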
Re: [Qemu-block] backup bug or question
Am 09.08.2019 um 15:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
> Hi!
>
> Hmm, hacking around backup I have a question:
>
> What prevents guest write request after job_start but before setting
> write notifier?
>
> code path:
>
> qmp_drive_backup or transaction with backup
>
> job_start
>    aio_co_enter(job_co_entry) /* may only schedule execution, isn't it? */
>
> job_co_entry
>    job_pause_point() /* it definitely yields, isn't it bad? */
>    job->driver->run() /* backup_run */
>
> backup_run()
>    bdrv_add_before_write_notifier()
>
> ...
>
> And what guarantees we give to the user? Is it guaranteed that write
> notifier is set when qmp command returns?
>
> And I guess, if we start several backups in a transaction it should be
> guaranteed that the set of backups is consistent and correspond to one
> point in time...

Do the patches to switch backup to a filter node solve this
automatically because that node would be inserted in
backup_job_create()?

Kevin
[Qemu-block] backup bug or question
Hi!

Hmm, hacking around backup I have a question:

What prevents a guest write request after job_start but before the write
notifier is set?

Code path:

qmp_drive_backup or transaction with backup

job_start
   aio_co_enter(job_co_entry)   /* may only schedule execution, isn't it? */

job_co_entry
   job_pause_point()            /* it definitely yields, isn't that bad? */
   job->driver->run()           /* backup_run */

backup_run()
   bdrv_add_before_write_notifier()

...

And what guarantees do we give to the user? Is it guaranteed that the write
notifier is set when the qmp command returns?

And I guess, if we start several backups in a transaction, it should be
guaranteed that the set of backups is consistent and corresponds to one
point in time...

--
Best regards,
Vladimir
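To make the window concrete, here is a condensed sketch of that call chain.
It is not a copy of the QEMU sources (error handling and most fields are
omitted, and the real bodies differ); it only marks where guest writes can
slip in before the notifier exists:

static void coroutine_fn job_co_entry(void *opaque)
{
    Job *job = opaque;

    job_pause_point(job);                 /* may yield back to the loop */
    job->ret = job->driver->run(job, &job->err);
    /* for backup this is backup_run(), which only now calls
     * bdrv_add_before_write_notifier(); guest writes issued before this
     * point are not intercepted */
}

void job_start(Job *job)
{
    job->co = qemu_coroutine_create(job_co_entry, job);
    /* aio_co_enter() may merely schedule the coroutine in the job's
     * AioContext instead of running it to the first yield, so job_start(),
     * and with it the QMP command handler, can return before backup_run()
     * has installed the before-write notifier */
    aio_co_enter(job->aio_context, job->co);
}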