On Thu, Sep 29, 2016 at 01:33:07PM +0200, Kevin Wolf wrote:
> Am 28.09.2016 um 14:16 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > > I think jobs will need to remain "one coroutine, one job" for now,
> > > but there's no reason why drive-backup or blockdev-backup can't
> > > just create multiple jobs each if that's what they need to do.
> > > (The backup job object could, in theory, just have another job
> > > pointer to a helper job if it really became necessary.)
>
> What's the problem with a job spawning additional coroutines internally?
> Jobs already do this all the time (AIO requests are just coroutines in
> disguise.)
>
> A job is basically just the user interface for managing a background
> process, so helper jobs that are managed internally rather than by the
> user don't seem to make that much sense.
Internal blockjobs do have use cases. The COLO replication code uses them,
and that's good because it avoids code duplication. While internal blockjob
users don't need to be hooked up to the QMP monitor, they may still want
other features like rate limiting, pause/resume, and the ready/complete
lifecycle, which are implemented in blockjob.c and are not part of plain
old coroutines.

A layering model that supports this is:

1. User-initiated blockjobs, exposed via QMP
2. Blockjob core API
3. mirror, commit, etc.

It should be possible for internal users like COLO replication to talk to
the blockjob core directly. That way the functionality mentioned above is
available to them. If there is a valid reason for a blockjob to be composed
of other block jobs, then it could be done properly with this layering
model. I don't know whether there are valid use cases for that, though ;).

Stefan