On 18/09/2014 04:36, Fam Zheng wrote:
> +        QTAILQ_FOREACH_SAFE(ref, &req->cancel_deps, next, next) {
> +            SCSIRequest *r = ref->req;
> +            assert(r->cancel_dep_count);
> +            r->cancel_dep_count--;
> +            if (!r->cancel_dep_count && req->bus->info->cancel_dep_complete) {
> +                req->bus->info->cancel_dep_complete(r);
> +            }
> +            QTAILQ_REMOVE(&req->cancel_deps, ref, next);
> +            scsi_req_unref(r);
> +            g_free(ref);
> +        }
I think there is one problem here.  scsi_req_cancel_async can actually be
synchronous if you're unlucky (because bdrv_aio_cancel_async can be
synchronous too, for example in the case of a linux-aio AIOCB).  So you
could end up calling cancel_dep_complete even if the caller intends to
cancel more requests.

I think it's better to track the count in virtio-scsi instead.  You can
initialize it similarly to bdrv_aio_multiwrite:

    /* Run the aio requests. */
    mcb->num_requests = num_reqs;
    for (i = 0; i < num_reqs; i++) {
        bdrv_co_aio_rw_vector(bs, reqs[i].sector, reqs[i].qiov,
                              reqs[i].nb_sectors, reqs[i].flags,
                              multiwrite_cb, mcb, true);
    }

    return 0;

and decrement the count on every call to the notifier.

This is independent of the choice to make bdrv_aio_cancel_async semantics
stricter.  In the case of bdrv_aio_multiwrite, we know that
bdrv_co_aio_rw_vector is never synchronous, but the code is simply nicer
that way. :)

Paolo