On 08/03/2022 15:04, Stefan Hajnoczi wrote:
> On Tue, Feb 08, 2022 at 09:35:12AM -0500, Emanuele Giuseppe Esposito wrote:
>> diff --git a/include/qemu/job.h b/include/qemu/job.h
>> index ca46e46f5b..574110a1f2 100644
>> --- a/include/qemu/job.h
>> +++ b/include/qemu/job.h
>> @@ -75,11 +75,14 @@ typedef struct Job {
>>      ProgressMeter progress;
>>  
>>  
>> +    /** Protected by job_mutex */
>> +
>>      /**
>>       * AioContext to run the job coroutine in.
>> -     * This field can be read when holding either the BQL (so we are in
>> -     * the main loop) or the job_mutex.
>> -     * Instead, it can be only written when we hold *both* BQL and job_mutex.
>> +     * The job Aiocontext can be read when holding *either*
>> +     * the BQL (so we are in the main loop) or the job_mutex.
>> +     * Instead, it can only be written when we hold *both* BQL
> 
> s/Instead,//
> 
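To make the intended discipline explicit, here is a minimal sketch (not
part of the patch; "new_ctx" is just a placeholder) that uses only
job_lock()/job_unlock() from this series:

  /* Reader outside the main loop: holding job_mutex alone is enough */
  job_lock();
  AioContext *ctx = job->aio_context;
  job_unlock();

  /* Writer: only from the main loop (BQL held), and additionally with
   * job_mutex held, so readers relying on either lock stay safe.
   */
  job_lock();
  job->aio_context = new_ctx;
  job_unlock();
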
>> @@ -456,7 +440,10 @@ void job_unref_locked(Job *job)
>>  
>>          if (job->driver->free) {
>>              job_unlock();
>> +            /* FIXME: aiocontext lock is required because cb calls blk_unref */
>> +            aio_context_acquire(job->aio_context);
> 
> We break the locking rules for job->aio_context here because the job is
> already being destroyed and there can be no races? Can we avoid the
> special case:
> 
>   AioContext *aio_context = job->aio_context;
> 
>   job_unlock();
>   /* FIXME: aiocontext lock is required because cb calls blk_unref */
>   aio_context_acquire(aio_context);
>   job->driver->free(job);
>   aio_context_release(aio_context);
>   job_lock();
> 
> ?
> 
Makes sense.
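
So the hunk would end up roughly like this (untested; the surrounding
if-block is copied from the quoted context and may still need adjusting
on top of the rest of the series):

  if (job->driver->free) {
      AioContext *aio_context = job->aio_context;

      job_unlock();
      /* FIXME: aiocontext lock is required because cb calls blk_unref */
      aio_context_acquire(aio_context);
      job->driver->free(job);
      aio_context_release(aio_context);
      job_lock();
  }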

Emanuele

