On 2/1/23 7:43 AM, Liu, Yi L wrote:
>> From: Jason Gunthorpe <j...@nvidia.com>
>> Sent: Wednesday, February 1, 2023 4:26 AM
>>
>> On Tue, Jan 31, 2023 at 03:06:35PM -0500, Matthew Rosato wrote:
>>> @@ -799,13 +794,14 @@
>> EXPORT_SYMBOL_GPL(vfio_file_enforced_coherent);
>>>  void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
>>>  {
>>>     struct vfio_group *group = file->private_data;
>>> +   unsigned long flags;
>>>
>>>     if (!vfio_file_is_group(file))
>>>             return;
>>>
>>> -   mutex_lock(&group->group_lock);
>>> +   spin_lock_irqsave(&group->kvm_ref_lock, flags);
>>>     group->kvm = kvm;
>>> -   mutex_unlock(&group->group_lock);
>>> +   spin_unlock_irqrestore(&group->kvm_ref_lock, flags);
>>
>> We know we are in a sleeping context here so these are just
>> 'spin_lock()', same with the other one
> 
> A dumb question: why is a spinlock required here? 😊
> 

You mean as opposed to another mutex?  I don't think a spinlock is required per se 
(we are replacing a mutex, so we could have used another mutex here), but all 
current users of this new lock hold it over a very short window (e.g. setting a 
pointer as above, or doing refcount++ and copying the pointer as in the first 
device_open).
