Stephen Hemminger <step...@networkplumber.org> writes:

> On Tue, 31 Oct 2017 14:42:02 +0100
> Vitaly Kuznetsov <vkuzn...@redhat.com> wrote:
>
>> @@ -2002,7 +2002,9 @@ static int netvsc_probe(struct hv_device *dev,
>>      device_info.recv_sections = NETVSC_DEFAULT_RX;
>>      device_info.recv_section_size = NETVSC_RECV_SECTION_SIZE;
>>  
>> +    rtnl_lock();
>>      nvdev = rndis_filter_device_add(dev, &device_info);
>> +    rtnl_unlock();
>
> rtnl is not necessary here; probe cannot race with other configuration changes.
>

Yes, this is only to support rtnl_dereference() down the stack.
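To illustrate the kind of caller I mean (a rough sketch with made-up
example_dev/example_filter types, not the actual netvsc code):

/*
 * Illustrative only: rtnl_dereference(p) is
 * rcu_dereference_protected(p, lockdep_rtnl_is_held()), so any code
 * reading an RCU-managed pointer through it must hold rtnl_lock().
 */
#include <linux/rtnetlink.h>
#include <linux/rcupdate.h>
#include <linux/errno.h>

struct example_filter {
        int state;
};

struct example_dev {
        struct example_filter __rcu *filter;    /* RCU-managed pointer */
};

static int example_setup(struct example_dev *edev)
{
        struct example_filter *filt;

        ASSERT_RTNL();  /* the caller must hold rtnl_lock() */

        /* No rcu_read_lock() needed: rtnl serializes all updaters. */
        filt = rtnl_dereference(edev->filter);
        if (!filt)
                return -ENODEV;

        return 0;
}

Without rtnl_lock() in netvsc_probe() the rtnl_dereference() further down
would trigger a lockdep splat, which is why the lock is taken there.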

>> --- a/drivers/net/hyperv/rndis_filter.c
>> +++ b/drivers/net/hyperv/rndis_filter.c
>> @@ -402,20 +402,27 @@ int rndis_filter_receive(struct net_device *ndev,
>>                       void *data, u32 buflen)
>>  {
>>      struct net_device_context *net_device_ctx = netdev_priv(ndev);
>> -    struct rndis_device *rndis_dev = net_dev->extension;
>> +    struct rndis_device *rndis_dev;
>>      struct rndis_message *rndis_msg = data;
>> +    int ret = 0;
>> +
>> +    rcu_read_lock_bh();
>> +
>> +    rndis_dev = rcu_dereference_bh(net_dev->extension);
>
> filter_receive is already called only from NAPI context, which runs with the
> RCU read lock held and soft irqs disabled. This is not necessary.
>
>> -    net_dev->extension = NULL;
>> +    rcu_assign_pointer(net_dev->extension, NULL);
>> +
>> +    synchronize_rcu();
>
> rcu_assign_pointer with NULL is never a good idea.
> And synchronize_rcu is slow. Since net_device is already protected
> by RCU (for deletion) it should not be necessary.
>

I thought speed wasn't a big concern for this patch: rndis_filter_device_remove()
is only called on device removal/MTU change/... and it already has to interact
with the host, which is slow anyway.
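If we do keep the RCU approach, I believe the usual idiom for the NULL
assignment is RCU_INIT_POINTER(); roughly something like this (again a
made-up example_dev/example_filter sketch, not the real driver structures):

/*
 * Rough sketch, not the actual rndis_filter code. RCU_INIT_POINTER()
 * is the documented way to publish NULL (no memory barrier needed),
 * and synchronize_net() is the usual networking wrapper around
 * synchronize_rcu() on teardown paths.
 */
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_filter {
        int state;
};

struct example_dev {
        struct example_filter __rcu *filter;
};

static void example_teardown(struct example_dev *edev)
{
        struct example_filter *filt;

        ASSERT_RTNL();  /* assumes the caller holds rtnl_lock() */

        filt = rtnl_dereference(edev->filter);

        /* Publishing NULL needs no barrier, so no rcu_assign_pointer(). */
        RCU_INIT_POINTER(edev->filter, NULL);

        /* Wait for in-flight readers such as the NAPI receive path. */
        synchronize_net();

        kfree(filt);
}

That only addresses the assignment style, of course -- whether the grace
period is needed at all is the part I still have to look into.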

> Thank you for trying to address these races. But it should be
> done carefully, not by just slapping RCU everywhere.

OK, I may have missed something. I'll try to reproduce the crash and come up
with a more fine-grained solution.

Thanks for the feedback!

-- 
  Vitaly
