Hi Tariq and all

Many thanks for your kind and detailed responses and comments.

On 01/22/2018 12:24 AM, Tariq Toukan wrote:
> 
> 
> On 21/01/2018 11:31 AM, Tariq Toukan wrote:
>>
>>
>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>>> Hi Tariq
>>>>
>>>> Unfortunately, the crash was reproduced again after the patch was applied.
> 
> Memory barriers vary for different Archs, can you please share more details 
> regarding arch and repro steps?

The hardware is HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 12/27/2015.
Xen is installed, and the crash occurred in Dom0.
Regarding the repro steps: it is a customer test that does heavy disk I/O
over NFS storage, without any guests running.

The patch that fixes this issue is as follows:
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -1005,6 +1005,7 @@ out:
        wmb(); /* ensure HW sees CQ consumer before we post new buffers */
        ring->cons = cq->mcq.cons_index;
        mlx4_en_refill_rx_buffers(priv, ring);
+       wmb(); /* order refilled RX buffers before the doorbell record update */
        mlx4_en_update_rx_prod_db(ring);
        return polled;
 }
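
For reference, an equivalent placement that was suggested earlier in the
thread would fold the barrier into mlx4_en_update_rx_prod_db() itself, so
that every caller gets the ordering before the doorbell write. This is only
an untested sketch of that variant (using the stronger wmb() instead of the
dma_wmb() tried before), not the hunk the customer actually ran:

static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
{
	/* Order the descriptor writes done by mlx4_en_refill_rx_buffers()
	 * before the doorbell record update, so the HW never observes the
	 * new producer index ahead of the refilled buffers.
	 */
	wmb();
	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
}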

Thanks
Jianchao
> 
>>>>
>>>> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>>>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>>>> @@ -252,6 +252,7 @@ static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
>>>>   static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
>>>>   {
>>>> +    dma_wmb();
>>>
>>> So... is wmb() here fixing the issue ?
>>>
>>>>       *ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
>>>>   }
>>>>
>>>> I analyzed the kdump; it appears to be memory corruption.
>>>>
>>>> Thanks
>>>> Jianchao
>>
>> Hmm, this is actually consistent with the example below [1].
>>
>> As I understand from the example, it seems that the dma_wmb/dma_rmb barriers
>> are good for synchronizing CPU/device accesses to the streaming-DMA-mapped
>> buffers (the descriptors, which go through the dma_map_page() API), but not
>> for the doorbell (coherent memory, typically allocated via
>> dma_alloc_coherent()), which requires the stronger wmb() barrier.
>>
>>
>> [1] Documentation/memory-barriers.txt
>>
>>   (*) dma_wmb();
>>   (*) dma_rmb();
>>
>>       These are for use with consistent memory to guarantee the ordering
>>       of writes or reads of shared memory accessible to both the CPU and a
>>       DMA capable device.
>>
>>       For example, consider a device driver that shares memory with a device
>>       and uses a descriptor status value to indicate if the descriptor 
>> belongs
>>       to the device or the CPU, and a doorbell to notify it when new
>>       descriptors are available:
>>
>>      if (desc->status != DEVICE_OWN) {
>>          /* do not read data until we own descriptor */
>>          dma_rmb();
>>
>>          /* read/modify data */
>>          read_data = desc->data;
>>          desc->data = write_data;
>>
>>          /* flush modifications before status update */
>>          dma_wmb();
>>
>>          /* assign ownership */
>>          desc->status = DEVICE_OWN;
>>
>>          /* force memory to sync before notifying device via MMIO */
>>          wmb();
>>
>>          /* notify device of new descriptors */
>>          writel(DESC_NOTIFY, doorbell);
>>      }
>>
>>       The dma_rmb() allows us guarantee the device has released ownership
>>       before we read the data from the descriptor, and the dma_wmb() allows
>>       us to guarantee the data is written to the descriptor before the device
>>       can see it now has ownership.  The wmb() is needed to guarantee that 
>> the
>>       cache coherent memory writes have completed before attempting a write 
>> to
>>       the cache incoherent MMIO region.
>>
>>       See Documentation/DMA-API.txt for more information on consistent 
>> memory.
>>
>>
>>>> On 01/15/2018 01:50 PM, jianchao.wang wrote:
>>>>> Hi Tariq
>>>>>
>>>>> Thanks for your kind response.
>>>>>
>>>>> On 01/14/2018 05:47 PM, Tariq Toukan wrote:
>>>>>> Thanks Jianchao for your patch.
>>>>>>
>>>>>> And Thank you guys for your reviews, much appreciated.
>>>>>> I was off-work on Friday and Saturday.
>>>>>>
>>>>>> On 14/01/2018 4:40 AM, jianchao.wang wrote:
>>>>>>> Dear all
>>>>>>>
>>>>>>> Thanks for the kind responses and reviews. That's really appreciated.
>>>>>>>
>>>>>>> On 01/13/2018 12:46 AM, Eric Dumazet wrote:
>>>>>>>>> Does this need to be dma_wmb(), and should it be in
>>>>>>>>> mlx4_en_update_rx_prod_db ?
>>>>>>>>>
>>>>>>>>
>>>>>>>> +1 on dma_wmb()
>>>>>>>>
>>>>>>>> On what architecture bug was observed ?
>>>>>>>
>>>>>>> This issue was observed on x86-64.
>>>>>>> And I will send the customer a new patch, which replaces wmb() with
>>>>>>> dma_wmb(), so they can confirm.
>>>>>>
>>>>>> +1 on dma_wmb, let us know once customer confirms.
>>>>>> Please place it within mlx4_en_update_rx_prod_db as suggested.
>>>>>
>>>>> Yes, I have recommended it to the customer.
>>>>> Once I get the result, I will share it here.
>>>>>> All other calls to mlx4_en_update_rx_prod_db are in control/slow path so 
>>>>>> I prefer being on the safe side, and care less about bulking the barrier.
>>>>>>
>>>>>> Thanks,
>>>>>> Tariq
>>>>>>
> 
