On 1/23/24 02:36, Christoph Hellwig wrote:
> Now that the block layer tracks a separate user max discard limit, there
> is no need to prevent nvme from updating it on controller capability
> changes.
> 
> Signed-off-by: Christoph Hellwig <h...@lst.de>
> ---
>  drivers/nvme/host/core.c | 10 ----------
>  1 file changed, 10 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 85ab0fcf9e8864..ef70268dccbc5a 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1754,16 +1754,6 @@ static void nvme_config_discard(struct nvme_ctrl *ctrl, struct gendisk *disk,
>       BUILD_BUG_ON(PAGE_SIZE / sizeof(struct nvme_dsm_range) <
>                       NVME_DSM_MAX_RANGES);
>  
> -     /*
> -      * If discard is already enabled, don't reset queue limits.
> -      *
> -      * This works around the fact that the block layer can't cope well with
> -      * updating the hardware limits when overridden through sysfs.  This is
> -      * harmless because discard limits in NVMe are purely advisory.
> -      */
> -     if (queue->limits.max_discard_sectors)
> -             return;
> -
>       blk_queue_max_discard_sectors(queue, max_discard_sectors);

This function references max_user_discard_sectors, but that access is done
without holding the queue limits mutex. Is that safe?

>       if (ctrl->dmrl)
>               blk_queue_max_discard_segments(queue, ctrl->dmrl);

-- 
Damien Le Moal
Western Digital Research
