I will drop this patch: badrange_add() is rarely called, so such a
trivial performance improvement is not worth pursuing.

On 2020/8/20 22:30, Zhen Lei wrote:
> Currently, struct badrange_entry has three members: start, length, and
> list. In append_badrange_entry(), "start" and "length" are assigned
> after allocation, and "list" does not need to be initialized before
> being passed to list_add_tail(). Therefore, the kzalloc() calls in
> badrange_add() and alloc_and_append_badrange_entry() can be replaced
> with kmalloc(), because zero initialization is not required.
> 
> Signed-off-by: Zhen Lei <thunder.leiz...@huawei.com>
> ---
>  drivers/nvdimm/badrange.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvdimm/badrange.c b/drivers/nvdimm/badrange.c
> index 7f78b659057902d..13145001c52ff39 100644
> --- a/drivers/nvdimm/badrange.c
> +++ b/drivers/nvdimm/badrange.c
> @@ -37,7 +37,7 @@ static int alloc_and_append_badrange_entry(struct badrange *badrange,
>  {
>       struct badrange_entry *bre;
>  
> -     bre = kzalloc(sizeof(*bre), flags);
> +     bre = kmalloc(sizeof(*bre), flags);
>       if (!bre)
>               return -ENOMEM;
>  
> @@ -49,7 +49,7 @@ int badrange_add(struct badrange *badrange, u64 addr, u64 length)
>  {
>       struct badrange_entry *bre, *bre_new;
>  
> -     bre_new = kzalloc(sizeof(*bre_new), GFP_KERNEL);
> +     bre_new = kmalloc(sizeof(*bre_new), GFP_KERNEL);
>  
>       spin_lock(&badrange->lock);
>  
> 
