On Wed, Dec 2, 2015 at 9:43 PM, Geyslan G. Bem <[email protected]> wrote:
> Replace dma_pool_alloc and memset with a single call to dma_pool_zalloc.
>
> Caught by coccinelle.
I would mention which script was used, but other than that:
Acked-by: Peter Senna Tschudin <[email protected]>
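
For reference, a semantic patch along these lines should catch the
dma_pool_alloc() + memset(..., 0, ...) pattern (just a sketch, not
necessarily the exact script that was run):

// sketch: fold dma_pool_alloc() followed by a zeroing memset()
// into a single dma_pool_zalloc() call
@@
expression x, a, b, c;
statement S;
@@

- x = dma_pool_alloc(a, b, c);
+ x = dma_pool_zalloc(a, b, c);
  if (!x) S
- memset(x, 0, ...);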

>
> Signed-off-by: Geyslan G. Bem <[email protected]>
> ---
>  drivers/usb/host/xhci-mem.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
> index c48cbe7..d034f92 100644
> --- a/drivers/usb/host/xhci-mem.c
> +++ b/drivers/usb/host/xhci-mem.c
> @@ -47,13 +47,12 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
>         if (!seg)
>                 return NULL;
>
> -       seg->trbs = dma_pool_alloc(xhci->segment_pool, flags, &dma);
> +       seg->trbs = dma_pool_zalloc(xhci->segment_pool, flags, &dma);
>         if (!seg->trbs) {
>                 kfree(seg);
>                 return NULL;
>         }
>
> -       memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
>         /* If the cycle state is 0, set the cycle bit to 1 for all the TRBs */
>         if (cycle_state == 0) {
>                 for (i = 0; i < TRBS_PER_SEGMENT; i++)
> @@ -517,12 +516,12 @@ static struct xhci_container_ctx *xhci_alloc_container_ctx(struct xhci_hcd *xhci
>         if (type == XHCI_CTX_TYPE_INPUT)
>                 ctx->size += CTX_SIZE(xhci->hcc_params);
>
> -       ctx->bytes = dma_pool_alloc(xhci->device_pool, flags, &ctx->dma);
> +       ctx->bytes = dma_pool_zalloc(xhci->device_pool, flags, &ctx->dma);
>         if (!ctx->bytes) {
>                 kfree(ctx);
>                 return NULL;
>         }
> -       memset(ctx->bytes, 0, ctx->size);
> +
>         return ctx;
>  }
>
> --
> 2.6.2
>



-- 
Peter