> Hello.
> 
> On 08/18/2014 05:37 PM, Pankaj Gupta wrote:
> 
> > netif_alloc_rx_queues() uses kcalloc() to allocate memory
> > for the "struct netdev_rx_queue *_rx" array.
> > For a large rx queue allocation, kcalloc() might fail,
> > so this patch falls back to vzalloc().
> > A similar implementation already exists for tx queue
> > allocation in netif_alloc_netdev_queues().
> 
> > Falling back to vzalloc() avoids the failure of high-order
> > memory allocations; this allows large rx and tx queue
> > allocations, which in turn lets us increase the number of
> > queues in tun.
> 
> > As vmalloc() adds overhead on a critical network path,
> > the __GFP_REPEAT flag is used with kzalloc() so that the
> > fallback to vzalloc() happens only when really needed.
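
For reference, a minimal sketch of the fallback pattern the patch
applies (the allocation hunk itself is not quoted below, so this is
an illustration of the scheme, not the literal diff):

	static int netif_alloc_rx_queues(struct net_device *dev)
	{
		unsigned int i, count = dev->num_rx_queues;
		struct netdev_rx_queue *rx;
		size_t sz = count * sizeof(*rx);

		BUG_ON(count < 1);

		/* Try physically contiguous memory first; __GFP_REPEAT
		 * asks the allocator to try harder before failing, and
		 * __GFP_NOWARN suppresses the allocation-failure warning
		 * since we have a fallback.
		 */
		rx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
		if (!rx) {
			/* High-order allocation failed; fall back to
			 * vmalloc()ed, i.e. virtually contiguous, memory.
			 */
			rx = vzalloc(sz);
			if (!rx)
				return -ENOMEM;
		}
		dev->_rx = rx;

		for (i = 0; i < count; i++)
			rx[i].dev = dev;
		return 0;
	}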
> 
> > Signed-off-by: Pankaj Gupta <pagu...@redhat.com>
> > Reviewed-by: Michael S. Tsirkin <m...@redhat.com>
> > Reviewed-by: David Gibson <dgib...@redhat.com>
> > ---
> >   net/core/dev.c | 20 +++++++++++++-------
> >   1 file changed, 13 insertions(+), 7 deletions(-)
> 
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 1c15b18..a455a02 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -5942,17 +5942,24 @@ void netif_stacked_transfer_operstate(const struct net_device *rootdev,
> >   EXPORT_SYMBOL(netif_stacked_transfer_operstate);
> >
> >   #ifdef CONFIG_SYSFS
> > +static void netif_free_rx_queues(struct net_device *dev)
> > +{
> > +   kvfree(dev->_rx);
> > +}
> > +
> >   static int netif_alloc_rx_queues(struct net_device *dev)
> >   {
> >     unsigned int i, count = dev->num_rx_queues;
> >     struct netdev_rx_queue *rx;
> > -
> > +   size_t sz = count * sizeof(*rx);
> 
>    Please keep an empty line after declarations.
     Yes, I will add it. Thanks.
> 
> >     BUG_ON(count < 1);
> >
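
(Note: the new netif_free_rx_queues() above can call kvfree()
unconditionally because kvfree() checks whether the pointer is
vmalloc()ed and dispatches to vfree() or kfree() accordingly, so
the free path does not need to remember which allocator succeeded.)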
> 
> WBR, Sergei
> 
> 