On Wed, Nov 22, 2017 at 04:30:41PM -0800, Solio Sarabia wrote:
> The netdevice gso_max_size is exposed to give users fine-grained control
> on systems with multiple NICs that have different GSO buffer sizes, and
> where virtual devices like bridge and veth need to be aware of the GSO
> size of the underlying devices.
> 
> In a virtualized environment, setting the right GSO sizes for physical
> and virtual devices keeps all TSO work on the physical NIC, improving
> throughput and reducing CPU utilization. If virtual devices send buffers
> larger than what the NIC supports, the host is forced to perform
> software segmentation for the buffers exceeding the limit, increasing
> CPU utilization in the host.
> 
> Suggested-by: Shiny Sebastian <shiny.sebast...@intel.com>
> Signed-off-by: Solio Sarabia <solio.sara...@intel.com>
> ---
> In one test scenario with a Hyper-V host, an Ubuntu 16.04 VM, Docker
> inside the VM, and NTttcp sending 40 Gbps from one container, setting
> the right gso_max_size values for all network devices in the chain
> reduces sender CPU overhead by about 3x, since all TSO work is done by
> the physical NIC.
> 
>  net/core/net-sysfs.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
> index 799b752..7314bc8 100644
> --- a/net/core/net-sysfs.c
> +++ b/net/core/net-sysfs.c
> @@ -376,6 +376,35 @@ static ssize_t gro_flush_timeout_store(struct device *dev,
>  }
>  NETDEVICE_SHOW_RW(gro_flush_timeout, fmt_ulong);
>  
> +static int change_gso_max_size(struct net_device *dev, unsigned long new_size)
> +{
> +     unsigned int orig_size = dev->gso_max_size;
> +
> +     if (new_size != (unsigned int)new_size)
> +             return -ERANGE;
> +
> +     if (new_size == orig_size)
> +             return 0;
> +
> +     if (new_size <= 0 || new_size > GSO_MAX_SIZE)
> +             return -ERANGE;
> +
> +     dev->gso_max_size = new_size;
> +     return 0;
> +}
In hindsight, we need to re-evaluate the valid range. As it stands, in a
virtualized environment users could set gso_max_size to a value greater
than what the NIC exposes, which would reinstate the original issue:
overhead in the host OS caused by a configuration value set inside the VM.

> +
> +static ssize_t gso_max_size_store(struct device *dev,
> +                               struct device_attribute *attr,
> +                               const char *buf, size_t len)
> +{
> +     if (!capable(CAP_NET_ADMIN))
> +             return -EPERM;
> +
> +     return netdev_store(dev, attr, buf, len, change_gso_max_size);
> +}
> +
> +NETDEVICE_SHOW_RW(gso_max_size, fmt_dec);
> +
>  static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
>                            const char *buf, size_t len)
>  {
> @@ -543,6 +572,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
>       &dev_attr_flags.attr,
>       &dev_attr_tx_queue_len.attr,
>       &dev_attr_gro_flush_timeout.attr,
> +     &dev_attr_gso_max_size.attr,
>       &dev_attr_phys_port_id.attr,
>       &dev_attr_phys_port_name.attr,
>       &dev_attr_phys_switch_id.attr,
> -- 
> 2.7.4
> 
