Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
> On Sun, Nov 23, 2014 at 08:23:21PM -0500, David Miller wrote:
> > From: "Michael S. Tsirkin"
> > Date: Sun, 23 Nov 2014 22:30:32 +0200
> >
> > > qemu runs in the host, but it's unprivileged: it gets
> > > passed tun FDs by a privileged daemon, and it only
> > > has the rights to some operations,
> > > in particular to attach and detach queues.
> > >
> > > The assumption always was that this operation is safe
> > > and can't make the kernel run out of resources.
> >
> > This creates a rather ridiculous situation in my opinion.
> >
> > Configuring a network device is a privileged operation; the daemon
> > should be setting this thing up.
> >
> > In no other context would we have to worry about something like this.
>
> Right. Jason corrected me. I got it wrong:
> what qemu does is TUNSETQUEUE, and that needs to get a queue
> that's already initialized by the daemon.
>
> To create new queues the daemon calls TUNSETIFF,
> and that can already be used to create new devices,
> so it's a privileged operation.
>
> This means it's safe to just drop the restriction,
> exactly as you suggested originally.

I will drop patch 2 (adding the sysctl entry) and will send a v2 with the
other patches.

Thanks,
Pankaj

> --
> MST
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Sun, Nov 23, 2014 at 08:23:21PM -0500, David Miller wrote:
> From: "Michael S. Tsirkin"
> Date: Sun, 23 Nov 2014 22:30:32 +0200
>
> > qemu runs in the host, but it's unprivileged: it gets
> > passed tun FDs by a privileged daemon, and it only
> > has the rights to some operations,
> > in particular to attach and detach queues.
> >
> > The assumption always was that this operation is safe
> > and can't make the kernel run out of resources.
>
> This creates a rather ridiculous situation in my opinion.
>
> Configuring a network device is a privileged operation; the daemon
> should be setting this thing up.
>
> In no other context would we have to worry about something like this.

Right. Jason corrected me. I got it wrong:
what qemu does is TUNSETQUEUE, and that needs to get a queue
that's already initialized by the daemon.

To create new queues the daemon calls TUNSETIFF,
and that can already be used to create new devices,
so it's a privileged operation.

This means it's safe to just drop the restriction,
exactly as you suggested originally.

--
MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Mon, Nov 24, 2014 at 11:23:05AM +0800, Jason Wang wrote:
> On 11/23/2014 06:46 PM, Michael S. Tsirkin wrote:
> > On Wed, Nov 19, 2014 at 10:44:27PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Nov 19, 2014 at 03:16:28PM -0500, David Miller wrote:
> > > > From: Pankaj Gupta
> > > > Date: Tue, 18 Nov 2014 21:52:54 +0530
> > > >
> > > > > - Accept maximum number of queues as sysctl param so that any user space
> > > > >   application like libvirt can use this value to limit number of queues.
> > > > >   Also administrators can specify maximum number of queues by updating
> > > > >   this sysctl entry.
> > > >
> > > > This is the only part I don't like.
> > > >
> > > > Just let whoever has privileges to configure the tun device shoot
> > > > themselves in the foot if they want to by configuring "too many"
> > > > queues.
> > > >
> > > > If the virtual entity runs itself out of resources by doing something
> > > > stupid, it's purely their problem.
> > >
> > > Well it will run the host out of kernel memory, no?
> >
> > To clarify:
> >
> > At the moment attaching/detaching queues is an unprivileged operation.
> >
> > Shouldn't we worry that an application can cause large
> > allocations, and provide a way to limit these?
>
> But creating new queues (TUNSETIFF) is privileged. There's no way for an
> unprivileged user to allocate more resources. So we are safe here?

Hmm, that's true, I think I was confused.
Thanks for setting me straight.

--
MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On 11/23/2014 06:46 PM, Michael S. Tsirkin wrote:
> On Wed, Nov 19, 2014 at 10:44:27PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Nov 19, 2014 at 03:16:28PM -0500, David Miller wrote:
> > > From: Pankaj Gupta
> > > Date: Tue, 18 Nov 2014 21:52:54 +0530
> > >
> > > > - Accept maximum number of queues as sysctl param so that any user space
> > > >   application like libvirt can use this value to limit number of queues.
> > > >   Also administrators can specify maximum number of queues by updating
> > > >   this sysctl entry.
> > >
> > > This is the only part I don't like.
> > >
> > > Just let whoever has privileges to configure the tun device shoot
> > > themselves in the foot if they want to by configuring "too many"
> > > queues.
> > >
> > > If the virtual entity runs itself out of resources by doing something
> > > stupid, it's purely their problem.
> >
> > Well it will run the host out of kernel memory, no?
>
> To clarify:
>
> At the moment attaching/detaching queues is an unprivileged operation.
>
> Shouldn't we worry that an application can cause large
> allocations, and provide a way to limit these?

But creating new queues (TUNSETIFF) is privileged. There's no way for an
unprivileged user to allocate more resources. So we are safe here?
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
From: "Michael S. Tsirkin"
Date: Sun, 23 Nov 2014 22:30:32 +0200

> qemu runs in the host, but it's unprivileged: it gets
> passed tun FDs by a privileged daemon, and it only
> has the rights to some operations,
> in particular to attach and detach queues.
>
> The assumption always was that this operation is safe
> and can't make the kernel run out of resources.

This creates a rather ridiculous situation in my opinion.

Configuring a network device is a privileged operation; the daemon
should be setting this thing up.

In no other context would we have to worry about something like this.
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Sun, Nov 23, 2014 at 01:43:23PM -0500, David Miller wrote:
> From: "Michael S. Tsirkin"
> Date: Sun, 23 Nov 2014 12:46:23 +0200
>
> > At the moment attaching/detaching queues is an unprivileged operation.
> >
> > Shouldn't we worry that an application can cause large
> > allocations, and provide a way to limit these?
> >
> > David, could you comment on this please?
>
> I don't want arbitrary limits imposed.
>
> Where does this "application" run? If it's in the host, then who
> cares? If they suck up all of their available memory with queue
> resources, it's their problem.

qemu runs in the host, but it's unprivileged: it gets
passed tun FDs by a privileged daemon, and it only
has the rights to some operations,
in particular to attach and detach queues.

The assumption always was that this operation is safe
and can't make the kernel run out of resources.

--
MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
From: "Michael S. Tsirkin"
Date: Sun, 23 Nov 2014 12:46:23 +0200

> At the moment attaching/detaching queues is an unprivileged operation.
>
> Shouldn't we worry that an application can cause large
> allocations, and provide a way to limit these?
>
> David, could you comment on this please?

I don't want arbitrary limits imposed.

Where does this "application" run? If it's in the host, then who
cares? If they suck up all of their available memory with queue
resources, it's their problem.
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Wed, Nov 19, 2014 at 10:44:27PM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 19, 2014 at 03:16:28PM -0500, David Miller wrote:
> > From: Pankaj Gupta
> > Date: Tue, 18 Nov 2014 21:52:54 +0530
> >
> > > - Accept maximum number of queues as sysctl param so that any user space
> > >   application like libvirt can use this value to limit number of queues.
> > >   Also administrators can specify maximum number of queues by updating
> > >   this sysctl entry.
> >
> > This is the only part I don't like.
> >
> > Just let whoever has privileges to configure the tun device shoot
> > themselves in the foot if they want to by configuring "too many"
> > queues.
> >
> > If the virtual entity runs itself out of resources by doing something
> > stupid, it's purely their problem.
>
> Well it will run the host out of kernel memory, no?

To clarify:

At the moment attaching/detaching queues is an unprivileged operation.

Shouldn't we worry that an application can cause large
allocations, and provide a way to limit these?

David, could you comment on this please?

> --
> MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
> On Wed, Nov 19, 2014 at 03:16:28PM -0500, David Miller wrote:
> > From: Pankaj Gupta
> > Date: Tue, 18 Nov 2014 21:52:54 +0530
> >
> > > - Accept maximum number of queues as sysctl param so that any user space
> > >   application like libvirt can use this value to limit number of queues.
> > >   Also administrators can specify maximum number of queues by updating
> > >   this sysctl entry.
> >
> > This is the only part I don't like.
> >
> > Just let whoever has privileges to configure the tun device shoot
> > themselves in the foot if they want to by configuring "too many"
> > queues.
> >
> > If the virtual entity runs itself out of resources by doing something
> > stupid, it's purely their problem.

We can configure some fixed number of queues, like 16 or so, at the start
if multiqueue is enabled, and then if somebody needs more than that, they
can do it on their own. If we don't want anyone to directly increase the
number of queues, we could add an ioctl command for increasing the number
of queues? Suggestions welcome.

> Well it will run the host out of kernel memory, no?
>
> --
> MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Wed, Nov 19, 2014 at 03:16:28PM -0500, David Miller wrote:
> From: Pankaj Gupta
> Date: Tue, 18 Nov 2014 21:52:54 +0530
>
> > - Accept maximum number of queues as sysctl param so that any user space
> >   application like libvirt can use this value to limit number of queues.
> >   Also administrators can specify maximum number of queues by updating
> >   this sysctl entry.
>
> This is the only part I don't like.
>
> Just let whoever has privileges to configure the tun device shoot
> themselves in the foot if they want to by configuring "too many"
> queues.
>
> If the virtual entity runs itself out of resources by doing something
> stupid, it's purely their problem.

Well it will run the host out of kernel memory, no?

--
MST
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
From: Pankaj Gupta
Date: Tue, 18 Nov 2014 21:52:54 +0530

> - Accept maximum number of queues as sysctl param so that any user space
>   application like libvirt can use this value to limit number of queues.
>   Also administrators can specify maximum number of queues by updating
>   this sysctl entry.

This is the only part I don't like.

Just let whoever has privileges to configure the tun device shoot
themselves in the foot if they want to by configuring "too many"
queues.

If the virtual entity runs itself out of resources by doing something
stupid, it's purely their problem.
Re: [PATCH net-next 0/4] Increase the limit of tuntap queues
On Tue, Nov 18, 2014 at 8:22 AM, Pankaj Gupta wrote:
> Networking under KVM works best if we allocate a per-vCPU rx and tx
> queue in a virtual NIC. This requires a per-vCPU queue on the host side.
...
> I have done some testing to check for regressions with a sample program
> which creates tun/tap single queue / multiqueue devices, and it seems to
> be working fine. I will also post the performance numbers.

Sounds quite exciting. Please share the recommended setup (queue
assignments) and the performance gains you're seeing.
[PATCH net-next 0/4] Increase the limit of tuntap queues
This patch series is a followup to the RFC posted at:
https://lkml.org/lkml/2014/8/18/392

Changes from the RFC:
PATCH 1: Sergei Shtylyov - Add an empty line after declarations.
PATCH 2: Jiri Pirko - Do not introduce new module parameters.
         Michael S. Tsirkin - We can use sysctl for limiting the maximum
         number of queues.

Networking under KVM works best if we allocate a per-vCPU rx and tx queue
in a virtual NIC. This requires a per-vCPU queue on the host side. Modern
physical NICs have multiqueue support for a large number of queues. To
scale a vNIC to run multiple queues in parallel up to the maximum number
of vCPUs, we need to increase the number of queues supported in tuntap.

This series increases the limit of tuntap queues. The original work was
done by 'jasow...@redhat.com'; I am taking the patch series
'https://lkml.org/lkml/2013/6/19/29' as a reference. As per the discussion
in that series, two issues prevented us from increasing the number of tun
queues:

- The netdev_queue array in the netdevice was allocated through kmalloc,
  which may cause a high-order memory allocation when we have several
  queues. E.g. sizeof(struct netdev_queue) is 320 bytes, which means a
  high-order allocation happens when the device has more than 16 queues.

- We store the hash buckets in tun_struct, which results in a very large
  tun_struct; this high-order memory allocation fails easily when memory
  is fragmented.

Commit 60877a32bce00041528576e6b8df5abe9251fa73 already increased the
number of tx queues by falling back to vzalloc() when kmalloc() fails.

This series tries to address the following issues:

- Increase the number of netdev_queue queues for rx, similarly to what is
  done for tx queues, by falling back to vzalloc() when memory allocation
  with kmalloc() fails.

- Switch to a flex array to implement the flow caches, to avoid
  higher-order allocations.

- Accept the maximum number of queues as a sysctl param so that any user
  space application like libvirt can use this value to limit the number
  of queues. Administrators can also specify the maximum number of queues
  by updating this sysctl entry.

- Increase the number of queues to 256; the maximum number is equal to
  the maximum number of vCPUs allowed in a guest.

I have done some testing to check for regressions with a sample program
which creates tun/tap single queue / multiqueue devices, and it seems to
be working fine. I will also post the performance numbers.

tuntap: Increase the number of queues in tun
tuntap: Reduce the size of tun_struct by using flex array
tuntap: Accept tuntap max queue length as sysctl entry
net: allow large number of rx queues

 drivers/net/tun.c | 91 +++-
 net/core/dev.c    | 19 ++---
 2 files changed, 86 insertions(+), 24 deletions(-)