multiple WireGuard interfaces and kworker resources

2017-06-13 Thread nicolas prochazka
Hello,
for i in `seq 1 1000` ; do ip link add dev wg${i} type wireguard ; done
=> the kworker kernel thread uses about 50% CPU time; with 5000
interfaces, it reaches 100% CPU time

for i in `seq 1 1000` ; do ip link add dev ifb${i} type ifb ; done
(same result with ifb or dummy devices)
=> the kworker kernel thread stays below 1% CPU
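
For reference, a rough way to watch the kworker CPU usage while the
loop runs (a sketch; pidstat comes from the sysstat package, and the
busy kworker thread varies per system):

  pidstat -p $(pgrep -d, kworker) 1
  # or, more crudely:
  top -b -n 1 | grep kworker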

Is this normal behavior?
Version: WireGuard 0.0.20170409
Kernel: 4.9.23

Regards,
Nicolas Prochazka


Re: multiple WireGuard interfaces and kworker resources

2017-06-13 Thread Jason A. Donenfeld
Hi Nicolas,

I'll look into this. However, you need to update WireGuard to the
latest version, which is 0.0.20170613. I can't provide help for
outdated versions.

Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-13 Thread nicolas prochazka
Hello again,
with 0.0.20170613, I can reproduce the high kworker CPU time consumption.
Regards,
Nicolas

2017-06-13 14:48 GMT+02:00 Jason A. Donenfeld :
> Hi Nicolas,
>
> I'll look into this. However, you need to update WireGuard to the
> latest version, which is 0.0.20170613. I can't provide help for
> outdated versions.
>
> Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-13 Thread Jason A. Donenfeld
Hi Nicolas,

It looks to me like some resources are indeed expended in adding those
interfaces. Not so much that it should be problematic -- are you seeing
a problematic case? -- but still a non-trivial amount.

I tracked it down to WireGuard's instantiation of xt_hashlimit, which
does some ugly vmalloc, and its call into the power state
notification system, which uses a naive O(n) algorithm for insertion.
I might have a way of amortizing on module insertion, which would
speed things up. But I wonder -- what is the practical detriment of
spending a few extra cycles on `ip link add`? What's your use case
where this would actually be a problem?
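
If it helps to confirm where the kworker time is going on your side,
something like this should show it (a sketch; perf comes from
linux-tools, debugfs is assumed mounted, and the PID is whichever
kworker shows up as busy in top):

  perf top -g -p <kworker-pid>
  # or trace which work functions the workqueues are executing:
  echo 1 > /sys/kernel/debug/tracing/events/workqueue/workqueue_execute_start/enable
  cat /sys/kernel/debug/tracing/trace_pipe | head -n 50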

Thanks,
Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Hello,
after creating the wg interfaces, the kworker thread does not return
to a normal state in my case; the kernel thread continues to consume
a lot of CPU. I have to delete the WireGuard interfaces for the
kworker usage to decrease.

Nicolas

2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld :
> Hi Nicolas,
>
> It looks to me like some resources are indeed expended in adding those
> interfaces. Not so much that it should be problematic -- are you seeing
> a problematic case? -- but still a non-trivial amount.
>
> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
> does some ugly vmalloc, and its call into the power state
> notification system, which uses a naive O(n) algorithm for insertion.
> I might have a way of amortizing on module insertion, which would
> speed things up. But I wonder -- what is the practical detriment of
> spending a few extra cycles on `ip link add`? What's your use case
> where this would actually be a problem?
>
> Thanks,
> Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
At the moment we are using 3000 wg tunnels on a single WireGuard
interface, but now we want to split the tunnels per interface and per
group of clients, to manage QoS per WireGuard interface, among other
tasks. With a single interface it works well, but a test with 3000
interfaces causes trouble with CPU usage, load average, and overall
VM performance.
Regards,
Nicolas

2017-06-14 9:52 GMT+02:00 nicolas prochazka :
> Hello,
> after creating the wg interfaces, the kworker thread does not return
> to a normal state in my case; the kernel thread continues to consume
> a lot of CPU. I have to delete the WireGuard interfaces for the
> kworker usage to decrease.
>
> Nicolas
>
> 2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld :
>> Hi Nicolas,
>>
>> It looks to me like some resources are indeed expended in adding those
>> interfaces. Not so much that it should be problematic -- are you seeing
>> a problematic case? -- but still a non-trivial amount.
>>
>> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
>> does some ugly vmalloc, and its call into the power state
>> notification system, which uses a naive O(n) algorithm for insertion.
>> I might have a way of amortizing on module insertion, which would
>> speed things up. But I wonder -- what is the practical detriment of
>> spending a few extra cycles on `ip link add`? What's your use case
>> where this would actually be a problem?
>>
>> Thanks,
>> Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 9:52 AM, nicolas prochazka
 wrote:
> after creating the wg interfaces, the kworker thread does not return
> to a normal state in my case; the kernel thread continues to consume
> a lot of CPU. I have to delete the WireGuard interfaces for the
> kworker usage to decrease.

So you're telling me that simply running:

for i in `seq 1 1000` ; do ip link add dev wg${i} type wireguard ; done

will keep a kworker at 100% CPU, even after that command completes?
I'm unable to reproduce this here. Could you give me detailed
environmental information so that I can reproduce this precisely?
Something minimal and easily reproducible would be preferred.
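
For instance, something along these lines (a rough sketch of the kind
of details that help; adjust to your environment):

  uname -a
  nproc
  systemd-detect-virt 2>/dev/null || dmesg | grep -i -m1 hypervisor
  modinfo wireguard | grep -E '^(version|vermagic)'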


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld  wrote:
> will keep a kworker at 100% CPU, even after that command completes?
> I'm unable to reproduce this here. Could you give me detailed
> environmental information so that I can reproduce this precisely?
> Something minimal and easily reproducible would be preferred.

Okay, I have a working reproducer now, and I've isolated the issue to
the xt_hashlimit garbage collector, an issue outside the WireGuard
code base. I might look for a workaround within WireGuard, however.
I'll keep you posted.


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
 wrote:
> At the moment we are using 3000 wg tunnels on a single WireGuard
> interface, but now we want to split the tunnels per interface and per
> group of clients, to manage QoS per WireGuard interface, among other
> tasks. With a single interface it works well, but a test with 3000
> interfaces causes trouble with CPU usage, load average, and overall
> VM performance.

This seems like a bad idea. Everything will be much better if you
continue to use a single interface. If you want to do QoS or any other
type of management, you can safely do this per-IP, since the allowed
IPs concept gives a strong binding between public key and IP address.
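
For example, a rough per-IP shaping sketch on a single wg0 (the
interface name, rate, and the peer's allowed IP 10.0.0.2/32 are all
illustrative):

  # one HTB class per client, matched on the peer's allowed IP
  tc qdisc add dev wg0 root handle 1: htb default 30
  tc class add dev wg0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
  tc filter add dev wg0 parent 1: protocol ip u32 match ip dst 10.0.0.2/32 flowid 1:10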


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Thanks :)
Nicolas

2017-06-14 16:13 GMT+02:00 Jason A. Donenfeld :
> On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld  wrote:
>> will keep a kworker at 100% CPU, even after that command completes?
>> I'm unable to reproduce this here. Could you give me detailed
>> environmental information so that I can reproduce this precisely?
>> Something minimal and easily reproducible would be preferred.
>
> Okay, I have a working reproducer now, and I've isolated the issue to
> the xt_hashlimit garbage collector, an issue outside the WireGuard
> code base. I might look for a workaround within WireGuard, however.
> I'll keep you posted.


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Hello,
one interface = one public key.
With multiple interfaces we can manage multiple IPs without aliasing,
which makes it more comfortable to bind specific services. Statistics
(bandwidth, errors) are also easier to manage with separate
interfaces.

We are talking about roughly 1000 WireGuard interfaces with 500
tunnels (peers) each.

Nicolas

2017-06-14 16:15 GMT+02:00 Jason A. Donenfeld :
> On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
>  wrote:
>> At the moment we are using 3000 wg tunnels on a single WireGuard
>> interface, but now we want to split the tunnels per interface and per
>> group of clients, to manage QoS per WireGuard interface, among other
>> tasks. With a single interface it works well, but a test with 3000
>> interfaces causes trouble with CPU usage, load average, and overall
>> VM performance.
>
> This seems like a bad idea. Everything will be much better if you
> continue to use a single interface. If you want to do QoS or any other
> type of management, you can safely do this per-IP, since the allowed
> IPs concept gives a strong binding between public key and IP address.


Re: multiple WireGuard interfaces and kworker resources

2017-06-21 Thread Jason A. Donenfeld
Hi Nicolas,

1000 interfaces with 500 peers each -- that's a very impressive
500,000 tunnels in one WireGuard deployment! Please do let me know how
that goes. I'd be interested to learn what the name of this
project/company is.

With regards to your problem, I've fixed it by completely rewriting
ratelimiter.c to not use xt_hashlimit, a piece of decaying Linux code
from the 1990s, and to use my own token bucket implementation instead.
The result performs much better, is easier on RAM, and requires far
fewer lines of code. Most importantly for you, all interfaces now
share the same netns-keyed hashtable, so the cleanup routines stay
fast no matter how many interfaces you have.

I'll likely sit on it for a bit longer while I verify it and make sure
it works, but if you'd like to try it now, it's sitting in the git
master. Please let me know how it goes.
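
In case it's useful, roughly how to try that snapshot (a sketch; the
exact make targets may differ depending on your checkout):

  git clone https://git.zx2c4.com/WireGuard
  cd WireGuard/src
  make
  sudo make install   # build and install the out-of-tree module and tools
  sudo modprobe -r wireguard && sudo modprobe wireguard   # reload the rebuilt module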

Regards,
Jason