Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Hello,
After creating a wg interface, the kworker thread does not return to a
normal state in my case;
the kernel thread continues to consume a lot of CPU.
I must delete the WireGuard interface for the kworker load to decrease.

Nicolas

2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld :
> Hi Nicolas,
>
> It looks to me like some resources are indeed expended in adding those
> interfaces. Not that much that would be problematic -- are you seeing
> a problematic case? -- but still a non-trivial amount.
>
> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
> does some ugly vmalloc, and its call into the power state
> notification system, which uses a naive O(n) algorithm for insertion.
> I might have a way of amortizing on module insertion, which would
> speed things up. But I wonder -- what is the practical detriment of
> spending a few extra cycles on `ip link add`? What's your use case
> where this would actually be a problem?
>
> Thanks,
> Jason
___
WireGuard mailing list
WireGuard@lists.zx2c4.com
https://lists.zx2c4.com/mailman/listinfo/wireguard


Trouble running a proxy VPN

2017-06-14 Thread Pranesh Prakash

Dear all,
I'm running Ubuntu 16.04 on my laptop and a remote DigitalOcean server, 
and trying to set up a VPN proxy to send all my (for now IPv4) traffic 
through that server.


I can get a VPN tunnel up and working, but I can't get my web traffic to
pass through it.  What am I doing wrong?


Here are my config files:
===
On the client:
~ cat /etc/wireguard/deneb.conf
[Interface]
Address = 10.10.10.2/32
PostUp = echo nameserver 10.10.10.1 | resolvconf -a tun.%i -m 0 -x
PostDown = resolvconf -d tun.%i
PrivateKey = [pvtkey-of-client]

[Peer]
PublicKey = [pubkey-of-server]
AllowedIPs = 0.0.0.0/0
Endpoint = 162.x.x.125:500
PersistentKeepalive = 25

On server:
sol@deneb:~⟫ cat /etc/wireguard/deneb.conf
[Interface]
Address = 10.10.10.1
PrivateKey = [pvtkey-of-server]
ListenPort = 500

[Peer]
PublicKey = [pubkey-of-client]
AllowedIPs = 10.10.10.2/24
===

On the client I do:
~ sudo wg-quick up deneb
[#] ip link add deneb type wireguard
[#] wg setconf deneb /dev/fd/63
[#] ip address add 10.10.10.2/32 dev deneb
[#] ip link set mtu 1420 dev deneb
[#] ip link set deneb up
[#] wg set deneb fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev deneb table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] echo nameserver 10.10.10.1 | resolvconf -a tun.deneb -m 0 -x

~ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.10.10.1
nameserver 127.0.1.1
search lan

~ ping -c2 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=263 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=287 ms

--- 10.10.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 263.302/275.567/287.833/12.276 ms

~ ping google.com
PING google.com (216.58.197.46) 56(84) bytes of data.
^C
--- google.com ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7000ms

~  sudo wg show deneb
interface: deneb
 public key: [pubkey-of-client]
 private key: (hidden)
 listening port: 40401
 fwmark: 0xca6c

peer: [pubkey-of-server]
 endpoint: 162.x.x.125:500
 allowed ips: 0.0.0.0/0
 latest handshake: 1 minute, 48 seconds ago
 transfer: 85.73 KiB received, 208.13 KiB sent
 persistent keepalive: every 25 seconds

On the server:
sol@deneb:~⟫ sudo wg show wg0
interface: wg0
  public key: [pubkey-of-server]
  private key: (hidden)
  listening port: 500

peer: [pubkey-of-client]
  endpoint: 123.x.x.4:40401
  allowed ips: 10.10.10.0/24
  latest handshake: 10 seconds ago
  transfer: 1.26 MiB received, 1.15 MiB sent

--
Pranesh Prakash
Policy Director, Centre for Internet and Society
http://cis-india.org | tel:+91 80 40926283
sip:pran...@ostel.co | xmpp:pran...@cis-india.org
https://twitter.com/pranesh





Re: Trouble running a proxy VPN

2017-06-14 Thread Jason A. Donenfeld
Looks like maybe you forgot to enable IP forwarding and masquerading
on the server.
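For reference, enabling both usually amounts to something like the following
on the server. This is a sketch, not taken from the thread: the names wg0 and
eth0 and the 10.10.10.0/24 subnet are assumptions based on the configs above;
adjust them to your setup.

```shell
# Enable IPv4 forwarding for the current boot
# (persist by setting net.ipv4.ip_forward=1 in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1

# NAT tunnel traffic as it leaves the server's public interface (eth0 assumed)
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE

# Permit forwarding to and from the WireGuard interface (wg0 assumed)
iptables -A FORWARD -i wg0 -j ACCEPT
iptables -A FORWARD -o wg0 -j ACCEPT
```

With wg-quick, commands like these are often placed in PostUp, with the
matching teardown in PostDown.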


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
At the moment we are using 3000 wg tunnels on a single WireGuard
interface, but now we want to divide the tunnels by interface and by
group of clients, to manage QoS per WireGuard interface, among other
tasks. With a single interface this works well, but a test with 3000
interfaces causes some trouble with CPU usage, load average, and VM
performance.
Regards,
Nicolas

2017-06-14 9:52 GMT+02:00 nicolas prochazka :
> Hello,
> After creating a wg interface, the kworker thread does not return to a
> normal state in my case;
> the kernel thread continues to consume a lot of CPU.
> I must delete the WireGuard interface for the kworker load to decrease.
>
> Nicolas
>
> 2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld :
>> Hi Nicolas,
>>
>> It looks to me like some resources are indeed expended in adding those
>> interfaces. Not that much that would be problematic -- are you seeing
>> a problematic case? -- but still a non-trivial amount.
>>
>> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
>> does some ugly vmalloc, and its call into the power state
>> notification system, which uses a naive O(n) algorithm for insertion.
>> I might have a way of amortizing on module insertion, which would
>> speed things up. But I wonder -- what is the practical detriment of
>> spending a few extra cycles on `ip link add`? What's your use case
>> where this would actually be a problem?
>>
>> Thanks,
>> Jason


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 9:52 AM, nicolas prochazka
 wrote:
> After creating a wg interface, the kworker thread does not return to a
> normal state in my case;
> the kernel thread continues to consume a lot of CPU.
> I must delete the WireGuard interface for the kworker load to decrease.

So you're telling me that simply running:

for i in `seq 1 1000` ; do ip link add dev wg${i} type wireguard ; done

Will keep a kworker at 100% CPU, even after that command completes?
I'm unable to reproduce this here. Could you give me detailed
environmental information so that I can reproduce this precisely?
Something minimal and easily reproducible would be preferred.


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld  wrote:
> Will keep a kworker at 100% CPU, even after that command completes?
> I'm unable to reproduce this here. Could you give me detailed
> environmental information so that I can reproduce this precisely?
> Something minimal and easily reproducible would be preferred.

Okay, I have a working reproducer now, and I've isolated the issue to
the xt_hashlimit garbage collector, an issue outside the WireGuard
code base. I might look for a workaround within WireGuard, however.
I'll keep you posted.


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread Jason A. Donenfeld
On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
 wrote:
> At the moment we are using 3000 wg tunnels on a single WireGuard
> interface, but now we want to divide the tunnels by interface and by
> group of clients, to manage QoS per WireGuard interface, among other
> tasks. With a single interface this works well, but a test with 3000
> interfaces causes some trouble with CPU usage, load average, and VM
> performance.

This seems like a bad idea. Everything will be much better if you
continue to use one tunnel. If you want to do QoS or any other type of
management, you can safely do this per-IP, since the allowed IPs
concept gives strong binding between public key and IP address.
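For example, per-IP shaping on a single interface can be sketched with tc.
The names and numbers here are hypothetical (wg0 as the interface, a
10 Mbit/s class for one client at 10.10.10.2); this is an illustration of
the per-IP approach, not a prescription from the thread.

```shell
# HTB root qdisc on the WireGuard interface; unmatched traffic goes to 1:30
tc qdisc add dev wg0 root handle 1: htb default 30

# Rate-limit one client class to 10 Mbit/s
tc class add dev wg0 parent 1: classid 1:10 htb rate 10mbit

# Classify by destination tunnel IP, which the allowed IPs mechanism
# binds to a single public key
tc filter add dev wg0 parent 1: protocol ip prio 1 \
    u32 match ip dst 10.10.10.2/32 flowid 1:10
```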


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Thanks :)
Nicolas

2017-06-14 16:13 GMT+02:00 Jason A. Donenfeld :
> On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld  wrote:
>> Will keep a kworker at 100% CPU, even after that command completes?
>> I'm unable to reproduce this here. Could you give me detailed
>> environmental information so that I can reproduce this precisely?
>> Something minimal and easily reproducible would be preferred.
>
> Okay, I have a working reproducer now, and I've isolated the issue to
> the xt_hashlimit garbage collector, an issue outside the WireGuard
> code base. I might look for a workaround within WireGuard, however.
> I'll keep you posted.


Re: multiple WireGuard interfaces and kworker resources

2017-06-14 Thread nicolas prochazka
Hello,
One interface = one public key.
With multiple interfaces we can manage multiple IPs without aliasing,
which makes it more convenient to bind specific services.
Statistics (bandwidth, errors) are also easier to manage with separate
interfaces.

We are talking about ~1000 WireGuard interfaces with 500 tunnels
(peers) each.
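Note that per-peer traffic counters are also available on a single
interface via the wg tool (wg0 is an assumed interface name):

```shell
# One line per peer: public key, bytes received, bytes sent
wg show wg0 transfer
```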

Nicolas

2017-06-14 16:15 GMT+02:00 Jason A. Donenfeld :
> On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
>  wrote:
>> At the moment we are using 3000 wg tunnels on a single WireGuard
>> interface, but now we want to divide the tunnels by interface and by
>> group of clients, to manage QoS per WireGuard interface, among other
>> tasks. With a single interface this works well, but a test with 3000
>> interfaces causes some trouble with CPU usage, load average, and VM
>> performance.
>
> This seems like a bad idea. Everything will be much better if you
> continue to use one tunnel. If you want to do QoS or any other type of
> management, you can safely do this per-IP, since the allowed IPs
> concept gives strong binding between public key and IP address.