Re: [Openvpn-devel] Bridging Question
On Tue, 27 Apr 2004, Lonnie Cumberland wrote:

> It appears that on each side of the VPN the hubs are allowing IPs in the
> masked range 192.189.0.0/24.
>
> Since each side is allowing the same range of IPs, doesn't this particular
> setup require that no two machines have the same IP, even though they are
> on different sides of the bridge?
>
> I would think that an IP conflict would occur if that were to happen, right?

Right. All machines on both sides of the VPN bridge belong to the same IP
subnet, just as if they were all connected to the same hub/switch.

--
Mathias Sundman              (^)  ASCII Ribbon Campaign
NILINGS AB                    X   NO HTML/RTF in e-mail
Tel: +46-(0)8-666 32 28      / \  NO Word docs in e-mail
Re: [Openvpn-devel] Windows and Shaper
Derek,

Thanks for the function. I like it better than the previous function, which
was using timeGetTime (and therefore had wraparound problems). I've merged
it into test26, which should be released soon.

James

Derek Burdick said:
> Here is the function. Don't forget to #define HAVE_GETTIMEOFDAY in
> config-win32.h. Let me know what you think.
>
> Derek Burdick
>
> ----- Original Message -----
> From: "James Yonan"
> To: "Derek Burdick" ;
> Sent: Tuesday, April 27, 2004 12:48 PM
> Subject: Re: [Openvpn-devel] Windows and Shaper
>
> > Derek Burdick said:
> >
> > > I was browsing the online CVS repository and noticed that
> > > config-win32.h.in says that HAVE_GETTIMEOFDAY is specified in misc.c.
> > > When I look in misc.c, I don't see it. Is the latest version just not
> > > checked in? I also implemented a gettimeofday for Windows, based on
> > > QueryPerformanceCounter. If you are interested in this version, let me
> > > know. If somebody knows the correct status of gettimeofday for the CVS
> > > code, I would appreciate an update.
> >
> > Derek,
> >
> > Yes, I'd like to see your gettimeofday for Windows that uses
> > QueryPerformanceCounter. OpenVPN 2.0 will have a gettimeofday function
> > for Windows so that --shaper and --mode server can be supported.
> >
> > James
[Openvpn-devel] MAC address collection
Hello All,

Can you tell me whether the OpenVPN client can obtain the MAC address of a
road-warrior client at connection time and send that information over to
the OpenVPN server?

Thanks,
Lonnie
[Openvpn-devel] OpenVPN 2.0-test26 released
Ooops... let's try that again with the correct subject line.

A new release of the 2.0 beta is available.

* One of the goals of OpenVPN 2.0 is extreme scalability, i.e. robustly
  handling connections from potentially thousands of clients. To do this,
  some kind of load balancing and failover capability is needed, because a
  single OpenVPN daemon running on a single processor may not be able to
  handle this kind of load.

  One solution is to set up a cluster of near-identically configured
  OpenVPN daemons on separate machines, or multiple daemons on a
  multiprocessor machine, or both. Each daemon runs with --mode server and
  can handle multiple clients.

  The feature that makes this possible is that --remote has been extended
  to allow a list of remote servers and port numbers to be specified on the
  client, such as:

    remote server1 5000
    remote server1 5001
    remote server2 5000
    remote server2 5001

  By default, the OpenVPN client will try each host/port in the order
  specified. If the connection fails (such as a failure triggered by
  --ping/--ping-restart), the client will move on to the next host in the
  list. The client can also initially randomize the list using the new
  --remote-random flag, to provide a basic load-balancing capability. The
  servers are configured almost identically, though each has its own port
  number and --ifconfig-pool IP address range.

* Harald Roelle has observed some limitations in the current Linux 2.4 and
  2.6 tun/tap driver. Specifically, the TX queue size is set to 10 by
  default, which is too small. There is also a problem with "kicking",
  where a packet in the driver may get stuck and need another packet to
  come through to "kick" it out. This may account for the "no buffer space
  available" message that some Linux users report.

  To work around this problem, OpenVPN has added a --txqueuelen option to
  raise the queue length to a more sane size, and now defaults to 100.
  Right now this is a Linux-only option.
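Assembled into a client config, the failover list plus the restart trigger might look like the sketch below. This is illustrative only: the server names and port numbers come from the example above, but the keepalive values are placeholders, not from the announcement.

```
# client.conf -- failover / load-balancing sketch
dev tun
proto udp

# Tried in order; remote-random shuffles the list first.
remote server1 5000
remote server1 5001
remote server2 5000
remote server2 5001
remote-random

# A missed ping window triggers a restart, moving to the next remote.
ping 10
ping-restart 60
```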
I've also added --rcvbuf and --sndbuf to control the TCP/UDP socket buffer
sizes.

Harald Roelle and Max Krasnyansky have put together some patches which fix
the Linux tun/tap driver issues, and these should (hopefully) be in the
pipeline shortly for inclusion in the mainline 2.4 and 2.6 branches. For
now, --txqueuelen provides a workaround.

At this point I would say that nearly all of the key features envisaged for
2.0 are in place, with a few exceptions:

* TCP support in the multi-client server -- The only way I can see of
  scalably adding TCP support (without using multiple threads or processes)
  is to use an efficient multi-socket API such as epoll(), which is
  available on Linux 2.6 and apparently also on 2.4 via a kernel patch.

* Forking server support -- The 2.0 multi-client server model is designed
  for people who want a potentially large number of clients to tunnel
  through a single tun or tap interface using a single daemon process.
  Some people, however, might prefer the forking server model, where the
  server automatically forks off a new process for each incoming client,
  dynamically allocating a private tun/tap interface for that client.

* Multithreading support -- Multithreading offers two key advantages:
  (1) it reduces the worst-case latency of packets flowing through the
  tunnel, and (2) it offers the opportunity for a single daemon to utilize
  all the processors on a multiprocessor machine. Unfortunately,
  multithreading also causes a lot of problems, including complicating the
  source code and introducing race-condition bugs which can be extremely
  difficult to reproduce or track down. My thinking at this point is that
  implementing multithreading may not be worth the trouble, especially
  given that the new load-balancing feature allows multiple OpenVPN daemons
  running on multiple machines to serve the same client pool.

* Compatibility with 1.x -- OpenVPN 2.0 tries as much as possible to be
  upwardly compatible with 1.x.
The main difference is that 2.0 changes some parameter defaults. The
tun/tap MTU has been raised to 1500, --mssfix 1450 is now the default, and
--key-method now defaults to 2. The only feature which has been removed is
the special-purpose SSL/TLS thread feature, which is enabled on 1.x if you
build OpenVPN with the --enable-pthread flag. I might put it back if people
complain, but overall I'm not sure it's worth the trouble.

Change Log:

2004.04.28 -- Version 2.0-test26

* Optimized broadcast path in multi-client mode.
* Added socket buffer size options --rcvbuf & --sndbuf.
* Configure Linux tun/tap driver to use a more sensible txqueuelen
  default. Also allow explicit setting via --txqueuelen option
  (Harald Roelle).
* The --remote option now allows the port number to be specified as the
  second parameter. If unspecified, the port number defaults to the
  --rport value.
* Multiple --remote options on the client can now be specified for load
  balancing and failover. The --remote-random flag can be used to
  initially randomize the --remote list for basic load balancing.