[Openvpn-devel] Add RPM build option to disable LZO
The attached patch to the OpenVPN 1.6.0 SPEC file adds a conditional build option to disable the use of LZO compression. LZO is not included in Red Hat/Fedora Core, so this will make it easier for users of those distributions to build an RPM without LZO support (if they don't want it). The package can be built with 'rpmbuild -tb --without lzo ...'.

-- 
Ian Pilcher
i.pilc...@comcast.net

--- openvpn.spec-orig	2004-05-27 10:04:09.561878264 -0500
+++ openvpn.spec	2004-05-27 10:15:37.344319440 -0500
@@ -11,6 +11,9 @@
 Packager: bishop clark (LC957)
 BuildRoot: %{_tmppath}/%{name}-%(id -un)
 
+%{!?_without_lzo:BuildRequires: lzo-devel}
+%{!?_without_lzo:Requires: lzo}
+
 %description
 OpenVPN is a robust and highly flexible tunneling application that
 uses all of the encryption, authentication, and certification features
@@ -24,7 +27,7 @@
 %setup -q
 
 %build
-%configure --enable-pthread
+%configure --enable-pthread %{?_without_lzo:--disable-lzo}
 %__make
 
 %install
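For context: rpmbuild's `--without lzo` option simply defines the macro `_without_lzo`, and the spec relies on RPM's conditional macro expansion, where `%{?name:text}` expands to `text` only when `name` is defined and `%{!?name:text}` only when it is not. A minimal sketch of the same pattern (the comments are mine, not part of the patch):

```
# Defined by 'rpmbuild --without lzo'; undefined otherwise:
#   %{!?_without_lzo:X}  ->  X  when _without_lzo is NOT defined
#   %{?_without_lzo:X}   ->  X  when _without_lzo IS defined

# Only pull in the LZO dependencies for an LZO-enabled build.
%{!?_without_lzo:BuildRequires: lzo-devel}
%{!?_without_lzo:Requires: lzo}

%build
# Pass --disable-lzo to configure only when LZO was opted out.
%configure --enable-pthread %{?_without_lzo:--disable-lzo}
```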
Re: [Openvpn-devel] Trying to use zlib with openvpn
> Make sure that you define
>
> #define LZO_EXTRA_BUFFER(len) ((len)/64 + 16 + 3) /* LZO worst case size
> expansion. */
>
> for zlib.

OK, thanks for the response. Upon closer investigation, I found that it was my own problem. LZO_COMPRESS takes its parameters as (source, source_len, dest, dest_len), whereas zlib's compress() takes (dest, dest_len, source, source_len), so I just turned them around. It is now working!

My next question is whether anyone has a comparison of LZO and zlib in terms of relative time taken to compress. In addition, zlib's compress2() has a compression-level parameter; I would also like to know how it fares compared with compress(). Thanks once again.
Re: [Openvpn-devel] Trying to use zlib with openvpn
Ming-Ching Tiew said:
> Last night after posting to the openvpn-user mailing list about
> wanting to use zlib with OpenVPN, I had a look at the
> code. It seems the compression code is well contained
> in lzo.c; I could even do a one-to-one swap of
> 'LZO_COMPRESS' with zlib's 'compress', and similarly
> for decompress. It got compiled and linked. I was quite
> glad about that!
>
> But of course, it is not so simple: it ran and crashed with a
> segmentation fault. I now suspect the decompression
> memory allocation is not big enough. Any clues
> where else I should look?

Make sure that you define

#define LZO_EXTRA_BUFFER(len) ((len)/64 + 16 + 3) /* LZO worst case size expansion. */

for zlib.

James