Re: sigsegv in BN_BLINDING_free 0.9.8a
Matthew L Daniel wrote:
>>> I am experiencing a SIGSEGV in BN_BLINDING_free because mt_blinding
>>> appears to be 0x11 instead of a pointer to some memory.
>> We had an identical issue reported here:
>> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=193633
>> which is somehow caused by the use of Zimbra binaries.
>
> Thank you for your reply. I looked at that and it does not (at first
> glance) seem applicable to me. I had never heard of the Zimbra suite
> mentioned, but I rebuilt my openldap from src.rpm and that seems to have
> cured its ills.
>
> I appreciate your insight into this, and hope this thread will help
> others avoid this pitfall.

Matthew,

Did you get this resolved yet? I'm seeing the same issue - I've updated Bug 193633 with my experiences.

R.

__
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager [EMAIL PROTECTED]
Re: Creating compatible PKCS12 files
On Mon, Jun 26, 2006, Jason K. Resch wrote:
>
> I wanted to thank you for your suggestions, it is working now. I had to
> change the code:
>
> int res = i2d_PKCS8PrivateKey_fp (fp, clave, EVP_des_ede3_cbc(), NULL,
> 0, NULL, pwd);
>
> TO
>
> int pbe_nid = OBJ_txt2nid("PBE-SHA1-3DES");
> int res = i2d_PKCS8PrivateKey_nid_fp (fp, clave, pbe_nid, NULL, 0, NULL,
> pwd);
>

You can also use the NID directly:

int res = i2d_PKCS8PrivateKey_nid_fp (fp, clave,
    NID_pbe_WithSHA1And3_Key_TripleDES_CBC, NULL, 0, NULL, pwd);

Steve.
--
Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
OpenSSL project core developer and freelance consultant.
Funding needed! Details on homepage.
Homepage: http://www.drh-consultancy.demon.co.uk
Re: Cygwin 5.6.xxx. Windows XP SP2 French
Hello,

> I try to install openssl-0.9.7i onto Cygwin 5.6.xxx. Windows XP SP2 French
>
> The ./config command aborts with the following error
> DES_PTR used
> DES_RISC1 used
> DES_UNROLL used
> BN_LLONG mode
> RC4_INDEX mode
> RC4_CHUNK is undefined
> 'make' n'est pas reconnu en tant que commande interne
> ou externe, un programme exécutable ou un fichier de commande

(The French message says: 'make' is not recognized as an internal or external command, an executable program or a batch file.)

This looks like the "make" package is not installed in your Cygwin setup.

Best regards,
--
Marek Marcola <[EMAIL PROTECTED]>
Re: Creating compatible PKCS12 files
Dr. Henson,

I wanted to thank you for your suggestions, it is working now. I had to change the code:

int res = i2d_PKCS8PrivateKey_fp (fp, clave, EVP_des_ede3_cbc(), NULL, 0, NULL, pwd);

TO

int pbe_nid = OBJ_txt2nid("PBE-SHA1-3DES");
int res = i2d_PKCS8PrivateKey_nid_fp (fp, clave, pbe_nid, NULL, 0, NULL, pwd);

Thanks again,

Jason

- Original Message -
From: "Dr. Stephen Henson" <[EMAIL PROTECTED]>
Date: Sunday, June 25, 2006 6:34 am
Subject: Re: Creating compatible PKCS12 files

> On Sat, Jun 24, 2006, Jason K. Resch wrote:
>
> > I'm attempting to make software that can use the crypto features in
> > either OpenSSL or Mozilla NSS. Thus far I've had little difficulty in
> > doing so except for one problem. When I export an
> > EncryptedPrivateKeyInfo (for a 2048 bit key) using OpenSSL the
> > resulting file is 1298 bytes in length. However when I export it
> > using NSS it comes out to be 1270 bytes. The odd thing is that
> > OpenSSL can read the exported NSS key using the following OpenSSL code:
> >
> > RSA *key = NULL;
> > BIO *mem = BIO_new_mem_buf((void *) privateKeyData.getByteArray(),
> > privateKeyData.size() );
> >
> > char *pwd = (char*)passPhrase.c_str();
> > OpenSSL_add_all_algorithms();
> > ERR_load_crypto_strings();
> >
> > EVP_PKEY *clave = d2i_PKCS8PrivateKey_bio(mem, NULL, NULL, pwd);
> > if (clave == NULL)
> > {
> > ERR_print_errors_fp(stderr);
> > }
> >
> > key = EVP_PKEY_get1_RSA(clave);
> >
> > I can also successfully export the NSS generated key using the command:
> > "openssl pkcs8 -in private.key -inform DER -out encoded.out"
> >
> > However, when NSS attempts to decrypt the OpenSSL generated file, it
> > fails with an error suggesting an invalid password was used. One
> > difference I noticed is that NSS requires the password be in Unicode,
> > while OpenSSL takes a plain ASCII string. But when I attempted to use
> > an ASCII string to encrypt the password in NSS, then "openssl pkcs8 -in
> > private.key -inform DER -out encoded.out" no longer could decrypt the key.
> >
> > If it is of any help, the algorithm I am using in Mozilla NSS is:
> > SEC_OID_PKCS12_V2_PBE_WITH_SHA1_AND_3KEY_TRIPLE_DES_CBC
> >
> > and the algorithm I am using in OpenSSL is:
> > i2d_PKCS8PrivateKey_fp (fp, clave, EVP_des_ede3_cbc(), NULL, 0, NULL, pwd);
> >
> > I am at a loss as to what is causing these key incompatibilities and
> > would be grateful for any suggestions regarding the matter.
> >
>
> Not sure what the subject is about "compatible PKCS#12 files"; the issues you
> refer to are with PKCS#8 format private keys.
>
> The size of the output file can vary according to the algorithm and indeed the
> encoding of the private key. Mozilla PKCS#12 files for example used to use
> indefinite length constructed encoding and were quite a bit larger than the
> OpenSSL equivalents. Other factors such as seed length, key attributes and OID
> lengths can have an influence.
>
> The PKCS#12 standard requires that keys should be in Unicode for the PKCS#12
> PBE algorithms and use a double null string terminator. OpenSSL should follow
> this OK.
>
> The other main standard containing PBE algorithms is PKCS#5 v2.0 which includes
> some older PKCS#5 v1.5 algorithms with smaller key sizes. It doesn't
> specifically enforce a specific password format but some examples use ASCII
> or arguably UTF8: these examples were generated using OpenSSL BTW.
>
> Try the OpenSSL command line option to the pkcs8 utility
>
> -v1 PBE-SHA1-3DES
>
> which should use the same PKCS#12 PBE algorithm as NSS.
>
> Steve.
> --
> Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
> OpenSSL project core developer and freelance consultant.
> Funding needed! Details on homepage.
> Homepage: http://www.drh-consultancy.demon.co.uk
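Steve's suggested `-v1 PBE-SHA1-3DES` option can be exercised end to end from the command line. The following is a sketch only; the file names and password are placeholders, not anything from the thread.

```shell
# Generate a test RSA key (file names here are illustrative).
openssl genrsa -out key.pem 2048

# Export it as an encrypted PKCS#8 DER blob using the PKCS#12 PBE
# algorithm (SHA-1 + 3-key triple DES) that NSS uses by default.
openssl pkcs8 -topk8 -in key.pem -outform DER \
    -v1 PBE-SHA1-3DES -passout pass:secret -out key.p8

# Verify that OpenSSL can decrypt it again.
openssl pkcs8 -in key.p8 -inform DER -passin pass:secret -out roundtrip.pem
```

A key produced this way should be importable by NSS as well, since both sides then agree on the PKCS#12 PBE algorithm.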
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 15:59 +0100, Darryl Miles wrote:
> I do not believe the kernel has any problem with super-large fd_set's
> being passed to it. I believe the kernel will use whatever size it's
> given and attempt to access the memory based on the 'maxfd' argument to
> select. If the kernel attempts to access illegal memory from userspace
> context (which the select() syscall runs in) the select() will return
> EFAULT or SIGSEGV the application. So no "recompile the kernel" is needed.

Understood. OK, I implemented the sample source code as provided and it works fine. Thanks a million! I guess I "must" use this hack. So be it.

Thanks again for all your help, I learned a lot. I hope the OpenSSL maintainers heard this cry (even if it is a small cry) and will at some point decide to use a better method than select.

Cheers

Leon

> The "ulimit -a" resources are set per process, yes. This affects the
> allocation of new file descriptors within the kernel. The default for
> Fedora and its glibc build is 1024. In order for you to have
> descriptors above 1024 you must have already dealt with raising the
> ulimit. At that point you run into these sorts of problems with any
> auxiliary library your applications call that does IO with select(). So
> you may need to audit other things, YMMV.
>
> It's purely a userspace issue.
>
> Maybe in rand/rand_unix.c you can replace 'fd_set' with 'my_fd_set',
> then at the top of the code put:
>
> #include
> #define I_WANT_FD_SETSIZE 2048
>
> #ifndef _NFDBITS
> #define _NFDBITS 8
> #endif
> #ifndef __fd_mask
> /* probably not perfect for x86_64 */
> typedef long int __fd_mask;
> #endif
>
> struct my_fd_set_type {
>     __fd_mask fds_bits[I_WANT_FD_SETSIZE / __NFDBITS];
> };
>
> typedef struct my_fd_set_type my_fd_set;
>
> You need to replace:
>
> fd_set fset;
>
> with
>
> my_fd_set fset;
>
> You need to replace FD_ZERO(&fset) with:
>
> memset(&fset, 0, sizeof(fset));
>
> since the default macro will only clear the bits covering the first 1024
> fds.
>
> Then try, as Marek suggests:
>
> FD_SET(fd, (fd_set *)&fset);
>
> And with select use:
>
> select(aaa, (fd_set *)&fset, xxx, yyy, zzz);
>
> It is only the userspace code that allocates storage for the fd_set type
> and calls FD_ZERO() and FD_SET() that needs to be altered. From there you
> can pass around the address of 'fset' and ultimately use it on the
> select() call.
>
> If the stock glibc FD_SET() does not work, you may need to implement your
> own version of it against your my_fd_set_type. But I don't think this
> is the case; I think FD_SET() will work but FD_ZERO() won't. Just use
> memset() for FD_ZERO().
>
> The above is pretty Linux / glibc specific. You don't need to recompile
> anything but the single file rand/rand_unix.c from OpenSSL. It is the
> only affected part within OpenSSL AFAIK.
>
> Obviously you may need to audit your application code for the same
> requirements.
>
> HTH
>
> Darryl
RE: SSL Compile Problem II
Got it, had two different ssl.h files on the system.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Randy
Sent: Monday, June 26, 2006 10:12 AM
To: openssl-users@openssl.org
Subject: SSL Compile Problem II

This code compiles and links fine. If I uncomment the SSL_new line I get "undefined symbol: ssl_x"

SSL_CTX *ctx;
// SSL_new *ssl_x;

/* Initializing OpenSSL */
SSL_load_error_strings(); /* readable error messages */
SSL_library_init();       /* initialize library */

/* Setting up the SSL pointers */
ctx = SSL_CTX_new(SSLv23_client_method());

cc -gc -I../include -I/usr/local/ssl/include -I/usr/local/include -I/att/include -I/att/msgipc -I/usr/local/include curl_dip.c

UX:acomp: ERROR: "curl_dip.c", line 271: undefined symbol: ssl_x
UX:acomp: ERROR: "curl_dip.c", line 271: operands must have arithmetic type: op "*"
Re: SSL Compile Problem II
Randy wrote:
> Unfortunately that is how I originally had it but that generates the same error.
>
> SSL *ssl_x;
>
> Generates
>
> UX:acomp: ERROR: "curl_dip.c", line 271: undefined symbol: ssl_x
> UX:acomp: WARNING: "curl_dip.c", line 457: assignment type mismatch
> *** Error code 1 (bu21)

Going back to your original error. It's complaining about "undefined symbol", so something at line 271 is using "ssl_x". But the code snippet you quoted does not *use* "ssl_x", it only declares it. A declaration can't cause an "undefined symbol" error.

I'm thinking this is a simple C programming error. Check that the scope of the variable "ssl_x" is correct in relation to the code on line 271. Try moving it out of the function scope and into the file scope. Otherwise maybe a C programming mailing list can offer you more help.

Darryl
RE: SSL Compile Problem II
Unfortunately that is how I originally had it but that generates the same error.

SSL *ssl_x;

Generates

UX:acomp: ERROR: "curl_dip.c", line 271: undefined symbol: ssl_x
UX:acomp: WARNING: "curl_dip.c", line 457: assignment type mismatch
*** Error code 1 (bu21)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darryl Miles
Sent: Monday, June 26, 2006 10:14 AM
To: openssl-users@openssl.org
Subject: Re: SSL Compile Problem II

Randy wrote:
> This code compiles and links fine. If I uncomment the SSL_new line I
> get "undefined symbol: ssl_x"
>
> SSL_CTX *ctx;
> // SSL_new *ssl_x;

SSL_new is not a type, but a function: "int SSL_new(SSL *);"

#include <openssl/ssl.h>

SSL *ssl_x;

should work.

HTH

Darryl
OpenSSL and Windows XP Install Issues
Hi All,

I'm new to OpenSSL and have been out of the unix realm for several years now. I have Cygwin installed on my machine and am trying to get OpenSSL to build. Assuming I'm reading the install directions correctly, the configure is executing OK, but ms\do_ms fails with:

%OSVERSION% is not defined at util/pl/VC-32.pl line 41.
Compilation failed in require at util\mk1mf.pl line 138.

And nmake -f ms\ntdll.mak runs for a while but eventually fails with:

rc /fo"tmp32dll\libeay32.res" /d CRYPTO ms\version32.rc
link /nologo /subsystem:console /opt:ref /dll /out:out32dll\libeay32.dll /def:ms/LIBEAY32def @C:\my_path_to_temp etc...
link: extra operand `/opt:ref'
try `link --help' for more information.
NMAKE : fatal error U1077: 'link' : return code '0x1'
Stop.

I receive the same error from nmake if I run ms\do_masm first. If I start over and run ms\do_nasm, that works, but then nmake -f ms\ntdll.mak fails outright with:

'nasmw' is not recognized as an internal or external command.

Any help will be greatly appreciated.

Joe Mierwa
Re: SSL Compile Problem II
Darryl Miles wrote:
> SSL_new is not a type, but a function: "int SSL_new(SSL *);"

Oops:

SSL_new is not a type, but a function: "SSL *SSL_new(SSL_CTX *);"

So then you'd use:

ssl_x = SSL_new(ctx);
Re: SSL Compile Problem II
Randy wrote:
> This code compiles and links fine. If I uncomment the SSL_new line I
> get "undefined symbol: ssl_x"
>
> SSL_CTX *ctx;
> // SSL_new *ssl_x;

SSL_new is not a type, but a function: "int SSL_new(SSL *);"

#include <openssl/ssl.h>

SSL *ssl_x;

should work.

HTH

Darryl
SSL Compile Problem II
This code compiles and links fine. If I uncomment the SSL_new line I get "undefined symbol: ssl_x"

SSL_CTX *ctx;
// SSL_new *ssl_x;

/* Initializing OpenSSL */
SSL_load_error_strings(); /* readable error messages */
SSL_library_init();       /* initialize library */

/* Setting up the SSL pointers */
ctx = SSL_CTX_new(SSLv23_client_method());

cc -gc -I../include -I/usr/local/ssl/include -I/usr/local/include -I/att/include -I/att/msgipc -I/usr/local/include curl_dip.c

UX:acomp: ERROR: "curl_dip.c", line 271: undefined symbol: ssl_x
UX:acomp: ERROR: "curl_dip.c", line 271: operands must have arithmetic type: op "*"
Re: OpenSSL and multiple threads
Leon wrote:
> On Mon, 2006-06-26 at 14:24 +0200, Marek Marcola wrote:
> OK, weirdness going on here. I've added the RAND_load_file() command to
> the beginning of my program and it does not make a difference. With
> 1000 threads I get a call to RAND_poll() only with the first connection
> and not with subsequent SSL_accept() calls. With 1500 threads it
> segfaults inside that one RAND_poll call.

Sorry, can't help as I don't know anything about the RAND_xxx() workaround.

> I now know that I am overwriting some memory due to the FD_SETSIZE
> limit. The best solution for me will be to increase the value and
> recompile the kernel. I will keep you guys posted if it does not work.
> This limit is per process?

I do not believe the kernel has any problem with super-large fd_set's being passed to it. I believe the kernel will use whatever size it's given and attempt to access the memory based on the 'maxfd' argument to select. If the kernel attempts to access illegal memory from userspace context (which the select() syscall runs in) the select() will return EFAULT or SIGSEGV the application. So no "recompile the kernel" is needed.

The "ulimit -a" resources are set per process, yes. This affects the allocation of new file descriptors within the kernel. The default for Fedora and its glibc build is 1024. In order for you to have descriptors above 1024 you must have already dealt with raising the ulimit. At that point you run into these sorts of problems with any auxiliary library your applications call that does IO with select(). So you may need to audit other things, YMMV.

It's purely a userspace issue.

Maybe in rand/rand_unix.c you can replace 'fd_set' with 'my_fd_set', then at the top of the code put:

#include
#define I_WANT_FD_SETSIZE 2048

#ifndef _NFDBITS
#define _NFDBITS 8
#endif
#ifndef __fd_mask
/* probably not perfect for x86_64 */
typedef long int __fd_mask;
#endif

struct my_fd_set_type {
    __fd_mask fds_bits[I_WANT_FD_SETSIZE / __NFDBITS];
};

typedef struct my_fd_set_type my_fd_set;

You need to replace:

fd_set fset;

with

my_fd_set fset;

You need to replace FD_ZERO(&fset) with:

memset(&fset, 0, sizeof(fset));

since the default macro will only clear the bits covering the first 1024 fds.

Then try, as Marek suggests:

FD_SET(fd, (fd_set *)&fset);

And with select use:

select(aaa, (fd_set *)&fset, xxx, yyy, zzz);

It is only the userspace code that allocates storage for the fd_set type and calls FD_ZERO() and FD_SET() that needs to be altered. From there you can pass around the address of 'fset' and ultimately use it on the select() call.

If the stock glibc FD_SET() does not work, you may need to implement your own version of it against your my_fd_set_type. But I don't think this is the case; I think FD_SET() will work but FD_ZERO() won't. Just use memset() for FD_ZERO().

The above is pretty Linux / glibc specific. You don't need to recompile anything but the single file rand/rand_unix.c from OpenSSL. It is the only affected part within OpenSSL AFAIK.

Obviously you may need to audit your application code for the same requirements.

HTH

Darryl
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Hello,

> For example if you have 8 data bytes to send:
> 8(data) + 20(MAC) + 8(padding) = 36
> and 5 bytes for SSL3/TLS record header = 41.

Sorry, mistake; it should be:

For example if you have 12 data bytes to send:
(12(data) + 20(MAC)) + 8(padding) = 40
and 5 bytes for SSL3/TLS record header = 45.

12+20 = 32, and the remainder when dividing by 8 is 0.

Best regards,
--
Marek Marcola <[EMAIL PROTECTED]>
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Hello,

> >> * TLS header/protocol overhead
> >> * Cipher blocks and chaining modes (picking the most commonly used)
> >> * Blocking mode padding overhead
> >> * Ethernet 1500 MTUs
> >>
> >> I presume the minimum is 1 byte, to be sent and flushed at the receiver.
> >>
> >> But the maximum block size I read somewhere is maybe around 16Kb?
> >>
> >> So if we were looking in the 1500 to 6000 byte region for a nicely
> >> aligned SSL_write() size, what are the magic numbers?
> >
> > If you want to minimize overhead, you should use records of maximum
> > length, which is 2^14 plaintext bytes (with a slightly longer
> > ciphertext).
>
> I was thinking in terms of the possibility of optimizing for the network
> layer (rather than just the raw encoded data length necessary to encode
> the payload).
>
> For example, if:
>
> * TLS overhead is: 5 bytes
> * Cipher blocks + chaining alignment is 512 bits / 64 bytes. Some
> ciphers align at less (down to 8 bytes) which makes it easier to find
> a magic number for them.
> * Blocking mode padding at 64 byte multiples of payload size is: 0 bytes
> * Ethernet MTU is: 1500 bytes
>
> So magic numbers around the 1500 to 6000 byte region would be:
>
> 1472 bytes payload (1472 divides by 64 with no remainder, and block
> padding overhead for that length is 0) + 5 TLS header = 1477 bytes.

Cipher block padding cannot be 0; if the remainder is 0, one block of padding is added (for example, with an 8 byte block, a block filled with 8 is added). Next hint - you must add to this calculation the MAC digest size (20 for SHA) per SSL record. For example if you have 8 data bytes to send: 8(data) + 20(MAC) + 8(padding) = 36, and 5 bytes for the SSL3/TLS record header = 41.

Best regards,
--
Marek Marcola <[EMAIL PROTECTED]>
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 14:24 +0200, Marek Marcola wrote:
> For me seems that if you properly initialize PRNG
> (before creating threads) this may resolve problem.
> I think something like:
> RAND_load_file("/dev/urandom", 1024);
> should be enough.

OK, weirdness going on here. I've added the RAND_load_file() command to the beginning of my program and it does not make a difference. With 1000 threads I get a call to RAND_poll() only with the first connection and not with subsequent SSL_accept() calls. With 1500 threads it segfaults inside that one RAND_poll call.

I now know that I am overwriting some memory due to the FD_SETSIZE limit. The best solution for me will be to increase the value and recompile the kernel. I will keep you guys posted if it does not work. This limit is per process?

I must say: this was a pleasant experience mailing on this list! Thanks.

Leon
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Bodo Moeller wrote:
> On Mon, Jun 26, 2006 at 02:04:47PM +0100, Darryl Miles wrote:
>> Bodo Moeller wrote:
>>> On Mon, Jun 26, 2006 at 12:35:57PM +0100, Darryl Miles wrote:
>>> Yes. During the first call to SSL_write(), OpenSSL may take as many
>>> bytes as fit into one TLS record, and encrypt this for transport.
>>> Then SSL_write() may fail with WANT_WRITE or WANT_READ both before and
>>> after this first record has been written, until finally all the data
>>> has been broken up into records and all these records have been sent.
>>
>> Cheeky extra question: Out of interest, what are the overheads for TLS
>> headers and block padding for bulk application data? Is there an optimal
>> SSL_write() size that would align all of these factors in the encoded
>> output:
>>
>> * TLS header/protocol overhead
>> * Cipher blocks and chaining modes (picking the most commonly used)
>> * Blocking mode padding overhead
>> * Ethernet 1500 MTUs
>>
>> I presume the minimum is 1 byte, to be sent and flushed at the receiver.
>>
>> But the maximum block size I read somewhere is maybe around 16Kb?
>>
>> So if we were looking in the 1500 to 6000 byte region for a nicely
>> aligned SSL_write() size, what are the magic numbers?
>
> If you want to minimize overhead, you should use records of maximum
> length, which is 2^14 plaintext bytes (with a slightly longer
> ciphertext).

I was thinking in terms of the possibility of optimizing for the network layer (rather than just the raw encoded data length necessary to encode the payload).

For example, if:

* TLS overhead is: 5 bytes
* Cipher blocks + chaining alignment is 512 bits / 64 bytes. Some ciphers align at less (down to 8 bytes) which makes it easier to find a magic number for them.
* Blocking mode padding at 64 byte multiples of payload size is: 0 bytes
* Ethernet MTU is: 1500 bytes

So magic numbers around the 1500 to 6000 byte region would be:

1472 bytes payload (1472 divides by 64 with no remainder, and block padding overhead for that length is 0) + 5 TLS header = 1477 bytes.

5952 bytes payload + 5 TLS header = 5957 bytes.

I'm pretty sure the metrics I list above are incorrect, but they demonstrate the maths; I'm looking for an output in 1500-byte multiples. But an odd-sized TLS header stuffs that possibility up anyway. If I send just a byte of payload data under TLS (AES256-SHA), IIRC around 37 bytes are sent over the network.

>> I presume I am allowed to increase the amount of data in a subsequent
>> SSL_write() call, or does that break the TLS block length previously
>> set up?
>
> OpenSSL won't complain if you increase the length on subsequent
> SSL_write() calls.

I take your response to mean that OpenSSL doesn't care, as in I will not corrupt or mess anything up.

Thanks, you've been a great help clarifying my points.

Darryl
Re: OpenSSL and multiple threads
> You may look at poll() and epoll() as alternative event wake mechanisms
> for IO with large numbers of fds in the working set.

Yes. Either rebuild your entire system and fix this value:

/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024

or use poll. You'll probably find poll() easier.

/r$
--
SOA Appliances
Application Integration Middleware
Re: OpenSSL and multiple threads
select() has a limit on how big the descriptors can be; otherwise it crashes.

/r$
--
SOA Appliances
Application Integration Middleware
[no subject]
Hello,

I try to install openssl-0.9.7i onto Cygwin 5.6.xxx. Windows XP SP2 French.

The ./config command aborts with the following error:

DES_PTR used
DES_RISC1 used
DES_UNROLL used
BN_LLONG mode
RC4_INDEX mode
RC4_CHUNK is undefined
'make' n'est pas reconnu en tant que commande interne
ou externe, un programme exécutable ou un fichier de commande

(The French message says: 'make' is not recognized as an internal or external command, an executable program or a batch file.)

Could someone help me please?

Jean-Luc
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Kyle Hamilton wrote:
> On 6/26/06, Darryl Miles <[EMAIL PROTECTED]> wrote:
>> I still have not gotten to the bottom of the entire scope of
>> situation(s) that can cause an SSL_write() to return -1 WANT_READ. If
>> it's only renegotiation that can, then this is always instigated by an
>> SSL_renegotiate() (from my side) or an SSL_read() that causes a
>> re-negotiate request (from the remote side) to be processed.
>
> For maximum security, you need to read any pending data before you
> write. This is because there's another type of data that the protocol
> uses: alerts. A good number of which are fatal and require the entire
> connection to be destroyed and recreated from scratch.

I am happy when doing bulk writes to poll OpenSSL, giving OpenSSL temporary program control to read and process any *fatal* alert conditions only, and allowing my application to act on them immediately. But I *don't* want to process any non-fatal alerts at that time, or any application data. A non-fatal alert would be a renegotiate instigated from the remote end; I want to defer that until I can more usefully deal with it a little later. I think SSL_peek() will allow non-fatal alerts to be processed, so that API call isn't any use to me when I am bulk writing. I'm thinking like:

if (SSL_check_fatal_alert(ssl) < 0) { handle_this(); }

Do all alerts that require a round trip automatically suspend the instigating end's outbound application data (SSL_write() -1 WANT_READ)? I.e. the protocol doesn't allow application data between the sent alert packet and the receipt of the alert reply from its peer, but it does allow inbound application data to be received around that alert reply (depending on the nature of the alert and whether the other end is now waiting for our reply). If this is so, there isn't any big problem of application data filling up the buffering (TCP/kernel/openssl) and stopping the alert response from being seen when an outbound alert request is in progress.

The need for this dumbfounds me. If SSL_write() is returning (<= 0) then it should not have taken any data from my buffer, nor be retaining my buffer address (or accessing data outside the scope of the function call).

> I understand that you have prioritized traffic, but you've already
> stated that /you have committed that data to the connection/. OpenSSL
> has every right to take some of the data (for example, if you pass a
> buffer with a length of 4096 and the underlying interface handles only
> 1440-byte writes at a time) and run its internal operations -- such as
> updating its packet count, calculating the HMAC, and running the part
> of the buffer it can process next through the block or stream cipher,
> before it determines if it needs to read. (This is because
> cryptographic operations can be time-consuming, and more data can come
> in while all those operations are being done.) Basically, it's akin to
> a database that has autocommit set to 1. It's already lost its
> previous state before the data was pulled out of the buffer in the
> first place.

With a kernel call, a return of -1 does not commit _ANY_ data to the kernel buffer from the application. This is the behavior most application programmers would expect of OpenSSL (unless otherwise documented clearly - and I'd say it's not, at this time).

Once I commit a partial write of a packet (my application data is packetized) then that whole packet is considered in progress and that situation can not be revoked. This works 100% without OpenSSL. Bodo's fine explanation has simply altered the point at which I consider the next packet of my packetized data committed: after first presenting it with SSL_write() regardless of its return value, not after the first time SSL_write() returns (> 0) as is the case with raw kernel sockets. Once my application data packet is in progress it will always be the next thing to be driven into the layer below, before we look for the next packet by assessing priority queues. Pretty standard stuff for a priority queue implementation.

On your comment of "and more data can come in while ...": more data can't come in unless I call SSL_read(), provided there are no pending alert requests instigated from my side. Bodo clarified that point too.

>> It is also valid for me to "change my mind" about exactly what
>> application data I want to write at the next SSL_write() call. This
>> may be a change of application data contents or a change of amount of
>> data to write (length).
>
> It's valid for you to change your mind at write(). SSL_write() does
> not have precisely the same semantics... because SSL_write has already
> changed the state of the SSL object. (This is a case where multiple
> return values would be useful, but we don't have them, so SSL_write
> returns -1 to indicate that none of the application data has yet gone
> out on the interface.) It can't "roll back" to the state it was in
> before it returned WANT_READ.

I think this is actually unnecessary design. I'm thinking that there should be:
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
On Mon, Jun 26, 2006 at 02:04:47PM +0100, Darryl Miles wrote: > Bodo Moeller wrote: >> On Mon, Jun 26, 2006 at 12:35:57PM +0100, Darryl Miles wrote: >> Yes. During the first call to SSL_write(), OpenSSL may take as many >> bytes as fit into one TLS record, and encrypt this for transport. >> Then SSL_write() may fail with WANT_WRITE or WANT_READ both before and >> after this first record has been written, until finally all the data >> has been broken up into records and all these records have been sent. > Cheeky extra question: Out of interest what are the overheads for TLS > headers and block padding for bulk application data, is there an optimal > SSL_write() size that would align all of these factors in the encoded > output: > > * TLS header/protocol overhead > * Cipher blocks and chaining modes (picking the most commonly used) > * Blocking mode padding overhead > * Ethernet 1500 MTUs > > I presume the minimum is 1 byte, to be sent and flushed at the receiver. > > But the maximum block size I read somewhere may be around 16KB? > > So if we were looking in the 1500 to 6000 byte region for a nicely > aligned SSL_write() size, what are the magic numbers ? If you want to minimize overhead, you should use records of maximum length, which is 2^14 plaintext bytes (with a slightly longer ciphertext). > The only thing that matters relates to data contents and length: > > * that once data has been FIRST introduced with SSL_write(), even if > that returned -1, I'm not allowed to alter the contents or reduce the > length of what I've already presented before. > > I presume I am allowed to increase the amount of data in a subsequent > SSL_write() call, or does that break the TLS block length previously set up ? OpenSSL won't complain if you increase the length on subsequent SSL_write() calls. Bodo __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
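The overhead question can be written down as a rough model. The figures below are assumptions for illustration (a TLSv1.0 CBC cipher suite such as AES128-SHA: 5-byte record header, 20-byte HMAC-SHA1, 16-byte cipher blocks); they are not taken from the thread, and stream ciphers, other MAC sizes, or the per-record IV added in TLS 1.1 change the numbers:

```c
/* Assumed parameters for a TLSv1.0 CBC cipher suite (illustrative): */
enum {
    REC_HEADER = 5,      /* TLS record header: type, version, length    */
    MAC_LEN    = 20,     /* HMAC-SHA1                                   */
    BLOCK      = 16,     /* AES block size                              */
    MAX_PLAIN  = 16384   /* 2^14, the per-record plaintext maximum      */
};

/* Bytes on the wire for one record carrying `plain` plaintext bytes:
 * plaintext + MAC + at least one padding byte (the pad-length byte),
 * rounded up to a whole cipher block, plus the record header. */
static long record_wire_bytes(long plain)
{
    long payload = plain + MAC_LEN + 1;           /* +1: pad-length byte */
    long padded  = (payload + BLOCK - 1) / BLOCK * BLOCK;
    return REC_HEADER + padded;
}
```

Under these assumptions a full-size 16384-byte record costs 16421 wire bytes (about 0.2% overhead), while a 1-byte record costs 37 bytes, which is why maximum-length records minimize overhead as Bodo says; there is no special magic in the 1500-6000 byte range beyond keeping records large.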
Re: How to verify OpenSSL lib version from autoconf?
autoconf compiles small programs to check the expected behavior. If you wrote an m4 macro that checked against the value of the version constant, you could check it. (Return 0 on success, 1 on error; I believe that is autoconf's convention.) Note that I haven't looked into autoconf for about six years, and things very well may have changed. -Kyle H On 6/25/06, Matt England <[EMAIL PROTECTED]> wrote: My project's code is apparently compatible with OpenSSL 0.9.7g (and possibly higher) but not 0.9.8 (because the header files changed between 0.9.7 and 0.9.8...which seems rather undesirable). In any case, I'd like our autoconf macros to be able to automatically check to see if 0.9.7g or higher is installed (but not 0.9.8 or higher). Does anyone have a recommendation for how to do this? -Matt
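One way to express the check is against OPENSSL_VERSION_NUMBER from <openssl/opensslv.h>, which packs the version as 0xMNNFFPPS (major, minor, fix, patch letter, status). The 0.9.7g release is 0x0090707fL, and anything in the 0.9.8 series or later is at least 0x00908000L, so an AC_TRY_RUN-style test program only needs one range comparison. The helper below is a sketch of that comparison with the constants written out; wiring it into an actual m4 macro is left as an exercise and is my assumption about how one would use it:

```c
/* Accept 0.9.7g or later within the 0.9.7 series; reject 0.9.8+.
 * In a real configure test the value passed in would be
 * OPENSSL_VERSION_NUMBER from <openssl/opensslv.h>, and main() would
 * exit 0 or 1 accordingly (autoconf's success/failure convention). */
static int openssl_version_ok(long v)
{
    return v >= 0x0090707fL     /* 0.9.7g release */
        && v <  0x00908000L;    /* anything 0.9.8 or newer fails */
}
```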
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Bodo Moeller wrote: On Mon, Jun 26, 2006 at 12:35:57PM +0100, Darryl Miles wrote: Yes. During the first call to SSL_write(), OpenSSL may take as many bytes as fit into one TLS record, and encrypt this for transport. Then SSL_write() may fail with WANT_WRITE or WANT_READ both before and after this first record has been written, until finally all the data has been broken up into records and all these records have been sent. Cheeky extra question: Out of interest, what are the overheads for TLS headers and block padding for bulk application data; is there an optimal SSL_write() size that would align all of these factors in the encoded output: * TLS header/protocol overhead * Cipher blocks and chaining modes (picking the most commonly used) * Blocking mode padding overhead * Ethernet 1500 MTUs I presume the minimum is 1 byte, to be sent and flushed at the receiver. But the maximum block size I read somewhere may be around 16KB? So if we were looking in the 1500 to 6000 byte region for a nicely aligned SSL_write() size, what are the magic numbers ? Yes. The other party is always allowed to start renegotiation, so your SSL_write() call might be during renegotiation; but OpenSSL won't try to read data from the other party during SSL_write() unless it already knows that a renegotiation is going on. This is great information too. This clears up another thread I recently started about an application which is doing bulk writes, having switched off doing any SSL_read() until the bulk write phase has finished. So my application doesn't have a deadlock situation, since it won't be instigating any SSL_renegotiate() and I'm not reading any new data from my peer to be able to receive a renegotiate request. So SSL_write() will never return WANT_READ for me. It is still unclear how this would work; here is the strictest pseudo code case I can think up. 
This is where: * the exact address for the 4096th byte to send is always at the same address for every repeated SSL_write() call and * I don't change or reduce the amount of data to be written during subsequent SSL_write(), until all 4096 bytes of the first SSL_write() have been committed into OpenSSL. Exactly. You should not change the amount of data (n), and you should not change the contents of these n bytes. You may change the address of that buffer (provided that the contents remain the same) if you set the flag that you asked about. However ... This is the clearest explanation so far. I fully understand this. When you say "change the buffer location" do you mean the exact offset given to SSL_write() in the 2nd argument ? Or do you mean that for repeated calls to SSL_write() the last byte (4096th byte from the example) address remains constant until OpenSSL gives an indication that the last byte has been committed ? Here I am asking "which buffer when?" and "what location?" in relation to a previous failed SSL_write() ? I don't quite understand these questions. You have been able to answer this with clarity in another response above. Basically the exact address location given at any SSL_write() does not matter. So I can have per-thread buffers (per-thread stack space) and I can thread-hop my 'SSL *' context without worry. The only thing that matters relates to data contents and length: * that once data has been FIRST introduced with SSL_write(), even if that returned -1, I'm not allowed to alter the contents or reduce the length of what I've already presented before. I presume I am allowed to increase the amount of data in a subsequent SSL_write() call, or does that break the TLS block length previously set up ? So for my priority queue implementation over SSL, I consider the data committed from application to OpenSSL after the first SSL_write() to present that data. 
Maybe I can work with what you've said for a few days (to better my understanding) and provide a patch to better document this requirement in the SSL_write() and SSL_set_mode() man pages for future users. Thank you for your response; the exact application requirements are much clearer to me now. Darryl
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
On 6/26/06, Darryl Miles <[EMAIL PROTECTED]> wrote: Bodo Moeller wrote: > On Thu, Jun 22, 2006 at 10:41:14PM +0100, Darryl Miles wrote: > >> SSL_CTX_set_mode(3) >> >> SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER >> Make it possible to retry SSL_write() with changed buffer >> location (the buffer contents must stay the same). This is not the >> default to avoid the mis- >> conception that non-blocking SSL_write() behaves like >> non-blocking write(). >> >> What is that all about ? My application makes no guarantee what the >> exact address given to SSL_write() is, it only guarantees the first so >> many bytes are my valid data. Why do I need to give it such guarantees ? Thanks for this clearest explanation so far. > When using SSL_write() over a non-blocking transport channel, you may > have to call SSL_write() multiple times until all your data has been > transferred. In this case, the data buffer needs to stay constant > between calls until SSL_write() finally returns a positive number > since (unless you are using SSL_MODE_ENABLE_PARTIAL_WRITE) some of the > calls to SSL_write() may read some of your data, and if the buffer > changes, you might end up inadvertently transferring incoherent data. > To help detect such potential application bugs, OpenSSL includes a > simple sanity check -- if SSL_write() is called again but the data > buffer *location* has changed, OpenSSL suspects that this is a mistake > and returns an error. "Some of the calls to SSL_write() may read some of your data" -- I am still not sure how the reading of data impacts the write operation. Are you saying that when WANT_READ is returned from SSL_write() the OpenSSL library has already committed some number of bytes from the buffer given, but because it's returning -1 WANT_READ it is failing to report that situation back to the application during the first SSL_write() call ? An under-reporting of committed bytes, if you want to call it that. 
This would also imply you can't reduce the amount of data to SSL_write() in a subsequent call after one that failed. Or it implies that OpenSSL may access bytes outside of the range given by the currently executing SSL_write(), in that it's somehow still using the buffer address given during a previous SSL_write() call. The protocol has some quirks, not the least of which is the need to handle alerts (of which many are fatal), and the ability to handle a maximum segment size. If there's data in the queue to be read, it needs to be read before any data can be sent out (as several alerts require the entire connection to be severed). I still have not gotten to the bottom of the entire scope of situation(s) that can cause an SSL_write() to return -1 WANT_READ. If it's only renegotiation that can, then this is always instigated by an SSL_renegotiate() (from my side) or an SSL_read() that causes a re-negotiate request (from the remote side) to be processed. For maximum security, you need to read any pending data before you write. This is because there's another type of data that the protocol uses: alerts, a good number of which are fatal and require the entire connection to be destroyed and recreated from scratch. Back to your clarification on the modes. It is still unclear how this would work; here is the strictest pseudo code case I can think up. This is where: * the exact address for the 4096th byte to send is always at the same address for every repeated SSL_write() call and * I don't change or reduce the amount of data to be written during subsequent SSL_write(), until all 4096 bytes of the first SSL_write() have been committed into OpenSSL. 
char pinned_buffer[4096]; int want_write_len = 4096; int offset = 0; int left = want_write_len; do { int n = SSL_write(ssl, &pinned_buffer[offset], left); if(n < 0) { sleep_as_necessary(); } else if(n > 0) { offset += n; left -= n; } } while(left > 0); In practice many applications may copy their data to a local stack buffer and give that stack buffer to SSL_write(). This means the data shuffles up and the next 4096 byte window is used for SSL_write(). So what I am asking now is what is the _LEAST_ strict case that can be allowed too, if the one above is what I see as the most strict usage. The need for this dumbfounds me. If SSL_write() is returning (<= 0) then it should not have taken any data from my buffer, nor be retaining my buffer address (or accessing data outside the scope of the function call). I understand that you have prioritized traffic, but you've already stated that /you have committed that data to the connection/. OpenSSL has every right to take some of the data (for example, if you pass a buffer with a length of 4096 and the underlying interface handles only 1440-byte writes at a time) and run its internal operations -- such as updating its packet count, calculating the HMAC, and running the part of the buffer it can process next through the block or stream cipher
Re: OpenSSL and multiple threads
Hello, > > The select is part of the OpenSSL implementation. I specifically avoided > > the select() by going multi threaded and here I am sitting with a select > > problem (I think) due to the OpenSSL library. > > > > I want to stay away from hacking the OpenSSL library. > Sorry for misunderstanding. > SSL_accept() calls RAND_bytes() and RAND_pseudo_bytes() > (which calls RAND_bytes(), ignoring a not-seeded PRNG). > If the PRNG is not seeded, RAND_bytes() tries to seed itself > using RAND_poll(); RAND_add() does not use select(). > It seems to me that if you properly initialize the PRNG > (before creating threads) this may resolve the problem. > I think something like: > RAND_load_file("/dev/urandom", 1024); > should be enough. Sorry once again - it seems that Bodo Moeller already said that. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
On Mon, Jun 26, 2006 at 12:35:57PM +0100, Darryl Miles wrote: > "Some of the calls to SSL_write() may read some of your data" -- I am > still not sure how the reading of data impacts the write operation. Are > you saying that when WANT_READ is returned from SSL_write() the OpenSSL > library has already committed some number of bytes from the buffer given, > but because it's returning -1 WANT_READ it is failing to report that > situation back to the application during the first SSL_write() call ? Yes. During the first call to SSL_write(), OpenSSL may take as many bytes as fit into one TLS record, and encrypt this for transport. Then SSL_write() may fail with WANT_WRITE or WANT_READ both before and after this first record has been written, until finally all the data has been broken up into records and all these records have been sent. > An under-reporting of committed bytes, if you want to call it that. This > would also imply you can't reduce the amount of data to SSL_write() > in a subsequent call after one that failed. Or it implies that OpenSSL may access > bytes outside of the range given by the currently executing SSL_write(), > in that it's somehow still using the buffer address given during a > previous SSL_write() call. > > I still have not gotten to the bottom of the entire scope of > situation(s) that can cause an SSL_write() to return -1 WANT_READ. If it's > only renegotiation that can, then this is always instigated by an > SSL_renegotiate() (from my side) or an SSL_read() that causes a > re-negotiate request (from the remote side) to be processed. Yes. The other party is always allowed to start renegotiation, so your SSL_write() call might be during renegotiation; but OpenSSL won't try to read data from the other party during SSL_write() unless it already knows that a renegotiation is going on. > It is still unclear how this would work, here is the strictest pseudo > code case I can think up. 
This is where: > > * the exact address for the 4096th byte to send is always at the same > address for every repeated SSL_write() call and > > * I don't change or reduce the amount of data to be written during > subsequent SSL_write(), until all 4096 bytes of the first SSL_write() > have been committed into OpenSSL. Exactly. You should not change the amount of data (n), and you should not change the contents of these n bytes. You may change the address of that buffer (provided that the contents remain the same) if you set the flag that you asked about. However ... > char pinned_buffer[4096]; > int want_write_len = 4096; > int offset = 0; > int left = want_write_len; > > do { > int n = SSL_write(ssl, &pinned_buffer[offset], left); > if(n < 0) { > sleep_as_necessary(); > } else if(n > 0) { > offset += n; > left -= n; > } > } while(left > 0); ... once SSL_write() returns a positive number, this indicates that this number of bytes, and *only* this number of bytes, has been processed. So any subsequent SSL_write() is "detached" from this SSL_write(); OpenSSL does not care what you change in the buffer. Note that you'll have to set SSL_MODE_ENABLE_PARTIAL_WRITE to cause OpenSSL to return success before *all* of the application buffer has been written. The default is that OpenSSL will write all the data, using multiple records if necessary; with SSL_MODE_ENABLE_PARTIAL_WRITE, SSL_write() will report success once a single record has been written. > In practice many applications may copy their data to a local stack > buffer and give that stack buffer to SSL_write(). This means the data > shuffles up and the next 4096 byte window is used for SSL_write(). > > So what I am asking now is what is the _LEAST_ strict case that can be > allowed, if the one above is what I see as the most strict usage. > > > > The need for this dumbfounds me. 
If SSL_write() is returning (<= 0) > then it should not have taken any data from my buffer, nor be retaining > my buffer address (or accessing data outside the scope of the function > call). If SSL_write() has started writing a first record, but delayed other data to later records, then it may have to return -1 to indicate a "WANT_WRITE" or "WANT_READ" condition. > It is also valid for me to "change my mind" about exactly what > application data I want to write at the next SSL_write() call. This > may be a change of application data contents or a change of amount of > data to write (length). Not if OpenSSL has already started handling the application data. In that case you should buffer the application data so that you can repeat the SSL_write() call properly. > In fact I have an application that does exactly this, it implements a > priority queue of packetized data and the decision about what to send > next is made right at the moment it knows it can call write(). > > > >But sometimes, you might want to change the buffer location for some > >reason, e.g. since the SSL_write() data buffer is just a window in a > >larger buffer handled by the application.
Re: OpenSSL and multiple threads
Hello, > On Mon, 2006-06-26 at 12:46 +0200, Marek Marcola wrote: > > Or a resolution for this problem may be defining a new data type > > "my_fd_set", replacing the FD_* macros, and using this new data type in > > select() with a cast to fd_set. > > The select is part of the OpenSSL implementation. I specifically avoided > the select() by going multi threaded and here I am sitting with a select > problem (I think) due to the OpenSSL library. > > I want to stay away from hacking the OpenSSL library. Sorry for misunderstanding. SSL_accept() calls RAND_bytes() and RAND_pseudo_bytes() (which calls RAND_bytes(), ignoring a not-seeded PRNG). If the PRNG is not seeded, RAND_bytes() tries to seed itself using RAND_poll(); RAND_add() does not use select(). It seems to me that if you properly initialize the PRNG (before creating threads) this may resolve the problem. I think something like: RAND_load_file("/dev/urandom", 1024); should be enough. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: OpenSSL and multiple threads
On Mon, Jun 26, 2006 at 12:25:09PM +0200, Leon wrote: > On Mon, 2006-06-26 at 11:44 +0200, Bodo Moeller wrote: >> What is the file descriptor number that you observe during these >> calls? > The file descriptor is 1507 which seems correct since each thread opened > a socket. >> Can a single-threaded application handle that many files (and >> still do select()) on whatever OS platform it is that you're using? > I did increase the file descriptor to 100 using ulimit > and /proc/sys/fs/max-file This does not increase FD_SETSIZE, though. It is probably an OpenSSL bug to try select() without considering FD_SETSIZE, but RAND_poll(), where you encounter these problems, is just OpenSSL's last resort when trying to seed its pseudo-random number generator (PRNG). The best thing to do is to have your application initialize the PRNG with some high-quality randomness before you start additional threads by using RAND_add() or RAND_seed(); then OpenSSL won't have to use RAND_poll() in the first place.
Re: OpenSSL and multiple threads
Darryl Miles wrote: But I can see your point now, if it is an OpenSSL problem you are pretty much stuck. For example if OpenSSL uses select() to sleep for /dev/random but your application is already into the 1500th active file descriptor. Then OpenSSL is pretty much hosed for using select() inside itself; in fact it should do a sanity check internally, otherwise random corruption/crashes will occur because FD_SET() may scribble on memory. Heh, nice... maybe this is a candidate: rand/rand_unix.c:211: if (select(fd+1,&fset,NULL,NULL,&t) < 0) You are safe if you don't increase your ulimit -n above the standard default of 1024. Above that OpenSSL may not be safe anymore. Try adding stack padding around the "fset" variable and see if your crashes go away to prove the solution. Then maybe we can patch this for a poll() on platforms that support poll, and do a sanity check of: if(fd >= __FD_SETSIZE) { do_error(); } on platforms that don't. Darryl
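Darryl's proposed poll() replacement can be sketched like this. poll(2) takes an array of descriptors rather than a fixed-size bitmap, so it has no FD_SETSIZE ceiling; this is a sketch of what a drop-in for the wait in rand/rand_unix.c could look like, not the actual patch:

```c
#include <poll.h>

/* Wait up to timeout_ms for fd to become readable.  Safe for any fd
 * value -- unlike select() with an on-stack fd_set, where an fd at or
 * above FD_SETSIZE makes FD_SET() scribble past the end of the set. */
static int wait_readable(int fd, int timeout_ms)
{
    struct pollfd p;
    p.fd = fd;
    p.events = POLLIN;
    p.revents = 0;
    return poll(&p, 1, timeout_ms);  /* >0 ready, 0 timed out, <0 error */
}
```

On platforms without poll(), the select() path would keep the `if (fd >= FD_SETSIZE) error;` sanity check instead.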
Re: OpenSSL and multiple threads
Leon wrote: $ grep __FD_SETSIZE /usr/include/bits/*.h /usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024 This is the maximum number of fd's the "fd_set" type holds by default. Maybe it would be possible to stop the crashes and override this with some ugly stack paddings: If I follow the OpenSSL code (using source navigator), the FD_SET and others are defined in openssl/apps/s_apps.h and the code restricts the set to be a maximum of 32 bits (sizeof int). 32 entries is enough since the random devices are only accessed one by one at that point. So please educate me: How does the system-defined __FD_SETSIZE influence the OpenSSL-defined FD_SET? [ Or am I just totally missing the point now? ] I am making the presumption that it was application code using the FD_SET macros and allocating the fd_set storage. You quote the file openssl/apps/s_apps.h but isn't this file for the distributed applications openssl comes with, like /usr/local/bin/openssl ? How does that header file affect *users* of OpenSSL? The header files you should be looking at are in openssl/include/openssl/*.h ? But I can see your point now, if it is an OpenSSL problem you are pretty much stuck. For example if OpenSSL uses select() to sleep for /dev/random but your application is already into the 1500th active file descriptor. Then OpenSSL is pretty much hosed for using select() inside itself; in fact it should do a sanity check internally, otherwise random corruption/crashes will occur because FD_SET() may scribble on memory. Unless you start to compile a custom version of the OpenSSL library and make sure you link your application with that specific version (like -static). But you said you didn't want to do this. So maybe you are asking too much; something has to give. You can't increase the default fd_set without recompiling whatever is using it and raising the limit. Alternatively, as Marek suggested, look at /usr/include/sys/select.h and make your own my_fd_set. 
The Linux glibc macros look like they will still work on large sizes. Darryl
Root certificates usage and distribution
Hello. Sorry for possible off topic, but maybe openssl users and gurus will be able to help me in an SSL related issue. I want to receive as many root certificates as possible for an internal application (SSL gateway). For me the simplest way to do it: export all of them from Internet Explorer. But as far as I understand, each CA (like VeriSign) has its own agreement for root certificate download. So, could you please give me advice: is it acceptable from a technical and legal point of view to export root certificates from IE and use them for my own purposes, or should I ask each CA separately for them? Many thanks in advance.
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
Bodo Moeller wrote: On Thu, Jun 22, 2006 at 10:41:14PM +0100, Darryl Miles wrote: SSL_CTX_set_mode(3) SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER Make it possible to retry SSL_write() with changed buffer location (the buffer contents must stay the same). This is not the default to avoid the misconception that non-blocking SSL_write() behaves like non-blocking write(). What is that all about ? My application makes no guarantee what the exact address given to SSL_write() is, it only guarantees the first so many bytes are my valid data. Why do I need to give it such guarantees ? Thanks for this clearest explanation so far. When using SSL_write() over a non-blocking transport channel, you may have to call SSL_write() multiple times until all your data has been transferred. In this case, the data buffer needs to stay constant between calls until SSL_write() finally returns a positive number since (unless you are using SSL_MODE_ENABLE_PARTIAL_WRITE) some of the calls to SSL_write() may read some of your data, and if the buffer changes, you might end up inadvertently transferring incoherent data. To help detect such potential application bugs, OpenSSL includes a simple sanity check -- if SSL_write() is called again but the data buffer *location* has changed, OpenSSL suspects that this is a mistake and returns an error. "Some of the calls to SSL_write() may read some of your data" -- I am still not sure how the reading of data impacts the write operation. Are you saying that when WANT_READ is returned from SSL_write() the OpenSSL library has already committed some number of bytes from the buffer given, but because it's returning -1 WANT_READ it is failing to report that situation back to the application during the first SSL_write() call ? An under-reporting of committed bytes, if you want to call it that. This would also imply you can't reduce the amount of data to SSL_write() in a subsequent call after one that failed. 
Or it implies that OpenSSL may access bytes outside of the range given by the currently executing SSL_write(), in that it's somehow still using the buffer address given during a previous SSL_write() call. I still have not gotten to the bottom of the entire scope of situation(s) that can cause an SSL_write() to return -1 WANT_READ. If it's only renegotiation that can, then this is always instigated by an SSL_renegotiate() (from my side) or an SSL_read() that causes a re-negotiate request (from the remote side) to be processed. Back to your clarification on the modes. It is still unclear how this would work; here is the strictest pseudo code case I can think up. This is where: * the exact address for the 4096th byte to send is always at the same address for every repeated SSL_write() call and * I don't change or reduce the amount of data to be written during subsequent SSL_write(), until all 4096 bytes of the first SSL_write() have been committed into OpenSSL. char pinned_buffer[4096]; int want_write_len = 4096; int offset = 0; int left = want_write_len; do { int n = SSL_write(ssl, &pinned_buffer[offset], left); if(n < 0) { sleep_as_necessary(); } else if(n > 0) { offset += n; left -= n; } } while(left > 0); In practice many applications may copy their data to a local stack buffer and give that stack buffer to SSL_write(). This means the data shuffles up and the next 4096 byte window is used for SSL_write(). So what I am asking now is what is the _LEAST_ strict case that can be allowed, if the one above is what I see as the most strict usage. The need for this dumbfounds me. If SSL_write() is returning (<= 0) then it should not have taken any data from my buffer, nor be retaining my buffer address (or accessing data outside the scope of the function call). It is also valid for me to "change my mind" about exactly what application data I want to write at the next SSL_write() call. This may be a change of application data contents or a change of amount of data to write (length). 
In fact I have an application that does exactly this, it implements a priority queue of packetized data and the decision about what to send next is made right at the moment it knows it can call write(). But sometimes, you might want to change the buffer location for some reason, e.g. since the SSL_write() data buffer is just a window in a larger buffer handled by the application. To tell OpenSSL that such an address change is intentional in your application, and that the application will make sure that any buffer contents will be preserved until SSL_write() reports success, you can set the SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag. This will not change OpenSSL's operation in any way except disabling the sanity check, since setting this flag indicates that your application does not require this check. When you say "change the buffer location" do you mean the exact offset given to SSL_write() in the 2nd argument ?
Re: OpenSSL and multiple threads
On Mon, Jun 26, 2006, Leon wrote: > > $ grep __FD_SETSIZE /usr/include/bits/*.h > > /usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024 > > > > This is the maximum number of fd's the "fd_set" type holds by default. > > Maybe it would be possible to stop the crashes and override this with > > some ugly stack paddings: > > If I follow the OpenSSL code (using source navigator), the FD_SET and > others are defined in openssl/apps/s_apps.h and the code restricts the > set to be a maximum of 32 bits (sizeof int). 32 entries is enough since > the random devices are only accessed one by one at that point. > So please educate me: How does the system-defined __FD_SETSIZE influence > the OpenSSL-defined FD_SET? [ Or am I just totally missing the point > now? ] > The header file s_apps.h is used to build the openssl command line utility only. It is not used by the library itself. Steve. -- Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage OpenSSL project core developer and freelance consultant. Funding needed! Details on homepage. Homepage: http://www.drh-consultancy.demon.co.uk
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 03:53 -0700, Girish Venkatachalam wrote: > Right. If I were you I would use kqueue() on *BSD or > epoll() which is available only on 2.6 Linux kernels. > > I am not sure what you are trying to achieve but it > may be worthwhile to take a look at libevent by Niels > Provos as well. It abstracts out select(), kqueue() > and epoll(), thus making your app portable as a bonus. > > You may read the paper by Jonathan Lemon on kqueue()'s > advantages over select(). Select() gets horribly > inefficient as the number of file descriptors > increases. As I said in another posting: It's not me implementing the select(), it is OpenSSL code. Thanks for the advice anyway. Thanks Leon
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 12:46 +0200, Marek Marcola wrote: > Or a resolution for this problem may be defining a new data type > "my_fd_set", replacing the FD_* macros, and using this new data type in > select() with a cast to fd_set. The select is part of the OpenSSL implementation. I specifically avoided the select() by going multi threaded and here I am sitting with a select problem (I think) due to the OpenSSL library. I want to stay away from hacking the OpenSSL library. Thanks Leon
Re: OpenSSL and multiple threads
> $ grep __FD_SETSIZE /usr/include/bits/*.h > /usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024 > > This is the maximum number of fd's the "fd_set" type holds by default. > Maybe it would be possible to stop the crashes and override this with > some ugly stack paddings: If I follow the OpenSSL code (using source navigator), the FD_SET and others are defined in openssl/apps/s_apps.h and the code restricts the set to be a maximum of 32 bits (sizeof int). 32 entries is enough since the random devices are only accessed one by one at that point. So please educate me: How does the system-defined __FD_SETSIZE influence the OpenSSL-defined FD_SET? [ Or am I just totally missing the point now? ] Thanks again Leon
Re: OpenSSL and multiple threads
Right. If I were you I would use kqueue() on *BSD or epoll(), which is available only on 2.6 Linux kernels. I am not sure what you are trying to achieve but it may be worthwhile to take a look at libevent by Niels Provos as well. It abstracts out select(), kqueue() and epoll(), thus making your app portable as a bonus. You may read the paper by Jonathan Lemon on kqueue()'s advantages over select(). select() gets horribly inefficient as the number of file descriptors increases. HTH, Girish --- Darryl Miles <[EMAIL PROTECTED]> wrote: > Krishna M Singh wrote: > > We are using the multiple contexts (although not > same as thread count > > i.e. 10 Contexts for 3 threads).. Select call may > be failing as the > > default FD_SET_SIZE is 255 on most systems and > thus in case you want to > > handle 1000 sockets you need to increase the limit.. > There is a #def in > > some Windows file.. check that.. (Assuming you are > running this on > > Windows)... > > This response is closest to the truth. > > By default the "fd_set" type is a standard size on > each platform. In > another post you indicated you are using Linux > 2.6.xx, which I presume > is based on glibc 2.3.x. > > Looking at the systems I have access to, glibc 2.3.2 > and glibc 2.3.6: > > > $ grep __FD_SETSIZE /usr/include/bits/*.h > /usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024 > > This is the maximum number of fd's the "fd_set" type > holds by default. > Maybe it would be possible to stop the crashes and > override this with > some ugly stack paddings: > > #define EXTRA_FDS 500 > char padd0[(EXTRA_FDS/8)+1]; > fd_set fdread; > char padd1[(EXTRA_FDS/8)+1]; > > > Maybe look at the comment in > /usr/include/linux/posix_types.h for > information. > > > It does not look like there is a compile-time > override to allow a larger > size. Maybe the kernel has a hard upper limit for > select() too, but I > don't think this is the case. > > > As Kyle also pointed out, "ulimit -n 1500" would need > to be addressed on > most standard Linux installs to get any usage > beyond the default 1024 > limit. Otherwise accept() = -1 (EMFILE). > > > You may look at poll() and epoll() as alternative > event wake mechanisms > for IO with large numbers of fds in the working set. > > > HTH > > Darryl
Re: OpenSSL and multiple threads
Hello, > This is the maximum number of fd's the "fd_set" type holds by default. > Maybe it would be possible to stop the crashes and override this with > some ugly stack paddings: > > #define EXTRA_FDS 500 > char padd0[(EXTRA_FDS/8)+1]; > fd_set fdread; > char padd1[(EXTRA_FDS/8)+1]; Or a resolution for this problem may be defining a new data type "my_fd_set", replacing the FD_* macros, and using this new data type in select() with a cast to fd_set. Best regards, -- Marek Marcola <[EMAIL PROTECTED]>
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 03:00 -0700, Kyle Hamilton wrote: > Does changing the quota of file descriptors available to the program > modify its behavior? 'ulimit -n -H' should give you your maximum > number allowed. Made it more from the start. Set at 10. > Also, your system may have a limit on the number of open sockets to > /dev/random. Well, the app segfaults when I make a single connection to one of the 1500 listening sockets. So only one thread will open /dev/random to create the session ID, and it segfaults right there. > What's your operating system? Gentoo 2.6.15-gentoo-r5 for the dual Xeon box and Ubuntu Breezy on the AMD Sempron. Both have 1GB RAM. > > -Kyle H > > On 6/26/06, Leon <[EMAIL PROTECTED]> wrote: [...]
Re: OpenSSL and multiple threads
Krishna M Singh wrote: We are using the multiple contexts (although not same as thread count i.e. 10 Contexts for 3 threads).. Select call may be failing as the default FD_SET_SIZE is 255 on most systems and thus in case you want to handle 1000 sockets you need to increase the limit.. There is a #def in some Windows file.. check that.. (Assuming you are running this on Windows)... This response is closest to the truth. By default the "fd_set" type is a standard size on each platform. In another post you indicated you are using Linux 2.6.xx, which I presume is based on glibc 2.3.x. Looking at the systems I have access to, glibc 2.3.2 and glibc 2.3.6: $ grep __FD_SETSIZE /usr/include/bits/*.h /usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024 This is the maximum number of fd's the "fd_set" type holds by default. Maybe it would be possible to stop the crashes and override this with some ugly stack paddings: #define EXTRA_FDS 500 char padd0[(EXTRA_FDS/8)+1]; fd_set fdread; char padd1[(EXTRA_FDS/8)+1]; Maybe look at the comment in /usr/include/linux/posix_types.h for information. It does not look like there is a compile-time override to allow a larger size. Maybe the kernel has a hard upper limit for select() too, but I don't think this is the case. As Kyle also pointed out, "ulimit -n 1500" would need to be addressed on most standard Linux installs to get any usage beyond the default 1024 limit. Otherwise accept() = -1 (EMFILE). You may look at poll() and epoll() as alternative event wake mechanisms for IO with large numbers of fds in the working set. HTH Darryl
Re: OpenSSL and multiple threads
Does changing the quota of file descriptors available to the program modify its behavior? 'ulimit -n -H' should give you your maximum number allowed. Remember that network sockets are considered files as well. From your description, I'd think that your file limit is around 2048. Also, your system may have a limit on the number of open sockets to /dev/random. What's your operating system? -Kyle H On 6/26/06, Leon <[EMAIL PROTECTED]> wrote: Thanks for your reply! > Some days back, we had a riot on "select" call usage. You may revisit those > posts to see if it is helpful. Well, I do not think it is select() since it works for a 1000 threads. The part that fails is also part of the standard OpenSSL code so I would not like to change it unless an explicit error is found. > One thing I do not get is: "Each thread has it's own SSL context ". Yes I setup the SSL_CTX in each thread. I have also taken it out of the threads into main() creating one global context BUT this gives the same error. > Therefore, I think creating as many SSL contexts as many threads are there > is an issue. Exactly - what is the issue? If a 1000 threads work why not 1500? Both machines have 1GB RAM and running Gentoo 2.5.15 (xeon) and Ubuntu Breezy (Sempron). Cheers Leon > > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Leon > Sent: Monday, June 26, 2006 12:19 PM > To: openssl-users@openssl.org > Subject: OpenSSL and multiple threads > > > Hi, > > I wrote a server app to use multiple threads. Each thread has it's own > SSL context and perform all the socket (socket,accept) and SSL > (SSL_accept) tasks. I implemented the dynamic lock mechanism for multi > thread support. > > I can start the server with the 1000 threads (one for each required > port) that we need, but found that it fails if I try 1500. Now I am > afraid that the 1000 threads may also become unstable with continuous > use. 
This failure occurs when all the threads are listening and I open a > single connection to one of them. That is: there is no load. > > I tracked the bug with gdb and found that it fails in RAND_poll(), > called from SSL_accept(), when a new session key is generated. The > strange thing is that after the file descriptor set is zeroed > [FD_ZERO(&fset)] the call [FD_SET(fd,&fset)] to add the random device's > file descriptor does not work. The set remains zeroed and once > select() has executed using this set, most of the parameters, including > the random device's file descriptor, become zeroed (NULL). > > It looks as if something is overrunning my stack - any ideas as to how > to get the sucker? I had the same results on a dual Xeon as well as an > Athlon box. Using 0.9.7e. > > Please ignore the design - it is supposed to be experimental. > > Thanks > Leon
Re: OpenSSL and multiple threads
On Mon, 2006-06-26 at 10:46 +0200, Bernhard Froehlich wrote: > Hmm, another wild shot, could it be that /dev/random runs out of entropy > and blocks? > What exactly are the symptoms? Does your application crash or is it just > hanging? Sorry, I did post a follow-up directly after the initial post but it seems to have been lost. The app segfaults when trying to get the next random device in the list, since the list is now NULL. The for-loop did not even get to read any data from the random device, since FD_SET failed to add the fd to the zeroed set. Anyway, it tries to read from /dev/urandom first, which (as far as I know) does not care about entropy. Thanks for your help. Leon
RE: OpenSSL and multiple threads
On Mon, 2006-06-26 at 14:31 +0530, Ambarish Mitra wrote: > > One thing I do not get is: "Each thread has its own SSL context". > Yes, I set up the SSL_CTX in each thread. I have also taken it out of the > threads into main(), creating one global context, BUT this gives the same > error. > > - Even if it gives the same error, I think you should pursue the route of > creating one context in the application, and using the context to create the > SSL objects in each of the threads. This is the correct way to design, I > guess. Once the correct design is implemented, then we can try to diagnose. Done - same problem! > Are you on Windows or *nix? Random seeding is done differently on > different systems, therefore this is important to know. I did post a follow-up after my initial post but it seems to have been lost: Using Gentoo 2.6.15 (32bit) on the Xeon box and Ubuntu Breezy on the Sempron box. NOTE: I can start the app with only one thread. If I do this and start the app 1500 times, everything works fine. Thanks again for the help! Leon
Re: OpenSSL and multiple threads
On Mon, Jun 26, 2006 at 08:49:19AM +0200, Leon wrote: > I tracked the bug with gdb and found that it fails in RAND_poll(), > called from SSL_accept(), when a new session key is generated. The > strange thing is that after the file descriptor set is zeroed > [FD_ZERO(&fset)] the call [FD_SET(fd,&fset)] to add the random device's > file descriptor does not work. The set remains zeroed and once > select() has executed using this set, most of the parameters, including > the random device's file descriptor, become zeroed (NULL). > > It looks as if something is overrunning my stack - any ideas as to how > to get the sucker? I had the same results on a dual Xeon as well as an > Athlon box. Using 0.9.7e. What is the file descriptor number that you observe during these calls? Can a single-threaded application handle that many files (and still do select()) on whatever OS platform it is that you're using?
RE: OpenSSL and multiple threads
> One thing I do not get is: "Each thread has its own SSL context". Yes, I set up the SSL_CTX in each thread. I have also taken it out of the threads into main(), creating one global context, BUT this gives the same error. - Even if it gives the same error, I think you should pursue the route of creating one context in the application, and using the context to create the SSL objects in each of the threads. This is the correct way to design, I guess. Once the correct design is implemented, then we can try to diagnose. > Therefore, I think creating as many SSL contexts as many threads are there > is an issue. Exactly - what is the issue? If 1000 threads work, why not 1500? - Again, 1000 threads may work today, but if the design is flawed, then there is no certainty that it will work tomorrow. Maybe there is some undiagnosed bug that is not manifesting with 1000 threads, but is manifesting with 1500 threads. Since the design is flawed, it is not worth chasing this question. Are you on Windows or *nix? Random seeding is done differently on different systems, therefore this is important to know. DISCLAIMER == This e-mail may contain privileged and confidential information which is the property of Persistent Systems Pvt. Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Pvt. Ltd. does not accept any liability for virus infected mails.
Re: OpenSSL and multiple threads
Leon wrote: Thanks for your reply! Some days back, we had a riot on "select" call usage. You may revisit those posts to see if it is helpful. Well, I do not think it is select() since it works for 1000 threads. The part that fails is also part of the standard OpenSSL code so I would not like to change it unless an explicit error is found. [...] Hmm, another wild shot, could it be that /dev/random runs out of entropy and blocks? What exactly are the symptoms? Does your application crash or is it just hanging? Hope it helps, Ted ;) -- PGP Public Key Information Download complete Key from http://www.convey.de/ted/tedkey_convey.asc Key fingerprint = 31B0 E029 BCF9 6605 DAC1 B2E1 0CC8 70F4 7AFB 8D26
RE: OpenSSL and multiple threads
Thanks for your reply! > Some days back, we had a riot on "select" call usage. You may revisit those > posts to see if it is helpful. Well, I do not think it is select() since it works for 1000 threads. The part that fails is also part of the standard OpenSSL code, so I would not like to change it unless an explicit error is found. > One thing I do not get is: "Each thread has its own SSL context". Yes, I set up the SSL_CTX in each thread. I have also taken it out of the threads into main(), creating one global context, BUT this gives the same error. > Therefore, I think creating as many SSL contexts as many threads are there > is an issue. Exactly - what is the issue? If 1000 threads work, why not 1500? Both machines have 1GB RAM and are running Gentoo 2.6.15 (Xeon) and Ubuntu Breezy (Sempron). Cheers Leon > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Leon > Sent: Monday, June 26, 2006 12:19 PM > To: openssl-users@openssl.org > Subject: OpenSSL and multiple threads > > Hi, > > I wrote a server app to use multiple threads. Each thread has its own > SSL context and performs all the socket (socket, accept) and SSL > (SSL_accept) tasks. I implemented the dynamic lock mechanism for multi-thread support. > > I can start the server with the 1000 threads (one for each required > port) that we need, but found that it fails if I try 1500. Now I am > afraid that the 1000 threads may also become unstable with continuous > use. This failure occurs when all the threads are listening and I open a > single connection to one of them. That is: there is no load. > > I tracked the bug with gdb and found that it fails in RAND_poll(), > called from SSL_accept(), when a new session key is generated. The > strange thing is that after the file descriptor set is zeroed > [FD_ZERO(&fset)] the call [FD_SET(fd,&fset)] to add the random device's > file descriptor does not work. The set remains zeroed and once > select() has executed using this set, most of the parameters, including > the random device's file descriptor, become zeroed (NULL). > > It looks as if something is overrunning my stack - any ideas as to how > to get the sucker? I had the same results on a dual Xeon as well as an > Athlon box. Using 0.9.7e. > > Please ignore the design - it is supposed to be experimental. > > Thanks > Leon
Re: SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
On Thu, Jun 22, 2006 at 10:41:14PM +0100, Darryl Miles wrote: > SSL_CTX_set_mode(3) > > SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER > Make it possible to retry SSL_write() with changed buffer > location (the buffer contents must stay the same). This is not the > default, to avoid the misconception that non-blocking SSL_write() behaves like > non-blocking write(). > > What is that all about? My application makes no guarantee what the > exact address given to SSL_write() is, it only guarantees the first so > many bytes are my valid data. Why do I need to give it such guarantees? When using SSL_write() over a non-blocking transport channel, you may have to call SSL_write() multiple times until all your data has been transferred. In this case, the data buffer needs to stay constant between calls until SSL_write() finally returns a positive number, since (unless you are using SSL_MODE_ENABLE_PARTIAL_WRITE) some of the calls to SSL_write() may read some of your data, and if the buffer changes, you might end up inadvertently transferring incoherent data. To help detect such potential application bugs, OpenSSL includes a simple sanity check -- if SSL_write() is called again but the data buffer *location* has changed, OpenSSL suspects that this is a mistake and returns an error. But sometimes you might want to change the buffer location for some reason, e.g. because the SSL_write() data buffer is just a window into a larger buffer handled by the application. To tell OpenSSL that such an address change is intentional in your application, and that the application will make sure that any buffer contents will be preserved until SSL_write() reports success, you can set the SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag. This will not change OpenSSL's operation in any way except disabling the sanity check, since setting this flag indicates that your application does not require this check.
Re: Accessing Manual Pages in openssl
On 26/06/06, Simon <[EMAIL PROTECTED]> wrote: >> Another option may be using http://www.openssl.org/docs/crypto/, >> http://www.openssl.org/docs/ssl and http://www.openssl.org/docs/apps > > Is that the same as the generated files please? Seems so; however, my only comment is that there is no printer-friendly version. Since I doubt I can help any other way, I could convert the existing text into XML, say DocBook; then it could be provided as man pages, HTML, PDF, etc. on any system, if that's any use. What format/encoding is the current documentation in, please? regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Re: OpenSSL and multiple threads
Hi We are using multiple contexts (although not the same as the thread count, i.e. 10 contexts for 3 threads).. The select call may be failing, as the default FD_SET_SIZE is 255 on most systems, and thus in case you want to handle 1000 sockets you need to increase the limit.. There is a #def in some Windows file.. check that.. (Assuming you are running this on Windows)... Running 1000 contexts.. wow.. that must be challenging, and there might be some assumptions about max. contexts per application... The best thing would be to start with N=10 contexts and then check the limit where it fails.. Then search for that limit in the OpenSSL code. That might give us the location where it is hardcoded.. HTH Regards -Krishna Flextronics, Gurgaon, India On 6/26/06, Ambarish Mitra <[EMAIL PROTECTED]> wrote: Some days back, we had a riot on "select" call usage. You may revisit those posts to see if it is helpful. One thing I do not get is: "Each thread has its own SSL context". I also had a multi-threaded application, and for the entire process there was only one context created with SSL_CTX_new. The individual threads had their own SSL objects from the context, using SSL_new. SSL_new takes the SSL context as the input argument. Therefore, I think creating as many SSL contexts as there are threads is an issue. [...]
RE: OpenSSL and multiple threads
Some days back, we had a riot on "select" call usage. You may revisit those posts to see if it is helpful. One thing I do not get is: "Each thread has its own SSL context". I also had a multi-threaded application, and for the entire process there was only one context created with SSL_CTX_new. The individual threads had their own SSL objects from the context, using SSL_new. SSL_new takes the SSL context as the input argument. Therefore, I think creating as many SSL contexts as there are threads is an issue. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Leon Sent: Monday, June 26, 2006 12:19 PM To: openssl-users@openssl.org Subject: OpenSSL and multiple threads Hi, I wrote a server app to use multiple threads. Each thread has its own SSL context and performs all the socket (socket, accept) and SSL (SSL_accept) tasks. I implemented the dynamic lock mechanism for multi-thread support. I can start the server with the 1000 threads (one for each required port) that we need, but found that it fails if I try 1500. Now I am afraid that the 1000 threads may also become unstable with continuous use. This failure occurs when all the threads are listening and I open a single connection to one of them. That is: there is no load. I tracked the bug with gdb and found that it fails in RAND_poll(), called from SSL_accept(), when a new session key is generated. The strange thing is that after the file descriptor set is zeroed [FD_ZERO(&fset)] the call [FD_SET(fd,&fset)] to add the random device's file descriptor does not work. The set remains zeroed and once select() has executed using this set, most of the parameters, including the random device's file descriptor, become zeroed (NULL). It looks as if something is overrunning my stack - any ideas as to how to get the sucker? I had the same results on a dual Xeon as well as an Athlon box. Using 0.9.7e. Please ignore the design - it is supposed to be experimental. Thanks Leon