Re: Openssl changes 19759-19762 (AES_wrap_key)
On Mon, Jul 12, 2010, Victor Duchovni wrote:

In changes:
    http://cvs.openssl.org/chngview?cn=19759
    http://cvs.openssl.org/chngview?cn=19760
    http://cvs.openssl.org/chngview?cn=19761
    http://cvs.openssl.org/chngview?cn=19762
a bug is fixed in AES_wrap_key(), but the same bug remains in AES_unwrap_key(). What is the impact of this pair of bugs? Where are AES_wrap_key() and AES_unwrap_key() used?

It looks like these are used only in CMS_RecipientInfo_encrypt() and CMS_RecipientInfo_decrypt(), via cms_RecipientInfo_kekri_encrypt() and cms_RecipientInfo_kekri_decrypt(). Should be fixed now. This should only affect external applications using those functions, because internally they are only used to wrap AES keys and the bug isn't triggered for keys of that size.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available; see: http://www.openssl.org
__
OpenSSL Project                     http://www.openssl.org
User Support Mailing List           openssl-users@openssl.org
Automated List Manager              majord...@openssl.org
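To illustrate the internal use Steve describes, here is a minimal sketch of RFC 3394 key wrap with these functions, wrapping a 16-byte AES key (a size that does not trigger the bug in question). The key material here is dummy data for illustration only:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/aes.h>

    int main(void)
    {
        unsigned char kek_bytes[16] = {0};   /* key-encryption key (dummy) */
        unsigned char cek[16] = {0};         /* content key to wrap (dummy) */
        unsigned char wrapped[16 + 8];       /* wrapped output is input + 8 bytes */
        unsigned char unwrapped[16];
        AES_KEY wrap_key, unwrap_key;
        int len;

        /* Wrap with the KEK; a NULL iv selects the RFC 3394 default IV. */
        AES_set_encrypt_key(kek_bytes, 128, &wrap_key);
        len = AES_wrap_key(&wrap_key, NULL, wrapped, cek, sizeof cek);
        printf("wrapped length: %d\n", len);

        /* Unwrap needs the decryption schedule of the same KEK. */
        AES_set_decrypt_key(kek_bytes, 128, &unwrap_key);
        len = AES_unwrap_key(&unwrap_key, NULL, unwrapped, wrapped, sizeof wrapped);
        printf("match: %d\n", memcmp(cek, unwrapped, sizeof cek) == 0);
        return 0;
    }

Wrapping 16 bytes yields 24 bytes of output, and the unwrap round-trips to the original key.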
reading and writing into pem file
Hi, I tried generating a key and writing it to a PEM file, but it gives a segmentation fault. Without the readPrivKey and readPubKey calls it generates the PEM files fine; I don't know why. Can anyone guide me?

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/bio.h>
    #include <openssl/rsa.h>

    RSA *generatersa()
    {
        RSA *rsa;
        rsa = RSA_generate_key(2048, RSA_F4, NULL, NULL);
        return rsa;
    }

    writekey(RSA *key2)
    {
        EVP_PKEY *pkey;
        FILE *fp;
        //BIO *file;

        OpenSSL_add_all_ciphers();
        OpenSSL_add_all_algorithms();
        //file = BIO_new_file(filename, "w");
        pkey = EVP_PKEY_new();
        EVP_PKEY_assign_RSA(pkey, key2);

        // WRITE PRIVATE KEY
        if (!(fp = fopen("private1.pem", "w"))) {
            fprintf(stderr, "Error opening PEM file %s\n", "private1.pem");
            exit(1);
        }
        if (!PEM_write_PrivateKey(fp, pkey, NULL, NULL, 0, NULL, NULL)) {
            fprintf(stderr, "Error writing PEM file %s\n", "private1.pem");
            exit(1);
        }
        close(fp);

        // WRITE PUBLIC KEY
        if (!(fp = fopen("public1.pem", "w"))) {
            fprintf(stderr, "Error opening PEM file %s\n", "public1.pem");
            exit(1);
        }
        if (!PEM_write_PUBKEY(fp, pkey)) {
            fprintf(stderr, "Error writing PEM file %s\n", "public1.pem");
            exit(1);
        }
        close(fp);
    }

    RSA *readPrivKey(char *filename)
    {
        RSA *key;
        BIO *bp;

        OpenSSL_add_all_ciphers();
        OpenSSL_add_all_algorithms();
        bp = BIO_new(BIO_s_file());
        if (BIO_read_filename(bp, filename) <= 0) {
            perror("ERROR: rsakey.pem");
            exit(0);
        }
        if ((key = (RSA *)PEM_read_bio_RSAPrivateKey(bp, NULL, NULL, NULL)) == NULL) {
            ERR_print_errors_fp(stderr);
            key = NULL;
        }
        BIO_free(bp);
        return key;
    }

    RSA *readPubKey(char *filename)
    {
        RSA *key;
        BIO *bp;

        ERR_load_crypto_strings();
        bp = BIO_new(BIO_s_file());
        if (BIO_read_filename(bp, filename) <= 0) {
            perror("ERROR: public.pem");
            exit(0);
        }
        if ((key = (RSA *)PEM_read_bio_RSA_PUBKEY(bp, NULL, NULL, NULL)) == NULL) {
            ERR_print_errors_fp(stderr);
            key = NULL;
        }
        BIO_free(bp);
        return key;
    }

    int main(void)
    {
        RSA *key1;
        FILE *fp;
        RSA *pubkey;
        RSA *privkey;
        char **key;

        key1 = generatersa();
        writekey(key1);
        pubkey = readPubKey("public1.pem");
        privkey = readPrivKey("private1.pem");
        printf("size of (in bytes) pu:pr :: %d:%d\n", RSA_size(pubkey), RSA_size(privkey));
        RSA_free(key1);
    }

Output:

    ki...@kicha-laptop:~/Downloads$ ./output2
    2438:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: PUBLIC KEY
    2438:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: ANY PRIVATE KEY
    Segmentation fault
    ki...@kicha-laptop:~/Downloads$ ls
    function1.c  function2.c  function3.c  function4.c  openssl.c  output1  output2  private1.pem  public1.pem  read.c
    ki...@kicha-laptop:~/Downloads$ cat private.pem
    cat: private.pem: No such file or directory
    ki...@kicha-laptop:~/Downloads$ cat public1.pem
    ki...@kicha-laptop:~/Downloads$ cat public1.pem
    ki...@kicha-laptop:~/Downloads$ cat private1.pem
    ki...@kicha-laptop:~/Downloads$

Thanks for your time,
Krishnamurthy
Re: Connection Resetting
Okay, so the fix for the bug that I mentioned before introduced a much worse bug (that's what I get for not knowing exactly what is going on). This new bug causes the system to keep all threads alive for the lifetime of the proxy, so with enough sites visited the computer the proxy is running on becomes useless (100% processor load). I now understand that the reason they never exit is that the connection is never getting shut down properly as far as the TLS standard is concerned, so the fix I put in was to basically ignore that and keep going. But alas, that didn't work for long. My question is: why is the session getting reset before I can upload a file? Is there some sort of a watchdog timer that I am neglecting to poke before the connection is reset? Any help would be appreciated.

Thanks, Sam

On Mon, Aug 30, 2010 at 12:49 PM, Sam Jantz sjan...@gmail.com wrote:

Dave, thank you for the clarification on HTTP keep-alives. I have just now fixed the bug. The source of the problem was an SSL_read call on the client half of the proxy. This was triggering an error SSL_ERROR_SYSCALL with a ret of zero. According to the documentation this is normally caused by an improperly shut down SSL connection; however, rescheduling the read for when the socket was ready (using a select statement) fixed this issue. I have tested it up to a 5MB file, and it works perfectly. I am a little confused on why I was getting the error in the first place, though. What would cause SSL_ERROR_SYSCALL to be flagged, and have an empty error queue, if the socket was not closed improperly on the other side?

On Sun, Aug 29, 2010 at 11:06 PM, Dave Thompson dthomp...@prinpay.com wrote:

From: owner-openssl-us...@openssl.org On Behalf Of Sam Jantz
Sent: Friday, 27 August, 2010 18:16

I have a question concerning keep-alives.
I'm writing an SSL proxy (which is working great except for this issue) and every time I [POST about 470KB rather than about 18KB] the connection resets, and it gets caught in an infinite retransmit loop. [snip] This behavior is only implemented in Firefox. In the other browsers it seems to fail out with some error about an unexpected reset. Is there some parameter that I can set when establishing the SSL connection that will allow me to wait for larger transfers without resetting?

1. This has nothing to do with keep-alives. HTTP 1.1 keep-alive is a passive feature; it doesn't do anything, instead if agreed the server REFRAINS FROM closing the connection as it would for 1.0.

2. It sounds like the browser is getting RST. (Or to be exact, getting an error from the OS that *it* got RST.) Firefox might respond to this differently than other browsers, by retrying; I don't have time to test. If so, the RST is caused by your proxy doing something abnormal, most likely dying. Check your code for bugs, and/or your logs -- your program does have logging and diagnostic code in it, like any well-designed program, right?

3. Or do you think the proxy is getting RST from gmail? I am 99.99% certain google wouldn't have a problem that would do that, although it isn't completely impossible. It's much more likely to be some network (mis)feature between you and gmail, like a firewall, NAT box, access controller, transparent (but not really) cache, etc. Try without your proxy, but with a client (i.e. browser) on the machine where the proxy is, to the same server with the same amounts of data (or at least reasonably close). If you can, try from different places in the Internet, like from home or a Starbucks versus the company office.

4. SSL itself has no time limits; it will wait forever, or until the underlying TCP connection fails. (If a remote host just dies without closing properly, TCP may detect this in anywhere from a few minutes to many hours or days, depending.)
An application *using* SSL might have a time limit; if so you have to look to that program as to how, and whether, you can change it. And sometimes a firewall or NAT box or such has an idle timeout, where it will terminate your connection if it isn't used for an excessive period of time, and some netadmins have a crazy idea of what is excessive; but I've never seen less than 15 minutes, which I expect is not the case in your example. The really awful ones do this silently, or by faking FIN; the ones that fake(?) RST at least give you a detectable error.

--
Sam Jantz
Software Engineer
ssl_error_handshake_failure_alert hints?
Hi everyone -- I'm an OpenSSL noob trying to debug some code written by someone smarter than me. It's basically a small HTTPS server using self-signed certs. It works fine with IE and Google Chrome, but not Firefox. Even after adding a security exception in Firefox, I still get the dreaded ssl_error_handshake_failure_alert. I'm using OpenSSL 1.0.0a and also tried 0.9.8l.

Firefox (3.6) can successfully connect to:

    openssl s_server -cert mycert.pem -www

(tested with 1.0.0a) and shows all the ciphers, etc. After MUCH reading and Googling, it seems like the below is important (and as far as I know, correct):

    SSL_CTX_new(SSLv23_method())
    SSL_CTX_set_options(ctx, SSL_OP_ALL | SSL_OP_NO_SSLv2 | SSL_OP_SINGLE_DH_USE)
    SSL_CTX_set_cipher_list(ctx, "ALL:!eNULL:!aNULL:@STRENGTH")
    SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, ...

I ran some tests using:

    openssl s_client -CAfile cacert.pem -connect localhost:443

My app with 0.9.8l, openssl 0.9.8l fails with:

    verify return:1
    6436:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:.\ssl\s3_pkt.c:1061:SSL alert number 40
    6436:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:.\ssl\s23_lib.c:188:

If I use -ssl2, it appears to succeed without the SSL errors above. Using -ssl3 fails like above. My app with 1.0.0a, openssl 1.0.0a also fails (it does show certificate info, but the information about the session shows New (NONE), Cipher is (NONE), etc.).

Given the errors above, can anyone point me towards some docs, or APIs, or ??? that can help me troubleshoot and fix the reason that Firefox and openssl s_client can't connect to my HTTPS server?

Thanks a lot,
Doug
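For comparison, here is a minimal sketch of a server-context setup along the lines of the calls listed above. Note the certificate/key-loading calls and the function and file names are assumptions not shown in the original post; a server context does need a certificate and private key loaded for most cipher suites, and a failure there is one common cause of a handshake failure alert:

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    /* Sketch: build a server SSL_CTX as described in the post above.
     * If certfile is non-NULL it is loaded as both cert chain and key
     * (PEM); "mycert.pem" or similar would be passed by the caller. */
    SSL_CTX *make_server_ctx(const char *certfile)
    {
        SSL_CTX *ctx;

        SSL_library_init();
        SSL_load_error_strings();

        ctx = SSL_CTX_new(SSLv23_method());
        if (ctx == NULL)
            return NULL;

        SSL_CTX_set_options(ctx, SSL_OP_ALL | SSL_OP_NO_SSLv2 | SSL_OP_SINGLE_DH_USE);

        if (SSL_CTX_set_cipher_list(ctx, "ALL:!eNULL:!aNULL:@STRENGTH") != 1)
            goto err;

        /* A server must present a certificate and key for non-anonymous
         * cipher suites; without these, clients see handshake failures. */
        if (certfile != NULL) {
            if (SSL_CTX_use_certificate_chain_file(ctx, certfile) != 1)
                goto err;
            if (SSL_CTX_use_PrivateKey_file(ctx, certfile, SSL_FILETYPE_PEM) != 1)
                goto err;
        }

        SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);
        return ctx;

    err:
        ERR_print_errors_fp(stderr);
        SSL_CTX_free(ctx);
        return NULL;
    }

Checking the return value of every one of these calls (and dumping the ERR_ queue on failure, as the error path above does) is usually the fastest way to find which step is actually failing.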
RE: Connection Resetting
From: owner-openssl-us...@openssl.org On Behalf Of Sam Jantz Sent: Monday, 30 August, 2010 13:50 I have just now fixed the bug. The source of the problem was an SSL_read call on the client half of the proxy. This was triggering This is ambiguous; do you mean the connection to the client (where the proxy is acting as server) or the connection to the server (proxy acting as client)? The latter makes more sense to me, see below. an error SSL_ERROR_SYSCALL with a ret of zero. According to the documentation this is normally caused by an improperly shutdown SSL connection, however rescheduling the read for when the socket was ready (using a select statement) fixed this issue. I have tested it up to a 5MB file, and it works perfectly. This isn't entirely clear; are you saying SSL_read returned 0 (which indicates TCP disconnect aka EOF) and *then* SSL_get_error returned _SYSCALL (and not _ZERO_RETURN)? That would mean the peer disconnected but without doing a shutdown alert first. It's arguable whether this is improper, but it's at least suboptimal or possibly worrisome. You didn't say, and I didn't think to ask: how do you know you are sending the whole request? If you are say SSL_read'ing only the first 32K or 100K or 1M or such of the request from the client, SSL_write'ing that to the server, and then SSL_read'ing the server, of course you can't get a valid response. In that case I would expect gmail to time-out and disconnect -- as a huge public service, it can't afford to keep potentially huge numbers of connections from wedged clients. If they did just-disconnect, it might be politer to do SSL-shutdown, but depending on the software structure that might be difficult. You might have been on the right track originally about keepalive. 
For HTTP/1.0 you can simply do connection-oriented:

    while ((n = SSL_read(fromcli, buf)) > 0)
        SSL_write(tosvr, buf, n);
    if (n == 0) {
        /* EOF = disconnect = request complete, now do response */
        while ((n = SSL_read(fromsvr, buf)) > 0)
            SSL_write(tocli, buf, n);
        close(tocli);   /* indicate end */
        close(tosvr);   /* clean up */
    } else if (n == -1) {
        /* error, possibly incomplete request */
        ??
    }

But for 1.1, if keepalive is enabled (and all browsers I have used do enable it, although technically it is optional) you *won't* get EOF following (and delimiting) the request, so in general you must either:

1. parse the request headers (at least if there is a body, which there is for POST) and do:

    while (more_req() && (n = SSL_read(cli)) > 0)
        SSL_write(svr);
    if error or incomplete ??
    /* similarly for the response if it has a body, which it does for most
       requests, with the additional complication of possible chunked transport */
    loop for next request+response, until EOF on either side

or 2. do 'full-duplex', which works for any HTTP sequence:

    while forever, or until manually interrupted:
        when data available from cli, read and write to svr
        when data available from svr, read and write to cli
        in between, do something that doesn't hog CPU

but if an error happens you don't know what the HTTP state is and can't even try to recover.

Using select-readable *on both sides* gives you a good approximation to this, but in general SSL (and thus openssl) may need to both send and receive even on a connection that is logically write-only or read-only, so instead of just select'ing for readable (or writable), the robust way is to use nonblocking sockets, try SSL_read or SSL_write, let it return -1, and SSL_get_error will tell you _WANT_READ or _WANT_WRITE; (remember and) select for that. This is described in both the SSL_read/write and SSL_get_error manpages. Or, less efficient but simpler, just (re)try _read (and _write when needed) every X milliseconds, and it will progress when it can.

I am a little confused on why I was getting the error in the first place still though.
What would cause SSL_ERROR_SYSCALL to be flagged, and have an empty error queue, if the socket was not closed improperly on the other side?

First, EOF isn't really an error. Second, when SSL_read etc. (which calls BIO_sock) gets a socket error, it returns -1 and SSL_get_error returns _SYSCALL, but the error is not (usually?) put in the ERR_ queue. You must instead use errno on Unix or [WSA]GetLastError() on Windows. The manpage for SSL_get_error says this may be the case, and in my experience it always is. (Note that internally, at the OS level, [WSA]EWOULDBLOCK etc. for nonblocking sockets are treated as errors, but openssl handles them internally so your code only sees 'real' errors.)
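Putting the pieces of that advice together, the nonblocking read pattern might be sketched like this. The names here (try_ssl_read, the io_state enum) are mine for illustration, not OpenSSL API; only the SSL_read/SSL_get_error calls and their result codes are OpenSSL's:

    #include <errno.h>
    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    typedef enum { IO_DATA, IO_EOF, IO_WANT_READ, IO_WANT_WRITE, IO_ERROR } io_state;

    /* Attempt one SSL_read on a nonblocking connection and classify the
     * result, so the caller knows whether to use the data, select for
     * readable/writable and retry, or treat it as an error. */
    io_state try_ssl_read(SSL *ssl, void *buf, int len, int *nread)
    {
        int n = SSL_read(ssl, buf, len);
        *nread = n;
        if (n > 0)
            return IO_DATA;                 /* got application data */
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_ZERO_RETURN:
            return IO_EOF;                  /* clean shutdown alert from peer */
        case SSL_ERROR_WANT_READ:
            return IO_WANT_READ;            /* select for readable, then retry */
        case SSL_ERROR_WANT_WRITE:
            return IO_WANT_WRITE;           /* handshake may need a write first */
        case SSL_ERROR_SYSCALL:
            /* n == 0 here means EOF without a shutdown alert; otherwise
             * consult errno (or WSAGetLastError() on Windows) -- the ERR_
             * queue is typically empty, as discussed above. */
            return IO_ERROR;
        default:
            return IO_ERROR;                /* protocol error: see ERR_ queue */
        }
    }

The caller remembers the WANT_READ/WANT_WRITE result and selects for exactly that condition before retrying, which is the behavior the SSL_get_error manpage describes.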
RE: reading and writing into pem file
From: owner-openssl-us...@openssl.org On Behalf Of krishnamurthy santhanam
Sent: Tuesday, 31 August, 2010 13:33

    #include <stdio.h>

    writekey(RSA *key2)

You're obviously using a C89 (or earlier) compiler or mode. Snipped most non-I/O steps:

    {
        EVP_PKEY *pkey;
        FILE *fp;
        //BIO *file;
        OpenSSL_add_all_ciphers();
        OpenSSL_add_all_algorithms();

Aside: add_all_algorithms *includes* add_all_ciphers.

        if (!(fp = fopen("private1.pem", "w"))) {
        if (!PEM_write_PrivateKey(fp, pkey, NULL, NULL, 0, NULL, NULL)) {
        close(fp);
        if (!(fp = fopen("public1.pem", "w"))) {
        if (!PEM_write_PUBKEY(fp, pkey)) {
        close(fp);

close() is not the correct routine to close a stdio FILE*. It doesn't even take the correct type of argument, but your compiler wasn't required to warn you because you didn't include its header (e.g. unistd.h), and in C89 and earlier undeclared functions default to int(/*unspecified*/). Since you didn't close the files, no data actually got written to them, so there was nothing there for the PEM_read's to read. Use fclose. And see if you can use a C99 compiler, or at least a C89 compiler with better warnings (like gcc -Wimplicit).
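Applied to the write path from the original post, the fix might look like this. A sketch: the only substantive changes are fclose() in place of close() (so the buffered PEM data actually reaches disk) and a return value instead of exit(); file names follow the original post:

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/rsa.h>
    #include <openssl/evp.h>

    /* Returns 1 on success, 0 on failure. Note EVP_PKEY_assign_RSA
     * transfers ownership of key2 to pkey. */
    int writekey(RSA *key2)
    {
        EVP_PKEY *pkey;
        FILE *fp;

        OpenSSL_add_all_algorithms();   /* includes add_all_ciphers */

        pkey = EVP_PKEY_new();
        EVP_PKEY_assign_RSA(pkey, key2);

        if (!(fp = fopen("private1.pem", "w"))) {
            fprintf(stderr, "Error opening PEM file %s\n", "private1.pem");
            return 0;
        }
        if (!PEM_write_PrivateKey(fp, pkey, NULL, NULL, 0, NULL, NULL)) {
            fprintf(stderr, "Error writing PEM file %s\n", "private1.pem");
            fclose(fp);
            return 0;
        }
        fclose(fp);                     /* was close(fp): the bug */

        if (!(fp = fopen("public1.pem", "w"))) {
            fprintf(stderr, "Error opening PEM file %s\n", "public1.pem");
            return 0;
        }
        if (!PEM_write_PUBKEY(fp, pkey)) {
            fprintf(stderr, "Error writing PEM file %s\n", "public1.pem");
            fclose(fp);
            return 0;
        }
        fclose(fp);
        return 1;
    }

With the files properly closed and flushed, the subsequent PEM_read_bio_* calls find the "-----BEGIN ...-----" lines they were complaining about, and the NULL-pointer segfault in the final printf goes away with them.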
How to check client certificate for expiration
In my VPN client I'd like to warn the user when their certificate is almost out of date. Is there a way to get the client certificate from the SSL_CTX after the client cert has been loaded?

As discussed elsewhere, it's quite painful for an application simply to undertake the task of loading a client certificate provided by the user. If I want to check the notAfter date of the certificate, however, it seems to get even more painful. I can't find a way to get the certificate back from the CTX, so...

... for PKCS#12 certs, we keep a pointer to the X509 structure we add as we parse it.

... for PEM certs and TPM 'blobs' we actually have to re-parse the file, because SSL_CTX_use_certificate_chain_file() doesn't let us see the X509 (and the alternative is open-coding a reimplementation of that function).

On the whole, it just makes the whole thing even more horrid. And I was quite pissed off with it already. Am I missing something?

http://git.infradead.org/users/dwmw2/openconnect.git/commitdiff/1b9a2db4

--
dwmw2
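Once you do have the X509* in hand (e.g. the pointer kept while parsing the PKCS#12 bundle, as above), the actual expiry check is short. A sketch -- cert_expires_within is my name, not an OpenSSL API; only X509_get_notAfter and X509_cmp_time are library calls:

    #include <time.h>
    #include <openssl/x509.h>

    /* Return 1 if the certificate's notAfter falls within the next
     * 'days' days (i.e. it has expired or is about to), 0 otherwise.
     * X509_cmp_time returns -1 if the ASN1 time is before the given
     * time_t, 1 if after, and 0 on error (treated here as "not expiring"). */
    int cert_expires_within(X509 *cert, int days)
    {
        time_t cutoff = time(NULL) + (time_t)days * 24 * 60 * 60;
        return X509_cmp_time(X509_get_notAfter(cert), &cutoff) < 0;
    }

The VPN client could call this after loading the cert and pop the warning when it returns 1.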
RE: Connection Resetting
I'm writing an SSL proxy (which is working great except for this issue) and every time I go to attach a file in an email the connection resets, and it gets caught in an infinite retransmit loop.

There are two totally different ways you can make an SSL proxy, and to figure out your issue, we really need to know which type.

1) An SSL proxy can understand the underlying protocol, know which side is supposed to transmit when, and only try to read from that side. In this case, it's vital that the proxy correctly track the protocol and not be reading from one side when it's the other side's turn to send.

2) An SSL proxy can ignore the underlying protocol and not know which side is supposed to transmit when. In this case, the proxy must always be ready to read from either side. It must never block indefinitely trying to read from one side.

You can also have a hybrid. For example, you can read only from the client side until you get the full request, and then once you process the request, you switch to bidirectional proxying.

It is very common for people to naively assume that their code will magically know which side to read from. I assure you, this is not the case. Unless you carefully track the protocol, all you know is that the client has to send some data first. But once it does, all bets are off -- again, unless you carefully track the protocol.

Also, you don't mention whether your I/O is blocking or non-blocking, and if non-blocking, how your socket discovery works. This can be subtle with OpenSSL, and your mistake might lie there. For example, if you use blocking I/O, you can't just block one thread in SSL_read in each direction, because if you do, there's nothing you can do when SSL_read returns (since the connection you need to send on is in use, potentially indefinitely, by the other thread).

DS
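For the type-2 (protocol-agnostic) case, the core "be ready to read from either side" step might be sketched at the socket level like this. A sketch with hypothetical names; and as noted elsewhere in this thread, plain readability is only an approximation once SSL renegotiation is involved, so a robust proxy would combine this with the WANT_READ/WANT_WRITE results from SSL_get_error:

    #include <sys/select.h>
    #include <unistd.h>

    /* Wait up to timeout_ms for data on either side of the proxy.
     * Returns a bitmask: 1 if 'cli' is readable, 2 if 'svr' is readable
     * (3 if both), 0 on timeout or error. Never blocks indefinitely on
     * one side, which is the requirement for a type-2 proxy. */
    int wait_readable(int cli, int svr, int timeout_ms)
    {
        fd_set rfds;
        struct timeval tv;
        int maxfd = cli > svr ? cli : svr;
        int mask = 0;

        FD_ZERO(&rfds);
        FD_SET(cli, &rfds);
        FD_SET(svr, &rfds);
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        if (select(maxfd + 1, &rfds, NULL, NULL, &tv) <= 0)
            return 0;
        if (FD_ISSET(cli, &rfds)) mask |= 1;
        if (FD_ISSET(svr, &rfds)) mask |= 2;
        return mask;
    }

The proxy's main loop then reads from whichever side(s) the mask reports and writes to the other, instead of committing a blocked thread to each direction.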