Getting error information
I'm using openssl-0.9.7d in a multithreaded application and want to do some special processing when a client mistakenly connects without SSL to a port which is expecting SSL (i.e. someone types "http://foo.com:1" into their browser instead of "https://foo.com:1"). Looking at s23_srvr.c there is even an error code for this, SSL_R_HTTP_REQUEST.

The complication is that my application uses a "worker thread" model for handling connections; there is not one thread per SSL connection. Therefore the error stack, which is stored per thread, doesn't really help me: a given thread may do work for multiple connections, so I don't know which connection the error information applies to.

Does anyone know of another way to get error information which is per connection and not tied to the thread? I haven't found anything in the SSL or SSL_CTX objects which seems right. I've also tried hooking into the info_callback and msg_callback callbacks, but neither provides any information in this case of an error right from the get-go.

Thanks very much,

-- Jonathan

__
OpenSSL Project                          http://www.openssl.org
User Support Mailing List                [EMAIL PROTECTED]
Automated List Manager                    [EMAIL PROTECTED]
Re: Blinding Breaks Engines?
Here is an email I sent to the list back in March regarding what I think is the same issue (it was entered into the bug database, though I don't know the bug number). Basically, I saw the same issue with the RSA blinding patch when using a Broadcom card (engine ubsec). If I backed out the patch then the problem went away. However, I was using 0.9.7a, which did not contain the patch, so backing it out was easy: just don't apply it. I thought the issue was fixed with the version of the RSA blinding patch that works in multithreaded environments, which I believe is what's in 0.9.7b. However, I have not actually tried 0.9.7b.

-- Jonathan

--- Jonathan Hersch <[EMAIL PROTECTED]> wrote:
> Date: Wed, 26 Mar 2003 19:44:30 -0800 (PST)
> From: Jonathan Hersch <[EMAIL PROTECTED]>
> Subject: Crash with openssl and ubsec and RSA blinding patch (CAN-2003-0147)
> To: [EMAIL PROTECTED]
>
> Hi,
>
> I'm using openssl 0.9.7a with a Broadcom accelerator card (engine type
> ubsec). If I apply the patches to rsa_eay.c and rsa_lib.c which fix
> CAN-2003-0147, and then try to create an RSA key and CSR at the command
> line while using the Broadcom card, then openssl crashes. The command is:
>
>     openssl req -engine ubsec -newkey rsa:1024 -sha1 -keyout foo.pem -out foo.csr
>
> (I use "foobar" for the password, CN, etc.; it doesn't matter for the test.)
>
> Doing:
>
>     openssl req -newkey rsa:1024 -sha1 -keyout foo.pem -out foo.csr
>
> does not crash. Similarly, building openssl without the patches avoids the
> crash, even when using -engine ubsec.
>
> After some poking around there is a suspicious-looking line of code in
> hw_ubsec.c:ubsec_mod_exp() (which eventually gets called by the blinding
> code); here's part of that function:
>
>     /* Check if hardware can't handle this argument. */
>     y_len = BN_num_bits(m);
>     if (y_len > max_key_len) {
>         UBSECerr(UBSEC_F_UBSEC_MOD_EXP, UBSEC_R_SIZE_TOO_LARGE_OR_TOO_SMALL);
>         return BN_mod_exp(r, a, p, m, ctx);
>     }
>
>     if (!bn_wexpand(r, m->top)) {
>         UBSECerr(UBSEC_F_UBSEC_MOD_EXP, UBSEC_R_BN_EXPAND_FAIL);
>         return 0;
>     }
>     memset(r->d, 0, BN_num_bytes(m)); /* IS THIS RIGHT ??? */
>
>     if ((fd = p_UBSEC_ubsec_open(UBSEC_KEY_DEVICE_NAME)) <= 0) {
>         fd = 0;
>         UBSECerr(UBSEC_F_UBSEC_INIT, UBSEC_R_UNIT_FAILURE);
>         return BN_mod_exp(r, a, p, m, ctx);
>     }
>
>     if (p_UBSEC_rsa_mod_exp_ioctl(fd, (unsigned char *)a->d, BN_num_bits(a),
>             (unsigned char *)m->d, BN_num_bits(m), (unsigned char *)p->d,
>             BN_num_bits(p), (unsigned char *)r->d, &y_len) != 0) {
>         UBSECerr(UBSEC_F_UBSEC_MOD_EXP, UBSEC_R_REQUEST_FAILED);
>         p_UBSEC_ubsec_close(fd);
>         return BN_mod_exp(r, a, p, m, ctx);
>     }
>
> Coming into this function from the blinding code, the arguments "r" and "a"
> are the same BIGNUM. If "r" is zeroed, then when the BN_num_bits(a) call is
> made a few lines later there is a problem, since "a" is now zero.
>
> I don't know the BIGNUM stuff, but this seems suspicious, and removing this
> line of code fixes the problem. Maybe someone who knows this code better
> can say whether it seems ok?
>
> Thanks,
>
> -- Jonathan
Problems with DSA and engine ubsec
Hi,

I'm signing and verifying documents using DSA and have run into a couple of problems. I'm working with OpenSSL 0.9.7 on Linux with a Broadcom crypto card based on the 5821 (so the OpenSSL engine type is "ubsec"). I have version 1.81 of the Broadcom driver.

(1) While testing I found that verification of certain signed documents crashed OpenSSL. The problem appears to be that hw_ubsec.c:ubsec_dsa_verify() calls p_UBSEC_dsa_verify_ioctl(), and if that call fails then the code falls back to software crypto, indirectly calling dsa_ossl.c:dsa_do_verify(). However, dsa_do_verify() tries to do:

    if (!ENGINE_get_DSA(dsa->engine)->dsa_mod_exp(dsa, &t1, dsa->g, &u1,
            dsa->pub_key, &u2, dsa->p, ctx, mont))
        goto err;

and this dies because dsa_mod_exp is NULL. The current workaround is to set up pointers in ubsec_dsa for dsa_mod_exp and dsa_bn_mod_exp (just in case):

    #ifndef OPENSSL_NO_DSA
    static int dsa_mod_exp(DSA *dsa, BIGNUM *rr, BIGNUM *a1, BIGNUM *p1,
            BIGNUM *a2, BIGNUM *p2, BIGNUM *m, BN_CTX *ctx,
            BN_MONT_CTX *in_mont)
    {
        return BN_mod_exp2_mont(rr, a1, p1, a2, p2, m, ctx, in_mont);
    }

    static int dsa_bn_mod_exp(DSA *dsa, BIGNUM *r, BIGNUM *a, const BIGNUM *p,
            const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *m_ctx)
    {
        return BN_mod_exp_mont(r, a, p, m, ctx, m_ctx);
    }

    /* Our internal DSA_METHOD that we provide pointers to */
    static DSA_METHOD ubsec_dsa = {
        "UBSEC DSA method",
        ubsec_dsa_do_sign,  /* dsa_do_sign */
        NULL,               /* dsa_sign_setup */
        ubsec_dsa_verify,   /* dsa_do_verify */
        dsa_mod_exp,        /* dsa_mod_exp */
        dsa_bn_mod_exp,     /* bn_mod_exp */
        NULL,               /* init */
        NULL,               /* finish */
        0,                  /* flags */
        NULL                /* app_data */
    };
    #endif

I'm not sure whether this is entirely kosher, and I don't know why these pointers were NULL to begin with.

(2) The next question is why the original call to p_UBSEC_dsa_verify_ioctl() failed, since the Broadcom card is present and functioning.
The answer is that one of the arguments passed in this ioctl call is the length in bits of the hash (which is what was signed and what we're trying to verify). If the hash happens to start with more than 7 zero bits, then when the bit size is converted back to bytes the byte count will be less than 20, and deep in the Broadcom driver this length gets rejected because the driver insists the hash is 20 bytes long due to alignment issues. (If you have access to the Broadcom driver, see ubsec_key.c:dsa_verify_ioctl(), where the length is converted from bits to bytes, and param.c:ubsec_keysetup_DSA(), where the length is checked against 20.) I think the solution is to always pass the hash length as 160 bits and pad the hash with zeroes to make it the full 20 bytes, though I haven't tested this. Maybe there is an issue here in the way the API is defined?

(3) Further testing showed that DSA signatures created using OpenSSL with the Broadcom card appear to be broken in general somehow. I've attached a tar file containing a DSA private/public key pair, a small test file to sign, and a script which runs a bunch of DSA sign and verify tests. These tests show that a DSA signature generated using the Broadcom card fails to verify if the Broadcom card is not used during verification. The inverse, using the card to verify a signature created without the card, works fine. I don't know whether the issue lies in the card, the driver, OpenSSL, or some combination. (IBM's xss4j tool also cannot verify DSA signatures generated using OpenSSL with the Broadcom card, though it can verify signatures generated without using the card.)

(4) To further complicate the last problem, the tar file contains a file "testfile-crash" which causes OpenSSL to segfault when the Broadcom card is used for signing. It looks like another case of a NULL pointer in the ubsec_dsa function table, in this case for dsa_sign_setup.
I've tried making dsa_sign_setup global and sticking it in the table, and this seems to prevent the crash, though I'm getting some error messages about memory leaks (it's a debug build). I don't understand the layers of function-pointer tables used by the engine code, so I'm not sure what the real fix is, or why the crash happens sometimes and not others.

Does anyone have experience with the Broadcom card and DSA who can confirm that I'm on the right track with my "fixes", and maybe shed some light on the invalid DSA signatures? I'm trying to get in touch with Broadcom as well to find out if they have a later driver. So far their response has been that they've tested with OpenSSL and DSA and they work fine.
Re: engines and keys
--- Geoff Thorpe <[EMAIL PROTECTED]> wrote:
> The ENGINE is a sort of container for implementations of the various
> ***_METHOD implementations, and the "method" tables have always worked
> this way too. I.e. upon creation, a structure is linked to a function
> table that handles processing. In the case of ENGINEs this is also pretty
> much necessary, because the ENGINE may maintain state associated with a
> given key structure, so you must map the structure to the ENGINE. This
> happens even in the acceleration-only case (cached values) but is
> especially important when the supported keys are contained in hardware.

That makes sense. I hadn't considered state being maintained by the ENGINE; I was thinking of it as more of an atomic operation. Thanks!

-- Jonathan
Re: client-side session reuse
--- Xperex Tim <[EMAIL PROTECTED]> wrote:
> I don't understand how your solution completely fixes things, though. What
> if the server is restarted with caching disabled while the client still
> has sessions cached? When the sessions were cached by the client, the
> session ID was not zero length, so you validly cached them. Yet you run
> into the same problem when the server restarts.
>
> I realize this is an unlikely scenario, but it led me to think that the
> problem should be fixed elsewhere, namely in the OpenSSL client code.
>
> Am I following things correctly?
>
> Tim

I think what would happen in this case is that the client tries to reuse its saved session, but the server returns an empty session_id (session_id_length == 0). The client then throws away the session it was trying to use and creates a new one with session_id_length = 0. When the connection finishes, an attempt is made to cache the session, but since session_id_length is 0 the session is rejected and not cached. So I think it works out okay.

-- Jonathan
client-side session reuse
Hi,

I'm trying to add session caching to a multithreaded SSL client. I've run into a crash when my client, with caching enabled, talks to an SSL server which has caching DISabled.

What I see in the debugger is that if more than one client connection is coming up, and both are using the same session from the cache, then when the server's certificate in the session (session->sess_cert) is updated by the second connection, the first connection is left holding an invalid pointer to the public key in the cert. The crash occurs when that public key is used to encrypt a secret to send to the server.

What's confusing me (and I feel like I'm missing something basic here) is that the code doesn't look like it should ever work with reused sessions, because the only field in the session which is modified under lock is the reference count. SSL_set_session() doesn't copy the session; it reuses the pointer and increments the ref count. Other code changes fields in the session at will. When my client is caching and talking to a server which is also caching, everything works great.

What am I missing here?

Thanks,

-- Jonathan
SSL_ERROR_SYSCALL and SSL_ERROR_SSL
Hi,

Can anyone clarify the semantics of how to handle these errors? Sometimes they seem to mean retry the read/write, sometimes EOF, and sometimes an error indicating it's time to shut down. I've seen errno values come back of EAGAIN, ECONNRESET, and EPIPE. I keep adding more special cases to my code to handle the possible combinations, but new ones keep coming up. For example, what does it mean to get SSL_ERROR_SSL with an errno of 0? Is this the same as shutdown, or retry?

I've looked at s_client and s_server and they seem to generally handle these as a shutdown indication (except when requesting a read of 0 bytes, in which case it's retry). Is there some standard way to handle these errors? Is there perhaps something bad I'm doing that keeps causing them?

Thanks very much,

-- J