RE: AIX 5.3 - FIPS_mode_set fails due to RSA self-test failure
> The end result is that I had to change the makefile to -q32 to get it to work with the openssl-0.9.8j distribution, which smartly uses 32_64 mode and will FAIL if I do not change the fips-1.2 makefile.

This violates the security policy and invalidates the FIPS certification. You cannot change the makefile used to build the FIPS canister. You have two possible solutions:

1) Build on a pure native 32-bit platform. This will ensure that the approved build process detects only 32-bit capability.

2) Build in a 32-bit sub-platform. This would mean creating your own substitute for whatever the build process uses to determine that it is on a 64-bit platform, so that it instead reports that you are on a 32-bit platform. You could, for example, have your own wrappers for things like 'uname', 'gcc' and so on.

Some may argue that option 2 violates the spirit of the security policy. I don't feel competent to comment on that.

I recommend always building the FIPS canister on a least-common-denominator machine and environment specifically engineered for this purpose. Start with a clean OS install, decide which patches/updates you want, and install them. Install only the tools you specifically want, in the exact versions you want. Build the FIPS canister and manually, on paper, record its SHA1 checksum. Then put it where you want it.

DS
__
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager majord...@openssl.org
RE: FIPS Server
> I have a general query regarding FIPS mode. I am running a simple HTTPS server based on OpenSSL that services HTTPS requests from Windows clients.

Is it in FIPS mode, yes or no? If not, then you cannot claim it is FIPS compliant.

> I have the following setting on my Windows XP machine: "Use FIPS compliant algorithms for encryption, hashing and signing" set to 1.

This does exactly what it says. It forces XP system components to use only FIPS-compliant *algorithms*. Note that using FIPS-compliant algorithms is one requirement for FIPS compliance, but it is far from the only one.

> Using IE on a Windows XP client with the above setting, I am able to communicate with an OpenSSL command-line HTTPS server. I don't have FIPS enabled on my OpenSSL command-line tool. Then how come I am able to handle requests from a Windows machine which has the FIPS setting set to 1?

The premise on which this question is based is simply completely incorrect. FIPS is not a remote interrogation protocol. It's a way of ensuring that cryptographic algorithms, as used in an endpoint, are secure and reliable. Everything is working because the Windows machines are certified secure and reliable and the server hasn't failed. That's all that's required for things to work. My bank can have the best security in the world, and I can write my ATM PIN on my card and leave it at the local McDonald's. The bank can be super secure and can still interoperate with morons.

> Now is it OK to say I am FIPS compliant on the server side because I am handling FIPS requests from clients?

No. A FIPS-compliant endpoint will not use non-FIPS-allowed algorithms. But there is much more to FIPS compliance than simply using only permitted algorithms.

DS
RE: FIPS
> I have some doubt regarding FIPS-capable OpenSSL. If, on my system, one of my applications gets into FIPS mode, is that going to force other applications to use FIPS-enabled cryptographic algorithms?

No.

> I have seen in some FIPS-enabled libraries that if one application gets into FIPS mode, the whole library will be in FIPS mode and all the applications on the system will be in FIPS mode.

I don't believe this. I don't see how any system could do this and still meet the various FIPS requirements for integrity checking and isolation.

> Is this true for OpenSSL? Is FIPS enabled at the system level or the application level?

Your notion of "system level" seems incoherent to me. It would be an absolute disaster if one user could put another user's applications into FIPS mode and stop them from interoperating with, say, systems that used MD5 signatures (when the user who ran that program intended that to work). Nobody would design such an obviously broken system.

DS
RE: FIPS
> Is there any way I can make my implementation of OpenSSL FIPS capable and FIPS compliant?

If you change even one line of code or one parameter in the building of the canister, you have to go through the FIPS process yourself. Contact any of the 13 accredited testing labs. http://ts.nist.gov/Standards/scopes/crypt.htm

DS
RE: License for Certificate?
> Hello, I am currently developing an interface to a 3rd-party product that requires HTTPS support using an X.509 certificate. I have been given instructions on how to generate the certificate using OpenSSL. While in development mode (this is a commercial product), do I need to include some license file or text?

Include in what?

> So, I would like to know if I have to include a license file or text for using the OpenSSL certificate in these two cases basically (development/testing and production). Gisella Saavedra

Again, include in what? I'm having a hard time understanding your question. All you tell us about what you're doing is that it "requires HTTPS support using an X.509 certificate". If it requires a certificate, then you need one to use it. That's what "requires" means.

My guess is that your question is about what certificate you should supply to the 3rd-party product and where it should come from. There is no way to answer that question without knowing for what purpose the 3rd-party product requires the certificate and what you're trying to do. Is it for client validation? Is it for server validation? What *exactly* does it need to validate? (For example, when I connect to amazon.com with a secure browser, what I need to validate and what amazon.com needs to validate are completely different.)

If it uses the certificate, for example, to securely identify the client, then you will need to set up a scheme in which the client has a certificate suitable for use for such secure identification. Depending on exactly what your question really is, it may get into deep issues about your security framework and threat models. Or it may be as simple as "generate a self-signed certificate each time" or "go to a CA and get a certificate". It depends on what the certificate is doing in the security framework.

DS
RE: License for Certificate?
> Thanks for the response. I just need the certificate to securely identify that a request is coming from who I think it is coming from.

Then you need some way to distribute a certificate to that endpoint and for the other end to know what certificate that endpoint has.

> My goal is that I can use either http or https interchangeably while testing. I just want to set up my application server, Tomcat, so that requests can be received using https. I know that I have to upload the public certificate to the other party (to whom I am talking). I do not expect to modify the application code because of https. Am I right?

If you don't modify the application code, then what will make sure that the request is coming from who you think it is coming from? Some code will need to perform that check.

> Regarding just using the certificate in the fashion mentioned above, will I need to include some license in some file or product brochure?

There's no way to answer that question without knowing how you plan your authentication to work.

> The only case where I see mentioning the certificate authority would be in a System Diagnostics option, where we display the environment variables, so maybe we would want to display some info about who issued the certificate, when using one.

When you say "securely identify that a request is coming from who I think it is coming from", what *EXACTLY* do you mean? For example, you could mean:

1) I need to identify the actual human being who sent the request so I can hold them responsible for it. Or:

2) I need to identify that the request is coming from the same entity that some other request came from, and I'll authenticate that request by user/password. Or:

3) I need to know that the request is coming from someone authorized to send such requests, and the person who authorizes such requests will do so by issuing a certificate.

It all depends on exactly what you're trying to do, what your threat model is, and so on.
You probably won't get useful advice on a mailing list unless you go into much more detail.

DS
RE: FIPS Server
> One final question. Given that non-FIPS-mode OpenSSL can talk with FIPS-validated implementations: let's say I have a server which is using OpenSSL in non-FIPS mode and which speaks and supports all the ciphers (including the FIPS ciphers). Now, for a FIPS-validated client, is there any way for the client to tell that it is speaking with a non-FIPS server?

That depends on the implementation. There are many ways, but they're outside the scope of FIPS itself. For example, suppose you're part of a military organization. Your certificates can include a field that says that such certificates are only issued to FIPS-certified endpoints. You can refuse to talk to any server that doesn't present a certificate with that extension.

Normally, though, you don't care. My browser's job is to make sure that when I send my credit card to Amazon.com, only Amazon.com gets it. But it can't control what Amazon.com does with the information once they have it. That's out of scope. So you are talking about the security of the other endpoint, which is logically not the responsibility of an endpoint.

> If not, the server could claim to be FIPS compliant and trick the client, while in reality it is not FIPS compliant but is just speaking the FIPS ciphers that the client proposes. Is the above possible, then?

If the client can be tricked by the server, it's broken. If this was a problem in your implementation, then you should have implemented a mechanism to ensure it can't happen. This is why you need threat models and security evaluations. Again, one sane way to do this is to use a CA that you trust to certify that endpoints are trustworthy for whatever trust you need to extend to them. An endpoint could be FIPS-compliant and could still publish all its secrets in the New York Times.

DS
RE: ECDSA/Using private and Public keys
> Why does the call to d2i_ECPrivateKey(NULL, pptr, len); always fail?

Because you didn't pass it a key. Change that 'NULL' to 'eckey'.

DS
RE: FIPS Server
> > FIPS-validated cryptography is mandated on endpoints which handle sensitive information by the US Federal Government (though current practice includes procurement, not necessarily implementation).
>
> Thanks, David and Kyle, for your time. Kyle, I did not understand the statement "though current practice includes procurement, not necessarily implementation". Can you elaborate?

He means that FIPS-validated cryptography is mandated, by current practice, when cryptographic solutions are procured and not necessarily when they are implemented.

DS
RE: FIPS Server
> Hello all, I have a general query regarding FIPS mode. I am running a simple HTTPS server based on OpenSSL that services HTTPS requests from Windows clients. I have the following setting on my Windows XP machine: "Use FIPS compliant algorithms for encryption, hashing and signing" set to 1. Using IE on a Windows XP client with the above setting, I am able to communicate with an OpenSSL command-line HTTPS server. I don't have FIPS enabled on my OpenSSL command-line tool. Then how come I am able to handle requests from a Windows machine which has the FIPS setting set to 1?

It's very hard to understand the logic behind your question. Why wouldn't it work? FIPS is *not* an 'over the wire' security thing. It's a 'secure endpoint' thing. FIPS is not a protocol; it's about methodology, testing, and validation. If the client is FIPS validated, we would hope that the server would be unable to trick it or exploit defects in its encryption algorithms. But if the server doesn't try, there's no reason things shouldn't work.

> Now is it OK to say I am FIPS compliant on the server side because I am handling FIPS requests from clients?

Of course not. That's like saying a bank has good security because sometimes honest people can walk in and not rob it. Good security is about what happens when the bad guys try to rob the bank. It's not about what happens when good guys try to make a deposit. FIPS compliance is about how you design, test, and have your implementation validated by an objective third party. It's not just about what the code does, but about the assurance that it will not do what it's not supposed to do.

> Thanks in advance for your time.

DS
RE: Extra character from X509_get_subject_name
> Hi, do you know why an extra character '/' is attached in front of the subject name?
>
>     X509_NAME_oneline(X509_get_subject_name(cert), data, 256);
>     fprintf(stderr, "Subject = %s\n", data);
>
> The output is like /CN=XXX.hp.com. Carol

X509_NAME_oneline is known to be buggy and quirky and is documented to have a broken output format. You should use some other function, like X509_NAME_print_ex(... XN_FLAG_ONELINE).

DS
RE: how to trace aes quickly?
> > Victor Duchovni wrote: Because in almost all cases that's exactly the right advice. The cryptography learning that is sufficient and desirable comes from books such as Applied Cryptography, which cover protocols and algorithms at a high level. Studying the implementation, or creating one's own implementation, is for experts who don't need to ask questions, or who ask sufficiently interesting questions that it is clear they are experts.
>
> As soon as someone tells me that I shouldn't learn about something and that it is in my best interests to remain ignorant, I no longer trust that thing, or the people giving the advice. This is especially true of crypto. Regards, Graham

He didn't say you shouldn't learn about something or that it's in your best interests to remain ignorant; he pointed out that you are starting in completely the wrong place. If you honestly think investigating the implementation of OpenSSL will yield you useful information on whether or not you should trust it, you are seriously deluded. The implementation of OpenSSL is regularly scrutinized by real honest-to-goodness cryptography experts, and if you look at the last ten significant security issues found in OpenSSL, there's maybe one that could conceivably have been located by someone who is not a serious crypto expert. On the flip side, it's easy for a non-expert to screw it up by thinking there's something he can/should mess with in there. For example: http://blogs.computerworld.com/fixing_debian_openssl

You are barking up the wrong tree and ignoring good advice.

DS
RE: len of encrypted data
> Hi... a simple question, I hope somebody knows the solution: I need to use EVP_DecryptUpdate... but for the fifth argument, I need the length of the encrypted data. How do I get this? I'm sure that strlen does not work...

You cannot have a chunk of data without knowing how big it is. What it means to "have" a chunk of data is to know where it's stored and how many bytes it is. If you don't know how large it is, you don't have it. How do you know where the encrypted data is in memory? Whatever told you that should have also told you how many bytes it was. If it didn't, then it's broken. Fix it.

DS
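A small illustration of why strlen cannot supply that fifth argument (the struct is a hypothetical convenience, not an OpenSSL type): ciphertext is binary and can contain zero bytes anywhere, so the length must be recorded when the data is produced and carried alongside the pointer:

```c
#include <stddef.h>
#include <string.h>

/* Sketch: keep a binary buffer and its real length together. strlen()
   stops at the first zero byte, which ciphertext may contain anywhere,
   so it reports a bogus length. */
struct chunk {
    unsigned char *data;
    size_t len;             /* recorded by whatever produced the data */
};

struct chunk make_chunk(unsigned char *data, size_t len)
{
    struct chunk c = { data, len };
    return c;
}
```

It is c.len, not strlen(c.data), that belongs in the fifth argument of EVP_DecryptUpdate.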
RE: EVP_DecryptFinal_ex:bad decrypt
> When I use this to encrypt data, I have no problems... when I decrypt the result of this code, I have no problem... but when I decrypt with this program, I get 13015:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:461: EVP_DecryptUpdate works OK and decrypts the info, but the remaining encrypted data is not decrypted, and tlen = 0.

That code is really, really awful. Sorry to be blunt, but it's truly horrible.

    for (i = 0; i < strlen(hex); i = i + 2) {
        sprintf(tmp, "%c%c", *(hex + i), *(hex + i + 1));
        sprintf(tmp, "%c", (unsigned int)strtol(tmp, NULL, 16));
        strcat(*ascii, tmp);
    }

I mean, c'mon. Your biggest overall problem is that you suffer from the "everything's a string" delusion. For example:

    hextoascii(llave1, key1);
    hextoascii(llave2, key2);
    strcpy(key, key1);
    strcat(key, key2);
    strcat(key, key1);

You need to come up with some rational way to pass around chunks of data that are not C-style strings. And you need to not use functions like 'strcpy' and 'strcat' on such chunks. The 'str*' functions -- 'strcat', 'strcpy', 'strlen', and so on -- are usable *only* on C-style strings. That means a chunk of data that has a terminating zero byte and cannot contain any embedded zero bytes. What stops 'key1' from containing an embedded zero byte? C-style strings should be used for input and output to humans or human-readable files, but they're almost never appropriate for internal structures. You unpack the user-supplied key into an internal structure -- that should *not* be a C-style string.

DS
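For contrast, a sketch of a hex decoder that treats its output as binary data (function names are mine, not from the quoted code): it returns a byte count instead of relying on string functions, so keys containing zero bytes survive intact:

```c
#include <stddef.h>

/* Sketch: decode hex into raw bytes, returning the byte count. The
   output may legitimately contain zero bytes, so the caller must keep
   this length and never call strlen() on the buffer. */
static int hexval(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Returns the number of bytes written, or -1 on malformed input. */
long hex_to_bytes(const char *hex, size_t hexlen, unsigned char *out)
{
    if (hexlen % 2 != 0)
        return -1;                           /* odd-length hex is invalid */
    for (size_t i = 0; i < hexlen; i += 2) {
        int hi = hexval((unsigned char)hex[i]);
        int lo = hexval((unsigned char)hex[i + 1]);
        if (hi < 0 || lo < 0)
            return -1;
        out[i / 2] = (unsigned char)((hi << 4) | lo);
    }
    return (long)(hexlen / 2);
}
```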
RE: EVP_DecryptFinal_ex:bad decrypt
> buff = (char *)malloc(bptr->length);
> memcpy(buff, bptr->data, bptr->length-1);
> buff[bptr->length-1] = 0;

Umm, you don't copy the last byte of data. You don't allocate enough space to hold the data and a terminator. This is probably your main error. How will 'buff' hold a C-style string when it's only as many bytes as the data? (Remember, you need an extra byte for the terminator.) And why aren't you copying the last byte of data? And why are you writing a zero in the wrong place? And for the love of all that is holy, why are you copying the data into a temporary buffer just to copy it immediately into the final buffer? What purpose does the temporary buffer serve? You have to allocate it, copy out of it, and free it -- and for what purpose?!

> When I use this to encrypt data, I have no problems... when I decrypt the result of this code, I have no problem... but when I decrypt with this program, I get 13015:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:461: EVP_DecryptUpdate works OK and decrypts the info, but the remaining encrypted data is not decrypted, and tlen = 0.

You treat the Base64 BIO as if it were a C-style string, but it's not. The Base64 BIO is just a chunk of arbitrary data.

DS
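A corrected version of that copy, as a sketch (the helper name is mine; 'data'/'len' stand in for bptr->data / bptr->length): allocate one byte more than the data, copy every byte, and put the terminator after the data:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: copy all 'len' bytes and append a terminator. Allocating
   len + 1 and writing the zero at index len fixes both bugs in the
   quoted snippet: the lost last byte and the missing room for the
   terminator. */
char *copy_with_terminator(const char *data, size_t len)
{
    char *buff = malloc(len + 1);   /* one extra byte for the '\0' */
    if (buff == NULL)
        return NULL;
    memcpy(buff, data, len);        /* copy every byte, including the last */
    buff[len] = '\0';               /* terminator goes after the data */
    return buff;
}
```

Even with the terminator, if the copied data can itself contain zero bytes the terminator only helps for printing; the real length still has to travel with the buffer.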
RE: Problems with encryption
> Has anyone seen problems encrypting credit card numbers with Blowfish? When encrypting with a 32-char or a 56-char key, there are a number of values that are not encrypting and thus not decrypting all of the characters.

This sounds like a classic example of bugs caused by the "everything is a C-style string" mindset. No, not everything is a string. Some things are binary data. Some things have their length stored separately and aren't terminated by a zero byte. Some things have zeroes inside them, and 'strlen' won't give you the data length. Perhaps you want to base64-encode the encrypted data?

DS
RE: revoking a self-signed certificate
> Olaf Gellert: I would not say so. If I found a CRL which contains the self-signed root certificate, I would stop trusting it immediately.

Why? What do you think that CRL means? Specifically, do you think it means the public key was compromised? Do you think it means the issuer of the original certificate no longer wants you to trust it?

> Why should I not trust a CRL issued by a root CA that I trust?

You should trust a CRL when it revokes certificates that you trust specifically because they're not on that CRL.

> Remember: the trust has to be established beforehand, but when you already trust the CA, you can trust CRLs issued by it. Even if the root CA's key were compromised, I would not care whether the CRL was issued by the attacker or the CA itself.

Right, but you have to know what the CRL means. In some alternate universe where it means "no longer trust the public key that this certificate signs" or "no longer trust the root certificate that's in this CRL", then you might choose to stop trusting the trust anchor. But in this universe, it doesn't mean any of those things.

> I agree that it makes sense to have higher-level protocols that take care of root CA revocation and trust anchor management, but in my opinion, not evaluating a CRL which revokes the root is missing a chance of good CA practise and taking an unnecessary risk...

The problem is that it doesn't mean anything. A certificate being in a CRL does not mean the certificate's public key has been compromised. The mechanism you are describing simply doesn't exist. Maybe it could exist, maybe it should, but it doesn't.

DS
RE: revoking a self-signed certificate
> Can you please elaborate on how the higher-layer security infrastructure would go about this?

Simply put, whatever put the certificate in its trusted position is what is to remove it. If a CA says to trust a certificate, that CA can say not to. But if the certificate is self-signed, the trust came from the user who said to trust it (or from some other mechanism outside the scope of the certificate verification scheme). That same mechanism is the only thing that can say to stop trusting it.

> To me, it just seems impossible to do this, and the issue might only be mitigated by spreading awareness through out-of-band means, but not eliminated until, of course, the self-signed CA certificate expires.

It's not impossible. Just use the same technique that installed the self-signed certificate to uninstall it. If you could get it trusted somehow, why can't you get it untrusted that same way?

DS
Re: How to check Server certificate and signature?
> Ger Hobbelt wrote: Okay, so if I get this right, you're saying you want to verify the server certificate BUT you do NOT want to check its activation date / expiry date (i.e. the time range over which the certificate is valid)? I'll forego the very bad security implications of such a wish (those time ranges are there for a reason, after all); you can do such a thing by providing your own certificate validation callback which foregoes the time checks. [...] Anyway, cave canem: from what I read in your request, you are treading dangerous security ground.

This is not an uncommon thing to do. Verifying the time in certificates is a problem on mobile devices that do not have a trusted source of time. If the mobile and the network authenticate each other using certs, the mobile is at a disadvantage if it gets the time from the network. So instead:

1: Skip the check of the date range in the network's cert.
2: Connect.
3: Use the link to consult a trusted time source online.
4: Re-check the cert now that you know the time.
5: Start using the link, assuming the cert validated correctly the second time.
Re: DTLS server implementation experiences and documentation
On Sat, 2009-01-24 at 00:13 +0100, Georges Le grand wrote:
> I wonder if you could give out a reference on how to establish a VPN using DTLS, or tell how to do so.

We are just using Cisco's AnyConnect VPN, which runs over an HTTPS 'CONNECT' and will use DTLS for subsequent data transfer if it can. The client code is at git://git.infradead.org/users/dwmw2/openconnect.git (viewable in gitweb by changing git:// to http:// in that URL). That code works on Linux and MacOS, and if anyone wants to provide a patch to make it work on other BSD systems, that would be much appreciated.

Since Cisco use an old version of OpenSSL on the server side, you'll need to patch OpenSSL to make it compatible with its own pre-RFC version of DTLS -- see http://rt.openssl.org/Ticket/Display.html?id=1751 for the patch. The VPN will work over HTTPS if you don't patch OpenSSL, but VPN over TCP is a very suboptimal solution.

I haven't done server-side code yet; the point of this was to interoperate with the existing servers, and I have no immediate need to _replace_ them. It really wouldn't be hard though -- it's all fairly trivial stuff. You might also be interested in http://campagnol.sourceforge.net/

-- dwmw2
Re: DTLS server implementation experiences and documentation
On Sat, 2009-01-24 at 23:03 +0100, Georges Le grand wrote:
> So it is like SSL VPN with data encapsulated into HTTP packets, but I don't get how HTTP runs over UDP.

Probably best explained by the code... it just uses HTTP for the initial setup -- a CONNECT request with an HTTP cookie for authentication, and you get your IP address etc. in the headers of the response. Then you're connected with an SSL connection; you can forget HTTP and run IP packets over that connection. In the headers of the initial exchange you _also_ set up parameters for a DTLS connection, over which you can pass packets.

-- dwmw2
RE: force 32-bit fips
> All, I am trying to build OpenSSL-fips-1.2 on a Solaris 10 machine with Sun Studio 8 and force it to build 32-bit objects. Is there a way I can do that without changing the makefile and thus violating the FIPS validation?

I'm not specifically familiar with 64-bit Solaris, but I know that 64-bit Linux has a way to set its 'personality' to 32-bit and cause automatic detection schemes to see it as a 32-bit machine. But if you really need FIPS, you shouldn't screw around. Build it on a 32-bit machine if it's going to be used on a 32-bit machine.

DS
Re: DTLS server implementation experiences and documentation
On Thu, 2009-01-22 at 06:10 +0100, Robin Seggelmann wrote:
> To avoid getting into trouble with already-fixed bugs, you should apply the patches I sent to the dev list. I'll set up a website with a patch collection and some instructions soon.

Is there anyone who actually cares about DTLS and getting patches applied? I've had patches out there to make OpenSSL capable of talking to production servers in the wild, which use the OpenSSL-specific pre-RFC version of DTLS. I've been able to write a complete VPN client along with NetworkManager support, and get it into Linux distributions, in the time it's taken to get the patch into OpenSSL... and I'm still waiting...

It's getting to the point where I wonder if it would be quicker and easier just to reimplement DTLS in GnuTLS and use that.

-- dwmw2
RE: How to detect dead peers with DTLS?
> Please note that I cannot solve this problem via the protocol that I use on top of DTLS -- which is IPFIX -- because IPFIX, by definition, only *sends* but does not receive data. I.e., I cannot infer that the server crashed from the fact that it does not send any data, because it does not send data anyway (except handshake messages like ServerHello, ServerKeyExchange, etc.). I guess IPFIX is a one-way protocol. Thanks, Daniel

You have a problem that cannot be solved in principle. If you do not allow the other side to ever send anything, then there is simply no way you can ever detect its absence. If you wish to detect the loss of the other side, the other side *must* send something. There is no other way. I suggest you either modify your protocol or layer another protocol between it and DTLS.

DS
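As an illustration of the "layer another protocol" option, here is a hypothetical keepalive framing (entirely my own invention, not part of IPFIX or DTLS): each datagram gets a one-byte type prefix, the peer echoes PING as PONG, and the sender declares the peer dead after a limit of unanswered pings:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of a trivial keepalive layer between the application protocol
   and DTLS. Framing and names are hypothetical. */
enum { FRAME_DATA = 0, FRAME_PING = 1, FRAME_PONG = 2 };

struct liveness {
    int outstanding;        /* pings sent but not yet answered */
    int limit;              /* declare the peer dead at this count */
};

/* Prefix a payload with its frame type; returns the framed length. */
size_t frame_encode(unsigned char type, const unsigned char *payload,
                    size_t len, unsigned char *out)
{
    out[0] = type;
    if (len > 0)
        memcpy(out + 1, payload, len);
    return len + 1;
}

void liveness_sent_ping(struct liveness *lv) { lv->outstanding++; }
void liveness_got_pong(struct liveness *lv)  { lv->outstanding = 0; }
int  liveness_peer_dead(const struct liveness *lv)
{
    return lv->outstanding >= lv->limit;
}
```

The receiving side only ever has to echo a one-byte PONG, which keeps the change to the one-way protocol minimal.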
Importing OpenSSL CRL into Windows 2003 error
A native Windows CRL includes the following additional extensions:

- Authority Key Identifier
- CA Version
- Next CRL Publish

I was able to add Authority Key Identifier and CA Version via the new_oids section:

    msCAVersion=1.3.6.1.4.1.311.21.1
    msCRLNextPublish=1.3.6.1.4.1.311.21.4

I also added the following to the crl_ext section:

    authorityKeyIdentifier=keyid:always,issuer:always
    msCAVersion=DER:02:01:00

** Notice I was not able to add the msCRLNextPublish OID because I don't know how. I get this error when trying to import this CRL into Windows 2003:

    A required CRL extension is missing
    CertUtil: -dsPublish command FAILED: 0x80070490 (WIN32: 1168)
    CertUtil: Element not found.

So I assume this means I need the CRL Next Publish OID somehow... or I have something messed up above. Please help.

DAVID BLAINE, GCIA, CISSP
GDLS-C Lead Information Risk Manager (LIRM), CSC
6000 E. 17 Mile Rd., Sterling Heights, MI 48313
GIS | o: 586.825.7650 | c: 810.217.8041 | f: 586.825.8606 | dblai...@csc.com | www.csc.com
SSL_CTX_new:unable to load ssl2 md5 routines
We are experiencing the following error intermittently when we create SSL connections via PHP + cURL:

    error:140A90F1:SSL routines:SSL_CTX_new:unable to load ssl2 md5 routines

We're running Arch Linux, with Apache 2.2.10 and the latest Pacman packages for PHP (5.2.7-2), cURL (7.19.2), and OpenSSL (0.9.8j). These errors occur very regularly on one of our servers, but only about once every 30 minutes on our much faster (and busier) server, which runs Red Hat Enterprise 5. It looks like there's some sort of random problem where the EVP_get_digestbyname call fails. I think Apache and Postfix (for smtps connections) would also be using libssl on these servers, but my understanding is that this shouldn't be a problem with current versions of the applications. Has anyone else experienced this sort of behavior? Or are we doing something wrong?

Thanks, Dave
RE: Memory Paging
Hi, I am writing an application that uses OpenSSL to do some encryption and decryption. I am wondering if there is a way, on the command line or otherwise, to make sure that no memory that OpenSSL is using is ever paged out to disk? I want to make sure that after the program is done there is no way to get the decrypted data. Thanks, Bram Cymet You can replace OpenSSL's memory allocator with your own with the CRYPTO_set_mem[_ex]_functions and CRYPTO_set_locked_mem[_ex]_functions functions. If your platform provides a way to provide this assurance (for example, mlock on POSIX systems or VirtualLock on Windows), you can make sure OpenSSL uses it. DS
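Locking pages is necessarily platform-specific; a related concern raised here is leaving no plaintext behind after use. As a minimal illustration of that second idea only (Python chosen purely for illustration; it cannot prevent paging, which requires mlock-style calls in the allocator), here is a sketch of overwriting a sensitive buffer in place:

```python
import ctypes

# A mutable buffer for sensitive data. Immutable types (str, bytes) cannot
# be wiped in place, since their contents linger until the memory is reused.
secret = bytearray(b"decrypted payload")

# ... use the secret ...

# Overwrite in place so this buffer no longer holds the plaintext.
ctypes.memset((ctypes.c_char * len(secret)).from_buffer(secret), 0, len(secret))
print(secret == bytearray(len(secret)))  # True: buffer is all zero bytes
```

The same wipe-before-free discipline is what a custom allocator registered with OpenSSL would apply in its free routine.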
RE: Where to store client PEM certificates for an application
Edward Diener wrote: Perhaps your seeing this shows why I was at least nominally concerned about the MySQL client having its own public key-private key certificates. I have tried to find out what actual use the client's public key-private key pair has in MySQL, from either the client's or the server's point of view, but to no avail, since no one involved with MySQL answers questions about SSL and the documentation that comes with MySQL does not explain what use MySQL may have for the client certs. The ideal solution would be for you to issue each customer their own X.509 certificate signed by your own CA. You can then identify each customer just by certificate. This is how, I think, it was all intended to work. Using things the way they are intended to work means that you don't have any unique security issues, which is good. A more practical solution might be to just give every client the exact same certificate, public key, and 'private' key. Make no attempt to keep this a secret. Do not rely on it for anything. Just grant privileges by username/password, not by X.509 certificate. There are two potential risks with this approach: 1) You need someone to confirm that having a client use a known-compromised private key to authenticate over SSL is no worse than the client using no key at all. It seems to me like you'd almost have to try to make this a problem, but who knows -- maybe it's never been thought about. 2) You need someone to confirm that MySQL doesn't specifically have some odd issue with this non-standard setup. Alternatively, you can hack your own MySQL code to not require or request client authentication. To a lesser extent, you still have problem '2'. You could hack just your MySQL server not to request client authentication. Give the clients some garbage key/cert (just because the client library insists), but the server won't request it, so I don't think they'll use it at all. You can leave the MySQL client libraries alone. 
You could also put a proxy in front of your MySQL server. The proxy won't request any client authentication on the customer-facing SSL connections. Good luck. DS
RE: Where to store client PEM certificates for an application
Edward Diener wrote: 1) You need someone to confirm that having a client use a known-compromised private key to authenticate over SSL is no worse than the client using no key at all. It seems to me like you'd almost have to try to make this a problem, but who knows -- maybe it's never been thought about. Whether a client private key is used or no client key at all, there is still the issue of figuring out the username/password. No, there isn't. If using a known-compromised client key compromises the SSL connection, then an attacker can get a username/password simply by reading it out of a compromised SSL connection. On another note, something still seems fundamentally wrong with your approach. Since every customer has a username/password, and you don't trust your customers, you still cannot allow someone to mess with arbitrary data just because he has a valid username/password. Being able to find the guilty party after a compromise is certainly one part of security. But when you design a security system, it's more important to make compromise difficult. No sane person would argue, I don't need locks on my doors because I have a camera pointing at them. DS
RE: Question about SHA256 on a RSA* key
Victor Duchovni wrote (ironically, just a week ago): No, it is the protocol design (how all the pieces fit together), not the specific algorithms that make it secure (yes the pieces have to have the right general properties, but this is secondary). I can't resist pointing out how today's news has made my point: http://www.win.tue.nl/hashclash/rogue-ca/ MD5 has the right general properties, but the protocol design failed. Why? Because it used an unsuitable algorithm. If we want secure compare by hash, then almost any sync protocol that uses SHA-256 will be fine but almost any that uses MD5 will not. Why? Because SHA-256 is good for compare by hash and MD5 is not. Any protocol that's not brain-damaged that uses SHA-256 will work, and any that uses MD5 will not. MD5 is (still) vastly stronger (no known second-preimage attacks) in most applications than the weakest parts in real security systems. Spending time choosing between MD5 and SHA1 is in most cases a waste of time. Sure, use SHA1, it is best practice to do so, but this is extremely unlikely to have any positive impact on the security of the system in question: You still think so? As I said: When we have a set of security requirements, the first thing we do is select the algorithms that meet those requirements, then we look for protocols that implement them. SSL uses MD5 for compare-by-hash. MD5 is broken for compare-by-hash in a situation where an attacker knows the correct input and can choose his own input. My point is not that this particular break is the end of the world or that people should disable MD5 right now. Victor was certainly right when he said: If leaving MD5 enabled improves interoperability, leave it enabled... My point is that the first thing you should do after figuring out your threat model and requirements is investigate the algorithms that can defeat that threat model and meet those requirements. Then look for a protocol that implements those algorithms. 
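The "compare by hash" notion above can be made concrete: a sync tool decides two blocks are identical iff their digests match, so the hash must make crafting colliding inputs infeasible. A small sketch (the data values are made up):

```python
import hashlib

def same_content(a: bytes, b: bytes) -> bool:
    # Sync-style comparison: trust digest equality instead of comparing
    # the data directly. This is only safe if producing two inputs with
    # the same digest is infeasible -- true for SHA-256, no longer true
    # for MD5 against an adversary who controls the inputs.
    return hashlib.sha256(a).digest() == hashlib.sha256(b).digest()

print(same_content(b"config v1", b"config v1"))  # True
print(same_content(b"config v1", b"config v2"))  # False
```

The protocol logic is identical either way; only the algorithm choice determines whether the guarantee holds.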
DS
RE: Where to store client PEM certificates for an application
Edward Diener wrote: In this last case I do not understand how the client can encrypt data going to the server if it has no private key of its own. Your question is kind of puzzling. Why would the client need its own private key in order to encrypt data going to the server? In general, private keys are used for decrypting. During the SSL session establishment phase, the client can encrypt data using the server's public key. Basically, it goes like this:

1) The client connects to the server.
2) The server sends the client its public key and certificate(s).
3) The client validates the certificates and public key and confirms that it has reached the correct server.

At this point, the client has the server's public key and knows that it has the public key of the server it wishes to talk to. So it can encrypt data using that public key and send it securely to the server. Since only the appropriate server has the corresponding private key (that's what the CA has attested to, at least), the client knows that only its intended recipient can decrypt what it sends. This completes the first phase of the session establishment. Logically the phases are (assuming a typical SSL use -- a browser connects to a secure web server with a certificate issued by a typical CA):

1) The client connects to the server.
2) The server sends the client the server's public key and certificate.
3) The client verifies the certificate's validity and appropriateness: Is it for the web site we intended to reach? Was it issued by a CA we trust? Does the public key given match the certificate?
4) The client challenges the server to decrypt something that the client has encrypted using the server's public key.
5) The server proves to the client that it can decrypt what the client sent and establishes a shared secret with the client.

The client now knows that it is talking to an entity that owns the private key corresponding to the public key it knows belongs to the server it wishes to talk to. 
6) The client and server use the new shared secret to converse securely using a symmetric encryption scheme.

Note that the server has no idea who it is talking to. Typically, the client will prove its identity using a username and password sent over the secure connection. But SSL does support the client sending its own certificate and proving that it owns the private key corresponding to the public key in the client's certificate (using more or less the same process used above). In that case, each party knows who it is speaking to when session establishment completes. DS
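The client-side validation in step 3 is what a modern TLS client library enables by default. For illustration only (Python's stdlib ssl module, not part of the original discussion), a default client context already enforces exactly those checks:

```python
import ssl

# A default client context enables the step-3 checks described above:
# the peer must present a certificate chaining to a trusted CA, and the
# certificate must match the hostname we intended to reach.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must validate
print(ctx.check_hostname)                    # True: name must match
```

Disabling either check reintroduces the man-in-the-middle risk the handshake is designed to prevent.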
RE: Question about SHA256 on a RSA* key
The TLS protocol did not fail; what failed is the X.509v3 protocol, where algorithm choices are not made by SSL users. Rather, the poor choices were made by CAs, who should have known better, and in any case have largely phased out MD5, with Verisign (reportedly) just one month away from completing their migration to SHA-1. In other words, they chose the wrong algorithm, one that couldn't meet their security requirements. No, but you forget we won't agree. I don't believe that non-experts can come remotely close to choosing algorithms well, but they can choose from a menu of protocols, given a reasonable description of which protocols are alleged to solve which problem. TLS: channel security. PGP or S/MIME: message security. AES-XTS: disk encryption. ... Right, but we just proved that doesn't work. You can choose a secure protocol, but if it uses an underlying algorithm that doesn't meet your security requirements, you are screwed. Nothing is wrong with SSL. Nothing is wrong with TLS. Nothing is wrong with X.509v3. MD5 was the problem. A security system is only as strong as its weakest link. If you pick the right algorithms, you need only pick protocols that aren't broken. If you pick the wrong algorithms, no protocol can save you. Protocols rarely have subtle security issues. Algorithms frequently do. DS
RE: Where to store client PEM certificates for an application
I can understand your summary quite clearly. Great. Suppose the server encrypts data it sends to the client and the client needs to decrypt that data. This is the case when my client SELECTs data from the MySQL database. Does this need a different sequence than the sequence mentioned above, where the client sends the server the client's public key so that the server uses it to encrypt data before sending it to the client, who decrypts it using the client's private key? Or can the same server public-private key be used as you originally specified? Once session establishment is completed, the client and the server have a shared secret. This is some chunk of data that only the server and the client know. Each side can use the shared secret to encrypt data that only the other side can decrypt. They typically do so using a symmetric encryption algorithm such as AES or RC4. The reason I ask this is that MySQL, in setting up certificates, specifies a public key-private key pair for both the server and any given client. My original thought on seeing this is that this is necessary because both the client and the database server may encrypt/decrypt data. No, that's not why. That would simply be to allow the server to identify the client. If you have no need to do this, and already authenticate the client by some other means (such as username/password), you can probably not specify a client certificate. (It would be fairly unusual to absolutely require one in a case where there is some other way to authenticate the client.) But others seem to imply that only the server public key-private key pair is necessary. That would be the usual situation. In which case, if this is true, when the server sends encrypted data to the client which the client must decrypt, the data must be encrypted with the server's private key and decrypted by the client with the server's public key, therefore reversing the role of the public key-private key for encrypting/decrypting data you mention above. 
No. That would be hideously inefficient. The public/private keys are only used during session establishment. Thanks for the information. Evidently MySQL works with both the server and a given client having a public key-private key pair. In using the MySQL client library API I must pass the paths to my client certificates as SSL options to a client library connection object before making an SSL connection to the server. After that everything works automatically to encrypt/decrypt data between the client and the database server. Really? It absolutely requires a client certificate? Why not just have the client make up a self-signed certificate then? I just did some research; yes, you are correct. This is a known deficiency of MySQL's SSL support, first reported in 2003! Bug number 2233. From reading this bug and related bugs, there appears to be a lot of weirdness in MySQL's usage of OpenSSL to perform transport encryption. I wonder if there has been any kind of security review. DS
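The "shared secret, then symmetric encryption" mechanism described above can be sketched as follows. The secret value and label here are made up, and real TLS derives keys with its own PRF rather than a single HMAC call; the point is only that both endpoints independently reach identical key material:

```python
import hashlib
import hmac

# Hypothetical stand-in for the secret negotiated at handshake time.
shared_secret = b"negotiated-during-session-establishment"

# Each endpoint derives the same symmetric key material from the shared
# secret, then uses it with a symmetric cipher such as AES or RC4.
client_key = hmac.new(shared_secret, b"key expansion", hashlib.sha256).digest()
server_key = hmac.new(shared_secret, b"key expansion", hashlib.sha256).digest()
print(client_key == server_key)  # True: both sides hold identical keys
```

This is why the public/private keys drop out of the picture after session establishment: all bulk traffic runs under keys derived like this.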
RE: Where to store client PEM certificates for an application
Edward Diener: But other than vague remonstrances about security planning, and that I was not qualified as a mere programmer to handle security issues from people who have no idea about my ability, I have yet to receive any specifics from others about what they would do in this very common scenario to implement security for the data in the server database. First, it has nothing to do with you being a mere programmer. It has to do with the questions you are asking. If a person asks you what characteristics are important in a bridge-building material, they should not be the person currently in charge of designing and building a traffic suspension bridge. As a programmer, you probably know that there are a large number of common errors that people frequently make the first time they do something. Well, the security field is an area particularly rich with such mistakes. You made two very serious ones just in the example code you posted. One of them would have caused your code to appear to work but not actually provide any security. I am a very experienced programmer/designer/architect and vague talk about security does not really impress me very much. Of course if you or others would like to get technical and mention what you feel are good technical solutions to any problems which entail private-key/public-key encryption, I am willing to listen and learn about things which I do not fully know or understand. Security doesn't work that way. It's part of system design, not the implementation of one small piece of a system. Once again the specific issue is that the MySQL server database has a certificate from a CA authority with a server public-private key and my client application was issued the same certificate from the CA authority with a client public-private key. 
I need to pass the file location of the client CA certificate/public-key/private-key to the client-side library in order to have an SSL connection to the database server where data passing between the client application and the server database is encrypted both ways. I told my employer that we should simply distribute the client CA certificate/public-key/private-key in the same application directory in which the rest of our modules reside. He had been told by someone from Sun that this was inherently bad security and, despite my arguing that this was not the case and that without the username/password to the database nothing could be accomplished even with the client-side certs by a destructive hacker, he wanted me to investigate the issue. You have one private key that you distribute to all customers? And this is the private key on which a CA certificate was issued? Is that really what you're saying? I really hope I'm misunderstanding you and you mean something else by client CA certificate. From what others have written, I feel that I am right and coming up with elaborate schemes of hiding the client certs from the end user until they are actually going to be used by client application code in making the connection is largely a waste of time. Instead we should be ensuring that the server database and its data are protected from the prying eyes of a destructive hacker. You should ensure that if you give a user credentials, those credentials cannot be used to do anything the user should not be allowed to do. That way, there is no harm if a user compromises his own credentials, either accidentally or intentionally. You should not ever give a normal user anything that can be used to compromise either the server or another user's data. If you do not follow this, you are screwed no matter what you do. If you do follow this, you should have no need to hide a user's credentials from that user. 
DS
RE: Where to store client PEM certificates for an application
Kyle Hamilton wrote: If your company hires a security consultant, s/he will state the same thing. -Kyle H The fundamental problem is this: You have one door. Every customer must walk through it. However, you don't want a customer to run amuck once he gets through the door. Your solution is to put more and more locks on the door, and give the customer the key to each one. All of these locks only keep people from going through the door. But the very people who you need to let through the door are the same ones you need to keep from running amuck once they get through the door. No amount of additional locks on the door will do this. You cannot give a person a credential that allows them to do things you must prevent them from doing. You must make it so that their credentials only allow them to do things you would like them to be able to do. It is unfortunate that you are in the position you are in, as it is a nearly hopeless one. Security cannot be added as an afterthought. It must be designed in from the very beginning. You must construct a threat model, state security requirements, and build into the design a way to meet those requirements and defeat all plausible threat models. Honestly, the type of schemes you are considering as band-aids are unlikely to slow down a determined attacker very long. I would bet dollars to donuts that the end result will take less than a day to break. Your scheme requires you to put the credentials where an attacker can get them in unencrypted form. All an attacker need do is terminate your process as soon as it attempts a network connection (or intercept its filesystem calls and snapshot every file before it is deleted or overwritten). Your scheme requires these credentials to be sufficient for someone to do harm. Bluntly, your scheme is hopeless from a security standpoint. DS
RE: Where to store client PEM certificates for an application
Edward Diener wrote: Please suggest ways to do so. The server is no different from any other server database. It accepts a username/password to prevent unauthorized users from accessing its data. I am perfectly willing to listen to other server techniques which involve security, or read about such techniques, but I need to be pointed to such things. Just generally saying what you say is not going to help me. I am open to specific suggestions if you want to give them. If the username/password prevents unauthorized users from accessing the data, and a user can only do what he or she is allowed to do, what is the rationale for trying to protect the certificates? (From what you've said previously, it seems to be: it will give my boss a warm fuzzy feeling.) What you are doing is putting a screen door on your safe. If the safe's existing door is not adequate, the screen door sure as hell isn't going to be. If protecting the certificates is necessary, it is inadequate. Therefore, you must design your system such that protecting the certificates is not necessary. DS
RE: Where to store client PEM certificates for an application
Edward Diener wrote: Your scheme requires you to put the credentials where an attacker can get them in unencrypted form. All an attacker need do is terminate your process as soon as it attempts a network connection (or intercept its filesystem calls and snapshot every file before it is deleted or overwritten). Your scheme requires these credentials to be sufficient for someone to do harm. Bluntly, your scheme is hopeless from a security standpoint. So any scheme which relies on client-server certificates (aka private-public key encryption) and encrypted data is hopeless from a security standpoint? No. Any scheme that relies on giving credentials to the same people it is trying to keep them from is hopeless from a security standpoint. Care to suggest what is not hopeless from a security standpoint, which is actually programmable? A scheme which gives agents only credentials that permit them to do what they are supposed to do and which has no need to hide credentials from their owners. Again, you are trying to hide a customer's credentials from that customer himself. Let me use an analogy. Good: Two houses. Each has a key. People who are allowed into house 1 only get key 1. They cannot get into house 2 because they do not have the key. Bad: Two houses, with a door between them that is always open. People who are only supposed to be in house 1 get the key to house 1, and we try to make it hard for them to find the door between the houses. Maybe we put it behind some drapes. Or maybe we don't tell them they have a key and hope they won't notice the door. No proper use of certificates or keys involves hiding the certificates or keys from the very agents who are going to use them. The reason public-key encryption works is because the private keys are not given to those who are not supposed to have them. Not because they are sort of given but sort of hidden. I cannot impersonate 'www.amazon.com' because I do not have their private key. 
It is not hidden somewhere in my computer where I might be able to find it. It is not obscured from me. It never leaves Amazon's servers, because Amazon is supposed to have it, not me. DS
RE: Question about SHA256 on a RSA* key
It is not just about you but about many people that have skills in security, but I have this feeling that those people like to bash newbies, thinking that they are stupid. Would you want to drive over a bridge that was built by a newbie engineer who didn't think it was important to have an expert check over his design? Newbies can only make secure products by accident. That is the cold, hard, honest truth. There is nothing stupid about not knowing how to do something properly. What is stupid is to do it anyway and then tell others they can rely on it. What you are doing is equivalent to inventing your own bridge design. Sure, you can do that for fun. But you should know better than to tell other people they should drive over it. DS
RE: Where to store client PEM certificates for an application
No, my risk model is to simply ascertain whether distributing the certs as files in the application directory is a serious security risk or not and, if it is, what steps can make it less so. If it's a security risk, it's because something is broken someplace else. Why do you need to hide a customer's own certificates from that customer? Presumably, the certificate only permits the customer to do things the customer is authorized to do. If the customer's cert lets the customer do something the customer is not allowed to do, then something is broken elsewhere. This will not keep a cryptographer out of your application, but should pass the warm and fuzzy test. It may make it harder for a disruptive hacker. How would a hacker be able to disrupt anything by obtaining normal user access? If normal user access permits disruption, you have a design flaw. Whether this extra processing is really necessary, good security, or security theatre as another respondent on this thread claims, I am really not sure. But that is why I posted my OP, to see what others think and how others handled the situation. Thanks! If it's necessary, it's inadequate. An ordinary user should only be permitted to do those things you wish to allow that user to do. So an ordinary user getting access to his own credentials should not pose a security risk. DS
RE: Question about SHA256 on a RSA* key
If we want secure compare by hash, then almost any sync protocol that uses SHA-256 will be fine but almost any that uses MD5 will not. Why? Because SHA-256 is good for compare by hash and MD5 is not. Any protocol that's not brain-damaged that uses SHA-256 will work, and any that uses MD5 will not. MD5 is (still) vastly stronger (no known second-preimage attacks) in most applications than the weakest parts in real security systems. Spending time choosing between MD5 and SHA1 is in most cases a waste of time. Sure, use SHA1, it is best practice to do so, but this is extremely unlikely to have any positive impact on the security of the system in question: Actually, MD5 is almost worthless for compare-by-hash. http://www.schneier.com/book-practical-preface.html Security is only as strong as the weakest link, and the mathematics of cryptography is almost never the weakest link. The fundamentals of cryptography are important, but far more important is how those fundamentals are implemented and used. Arguing about whether a key should be 112 bits or 128 bits long is rather like pounding a huge stake into the ground and hoping the attacker runs right into it. You can argue whether the stake should be a mile or a mile-and-a-half high, but the attacker is simply going to walk around the stake. Security is a broad stockade: it's the things around the cryptography that make the cryptography effective. That's certainly not true for the specific case where there are known deficiencies in an algorithm and the algorithm is used such that those deficiencies break the guarantees the system is supposed to provide. Using MD5 in a case where the ability of an attacker to create two plaintexts with the same hash is fatal would be suicide. MD5 was specifically designed for this not to be possible. If leaving MD5 enabled improves interoperability, leave it enabled... That depends on the application. MD5 is perfectly suitable for some applications and completely unsuitable for others. 
For some applications, if an attacker who knows the real data can craft modified data with the same hash, you are completely sunk. IOW, to evaluate the protocol sensibly, you have to not only know that it uses MD5, but you have to know *how* it uses MD5. If it uses it as part of a signature algorithm, you are relatively safe. If it uses it to validate data an attacker cannot know, you are again safe. If it uses it to validate data known to an attacker, you are not safe. MD5 is simply now broken for that purpose. Yes, once you have a decent protocol, disable legacy symmetric ciphers weaker than ~80 bits (no 40-bit export ciphers, no single-DES), but choose the protocol that addresses the right security model (say, secure transport) and don't sweat the algorithms too much; the protocol designers should have taken care of that. In other words, once you choose the protocol, disable the algorithms that don't meet your requirements and enable the ones that do. How can you do that unless you *know* which algorithms meet your requirements and which don't? This advice is of course for application developers, not cryptographers. Still, somehow, I don't think we're likely to reach consensus, over and out. It doesn't seem like it. DS
RE: Question about SHA256 on a RSA* key
BiGNoRm6969: Never heard about binary specification of the RSA* private key. Can you give more information about that please. Okay, think about this logically. You want to take the SHA256 hash of an RSA private key and get the same result every time. But the SHA256 hash function takes in arbitrary binary data. So you need to feed it the same binary data every time to get the same hash result. Are you with me so far? That means that you need some kind of specification for converting an RSA private key (which is just a notional thing, it's some numbers) into a binary representation. And you need one and only one true way, because while 3, 3.0 and 03 are the same number, if fed as binary input to a SHA256 hash, you will get a different result. So your algorithm cannot possibly work unless it specifies one and only one precise way to convert an RSA key (a notional thing, some numbers) into binary data suitable for SHA256 hashing. The fact that you didn't even realize that this had to be done proves that you are not even remotely competent to devise a security protocol. If you can't even understand the logical conceptual requirements, the odds of you getting the security right are near zero. I'm sorry to be so blunt, but for your own safety and that of anyone who might use any code you might have an influence on, please don't do what you're doing. Use an established and tested algorithm for its intended purpose, or employ someone who is qualified to write security software. If this is anything other than a toy for your own amusement, you're heading towards creating another worthless security product that provides no actual security. DS
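The "3 vs 03" point is easy to demonstrate: the same number serialized two ways hashes differently, which is why a canonical encoding (e.g., DER for RSA keys) must be fixed before any hash is meaningful. A sketch (65537 is just an example value):

```python
import hashlib

n = 65537  # any integer shows the point; 65537 is a common RSA exponent

# Two byte encodings of the same number: minimal, and with a leading zero.
minimal = n.to_bytes(3, "big")  # b'\x01\x00\x01'
padded = n.to_bytes(4, "big")   # b'\x00\x01\x00\x01'

# Same number, different bytes, therefore different SHA-256 digests.
print(hashlib.sha256(minimal).hexdigest() == hashlib.sha256(padded).hexdigest())  # False
```

Any scheme that hashes "a key" without first pinning down one exact byte representation inherits this ambiguity.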
RE: Question about SHA256 on a RSA* key
And, I should note, you've already proved our point a dozen times over. Your code contains three separate bugs, all of them extremely serious. For example, you used the byte size of the *MODULUS* (that's what RSA_size returns) as the hash input size for the private key. And, by the way, I'm not sure you realize quite how serious this is. If you used a binary format that put the public portions of the key first in memory, along with the size of the public key rather than the private key, the net result would be a SHA256 hash of the *public* key only. Or perhaps of only small bits of the private key. That would result in an algorithm that provided no security at all. Literally none. Do you understand what I'm saying? You could have easily, with bugs very similar to the ones you already had, accidentally produced an algorithm that appeared to work, producing the same hash every time, but actually producing a hash that is predictable from only the public key. (Or with only a few bytes of the private key, leaving it easily broken by brute force.) You are tap dancing on a minefield. DS
RE: Question about SHA256 on a RSA* key
For information: I am using this key to encrypt / decrypt files locally on a host.

Why not use the RSA key for this purpose, using an established and tested algorithm? Since you have the RSA key, and there are any number of established algorithms to use an RSA key for encryption, why did you roll your own? And, I should note, you've already proved our point a dozen times over. Your code contains three separate bugs, all of them extremely serious. For example, you used the byte size of the *MODULUS* (that's what RSA_size returns) as the hash input size for the private key. If you can't even specify an algorithm, what are the odds that whatever you wind up with will actually be secure? (Sorry to be harsh, but security is not an area where you can 'wing it'. Really.) DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: Question about SHA256 on a RSA* key
Why not use the RSA key for this purpose, using an established and tested algorithm? Since you have the RSA key, and there are any number of established algorithms to use an RSA key for encryption, why did you roll your own?

This too is wrong,

If it's wrong, why did you say the same thing I did after claiming I was wrong?

one does not use RSA for this purpose, one uses an established protocol, CMS, S/MIME, PGP, ... when the file is encrypted by Alice for delivery to Bob, or a reputable symmetric PBE (password based encryption) when Alice is encrypting the file for later use by Alice and integrity protection is not required.

This is precisely the same thing I said, just in different words. Neither of us suggested using RSA directly, and both of us suggested using an established mechanism that uses the RSA key. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: Question about SHA256 on a RSA* key
- Don't choose algorithms for security, choose protocols for security. That sounds completely backwards to me. When we have a set of security requirements, the first thing we do is select the algorithms that meet those requirements, then we look for protocols that implement them. For example, suppose I need to synchronize files across a network. The first thing I would do is ask what algorithms are suitable. Is compare by hash okay? Do I need encryption? Do I need compression? Then, with a list of algorithms that suit my requirements, I can look at protocols that implement them. You can't look at the protocols first. For example, SSL can be used with many different algorithms. So can SSH. So asking, does SSL meet my requirements isn't really even possible without recursing into the algorithms it supports. - The right protocol will have a sensible set of algorithms to go with it, in some cases choose the appropriate subset of parameters within the protocol to yield the right security, performance and interoperability tradeoffs. That's, largely, what makes it the right protocol. If we want secure compare by hash, then almost any sync protocol that uses SHA-256 will be fine but almost any that uses MD5 will not. Why? Because SHA-256 is good for compare by hash and MD5 is not. Any protocol that's not brain-damaged that uses SHA-256 will work, and any that uses MD5 will not. - Do not be tempted to design new algorithms (most IT people know this). Definitely. This is an absolute disaster in almost all cases. - Do not be tempted to design new protocols (most IT people don't know this). This is less of an absolute disaster and doesn't have as many pitfalls as trying to design your own algorithm. But you are absolutely right that it does contain a significant set of potential hazards. Viktor DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: Question about SHA256 on a RSA* key
Hi! I am doing a SHA256 on a RSA* private key. I used the result as a symmetric key for AES encryption.

Do you have a specification for how to do this? What ensures that the RSA private key has the same binary representation each time? For example, 3 and 03 represent the same number, and so does 3.0. But they will each have a different binary representation and hence a different SHA256 hash. So if you were to write a standard that expected the same output each time, you would need to specify a particular binary representation for the RSA key. Did you do that?

int length = RSA_size(rsaPrivateKey);
SHA256_CTX sha256ctx;
SHA256_Init(&sha256ctx);
SHA256_Update(&sha256ctx, rsaPrivateKey, length);
unsigned char* hash = new unsigned char[SHA256_DIGEST_LENGTH];
SHA256_Final(hash, &sha256ctx);

If I execute this code a couple of times in the same process execution, the hash variable always has the same value (this is normal!!). But each time I restart the application, the hash value is different.

You forgot to: 1) create a specification 2) implement it

In the past I used the same pattern, with SHA512 instead of SHA256, and with a char* instead of a RSA*, and I did not have this problem. Any idea what's going on?

You forgot to create a specification for the binary format of the RSA key such that the same RSA key will always have the same binary format. You forgot to convert the RSA key to this format and take the hash of the converted key. If you do not have a specification, you can only be right by accident. And even if you are right, you can never prove it. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: RSA_public_encrypt() strange result output... is it a bug !?
Ok. I am a little bit confused. You are telling me that the same data encrypted with the same key can generate different results?

Yes. This is absolutely essential for any public-key system to be secure. Imagine if someone asks you, Should we attack at dawn? Send the message securely using my public key. If the same data encrypted with the same key generated the same results, an attacker would simply have to encrypt yes and no, see which compared equal to the encrypted data, and they would break the code.

How can the decryption process then succeed?!

I'm not sure I follow the question.

Maybe it's my crypto knowledge that is limited, but I was sure that one output corresponds to one input.

That would not work. That would allow an attacker to try encrypting every possible input, find the matching output, and break the code. That would make the encryption scheme useless for any application where the encryption input is predictable. That's a lot of schemes.

I ran my tests a couple of times and it always gives me the same output result each time (meaning that the pseudo-random generator always gives the same number?). You are probably right, but could you confirm for me that my tests are made correctly (using a longer array than the size passed to the encryption function)?

It depends on the exact algorithm you are using. Generally, public-key algorithms are used as follows: 1) A random key is generated. 2) The message is encrypted with the random key using conventional encryption. 3) The random key is encrypted using a public-key algorithm. 4) The encrypted message from step 2 is sent with the encrypted key from step 3. In this way, the message for the public-key algorithm is unpredictable because it's purely random. However, many padding schemes for public-key algorithms make sure that they are protected from this kind of attack even if they are not used to encrypt random data. Read up on OAEP for one example. 
DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: Doubts about security
Hello, Can you explain to me how SSL can be secure when it communicates first with the public key and then negotiates a private key? If anyone can get the public key, can't anyone get the private key by sniffing the packets? Thanks. Walter Neto - Brazil

Private keys are never sent over the network and are not computable from anything sent over the wire. So no, nobody can get the private key from sniffing the packets. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Installing openssl-fips-1.2
Then how would I fix it so it would compile and not violate any security policy?

Getting a FIPS build just right is a major pain and requires all kinds of trade-offs. I just wouldn't bother unless you absolutely, positively must have a FIPS build for some reason. What you have to do is find some other platform on which the FIPS build works that is object binary compatible with yours. Build the FIPS canister on that platform, and copy it onto your development machine. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Installing openssl-fips-1.2
The only reason I'm installing OpenSSL is because Perl SSH2 requires it. Am I getting too deep into this or is there another way I can get the library I need?

Get OpenSSL-0.9.8e or any other version that SSH2 supports.

Then how would I fix it so it would compile and not violate any security policy?

Why would you care about violating a FIPS security policy? You have no FIPS requirements. If you do not have an absolute requirement for FIPS compliance, do not even touch the FIPS build of OpenSSL. Simply build whatever version of OpenSSL you want. Ignore the FIPS user guide, ignore the FIPS security policy. They only matter if for some reason you absolutely must comply with FIPS. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: BIO_do_accept() in non-blocking mode - Better way than loop!?
In non-blocking mode, is there a better way than watching the return value of BIO_do_accept() in a loop? Is there a way to be notified when a handshake is initiated from the client? A kind of WAITINCOMINGHANDSHAKE which has a timeout? Or nothing else? The OpenSSL documentation says calls to BIO_do_accept() will await an incoming connection, or request a retry in non-blocking mode. This indicates that a retry must be done, but it's not what I'm looking for.

Why are you operating in non-blocking mode if you want to block until things happen? If you want to block until something happens, that's what blocking mode is for. It's not clear exactly what you're doing, but if you're calling BIO_do_accept just as a wrapper for 'accept', then you can check for readability on the socket before calling 'accept' again. BIO_do_accept will not succeed until the listening socket is readable. You can block for readability by calling 'select', 'poll', or whatever is appropriate on your platform. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Non-blocking windows socket cause SSL_accept error (SSL_ERROR_WANT_READ)
SSL_accept always returns 0 error. With SSL_get_error I found that the error is SSL_ERROR_WANT_READ. During debugging and troubleshooting, I realised that when I use a normal blocking windows socket, SSL_accept works fine. Why does using a non-blocking windows socket cause that error?

This is expected behavior. Because the socket is non-blocking, it returns an error rather than blocking. If you would rather it block than return an error, use blocking sockets. The SSL_accept cannot complete immediately because data from the other side cannot be read at this moment (since the other side has not sent it yet or it hasn't gotten here yet). Since the socket is non-blocking, it cannot wait for the data to arrive. So it returns an error, and the error (SSL_ERROR_WANT_READ) explains that it wants to read from the socket, but cannot do so because the socket is non-blocking. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: sign/verify kicking my ass
I used fwrite(signature,1,strlen(signature),fp) and got the same results.

You seem to have a fundamental misunderstanding about how strings work in C. That's not good for someone writing security software. The 'strlen' function computes the length of a C-style string. The signature *IS* *NOT* a C-style string. It *MUST* *NOT* be passed to 'strlen'. Also, this code has a problem:

if(RSA_sign(NID_sha1, (unsigned char*) message, strlen(message), signature, &slen, private_key) != 1) {

You are telling RSA_sign that you are using it to sign a SHA1 hash, but the message is not a SHA1 hash. I believe this will currently sort of work, but it's very bad practice. You should not be using low-level RSA functions unless you really understand RSA. You have already gotten, in the previous round, perfectly clear explanations of this:

RSA_sign() and RSA_verify() don't sign arbitrary data; they expect the digest of the data being signed/verified. If you want an API that does sign arbitrary data, use EVP_Sign*() and EVP_Verify*() instead.

You are still neither calling the EVP_* functions nor generating a hash. and

The signature is not a NUL terminated C-string, so using printf is not the right way to save it to a file. You are throwing away slen, don't.

You are still treating the signature as if it was a C-style string and throwing away slen. What's the point of asking questions if you ignore the answers? DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: CRYPTO_set_dynlock_* mystery ... (was: Engine Issue: nShield 500)
Hi all, it seems that I am missing the usage of the set of obscure functions: CRYPTO_set_dynlock_create_callback(), CRYPTO_set_dynlock_lock_callback(), CRYPTO_set_dynlock_destroy_callback(), but I have no idea how to initialize those functions - is there any example on how to do that by using pthreads? Ciao, Max

Off the top of my head, and untested, but it should give you the idea:

struct CRYPTO_dynlock_value { pthread_rwlock_t lock; };

#ifndef CRYPTO_LOCK
#define CRYPTO_LOCK   0x01
#define CRYPTO_UNLOCK 0x02
#define CRYPTO_READ   0x04
#define CRYPTO_WRITE  0x08
#endif

void locking_callback(int mode, struct CRYPTO_dynlock_value *l, const char *file, int line)
{
    if (mode == (CRYPTO_LOCK | CRYPTO_READ))
        pthread_rwlock_rdlock(&l->lock);
    else if (mode == (CRYPTO_LOCK | CRYPTO_WRITE))
        pthread_rwlock_wrlock(&l->lock);
    else if (mode == (CRYPTO_UNLOCK | CRYPTO_READ))
        pthread_rwlock_unlock(&l->lock);
    else if (mode == (CRYPTO_UNLOCK | CRYPTO_WRITE))
        pthread_rwlock_unlock(&l->lock);
}

struct CRYPTO_dynlock_value *create_callback(const char *file, int line)
{
    struct CRYPTO_dynlock_value *l =
        (struct CRYPTO_dynlock_value *)malloc(sizeof(struct CRYPTO_dynlock_value));
    pthread_rwlock_init(&l->lock, NULL);
    return l;
}

void destroy_callback(struct CRYPTO_dynlock_value *l, const char *file, int line)
{
    pthread_rwlock_destroy(&l->lock);
    free(l);
}

void InitDynLocks(void)
{
    CRYPTO_set_dynlock_create_callback(create_callback);
    CRYPTO_set_dynlock_lock_callback(locking_callback);
    CRYPTO_set_dynlock_destroy_callback(destroy_callback);
}

DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: signature length mismatch ERROR in RSA_Verify.
RSA_verify(NID_md5, datatosign, (strlen(datatosign)), signature, strlen(signature), key); The 'strlen' function is only useable on a C-style string. The signature cannot be a C-style string because it is arbitrary binary data. Best regards, Am. Sivaramakrishnan DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RSA_sign RSA_verify
Where am I going wrong here? char* message = "Hello World"; if(RSA_sign(NID_md5, (unsigned char*) message, strlen(message), signature, &slen, private_key) != 1) {

The problem is that your RSA key is very small. A 256-bit RSA key can only sign up to 32 bytes, and 11 of those bytes are lost to PKCS#1 padding. An MD5 digest is 16 bytes, and the ASN.1 DigestInfo wrapper that RSA_sign adds around it brings the total to about 34 bytes. Add the 11 bytes of PKCS#1 padding and you need about 45 bytes, or 360 bits. So a 256-bit RSA key is not going to cut it. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: how to verify if the public_key is valid to decrypt data using RSA_public_decrypt()
I'm using RSA to encrypt/decrypt some text. I encrypt the data using the private key and then decrypt it using RSA_public_decrypt(). One thing I noticed was that if the data was not encrypted using the correct private key, RSA_public_decrypt() will just set the output to gibberish. Is there any way to check if the public_key is the correct key to decrypt that data before actually decrypting it? That way I can bail out early and say invalid data file rather than parsing through a bunch of gibberish? ~Shaun

Feel free to implement this functionality any way that you want. You've specifically opted for the low-level APIs that don't provide this kind of functionality. So if you want it, either use it where it's provided or code it. Note that RSA_public_decrypt is only useful for signatures. Otherwise, you've turned RSA into a symmetric encryption algorithm and have to keep the public key secret. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RAND_egd() blocking -- despite contract that states otherwise?
Yes. Hence the correct solution would be non-blocking with select()... Best regards, Lutz

How do you determine (portably) if the socket you got from 'socket' is inside the legal range for FD_SET? Many platforms, including Linux, will happily allow 'socket' to return values that are way out of range for FD_SET. And FD_SET has no error return. This will cause crashes on, for example, Linux applications that use more than 1,024 sockets. I bet that covers things like web servers that use OpenSSL. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RAND_egd() blocking -- despite contract that states otherwise?
That's a great question. Indeed, this platform (AIX) does have /dev/random but apparently that too was exhausted because that is checked first in our implementation. I think the fault is truly with the system in question, because prngd should not have blocked in the manner it did. Despite this problem being a one-off, there is a push to fix the issue and guarantee it will never happen again. It was during my investigations that I noticed the blocking nature of the EGD lookups. Ben So what do you want to do if you run out of entropy? DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RAND_egd() blocking -- despite contract that states otherwise?
Ben Sandee wrote: On Thu, Nov 6, 2008 at 9:11 PM, David Schwartz [EMAIL PROTECTED] wrote: There needs to be a call to fcntl(fd,F_SETFL,O_NONBLOCK) just after the socket() call and error status check. That will just waste CPU. The code will spin in each while (!success) loop until it gets what it wants. It will still not return any time soon, but will do so at 100% CPU instead of 0% CPU. Unless I'm missing something. It looks like the default error handler will catch EWOULDBLOCK and goto err, breaking out of the while() loop. If EWOULDBLOCK happened during connect() then 0 is returned. If it happens during read() or write() then -1 is returned. Is this an important discrepancy?

Well then the suggested change (making the socket non-blocking) will still break the code, just differently. If it's made non-blocking, it will never get any entropy, since it will never wait for the daemon to reply. What does it mean to ask a daemon a question and get an answer without blocking? Does it mean the daemon must reply in no time at all? How's that supposed to happen? If the intent was really that it never block, then it will have to return if it would otherwise have to block but keep the same socket around talking to the daemon. This means that sometimes it will have to return with a live connection to the daemon, which it may never have an opportunity to close. Sounds like the interface is badly thought out. Perhaps the best reasonable compromise, short of changing the interface, is to set a limit (maybe 3 seconds or so) to how long RAND_egd can block (this would mean it will have to call 'poll' or the like). Yucky to do portably. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RAND_egd() blocking -- despite contract that states otherwise?
There needs to be a call to fcntl(fd,F_SETFL,O_NONBLOCK) just after the socket() call and error status check. -Kyle H

That will just waste CPU. The code will spin in each while (!success) loop until it gets what it wants. It will still not return any time soon, but will do so at 100% CPU instead of 0% CPU. Unless I'm missing something. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: How to use a hardware RNG with openssl?
On 2008.09.22 at 16:37:58 +0200, F. wrote: Any way to collect only from HRNG? You can write your own RAND_METHOD and encapsulate it in the engine module. Then you can load this engine via openssl.cnf and set the default rand method to this engine. Really, this is not a very good idea, because hardware random number generators are slow.

Much better to mix in randomness from your hardware source into the OpenSSL PRNG. This is better for several reasons: 1) You can rate-limit how much you mix in. Say you mix in 1KB at startup and 128 bytes every 10 seconds after that. This will provide the same quality of randomness for cryptographic purposes, but will limit the drain on the hardware source. This will protect you against possible denial-of-service attacks where an attacker tries to make you use up more randomness than you have. Many HRNGs are vulnerable to this. 2) The OpenSSL PRNG is well-investigated. If your HRNG's output is not comparable in quality, your security could be compromised. For example, subtle bias in the output could have serious cryptographic consequences. 3) The OpenSSL PRNG is, by design, protected against non-random or defective inputs. So long as it has sufficient good input, no amount of bad input can hurt it. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: client crash or network issue?
Thank you again David, You are welcome. As for the network issue scenarios here are some details about the last case: 1) The server is running on UNIX, the client is running on windows or unix. unplug the client or the server. The server does not report anything! Logical, nothing has happened to the connection yet. it does not detect that its connection to the client is lost... It's not lost. There could be a backup link. It has no idea. SSL_read is not even called because my select does not detect any change on the socket This surprised me. What changed? Before, if you had sent a packet, it would have gotten through. Now, if you had sent a packet, it wouldn't have gotten through. But nothing that has actually happened or will happen has changed. There is no reason anything should happen in this case. The network may well recover before even any packets are dropped. It would be crazy to drop a connection in that case. 2) The server is running on windows, the client is either on windows or on unix. 2.1) unplug the server. It reports ECONNRESET. This is probably the bug you are talking about. How should I go about checking the interface here? any specific APIs to use? my server is in C/C++. Thanks. It's tricky. You would have to actually probe the state of the interface or keep up a test connection to a local machine and see if that test connection fails too. I still think your whole mechanism is wrong. Why should it matter to the *server* whether the client program crashed or something else happened? The server has no way to know what the consequences of a crash are for the client. It sounds like you're violating conceptual boundaries. 2.2) unplug the client. My server reports nothing, similar to 1). This again surprised me but I am by no means an expert on sockets. Why? The client could get plugged back in a second later. Do you want all your connections to break just because a wire vibrated for a split second? TCP is robust. 
While I can work around 2.1) by checking the interface as you suggested, I am at a loss with 2.2) and 1). Because now my server has a situation where clients are no longer connected but it does not even know it... You are thinking about this all wrong. The connection is still just fine, it just has a potentially momentary delay/interruption. It has not failed yet. You don't want to treat split-second routing disruptions as hard failures, that's craziness. My server does a select on each client socket to wait for incoming messages, so I was hoping that a network disconnection is also an information that the select should detect but apparently this is not the case... A momentary connection disruption means nothing. Why should you care? Is there a way my server can be notified? No, because there is no way to know. The point at which the failure occurs has no way to know that it's a single point of failure. It has no way to know how long the failure will last. And as I said, there really has not been anything that has actually failed yet. If not, is there a way my server can proactively look for such clients? I am concerned about my server CPU usage in case there are too many disconnected clients... Unless an idiot designed the protocol, follow its specifications to handle this case. If the protocol didn't take this into account, it's seriously broken and should be redesigned. It sounds like you may have painted yourself into a corner. Protocols layered over TCP have to at least be designed by people who are very knowledgeable about TCP properties. I would give serious thought to starting over or implementing a client-side proxy to fix the protocol. The reason I need all of this is that my server is using some important resources for each client. If a client connects then there is a network issue, the client might have finished its work, exited, but the server is still using the resources... So why wasn't the protocol designed with this in mind? 
Thanks again for all your help. Good luck. DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Getting application data from the final packet in a handshake.
All - I am using OpenSSL with memory BIOs for the communication. I have everything working just fine, until I came across a server that sends Application data in the final packet of the TLS handshake. Specifically, Wireshark shows the following in its output : Change Cipher Spec, Encrypted Handshake Message, Application Data where I am normally used to just : Change Cipher Spec, Encrypted Handshake Message So, my question is, how do I get at the application data in that packet? After the call to SSL_connect() both SSL_pending() and BIO_ctrl_pending() are claiming that there are 0 bytes available to read. Is there a flag I need to enable? Or some other call? The BIO_read function exists for this exact purpose. There is no way to tell for sure whether an SSL_read or BIO_read (of an SSL bio) will be able to return application data other than to call it and see. The functions you are using only check for certain specific possible ways there could be pending data. They are not exhaustive. Your mistake is in trying to do everything twice, once to figure out what will happen and then again for real. Since you want to receive data if there is any, and there's no harm in trying if there isn't any, it is totally illogical to perform two expensive operations, the first to see if the second is necessary. It's more logical just to do one. If it's necessary, you win, one operation instead of two. If it's not, you break even, one operation either way. Your method not only has the extra cost of doing an operation twice if it's possible, but worse fails horribly if the two attempts are not precisely parallel, and there are many edge cases. This is just the one that's pestering you now. If you don't fundamentally fix your design, there will be another one tomorrow too. Just try to read. Don't try to figure out what will happen if you try. DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: client crash or network issue?
Md Lazreg wrote: Actually the same question is valid even if I am not using SSL sockets. So is there a way to distinguish between if a socket was closed because of a client crash or because of a network issue? If yes, is there an equivalent under SSL sockets?

You have three choices: 1) Always assume the client might return. Delay returning resources for a reasonable amount of time. 2) Guess based on the error code. For ECONNRESET, assume the client might come back. For ETIMEDOUT, assume it won't. For an apparently normal close (but at an unexpected time), assume it crashed. You'll be right some fraction of the time, depending on what types of errors happen. 3) Code a reliable method to tell. For example, code a way to probe if the client machine is still around (perhaps a separate daemon to report presence or report the crash of the client program). Code a proxy on the client (that is reliable enough to 'almost never' crash) that can report the loss of the other end of the proxy (the real client program) or similarly engineer a solution. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: FIPS and new releases of openssl
Hello, In appendix B of the openssl FIPS security policy it is stated that the module must be built with a particular tar file (openssl-fips-1.1.2.tar.gz) and a hmac hash value for the tar file is specified. Furthermore it is stated that there shall be no additions, deletions, or alterations of the set of files in the tar file as used during module build. Correct. The way I read this is that if you modify for instance the ASN.1 or SSL code (in order to fix a bug), then the FIPS validation is canceled. This does not make sense to me. Why can't higher level code be bug fixed without FIPS validation being canceled? Build the FIPS module, then fix the higher-level code, then build the rest of OpenSSL. So long as you don't modify the source before building the FIPS module, you are fine. You can fix the code that doesn't go in the FIPS canister without violating FIPS, then link your fixed code with the canister. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: client crash or network issue?
Thanks David. Unfortunately options 1) and 3) are not possible for my clients. In other words, you cannot engineer a sensible option and have to fake it. That's fine, but solutions that aren't engineered tend to be poor. Option 2) seems the way to go for me, but so far it proved unreliable. That was the downside of that option. Here are some scenarios I have been playing with: 1) Crash a client running on unix: The SSL_read returns 0. The SSL error code is SSL_ERROR_SYSCALL [An SSL I/O error occurred]. The errno is 0! Seems reasonable. No unread data was pending, so the TCP connection closed normally. You would definitely infer a crash in this case. Network failures don't normally close connections. 2) Crash a client running on windows: The SSL_read returns -1. The SSL error code is SSL_ERROR_SYSCALL [An SSL I/O error occurred]. The errno is ECONNRESET [Connection reset by peer] So there was some pending unread data in this case. You would definitely infer a crash in this case. A network failure won't reset a connection, but a rebooting host might. So you can't be sure the client didn't crash. 3) Leave the client running on unix or on windows and unplug the network: The SSL_read returns -1. The SSL error code is SSL_ERROR_SYSCALL [An SSL I/O error occurred]. The errno is ECONNRESET [Connection reset by peer] Did you unplug the client or server? Was the server running Windows? You need to explain this case in detail. If you unplugged the *server* interface, then that's a very unusual special case that you need to specifically test for by checking the interface. (Due to an unfortunate Windows bug. It reports ECONNRESET when it loses a network interface even though the connection was *not* reset by the peer.) As you can see this does not seem to be reliable to distinguish between what really happened. The first two cases seem perfectly sensible. You didn't explain the third case in enough detail for me to comment on it. 
DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: SSL_ERROR_SYSCALL, errlist: No such file or directory
Calling SSL_accept. Error code: 5 error::lib(0):func(0):reason(0) Error: SSL_ERROR_SYSCALL, errlist: No such file or directory WSAGetLastError, rc=0 These are basically the APIs I call to get the above information: err = SSL_get_error(ssl, rc); printf("Error code: %d", err); ERR_error_string_n(ERR_get_error(), err_buf, sizeof(err_buf)); printf("Error: %s", err_buf); printf("Error: SSL_ERROR_SYSCALL, errlist: %s", sys_errlist[errno]); printf("WSAGetLastError, rc=%d", WSAGetLastError()); Windows client - Windows server (success).. Solaris client - Windows server (above error).. You leave out the most important piece of information -- what was the return value from SSL_accept?! None of your 'printf's include 'rc', which is the most important piece of information there is. If it's zero, as I suspect, then you're barking up completely the wrong tree. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: SSL_ERROR_SYSCALL, errlist: No such file or directory
So I can now see the Solaris side. It appears it gets gibberish, probably encrypted data. Does anyone know why it would appear that the socket is not decrypting the data? This same code works fine on a Windows system. SSL_ca_file: /opt/bf-567/Platform/keystore/CA.pem SSL_cert_file: /opt/bf-567/Platform/keystore/Cert.pem SSL_key_file: /opt/bf-567/Platform/keystore/Key.pem SSL_verify_mode: 0x01 SSL_version: TLSv1 SSL_cipher_list: ALL SSL_use_cert: 1 Making an SSL connection using socket IO::Socket::INET=GLOB(0x29bdfe8). SSL connection to agent. Socket is of type: ref(IO::Socket::SSL=GLOB(0x29bdfe8)) READ: ReadyLine: . Agent Connecting... READ: gibberish on the wire Well, we're kind of back to square one trying to help you, since we're looking someplace else entirely now. You really haven't given us any idea what your application is actually doing or what these log entries mean. If the 'READ' entries are displaying raw socket data as text, then it's logical that they would make no apparent sense. If it's decrypted SSL output, then the fact that you're getting any output at all means that your code thinks the SSL negotiation completed successfully, which is inconsistent with what I think you were seeing on the other side (accept failing, therefore no data could have been exchanged). My best guess is that your 'READ' lines are in fact showing raw socket data, so it's not surprising it looks like gibberish. That you expected it to be decrypted data suggests that there's some disconnect between what your code is doing and what you expect it to be doing. It's hard to tell without more details. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: SSL alert number 10 after quite exactly 1MB transfered
please tell me where the deadlock is. As far as I know a deadlock arises when one process locks a resource another process requests, and vice versa. A deadlock occurs when two or more agents are waiting for each other. Neither can make forward progress until the other does. This is precisely what can happen with your proxy and one or both of its endpoints. Perhaps I misinterpret the SSL_pending. You fundamentally misunderstand how to design a proxy. A proxy must never refuse to make forward progress, yet yours does. This can lead to a deadlock. Consider: A - proxy - B Assume that B is trying to read from A. Assume A is blocked in 'write'. Your proxy *must* read data from A to allow A to make further forward progress. However, suppose your proxy is blocked trying to *WRITE* to A. A is trying to write to you and is blocked, and you are trying to write to A and are blocked. Without the proxy, there would be no problem, because B would not block trying to write to A, and if it couldn't write, it would read, allowing A to unblock itself. So, in summary, a proxy must never refuse to make forward progress, or it can cause a deadlock. However, if your code reads 10 data bytes from A, it will not do anything else until it sends those 10 bytes to B. This is true even if it could send data to A at that moment. Thus your proxy can deadlock. When you design a proxy, it is absolutely vital that you never, ever wait for an endpoint in any case where you could make forward progress. Your code breaks this fundamental rule. As a result, you may wind up waiting for an endpoint that is itself waiting for you (to do something else, which you refuse to do because you're waiting for that endpoint to itself do something else). DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
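The never-refuse-forward-progress rule above can be made concrete. Below is a minimal sketch (illustrative names, not from the thread) of the per-direction decision a select()-style proxy loop should make each iteration: every action that can make forward progress is taken independently, and neither direction is ever conditioned on the other.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch: per-iteration decisions for one direction (A -> B)
 * of a proxy event loop. The rule: if an action can make forward progress,
 * take it; never wait on one endpoint while another action is possible. */
enum { DO_NOTHING = 0, DO_READ_A = 1, DO_WRITE_B = 2 };

int proxy_actions(bool a_readable, bool b_writable,
                  size_t buffered, size_t bufcap)
{
    int actions = DO_NOTHING;
    /* Read from A whenever A has data and there is room to store it. */
    if (a_readable && buffered < bufcap)
        actions |= DO_READ_A;
    /* Write to B whenever B can accept data and data is buffered.
     * Crucially, this is not conditioned on the read side: we never
     * refuse to drain the buffer because a read is (or isn't) possible. */
    if (b_writable && buffered > 0)
        actions |= DO_WRITE_B;
    return actions;
}
```

A real proxy would run the mirror of this for the B -> A direction in the same loop, so that being blocked in one direction can never stall the other.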
RE: SSL alert number 10 after quite exactly 1MB transfered
Let me try one more time to explain the problem with an unrealistic, but I hope easy to follow, example. Consider: A - B Now, imagine A sends a message to B requesting some unit of data. B begins sending a very, very large chunk of data to A, many tens of MB. After 10 MB or so, A realizes this is not what it expected, so it sends an abort/close command to B. A then waits for B to close the connection in response to the abort/close command. B is multi-threaded, always trying to both receive and send concurrently, so it detects the abort command and stops sending. There is no way this can deadlock. Whenever A sends the abort/close, B will get it. B never waits for A to receive anything before it receives itself. Now, interpose your proxy: A - proxy - B Now, A sends the message to B requesting the data. B starts blasting the data to the proxy, which you blast to A. Now, A stops receiving the data and sends the abort/close command. But what happens? Your proxy will not read from A until it finishes sending the data it received from B. But A will not call 'read'; it is waiting for the connection to close. So your proxy will wait forever trying to send to A, when it could make forward progress by receiving from A, sending to B, and then handling the connection close. So your proxy breaks a protocol that was unbreakable without the proxy. This is the easiest breakage to understand. It is not the only way you can introduce breakage. A proxy must not ever say "I will not do X until I can do Y" when being able to do Y requires someone else doing something. Assuming the protocol you are proxying worked in the first place, it was deadlock-free. But if you introduce additional ordering dependencies (without knowing the existing ones) you can introduce deadlocks. In general, most protocols require a "you must not refuse to receive just because you cannot send" rule, because this is the rule TCP itself provides. 
Your proxy violates this rule because it uses blocking sends when it has no idea whether a receive from that same endpoint would succeed. DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
Problem creating CA's -ssl_error_handshake_failure_alert
Hi all! Maybe I'm too much of a novice at this, but I want to create a certificate for each virtual host on my apache server (3 virtual hosts). So I created my own CA, then one certificate for each virtual host, like this: Created the private CA and certificate: openssl genrsa -out SSC_CA.key 1024 openssl req -new -key SSC_CA.key -out SSC_CA.csr (then I entered country, organization name, etc, with no passphrase) openssl x509 -req -days 365 -in SSC_CA.csr -out SSC_CA.crt -signkey SSC_CA.key Created for each server using the private CA: openssl genrsa -out intra01.key 1024 openssl req -new -key intra01.key -out intra01.csr (then I entered country, organization name, etc, with no passphrase) openssl ca -in intra01.csr -cert SSC_CA.crt -keyfile SSC_CA.key -out intra01.crt openssl genrsa -out ssc01.key 1024 openssl req -new -key ssc01.key -out ssc01.csr (then I entered country, organization name, etc, with no passphrase) openssl ca -in ssc01.csr -cert SSC_CA.crt -keyfile SSC_CA.key -out ssc01.crt openssl genrsa -out sec01.key 1024 openssl req -new -key sec01.key -out sec01.csr (then I entered country, organization name, etc, with no passphrase) openssl ca -in sec01.csr -cert SSC_CA.crt -keyfile SSC_CA.key -out sec01.crt Then I configured each virtualhost on ssl.cnf with these lines (I copy only these ones to keep the e-mail short): SSLCertificateFile /usr/local/ssl/SSCCA/intra01.crt SSLCertificateKeyFile /usr/local/ssl/SSCCA/intra01.key SSLCACertificateFile /usr/local/ssl/SSCCA/SSC_CA.crt It all appears to be ok, apache starts with no problem, but when I try to view the webpages Firefox first tells me about the unknown certificate, I add the exception, ok, then after adding the exception I get this error: An error occurred during a connection to ssc01.dei.uc.pt. SSL peer was unable to negotiate an acceptable set of security parameters. (Error code: ssl_error_handshake_failure_alert) Please tell me, what have I done wrong? 
Is there anything that isn't fairly clear that I should understand about SSL and certificates? Thank you in advance! David Carvalho __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: SSL alert number 10 after quite exactly 1MB transfered
Hello list, I wrote an application which acts as a proxy/repeater between two SSL endpoints. For my app I use OpenSSL 0.9.8g. The two endpoints connect to the app and identify themselves using an id (both use the matrixssl implementation for ssl handling). Two matching ids start the repeating. Everything runs fine up to a transfer amount of quite exactly 1 megabyte, then the connection crashes and in the repeat code I get this error message: Is there an error in the code? Yes, the code is prone to deadlock. The code implements "I will not start doing X until I finish doing Y" logic. This is known to cause deadlocks in proxies, as one end or the other of the proxied connection inevitably has an "I will not start doing Y until I finish doing X" logic. You thus wind up with a proxy that could make forward progress in one direction but refuses to because it cannot make forward progress in the other direction. But that's not your problem. Your problem is that you are horribly abusing SSL_pending. SSL data may be neither in the socket buffer nor pending, and you ignore it. (For example, the SSL connection may have, in its buffer, an entire SSL protocol block. No data is pending, since the first byte of the block has not been analyzed yet, and no data is waiting on the socket.) In general terms, a general-purpose proxy can never say "I could do X, but I won't do it *now*". You break this rule in two ways: once with SSL_pending (which checks for one type of forward progress while ignoring another) and once by blocking in one direction even when you could make progress in the other. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Simple non-blocking TCP connect
I was thinking about an alternate solution, using blocking sockets, and doing the connect on another thread. If the user cancels the operation I'd close the socket (BIO_free) and I guess the connect would return with an error and the thread would exit then. Seems a little dirty but it could simplify my life. What do you think? Cheers, Gabriel. I wouldn't recommend that for three reasons. First, you may be on a platform that doesn't support threads or doesn't support threads well. Second, there will always be a race window where the user might close the socket right as you're about to call 'connect'. If that happens, you may wind up 'connect'ing someone else's socket. Third, it has a complexity and hackishness that increases the risk that odd things will happen. Calling 'getpeername' is a pretty common way to determine if a socket is connected. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Simple non-blocking TCP connect
I just realized that I misunderstood you. Yes, that's a perfectly sensible workaround to use in your own code, so long as you can deal with the race condition issue. Here's a patch to crypto/bio/bss_conn.c that you can test:

--- old/bss_conn.c 2008-10-27 17:55:22.0 -0700
+++ new/bss_conn.c 2008-10-27 17:57:18.0 -0700
@@ -291,9 +291,20 @@ static int conn_state(BIO *b, BIO_CONNEC
 ret=0;
 goto exit_loop;
 }
 else
+ {
+ struct sockaddr sad;
+ socklen_t l=sizeof(sad);
+ if (getpeername(b->num, &sad, &l)==0)
 c->state=BIO_CONN_S_OK;
+ else
+ {
+ BIO_set_retry_special(b);
+ b->retry_reason=BIO_RR_CONNECT;
+ goto exit_loop;
+ }
+ }
 break;
 case BIO_CONN_S_OK:
 ret=1;

DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Simple non-blocking TCP connect
Gabriel Soto wrote:

{
 // Create BIO with some random nonexistent host.
 BIO *bio = BIO_new_connect("192.168.9.9:");
 if (bio == NULL) {
 // Failed to obtain BIO.
 return false;
 }
 // Set as non-blocking.
 BIO_set_nbio(bio, 1);
 // Attempt to connect.
 printf("BIO_do_connect: %ld\n", BIO_do_connect(bio));
 printf("BIO_should_retry: %d\n", BIO_should_retry(bio));
 // Try again. Not much sense in this, but let's see what happens.
 printf("BIO_do_connect: %ld\n", BIO_do_connect(bio));
 printf("BIO_should_retry: %d\n", BIO_should_retry(bio));
}

Output:

BIO_do_connect: -1
BIO_should_retry: 8
BIO_do_connect: 1
BIO_should_retry: 8

Does this make sense? Why does BIO_do_connect() return 1 the second time? Can anybody just confirm that this is strange behavior? Maybe I'm getting things wrong. Any example code of a client using non-blocking sockets will be greatly appreciated too. There's a bug in bss_conn.c. It assumes that with a pending non-blocking connection attempt, the absence of an error (SO_ERROR is 0) indicates that the connection completed successfully. The absence of an error *right* *now* does not mean that the connection has completed or will complete successfully:

case BIO_CONN_S_BLOCKED_CONNECT:
 i=BIO_sock_error(b->num);
 if (i)
 {
 BIO_clear_retry_flags(b);
 SYSerr(SYS_F_CONNECT,i);
 ERR_add_error_data(4,"host=",c->param_hostname,":",c->param_port);
 BIOerr(BIO_F_CONN_STATE,BIO_R_NBIO_CONNECT_ERROR);
 ret=0;
 goto exit_loop;
 }
 else
 c->state=BIO_CONN_S_OK;
 break;

Notice how this assumes that if BIO_sock_error returns zero, the connection completed? This is a bogus inference. The absence of an error just means the connection attempt has not failed *yet* and tells you nothing about how it will ultimately turn out. Until OpenSSL is fixed, you must avoid retrying the connect until the socket becomes writable or errored. Calling 'connect' in the window between when a connection is attempted and when it succeeds or fails is unsafe, due to this bug. I'm not sure what the optimal fix is. 
One possible fix is to change the 'else' to call 'getpeername'. If the return is an 'ENOTCONN' error, then indicate that a retry is needed. Only set the state to 'BIO_CONN_S_OK' if 'getpeername' returns zero. DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
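The suggested getpeername check can be sketched outside of OpenSSL as well. Below is a minimal POSIX illustration (the helper name `connect_finished` is hypothetical) of distinguishing "connected" from "still in progress" after SO_ERROR reports no error:

```c
#include <errno.h>
#include <sys/socket.h>

/* Illustrative sketch of the suggested fix: when a non-blocking connect
 * reports no pending error, use getpeername() to tell "connected" apart
 * from "still in progress". Returns 1 if connected, 0 if the connect is
 * still pending (ENOTCONN), -1 on a genuine failure. */
int connect_finished(int fd)
{
    struct sockaddr_storage sa;
    socklen_t len = sizeof(sa);
    if (getpeername(fd, (struct sockaddr *)&sa, &len) == 0)
        return 1;            /* a peer address exists: connected */
    if (errno == ENOTCONN)
        return 0;            /* no error yet, but not connected either */
    return -1;               /* real failure */
}
```

This is exactly the ambiguity the BIO_CONN_S_BLOCKED_CONNECT code misses: SO_ERROR being zero and getpeername succeeding are two different statements.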
RE: why build shared openssl
Never ship a Shared OpenSSL library. Anyone can rebuild it to output the socket buffer to disk prior to encryption and replace yours. :-) A party to an encrypted conversation can put its contents in a full-page ad in the New York Times if they want to. There's no need to keep a conversation secret from its own parties. The two ends of the OpenSSL encryption engine are controlled by the same party. DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: how do I determine blocking or nonblocking?
David Schwartz wrote: Which is pretty much the same as every other operation. If you call 'send' or 'write' on a blocking TCP socket, and you get a zero return, does that mean the data has been sent? No. It means the data is queued and the send is in progress. If you call 'shutdown' on a blocking socket and get a zero return, does that mean the connection has finished shutting down? No. It means the shutdown is in progress. I take issue with DS on this single aspect. Please name the implementation of send() or write() that uses a zero return code to mean the data is queued and the send is in progress. I know of no such implementation. Sorry, I meant non-error. Not zero. That should read, If you call 'send' or 'write' on a blocking TCP socket, and you get a non-error return, ... A non-zero positive return is always used to indicate that situation in all implementations of send() and write() I have come across. Correct, zero is failure for send/write. The interpretation of what zero means in respect of SSL_shutdown() is a matter for the OpenSSL documentation to clarify. I myself can not see the parallel that DS can see in respect of the send/write APIs - so please ignore this confusion DS introduces. My apologies for the mistake. I hope I didn't confuse anybody. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: ssl_ctx_new 140A90F1:lib(20):func(169):reason(241)
Most of the time, but not always, I get 140A90F1:lib(20):func(169):reason(241) from the error stack when I try to call SSL_CTX_new. I am using 0.9.8i in a win32 environment. Any information on what the error message means would be much appreciated. The OpenSSL executable has the 'errstr' command for this purpose. The error indicates that the SSL v2 code could not load the MD5 routines. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
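As a concrete illustration, the 'errstr' lookup mentioned above is a one-liner on the command line (the exact message text varies between OpenSSL versions):

```shell
# Decode a packed OpenSSL error code into its library/function/reason text.
openssl errstr 140A90F1
```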
RE: how do I determine blocking or nonblocking?
Documentation tells me that the SSL pointer should inherit the blocking property from the socket passed to SSL_set_fd. Right. However, when I call SSL_shutdown with the SSL handle, the return code I get is not an error or a shutdown completed but a shutdown in progress (return code= 0). Which is pretty much the same as every other operation. If you call 'send' or 'write' on a blocking TCP socket, and you get a zero return, does that mean the data has been sent? No. It means the data is queued and the send is in progress. If you call 'shutdown' on a blocking socket and get a zero return, does that mean the connection has finished shutting down? No. It means the shutdown is in progress. Documentation tells me that the shutdown will not return until the shutdown is completed (return code= 1) or an error condition is detected (return code= -1) if the SSL handle is blocking. I'm not sure what documentation that is, but it's incorrect. So now I am confused. How can I test the SSL handle to find out if it is blocking or not? The operation will only block if it has to. Operations won't gratuitously block. Specifically, operations try as hard as they can *not* to block until the other side does things unless that is necessary to take the data passed or give the data that needs to be returned. regards, Solveig DS __ OpenSSL Project http://www.openssl.org User Support Mailing Listopenssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: creating public RSA key and verifying signature
BTW, when I try to get the error code by printf("Error code: %d", ERR_get_error()); I get Error code: 67567722 Your code says: result = RSA_public_decrypt(pValidationData.ulValidationDataLength, pValidationData.rgbValidationData, outputPlaintext, publicKey, RSA_PKCS1_PADDING); Are you 100% sure the data had PKCS#1 v1.5 padding? error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Getting the peer certificate encoding
Aravinda Babu wrote: The problem is our application will verify only DER format certificates. So if I get the peer certificate in PEM format, I will convert that into DER and then I will verify the peer certificate. Is there any openSSL API which will tell me about the peer certificate encoding? I want to know whether it is in PEM or DER. Is the certificate in a memory buffer or a file? Either way, you can just look at the data. If it's PEM, the whole file will be printable text. The first few characters will perhaps be some number of newlines or spaces, but the first non-whitespace character should be a '-'. If it's DER, there will be many non-printable characters. However, it's probably just easiest to try it both ways. If either of them works, you have a valid certificate. Just remember to clear the error stack after an expected and normal error. Otherwise, it might confuse you later when you see an "invalid certificate type" error because much earlier it worked on the second attempt. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
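The "just look at the data" heuristic above can be sketched in a few lines of C. This is an illustrative helper, not an OpenSSL API; it relies on PEM starting with the printable "-----BEGIN ..." armor and DER-encoded certificates starting with 0x30, the ASN.1 SEQUENCE tag:

```c
#include <ctype.h>
#include <stddef.h>

/* Illustrative classification of a certificate buffer as PEM or DER.
 * PEM: printable text whose first non-whitespace byte is '-'.
 * DER: binary starting with 0x30 (the ASN.1 SEQUENCE tag). */
enum cert_form { CERT_UNKNOWN, CERT_PEM, CERT_DER };

enum cert_form sniff_cert(const unsigned char *buf, size_t len)
{
    size_t i = 0;
    if (len == 0)
        return CERT_UNKNOWN;
    if (buf[0] == 0x30)           /* ASN.1 SEQUENCE: almost surely DER */
        return CERT_DER;
    while (i < len && isspace(buf[i]))
        i++;                      /* PEM may be preceded by whitespace */
    if (i < len && buf[i] == '-') /* "-----BEGIN CERTIFICATE-----" */
        return CERT_PEM;
    return CERT_UNKNOWN;
}
```

As DS notes, simply attempting both parsers is equally valid; this sniff is only useful when you want to avoid the expected-failure path and its error-stack cleanup.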
RE: Issue getting enough entropy on Windows NT 4.0 system
Hello, The Windows NT 4.0 system has the workstation service stopped. This causes the following snippet from rand_win.c to return 0:

if (netstatget(NULL, L"LanmanWorkstation", 0, 0, &outbuf) == 0)
{
 RAND_add(outbuf, sizeof(STAT_WORKSTATION_0), 45);
 netfree(outbuf);
}

Add to this that a large section of calls are #if 0'd out due to a problem reported by Wolfgang Marczy, and there aren't many places this function gets entropy from. Any suggestions? Why not grab some entropy from the system entropy provider?

#include <wincrypt.h>

bool GetSysEntropy(void *ptr, int len)
{
 char namebuf[512];
 HCRYPTPROV handle;
 DWORD count = 500;
 if (!CryptGetDefaultProvider(PROV_RSA_FULL, NULL, CRYPT_MACHINE_DEFAULT, namebuf, &count))
 return false;
 if (!CryptAcquireContext(&handle, NULL, namebuf, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT|CRYPT_SILENT))
 return false;
 if (!CryptGenRandom(handle, len, (BYTE *) ptr))
 {
 CryptReleaseContext(handle, 0);
 return false;
 }
 CryptReleaseContext(handle, 0);
 return true;
}

DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: non-blocking version of SSL_peek
Actually, before closing a TLS connection I need to make sure that no pending data is present on that socket. So, calling SSL_peek would tell if this is the case or not. No, it won't. Okay, you call SSL_peek, and there's no pending data. Now, you're about to call SSL_shutdown. How do you know there's no pending data *NOW*? Just because there wasn't before doesn't mean there isn't now. The only way to know that there's no pending data when you call SSL_shutdown is for the protocol you are implementing to ensure that. Otherwise, you will always have a race. As you are saying that SSL_peek should be called before SSL_shutdown, No, there's no point. If you didn't know there was no data before SSL_peek, you still won't know there's no data *now* afterward. then how is it ensured that the connection gets closed only if all the data arrived on that socket is processed? Does SSL_shutdown take care of this? Or what is the significance of calling SSL_peek after SSL_shutdown? No, the higher-level protocol takes care of this. When a request is completed, the other end will have nothing more to send. When you finish replying, what else would the other end send? If there's a "keep the connection open just in case, and close it after a timeout" design, the protocol handles a close with pending data smoothly (since there's always a race in a timeout). I am using SIP over TLS and it does not specify any such thing related to tls. If nobody else knows offhand, I'll do some research into SIP and see how it handles that case. No sane protocol requires you to race to shutdown and hope and pray the other end doesn't send some data at the wrong time. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RE: Issue getting enough entropy on Windows NT 4.0 system
Thanks for the suggestion, but the RAND_poll function already pulls from the system right after the big #if 0 block, as described below in the setup for the calls:

if (advapi)
{
 /*
 * If it's available, then it's available in both ANSI
 * and UNICODE flavors even in Win9x, documentation says.
 * We favor Unicode...
 */
 acquire = (CRYPTACQUIRECONTEXTW) GetProcAddress(advapi, "CryptAcquireContextW");
 gen = (CRYPTGENRANDOM) GetProcAddress(advapi, "CryptGenRandom");
 release = (CRYPTRELEASECONTEXT) GetProcAddress(advapi, "CryptReleaseContext");
}

So, still looking for other suggestions. Umm, so what's the problem exactly? Did this fail to get entropy from the system? DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: non-blocking version of SSL_peek
Hi, Can anyone tell me if SSL_peek is a blocking or non-blocking call? It can be either. When I use it inside my code, the program blocks on this function call when there is no data on the socket. If you're using blocking socket calls, that's what will happen. The reason I want to use this call is that before closing the SSL connection by using 'SSL_shutdown', I want to make sure that there is no pending data present on this connection. SSL_peek won't help. You need to call SSL_shutdown first and then check for any pending data. No matter when and how you call SSL_peek, there will still be a window after you call SSL_peek and before you call SSL_shutdown. If your protocol requires you to do this, the protocol is broken and really should be fixed. If it doesn't, why do this? Is there any method to make the SSL_peek call non-blocking, i.e. it should return if there is no data present on the SSL connection, like what happens with a tcp peek using the options MSG_PEEK|MSG_DONTWAIT? Or can SSL_pending be used for this purpose? Please suggest... I am using openSSL version 0.9.7b. What is your outer problem? Why do you think you need to do this? What protocol are you implementing over SSL? DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Trouble with bidirectional shutdown
Thank you for your response. I have checked the error code using SSL_get_error. I get an SSL_ERROR_SYSCALL (5) return code, indicating an I/O error, but the error queue is empty. My application continues to function. It is fetching an HTML document over an HTTPS connection. This is a documented bug in OpenSSL; however, I don't know what the bugfix or workaround is. Here's the documentation for when SSL_shutdown returns 0: The shutdown is not yet finished. Call SSL_shutdown() for a second time, if a bidirectional shutdown shall be performed. The output of SSL_get_error(3) may be misleading, as an erroneous SSL_ERROR_SYSCALL may be flagged even though no error occurred. The problem is, how do you know when to call SSL_shutdown again? If it's immediate, will the problem simply repeat, giving you an SSL_ERROR_SYSCALL again? I think one possible imperfect workaround is to call SSL_shutdown again if you get SSL_ERROR_SYSCALL. If you get SSL_ERROR_SYSCALL again, treat it as a successful shutdown. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: RSA Private Key Algorithm
Where can I find a detailed description of how to compute the RSA private key? Well-structured C or C++ code might do. Thanks, Mike. http://en.wikipedia.org/wiki/RSA In the section "Operation", the first set of 5 steps beginning with "Choose two distinct large random prime numbers p and q" documents the process of computing an RSA private key. If you want example code, the OpenSSL distribution includes that in apps/genrsa.c. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
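Those five steps can be walked through numerically. Below is a toy illustration in C using the textbook parameters p=61, q=53 (hopelessly small for real use; a real implementation uses huge primes, padding, and a library such as OpenSSL):

```c
/* Toy RSA key computation following the textbook steps. Illustration
 * only: real keys need large primes, padding, and constant-time code. */

/* Modular exponentiation, small-number version: base^exp mod m. */
static unsigned long powmod(unsigned long base, unsigned long exp,
                            unsigned long m)
{
    unsigned long r = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            r = (r * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return r;
}

/* Step 5: find d with (d * e) mod phi == 1. A naive search is fine for
 * toy numbers; real code uses the extended Euclidean algorithm. */
static unsigned long modinv(unsigned long e, unsigned long phi)
{
    unsigned long d;
    for (d = 1; d < phi; d++)
        if ((d * e) % phi == 1)
            return d;
    return 0;
}

/* With p=61, q=53: n = p*q = 3233, phi = (p-1)*(q-1) = 3120, e = 17
 * (coprime to phi), and d = modinv(17, 3120) = 2753. The private key is
 * (n, d); encryption is c = m^e mod n, decryption is m = c^d mod n. */
```

For example, with these parameters the message m=65 encrypts to c = 65^17 mod 3233 = 2790, and 2790^2753 mod 3233 recovers 65.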
RE: Reading private key from Memory Buffer.
09dirkd+sRoXWShF8ctVVb4B1PAFTOBEa8diickehnAyEq6KhzLWpQqhqCnylETw\r\n Drys2uVaAzmRhS6tGJ2fdwPnlSLJrQbHuP938BkyxNhdYN8drfqb\r\n"; You appear to have an extra ; here ---^ But that should give you a compilation error. -END RSA PRIVATE KEY-\r\n"; That won't give you a compilation error. But it will cause things not to work. Consider:

#include <stdio.h>

int main(void)
{
 char *string = "This is one line.\r\n"; "This is another.\r\n";
 printf("%s", string);
}

Will produce:

This is one line.

The ';' after the end of the first line ends the statement. The second string becomes its own statement, which has no effect. If you compile with GCC and all warnings enabled, you should get a "statement with no effect" warning. But it is legal and should not generate an error. -Wall is your friend. DS __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager [EMAIL PROTECTED]
RE: Leaks X509
Stanislav Mikhailenko: Hello, I use openssl 0.9.8i in my project under Win32. There are some leaks detected when I do just this: X509* x = X509_new(); X509_free(x); It was in previous versions too. What should I do to remove this?

Did you confirm that the memory was leaked and not actually still in use? To test this, repeat the code block to allocate and free two X509 objects and see if twice as much memory is leaked. If you see the same amount of memory leaked, that proves that something the code allocated the first time was reused the second time. This shows that the memory was not actually leaked, but was in fact still in use -- and in fact was used by the second operation. DS
RE: Trouble with bidirectional shutdown
Solveig Viste wrote: I have an application which is occasionally hanging. I have tracked it down to an SSL_shutdown call. The value (0) returned from the shutdown call indicates that the shutdown is not finished.

As happens with non-blocking sockets, sometimes the operation does not complete and you have to retry the operation later.

The shutdown man page indicates that a second call to SSL_shutdown should cause a bidirectional shutdown,

A subsequent retry of the operation will complete if and only if whatever the first shutdown was waiting for has happened.

and I thought this is indeed what the application calls for. However, when I make the second call to SSL_shutdown, the value returned is still 0 (shutdown not finished) rather than 1 (shutdown complete) or -1 (shutdown not successful).

Did you check the error code? Was it WANT_READ or WANT_WRITE? Did you wait for the appropriate operation to be ready?

Is this recently added behavior? Does the SSL handle need to have certain properties in order to get a bidirectional shutdown?

You need to handle an organized shutdown the way you handle any other operation on a non-blocking connection that might take time to complete. DS
RE: XP/Vista/Office 2007/IE Compatibility Question
Isabel [EMAIL PROTECTED] wrote:

1) Is the software compatible with XP? If not, what is the compatible version and what are the costs involved in upgrading?

OpenSSL is compatible with XP. OpenSSL is a library, and you are probably using it through other programs. You need to investigate their compatibility.

2) Is the software compatible with Vista? If not, what is the compatible version and what are the costs involved in upgrading?

Yes. Same answer as '1'.

3) If the software is not independent of Office 2007, is it compatible with it? If not, what is the compatible version and what are the costs involved in upgrading?

Independent.

4) If the software is not independent of IE, what versions of it are compatible?

Independent. DS
Re: DTLS clue requested: epoch numbers
On Fri, 2008-09-26 at 13:46 -0700, David Woodhouse wrote: At the worst, I should be able to reverse-engineer the library I have.

The first failure seems to have been a discrepancy in epoch numbers. Comparing behaviour of their library and 0.9.8e, I find that theirs is adding '00 01 00 00 00 00 00 00' to a digest at some point, while 0.9.8e adds '00 00 00 00 00 00 00 00'. This is called from tls1_mac(), when it's adding the 8 bytes of ssl->s3->read_sequence to the MAC. The 0.9.8e library then rejects the Server Hello because of the MAC failure, which was the original failure mode I was observing.

If I hack EVP_DigestUpdate() to fix that single byte for that one call, then the MAC check in dtls1_process_record() succeeds, although fairly unsurprisingly I get a later failure -- in ssl3_get_finished() when s->s3->tmp.peer_finish_md doesn't contain what it should:

12778:error:1408C095:SSL routines:SSL3_GET_FINISHED:digest check failed:s3_both.c:235:

I'm still entirely clueless about the protocol, but it seems the top 16 bits of ssl->s3->read_sequence are supposed to be an epoch number. But it's getting set to all zeroes in dtls1_reset_seq_numbers() even when the epoch is non-zero. Having narrowed it down that far, does anyone remember a change which might have caused this?

I tried removing my hack from EVP_DigestUpdate and instead hacking dtls1_reset_seq_numbers() to call s2n(epoch, seq) to put the epoch in place after the memset. That makes no difference -- I still get the same later failure. Which I'll now investigate, but it's probably going to turn out to be due to the wrongness of my 'fix' for the epoch thing. As I said, I'm fairly clueless.

I've converted the OpenSSL CVS history into git so that I can try to look through it, but I don't see anything which jumps out as being relevant. There's a commit entitled 'Liberate dtls from BN dependency. Fix bug in replay/update.'
which helpfully hides an unspecified bug fix in amongst 300-odd lines of more cosmetic changes in one commit, but that doesn't seem to be it. As before, my test case is http://david.woodhou.se/dtls-test.c -- and needs to be run against a version of OpenSSL which still uses 0x100 for DTLS1_VERSION. -- dwmw2
Re: DTLS clue requested: epoch numbers
On Sun, 2008-09-28 at 18:56 +0100, David Woodhouse wrote: On Fri, 2008-09-26 at 13:46 -0700, David Woodhouse wrote: At the worst, I should be able to reverse-engineer the library I have. The first failure seems to have been a discrepancy in epoch numbers.

And the others are due to patches which were committed to OpenSSL later -- in particular, 'RFC4347 says HelloVerifyRequest resets Finished MAC', which moves a call to ssl3_init_finished_mac(), and 'Make DTLS1 record layer MAC calculation RFC compliant', which sets the appropriate version numbers in the packet in tls1_mac().

The full patch against 0.9.8e which makes this work (or at least successfully negotiate and pass _some_ traffic -- I don't vouch for later epoch changes) is below. Next step would be to make it work in something newer. And preferably with the RFC-defined version of the protocol instead of the old one. Their client does seem to respond with the 'real' DTLS1 version if we try that instead of using DTLS1_BAD_VER. And it has a CCS header length of only 1 byte in its responses, so it really is doing something different and not just parroting the version number.

But just taking 0.9.8f and setting the epoch in dtls1_reset_seq_numbers() as in the patch below isn't sufficient -- I get the same record mac failure that I started with. This time it's going to be a little harder to guess what variant of the new protocol they're using, because I don't have any implementation of that -- and I'm not even sure it's _working_ on the server side. So I suspect my best course of action now would be to somehow make it possible to use the older version of DTLS in a current OpenSSL, for compatibility? It's likely to be the only thing that's _tested_ against Cisco servers anyway.
Index: ssl/d1_clnt.c
===================================================================
RCS file: /home/dwmw2/openssl-cvs/openssl/ssl/d1_clnt.c,v
retrieving revision 1.3.2.6
diff -u -p -r1.3.2.6 d1_clnt.c
--- ssl/d1_clnt.c	5 Dec 2005 17:32:19 -0000	1.3.2.6
+++ ssl/d1_clnt.c	28 Sep 2008 23:49:54 -0000
@@ -214,8 +214,6 @@ int dtls1_connect(SSL *s)
 			/* don't push the buffering BIO quite yet */
 
-			ssl3_init_finished_mac(s);
-
 			s->state=SSL3_ST_CW_CLNT_HELLO_A;
 			s->ctx->stats.sess_connect++;
 			s->init_num=0;
@@ -225,6 +223,10 @@ int dtls1_connect(SSL *s)
 		case SSL3_ST_CW_CLNT_HELLO_B:
 			s->shutdown=0;
+
+			/* HelloVerifyRequest resets Finished MAC */
+			ssl3_init_finished_mac(s);
+
 			ret=dtls1_client_hello(s);
 			if (ret <= 0) goto end;
Index: ssl/d1_pkt.c
===================================================================
RCS file: /home/dwmw2/openssl-cvs/openssl/ssl/d1_pkt.c,v
retrieving revision 1.4.2.5
diff -u -p -r1.4.2.5 d1_pkt.c
--- ssl/d1_pkt.c	29 Nov 2006 14:45:13 -0000	1.4.2.5
+++ ssl/d1_pkt.c	28 Sep 2008 23:53:18 -0000
@@ -1718,12 +1718,12 @@ dtls1_reset_seq_numbers(SSL *s, int rw)
 	{
 	unsigned char *seq;
 	unsigned int seq_bytes = sizeof(s->s3->read_sequence);
+	int epoch;
 
 	if ( rw & SSL3_CC_READ)
 		{
 		seq = s->s3->read_sequence;
-		s->d1->r_epoch++;
-
+		epoch = ++s->d1->r_epoch;
 		pq_64bit_assign(&(s->d1->bitmap.map), &(s->d1->next_bitmap.map));
 		s->d1->bitmap.length = s->d1->next_bitmap.length;
 		pq_64bit_assign(&(s->d1->bitmap.max_seq_num),
@@ -1738,10 +1738,11 @@ dtls1_reset_seq_numbers(SSL *s, int rw)
 	else
 		{
 		seq = s->s3->write_sequence;
-		s->d1->w_epoch++;
+		epoch = ++s->d1->w_epoch;
 		}
 
 	memset(seq, 0x00, seq_bytes);
+	s2n(epoch,seq);
 	}
Index: ssl/t1_enc.c
===================================================================
RCS file: /home/dwmw2/openssl-cvs/openssl/ssl/t1_enc.c,v
retrieving revision 1.35.2.3
diff -u -p -r1.35.2.3 t1_enc.c
--- ssl/t1_enc.c	16 Feb 2007 20:40:07 -0000	1.35.2.3
+++ ssl/t1_enc.c	28 Sep 2008 23:43:45 -0000
@@ -738,8 +738,8 @@ int tls1_mac(SSL *ssl, unsigned char *md
 	md_size=EVP_MD_size(hash);
 
 	buf[0]=rec->type;
-	buf[1]=TLS1_VERSION_MAJOR;
-	buf[2]=TLS1_VERSION_MINOR;
+	buf[1]=(unsigned char)(ssl->version>>8);
+	buf[2]=(unsigned char)(ssl->version&0xff);
 	buf[3]=rec->length>>8;
 	buf[4]=rec->length&0xff;
-- dwmw2
RE: Help regarding socket calls in SSL needed
Hi SSL experts, I am using s_client.c and s_server.c for my SSL client and server. I need to find the socket calls such as send and recv; i.e., SSL_write(), SSL_read(), BIO_read(), BIO_write() etc. will finally have to make a call to socket calls such as send and recv, I guess. I need to access these socket calls, but I don't know how to find them. Can anyone please help me? Thanks, Prashanth

Is your question, "How can I use OpenSSL but not have it do the socket operations?" If so, look into BIO pairs. The 'ssltest.c' program in the 'ssl' directory is an example of how to do this. It looks like this:

Other SSL connection <-> IO bio <-> SSL state <-> SSL bio <-> Application

And the terms mean:

Other SSL connection: The other end of the SSL connection.
IO bio: Whatever you have to do to act like a socket connected to the other SSL connection.
SSL state: The SSL connection on your end.
SSL bio: The buffer between the SSL engine and the application.
Application: The part that sends and receives plaintext that needs to be exchanged with the other end.

Just remember, if you use BIO pairs, there are four things you have to do:

1) If you receive anything from the other SSL endpoint, you need to give it to the IO bio.
2) If any data appears on the IO bio, you have to send it to the other SSL endpoint.
3) If your application wants to send anything to the other SSL endpoint, you need to give it to the SSL bio.
4) If any data appears on the SSL bio, you need to process it as received data.

Do not assume that these operations will associate. That is, do not assume that just because you received data from the other SSL endpoint and gave it to the IO bio, data will appear on the SSL bio. In this case, OpenSSL manages the SSL connection as a state machine hooked up to two BIOs. You are responsible for sending to and receiving from both BIOs as the SSL engine operates between them.
DS
Re: DTLS clue requested.
On Tue, 2008-09-23 at 23:12 -0700, nagendra modadugu wrote: Hi David, unfortunately I've been out of touch with the developments to DTLS for some time. I forwarded your message to Eric Rescorla, who worked with Cisco to get their implementation working.

Thanks.

I suspect that Cisco has proprietary patches that they haven't disclosed (or don't know how to).

Hm, I was hoping that it wasn't any deliberate proprietary patches, but rather just an incompatibility because they were using a pre-RFC snapshot of the protocol. Why use a standard protocol but then hack it up with extra proprietary nonsense? At the worst, I should be able to reverse-engineer the library I have. Most functions will be identical, and once I have it down to a list of known-different functions I can take a closer look at each one. Armed with the unmodified source code and a disassembler it shouldn't be particularly hard to work out the differences. But that's a pain, especially as I'm mostly clueless about the protocol, so I wouldn't be able to make many educated guesses -- it'd all be brainless grunt-work.

So far, I've noticed that their library is calling tls1_change_cipher_state() a second time during my test case (both times after receiving the Server Hello), while the real OpenSSL only does so once.

[EMAIL PROTECTED] anyconnect]$ LD_LIBRARY_PATH=. ./dtls-test Found AES128-SHA cipher at 28 SSL_SESSION is 200 bytes EVP_CipherInit_ex 0x980d5f0 0x27e7a0 (nid 1a3) (nil) 0x980d5b8 0x980d5d8 0 Key:: 0b 33 d2 ef 9a 99 d6 d5 01 0f c5 83 6c 2f 8b 49 IV:: d0 8f 1f 6b 5f 20 28 9a 99 e8 2c 88 c8 41 78 bf EVP_CipherInit_ex 0x980d778 0x27e7a0 (nid 1a3) (nil) 0x980d5a8 0x980d5c8 2 Key:: cf f5 ef f9 fe f9 09 af 7b b9 8b df 11 1e 23 14 IV:: 9e 73 c8 be 5a 93 fc ad b5 37 c1 11 eb d0 fa 65 Success Child done.
[EMAIL PROTECTED] anyconnect]$ LD_LIBRARY_PATH=/home/dwmw2/working/openssl-0.9.8e ./dtls-test Found AES128-SHA cipher at 29 SSL_SESSION is 200 bytes EVP_CipherInit_ex 0x8df2640 0x2957a0 (nid 1a3) (nil) 0x8df2608 0x8df2628 0 Key:: 0b 33 d2 ef 9a 99 d6 d5 01 0f c5 83 6c 2f 8b 49 IV:: d0 8f 1f 6b 5f 20 28 9a 99 e8 2c 88 c8 41 78 bf Child done. DTLS connection returned 0 13867:error:14101119:SSL routines:DTLS1_PROCESS_RECORD:decryption failed or bad record mac:d1_pkt.c:466: -- dwmw2
RE: FIPS-capable curl: Solaris 9 - fingerprint does not match
I am rather confused why people need to drop out of FIPS mode. The Federal Information Processing Standard dictates that FIPS-validated cryptography be used for everything that requires cryptographic transformation for storage (or really anything that enters or leaves the cryptographic security boundary). -Kyle H

In many cases, FIPS actually results in (what you might reasonably consider, at least) reduced security. It's not unusual to have three settings:

A) Non-FIPS, where even algorithms too weak to qualify for FIPS use are allowed, so long as they are still believed to be secure when used the way they are being used. For example, SSLv3 would usually be allowed in this mode. This might be less secure (and probably is) but might be secure enough for what you're doing.

B) FIPS, where all FIPS rules are followed. There may be reduced functionality in this mode; you may not be able to interoperate in all the ways you might want to. Performance might be lower.

C) Quasi-FIPS. All FIPS rules are followed, except where it is genuinely believed that these rules reduce security or are unreasonably impractical. For example, obvious bug fixes might be allowed, even if the code hadn't been re-FIPS-checked. In the case of OpenSSL, you might allow changes to optimization or code-generation flags. An obviously correct optimized SHA1 implementation might be used, even if it hasn't been approved yet. (Or if it wasn't selected for the platform due to a detection bug.)

The idea would be that you use mode A if you don't care about FIPS, mode B if you must comply with the letter of FIPS, and mode C if you care about FIPS but not to the point where you will let it hurt you. There are good reasons you might need mode B while you have a connection to a source that absolutely requires it, and then want to drop back into A or C mode. Note that I am not saying mode C is always better than mode B and the only reason to pick B is a hard 'legal' requirement.
Mistakes can be made in optimization or obvious bug fixes, and code-building errors can be induced by compiler flag changes. One of the benefits of the FIPS process is the value of expert judgments about security made by actual experts. DS
RE: Using a memory BIO to decrypt a SSL Stream
I am trying to use a memory BIO to decrypt data from a TCP stream I am processing. I have followed the following steps, and for some reason I am still not able to get the SSL_read function to return anything but -1. I have looked at the archives and it appears that this method has worked for others, so I am guessing I am missing something simple (hoping, more like it :)

SSL_library_init()
SSL_load_error_strings()
meth = SSLv23_method()
ctx = SSL_CTX_new(meth)
ssl = SSL_new(ctx)
SSL_CTX_use_PrivateKey_file() - Returns 1, which from what I can tell is success (PEM)
SSL_CTX_use_certificate_file() - Returns 1, which from what I can tell is success (PEM)
memBIO = BIO_new(BIO_s_mem())
BIO_write(memBIO, data, datasize)
SSL_set_bio(ssl, memBIO, memBIO)
SSL_read()

I guess I don't understand what you're trying to do. Are you trying to make an SSL session, or are you trying to decrypt some static data? It looks to me like you may have a fundamental misunderstanding of what SSL does. Is your thinking something like this: "I create an SSL session. Then I'll hand it some plaintext, it will encrypt it, and I'll send that to the server. When I get some encrypted data back, I'll give it to the SSL session, it will decrypt it, and give me that."

If so, no. SSL is not a stream cipher or a block encryption/decryption engine. You need to think like this: "I create an SSL session. Sometimes it will give me data to send to the server, and I'll hand that data to the server. If I get any data from the socket, I'll give that to the SSL session. If I have any data I want to encrypt and send, I'll give it to the SSL session. If it has any plaintext for me, I'll process it."

Because you might receive a partial record, from which SSL_read can't return anything. And SSL_write might need to read some data from the SSL connection in order to complete negotiation. Or a million things might happen. Also, SSL is an active process. You cannot reconstruct a stored SSL session the same way you run one end of a connection.
(It's not clear whether that's what you're trying to do. Where did your 'data' and 'datasize' come from?) DS
RE: Using a memory BIO to decrypt a SSL Stream
Dave, It appears that my take on this was really off; thank you for your explanation. What I am trying to do is to create a utility like ssltap that will allow me to pull decrypted data out of a connection between a browser and Apache. So it appears I need to build some kind of proxy that will sit between the two endpoints and take an encrypted stream in (let the session decode it), then (let the session encode it) write that back out to the original recipient? Am I getting warmer on this?

That may or may not be possible. Here's the problem: When an SSL session is established, a shared secret is negotiated. Neither side has full control over this shared secret. Alice does not choose it. Bob does not choose it. But SSL is such that they wind up with the same one. So if, instead of Alice talking to Bob, Alice talks to you and you talk to Bob, you have two choices:

1) Transparent: You can leave the shared secret establishment alone. In this case, you won't know the shared secret (but Alice and Bob will). How will you decrypt the session data?

2) Active: You can participate in the shared secret establishment. In this case, Alice and Bob will wind up with different shared secrets, and you will know both of them. But what if Alice signs her shared secret and sends it to Bob? Bob is expecting to receive his shared secret signed by Alice (since Bob expects Alice's and Bob's shared secrets to be the same, but you made them not be). How will you replace that with Bob's shared secret signed by Alice? How will you present the client with a certificate it trusts?

In short, for most protocols (e.g., HTTPS), you will need the server's key or a wildcard certificate that the client trusts. For some protocols (those that do MITM rejection beyond just checking the server certificate), even that will not be enough. What is your outer problem? Why do you need to do this? If you have a legitimate need, there's probably a way to do it.
If it's "I want to steal people's credit card numbers when they send them to Amazon", then there is no way, by design. DS