Re: OpenSSL 1.1.1 Windows dependencies

2022-10-22 Thread David Harris
On 21 Oct 2022 at 13:50, Michael Wojcik via openssl-users wrote:

> > That was my initial thought too, except that if it were
> > firewall-related, the initial port 587 connection would be blocked,
> > and it isn't - the failure doesn't happen until after STARTTLS has
> > been issued.
> 
> Not necessarily. That's true for a first-generation port-blocking
> firewall, but not for a packet-inspecting one. There are organizations
> which use packet-inspecting firewalls to block STARTTLS because they
> enforce their own TLS termination, in order to inspect all incoming
> traffic for malicious content and outgoing traffic for exfiltration.

I now have Wireshark captures showing the exchanges between the working 
instance and the non-working instance respectively; the problem is definitely 
happening after STARTTLS has been issued, during the TLS handshake. 
I'm not high-level enough to make any sense of the negotiation data, though. 
The Wireshark capture is quite short (22 items in the list) and I don't mind 
making it available if it would be useful to anyone.

> > Furthermore, the OpenSSL
> > configuration is identical between the systems/combinations of
> > OpenSSL that work and those that don't.
> 
> Do you know that for certain? There's no openssl.cnf from some other
> source being picked up on the non-working system?

I'm pretty certain, but I'll get the customer to double-check.

Cheers!

-- David --



Re: OpenSSL 1.1.1 Windows dependencies

2022-10-21 Thread David Harris
On 21 Oct 2022 at 7:27, Richard Levitte wrote:

> Let me ask you this: on what Windows version was your application
> built?  Common wisdom would be to build on the oldest version...

My application is a very traditional Win32 application, and at the moment (and 
until circumstances *force* me to change) I build it in a Windows 7 SP2 VM. 
There's really no build-related reason why it shouldn't work on Server 2012, 
especially since the 1.1.1g build (which DOES work on the affected system) is 
built in exactly the same VM.

It's a puzzle, for sure.

Thanks for taking the time to look into this for me Richard.

Cheers!

-- David --



Re: OpenSSL 1.1.1 Windows dependencies

2022-10-21 Thread David Harris
On 20 Oct 2022 at 20:04, Michael Wojcik wrote:

> OpenSSL 1.1.1 uses Windows cryptographic routines in two areas I'm
> aware of: rand_win.c and the CAPI engine. I don't offhand see a way
> that a problem with the calls in rand_win.c would cause the particular
> symptom you described. My guess is that you're not using the CAPI
> engine, but you might check your OpenSSL configuration on the failing
> system.

For a variety of reasons to do with redistributables, I build OpenSSL as 
no-shared, and because of the compiler I prefer to use (an older build of 
Visual C), I have to compile with no-capi as well, so CAPI clearly isn't an 
issue in this case. But to be sure, I tried rebuilding OpenSSL with Visual C 
2022 (using Visual C 2019 as the compile unit) and according to the customer, 
the result was the same.

> I think more plausible causes of this failure are things like OpenSSL
> configuration and interference from other software such as an endpoint
> firewall. Getting SYSCALL from SSL_accept *really* looks like
> network-stack-level interference, from a firewall or similar
> mechanism.

That was my initial thought too, except that if it were firewall-related, the 
initial port 587 connection would be blocked, and it isn't - the failure 
doesn't happen until after STARTTLS has been issued. Furthermore, the OpenSSL 
configuration is identical between the systems/combinations of OpenSSL that 
work and those that don't.

> Personally, if I ran into this, I'd just build OpenSSL for debug and
> debug into it. But I know that's not everyone's cup of tea.

Unfortunately, I don't have that level of access to the customer's systems. 

I was really just wondering if the combination of factors rang any bells with 
anyone before I started digging much deeper; it's altogether possible that I 
might just have to write this one off to experience and tell the user to use a 
1.1.1g build of OpenSSL (which I build exactly the same way, and which works 
correctly in the same setup).

Thanks for the help - appreciated.

Cheers!

-- David --



OpenSSL 1.1.1 Windows dependencies

2022-10-19 Thread David Harris
Up front, I'd like to apologize if this is an FAQ or has been answered 
elsewhere on this list: my workload means that I simply can't keep as 
up-to-date as I would like.

I have a situation where my application fails to accept an incoming SSL 
handshake on Windows Server 2012, but the identical software running on 
Server 2019 accepts the same connection from the same remote client without 
a problem. Other types of client software (such as Thunderbird) connect to 
either system without any problems. The connecting client is a Windows cash 
register using Windows' built-in crypto facilities. If I downgrade my app to 
OpenSSL 1.1.1g or earlier, the problem doesn't happen. With 1.1.1k or 1.1.1q, 
I get the error (I haven't built any versions of OpenSSL between k and q). In 
case it helps, the connection is an incoming SMTP connection on port 587, and 
STARTTLS is used to begin SSL negotiation.

SSL_accept returns -1, with an extended error of "SSL_ERROR_SYSCALL" (5), 
which I understand to be largely what it returns when it doesn't have a clear 
idea of what's gone wrong. The error queue is completely empty in this 
situation. The cert is a LetsEncrypt cert that loads without errors and works 
fine with other clients.
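
For readers hitting the same symptom, a minimal sketch of how such a failure 
can be narrowed down: SSL_ERROR_SYSCALL with an empty error queue usually 
means the failure was reported by the socket layer rather than by TLS 
processing, so the OS-level error code is the most useful clue. The helper 
below is illustrative only (not part of the application described here); on 
Windows the relevant call is WSAGetLastError().

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
#ifdef _WIN32
# include <winsock2.h>
#endif

/* Illustrative helper: report why SSL_accept() returned ret <= 0. */
static void report_accept_failure(SSL *ssl, int ret)
{
    int err = SSL_get_error(ssl, ret);

    if (err == SSL_ERROR_SYSCALL) {
        /* No TLS-level reason is available; ask the socket layer. */
#ifdef _WIN32
        fprintf(stderr, "SSL_accept: syscall error, WSAGetLastError=%d\n",
                WSAGetLastError());
#else
        perror("SSL_accept: syscall error");
#endif
    } else {
        fprintf(stderr, "SSL_accept: SSL_get_error=%d\n", err);
    }
    ERR_print_errors_fp(stderr); /* may print nothing if the queue is empty */
}

On the failing Server 2012 host, the Windows error code retrieved this way 
(e.g. WSAECONNRESET vs. WSAETIMEDOUT) would indicate whether the peer or 
something in between dropped the connection during the handshake.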

Do recent versions of OpenSSL 1.1.1 have dependencies on some Windows 
facility (winsock and wincrypt seem likely candidates) that might work on 
Server 
2019 but fail on Server 2012?

The version of my application that is in public release uses 1.1.1g, so isn't 
affected by this issue, but I'm slightly worried that I'm going to see an 
uptick in 
this type of problem if I release builds based on later versions of 1.1.1.

Does this ring any bells with anyone? Again, apologies if this is answered 
elsewhere - I *did* spend some time in Google but couldn't find anything that 
seemed relevant.

Thanks in advance for any advice.

Cheers!

-- David --



OpenSSL-3 ENGINESDIR development vs deployment

2022-10-07 Thread Wrestler, C David CTR (USA)
Background: earlier versions of my project were using OpenSSL 1.n.n, the output 
stayed within its checkout directory, and the DLLs were deployed to wherever the 
project was deployed.

Now, trying to implement OpenSSL 3: after compiling, it keeps referring to the 
directories it was configured with (--prefix, --openssldir). I can see the 
OPENSSLDIR, ENGINESDIR and MODULESDIR via openssl version -a. The man pages 
reference environment variables, but those don't seem to have any effect, and 
the ENGINE-related docs all say they are deprecated in favour of PROVIDERS.


How do I set the OpenSSL 3 binaries (openssl.exe, libssl-3.dll, libcrypto-3.dll, 
openssl.cnf, etc.) to use the eventual installed location(s)?
Reasons why I need to be able to do this:
1.  The users will not be running from a directory structure like my 
development directories.

2.  Compiling to Program Files (x86) requires Administrator (or similar) 
privileges, which I don't have.

3.  Also, compiling on our BuildServer, the build job will not have access to 
drive C:.
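
In case it helps others searching the archives: a minimal sketch of one way an 
application can point OpenSSL 3 at deployment-relative locations at run time 
instead of the compiled-in prefix, using documented libcrypto calls (the 
relative paths below are illustrative assumptions, not taken from this 
project):

#include <openssl/crypto.h>
#include <openssl/provider.h>

/* Sketch: make libcrypto look next to the application instead of the
 * build-time --prefix/--openssldir locations. Paths are examples only. */
static int use_local_openssl_dirs(void)
{
    /* Where providers (e.g. legacy.dll, fips.dll) are searched for;
     * this overrides the compiled-in MODULESDIR. The OPENSSL_MODULES
     * environment variable has the same effect. */
    if (!OSSL_PROVIDER_set_default_search_path(NULL, "ossl-modules"))
        return 0;

    /* Load a config file shipped with the application instead of the
     * compiled-in OPENSSLDIR/openssl.cnf; OPENSSL_CONF is the
     * environment-variable equivalent. */
    if (!OSSL_LIB_CTX_load_config(NULL, "openssl.cnf"))
        return 0;

    return 1;
}

For the openssl.exe command itself, the OPENSSL_CONF, OPENSSL_MODULES and 
OPENSSL_ENGINES environment variables documented in openssl-env(7) are the 
intended route for the same purpose.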


Thanks,
cdw



Re: [lamps] [TLS] Q: Creating CSR for encryption-only cert?

2022-10-06 Thread von Oheimb, David
Hi Thom, Uri, et al,

I had responded to Uri on the openssl-users list on Oct 3rd at 21:12 +0200 as 
follows:

Requesting a cert in a CSR for a key pair that cannot be used for signing is 
indeed impossible in the widely used PKCS#10 format
(except if one breaks the PKCS#10 requirement of a self-signature, e.g., by 
applying a dummy signature).

A viable solution is to use a different CSR format, such as CRMF.
This format is the one preferred by CMP and CMC (while they also support 
PKCS#10) because it is much more flexible.
CRMF does not strictly require providing a proof-of-possession (PoP), and it 
also offers indirect ways of doing a PoP.
For instance, for encryption keys the new cert can be returned by the CA in 
encrypted form (using the new public key) to the EE,
and the EE will only be able to make use of the cert if it is able to decrypt 
it, which proves possession of the private key.

Here are two additions to that:

In order for the CA to actually get the PoP for encryption-only keys, the EE 
needs to return to the CA a strong value derived from the decrypted contents 
of the new cert, such as a hash value of the decrypted cert.
In CMP this is achieved using the certConf message (which the CA acknowledges, 
as usual, with a pkiConf response).
See also https://www.rfc-editor.org/rfc/rfc4210#section-5.2.8

This procedure actually works with OpenSSL 3.0 and the Insta Demo CA, but so 
far only for RSA keys:

  export OPENSSL_CONF=/path/to/openssl/apps/openssl.cnf  # adapt as needed

  openssl genrsa -out insta.priv.pem  # or any other way of generating key

  openssl cmp -section insta -popo 2  # 1 means SIGNATURE, 2 means KEYENC


And some responses to today's email by Thom:

On Thu, 2022-10-06 at 09:58 +0200, Thom Wiggers wrote:
On Tue, 4 Oct 2022 at 17:07, Blumenthal, Uri - 0553 - MITLL 
<u...@ll.mit.edu> wrote:
CSR is supposed to be signed by the corresponding private key to prove 
possession. Obviously, it cannot be done with a key such as described above. 
How is this problem addressed in the real world?  With AuthKEM and KEMTLS, how 
would these protocols get their certificates?


Yeah, that's something that came up a few times while we were working on KEMTLS 
(and it eventually resulted in this paper by Güneysu, Hodges, Land, Ounsworth, 
Stebila, and Zaverucha [1]). They also have some nice references for the kinds 
of attacks that "sloppy" issuance could lead to in Appendix A. I've not tried 
to work out if they apply to TLS/KEMTLS/AuthKEM, but ruling them out anyway 
because of the many applications of certificates seems worth it.
[1]: https://www.douglas.stebila.ca/research/papers/CCS-GHLOSZ22/

Not checking the PoP (if this is what you mean here) would not be a good idea.
What you MUST do in any case is source authentication of the EE being the cert 
requester (i.e., proof of origin of the request).


A different naive approach for issuance (that I have done zero security 
analysis on) could be simply creating the cert for key PK and encrypting it 
under a key encapsulated to PK. Then the owner of PK would need to decrypt the 
certificate; using it the first time immediately proves possession. Of course, 
in the real world we also have things like CT logs and such; and it would be 
terribly annoying if I could spam you with CT log alerts for yourwebsite dot 
com. So this approach doesn't seem very favorable compared to a "fancy" ZKP or 
interactive approach.

This indirect way of doing the PoP is essentially what I wrote above, while CMP 
nicely encapsulates the two round trips needed in a transaction:


CMP:apps/cmp.c:2793:CMP info: using section(s) 'insta' of OpenSSL configuration 
file 'apps/openssl.cnf'

CMP:apps/cmp.c:1953:CMP info: will contact http://pki.certificate.fi:8700/pkix/

CMP:crypto/cmp/cmp_client.c:166:CMP info: sending IR

CMP:crypto/cmp/cmp_client.c:186:CMP info: received IP

CMP:crypto/cmp/cmp_client.c:166:CMP info: sending CERTCONF

CMP:crypto/cmp/cmp_client.c:186:CMP info: received PKICONF

CMP:apps/cmp.c:2004:CMP info: received 1 extra certificate(s), saving to file 
'insta.extracerts.pem'

CMP:apps/cmp.c:2004:CMP info: received 1 enrolled certificate(s), saving to 
file 'insta.cert.pem'



We weren't aware of CRMF, so it seems I've got some reading to do as I write 
some paragraphs on KEM certificates in my PhD thesis :)

BTW, you may note that an update of RFC 4211 is in the pipeline:
https://datatracker.ietf.org/doc/html/draft-ietf-lamps-crmf-update-algs
as well as an update of RFC 4210, an industrial CMP profile, and a new RFC on 
the details of the algorithms that may be used with CMP(+CRMF):
https://datatracker.ietf.org/doc/html/draft-ietf-lamps-cmp-updates
https://datatracker.ietf.org/doc/html/draft-ietf-lamps-lightweight-cmp-profile
https://datatracker.ietf.org/doc/html/draft-ietf-lamps-cmp-algorithms

Cheers,
David



Re: creating CSR for encryption-only cert?

2022-10-03 Thread David von Oheimb
My pleasure!
OpenSSL supports CRMF and CMP since version 3.0.
EJBCA has supported them for a long time, and there are also other CAs that 
support CMP and thus CRMF, such as the Insta CA.
Yet support for encryption-based PoP is likely not strong so far - mostly 
because there has not been much interest in it yet.
The OpenSSL CMP client implementation does support sending cert requests 
without PoP,
and it should also support using encryption-based PoP, but I cannot recall 
having tried that out.
For simple examples of using CMP with the OpenSSL CLI, see the bottom of the 
openssl-cmp(1) man page.

David



On Mon, 2022-10-03 at 19:48 +, Blumenthal, Uri - 0553 - MITLL wrote:

David,

 

Thank you! That’s a great answer. It looks like OpenSSL does support CRMF? 
Would you or somebody else have an example of how to work with CRMF (to create 
it, and to process/sign it)?

 

Do you happen to know if CRMF is accepted by the “big players” in the CA field?

 

*Thank you again!*

-- 

V/R,

Uri

There are two ways to design a system. One is to make it so simple there are 
obviously no deficiencies.

The other is to make it so complex there are no obvious deficiencies.

       -  C. A. R. Hoare

 

 

*From: *David von Oheimb 
*Date: *Monday, October 3, 2022 at 15:13
*To: *Uri Blumenthal , openssl-users 

*Subject: *Re: Q: creating CSR for encryption-only cert?

 

Requesting a cert in a CSR for a key pair that cannot be used for signing is 
indeed impossible in the widely used PKCS#10 format
(except if one breaks the PKCS#10 requirement of a self-signature, e.g., by 
applying a dummy signature).

A viable solution is to use a different CSR format, such as CRMF.
This format is the one preferred by CMP and CMC (while they also support 
PKCS#10) because it is much more flexible.
CRMF does not strictly require providing a proof-of-possession (PoP), and it 
also offers indirect ways of doing a PoP.
For instance, for encryption keys the new cert can be returned by the CA in 
encrypted form (using the new public key) to the EE,
and the EE will only be able to make use of the cert if it is able to decrypt 
it, which proves possession of the private key.

David


On Mon, 2022-10-03 at 15:11 +, Blumenthal, Uri - 0553 - MITLL wrote:

> TLDR;
> Need to create a CSR for a key pair whose algorithm does not allow
> signing (either because it’s something like Kyber, or because
> restriction enforced by HSM). How to do it?
>  
> There are several use cases that require certifying long-term
> asymmetric keys that are only capable of encryption/decryption – but
> not signing/verification. That could be either because the algorithm
> itself does not do signing, or because the private key is generated
> and kept in a secure hardware that enforces usage restriction.
>  
> CSR is supposed to be signed by the corresponding private key to
> prove possession. Obviously, it cannot be done with a key such as
> described above. How is this problem addressed in the real world?
>  With AuthKEM and KEMTLS, how would these protocols get their
> certificates?
>  
> Thanks!
> --
> V/R,
> Uri Blumenthal  Voice: (781) 981-1638 
> Secure Resilient Systems and Technologies   Cell:  (339) 223-5363
> MIT Lincoln Laboratory 
> 244 Wood Street, Lexington, MA  02420-9108  
>  
> Web:     https://www.ll.mit.edu/biographies/uri-blumenthal
> Root CA: https://www.ll.mit.edu/llrca2.pem
>  




Re: Q: creating CSR for encryption-only cert?

2022-10-03 Thread David von Oheimb
Requesting a cert in a CSR for a key pair that cannot be used for signing is 
indeed impossible in the widely used PKCS#10 format
(except if one breaks the PKCS#10 requirement of a self-signature, e.g., by 
applying a dummy signature).

A viable solution is to use a different CSR format, such as CRMF.
This format is the one preferred by CMP and CMC (while they also support 
PKCS#10) because it is much more flexible.
CRMF does not strictly require providing a proof-of-possession (PoP), and it 
also offers indirect ways of doing a PoP.
For instance, for encryption keys the new cert can be returned by the CA in 
encrypted form (using the new public key) to the EE,
and the EE will only be able to make use of the cert if it is able to decrypt 
it, which proves possession of the private key.

David


On Mon, 2022-10-03 at 15:11 +, Blumenthal, Uri - 0553 - MITLL wrote:
> TLDR;
> Need to create a CSR for a key pair whose algorithm does not allow
> signing (either because it’s something like Kyber, or because
> restriction enforced by HSM). How to do it?
>  
> There are several use cases that require certifying long-term
> asymmetric keys that are only capable of encryption/decryption – but
> not signing/verification. That could be either because the algorithm
> itself does not do signing, or because the private key is generated
> and kept in a secure hardware that enforces usage restriction.
>  
> CSR is supposed to be signed by the corresponding private key to
> prove possession. Obviously, it cannot be done with a key such as
> described above. How is this problem addressed in the real world?
>  With AuthKEM and KEMTLS, how would these protocols get their
> certificates?
>  
> Thanks!
> --
> V/R,
> Uri Blumenthal  Voice: (781) 981-1638 
> Secure Resilient Systems and Technologies   Cell:  (339) 223-5363
> MIT Lincoln Laboratory 
> 244 Wood Street, Lexington, MA  02420-9108  
>  
> Web:     https://www.ll.mit.edu/biographies/uri-blumenthal
> Root CA: https://www.ll.mit.edu/llrca2.pem
>  


Re: Re: openssl req not working, error is "req: Use -help for summary."

2022-09-20 Thread von Oheimb, David
Dear Sergio,

please use a to-the-point email subject, not "openssl-users Digest, Vol 94, 
Issue 24".

You just made a small mistake in the command below:
after the "-subj" option its argument is missing: "/" (which denotes the empty 
Distinguished Name) or any other DN string.
Thus the subsequent "-addext" gets misinterpreted as the subject argument.

Unfortunately, the "openssl" CLI command so far did not provide a useful error 
message in such cases,
but some time ago I improved this. So with the current master version, the hint 
given is slightly better:


req: Extra option: "subjectKeyIdentifier=hash"

req: Use -help for summary.

and this will be available with OpenSSL 3.1.

BTW, if you want a validity period of exactly 100 years, you need to take into 
account the 24 leap days in that period,
so better use "-days 36524" than "-days 36500".
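
Putting both points together, a corrected invocation could look like this (for 
illustration, not quoted from the original report):

openssl req -nodes -newkey rsa:4096 -keyout pkey.pem -x509 -out cert.pem -days 36524 -subj "/" -addext "subjectKeyIdentifier=hash"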

Best,
David


On Tue, 2022-09-20 at 09:30 +, A Z wrote:
Dear OpenSSL Users and Programmers,

I tried running the following command in Windows 64 bit Home edition,
and got the error:

>openssl req -nodes -newkey rsa:4096 -keyout pkey.pem -x509 -out cert.pem -days 36500 -subj -addext "subjectKeyIdentifier=hash"

req: Use -help for summary.



>openssl version

OpenSSL 3.0.0 7 sep 2021 (Library: OpenSSL 3.0.0 7 sep 2021)

In the email bundle reply, this line is suggested to generate a private key and 
a PEM certificate. How can I get this to run on Windows 10 64-bit, even when in 
Administrator mode?

Sergio Minervini.



Re: help //java.security.NoSuchAlgorithmException: 1.2.840.113549.1.5.13 SecretKeyFactory not available

2022-08-27 Thread David von Oheimb
Hi, I'm not an expert on this topic, but this looks like it is of interest here:
https://stackoverflow.com/questions/58488774/configure-tomcat-hibernate-to-have-a-cryptographic-provider-supporting-1-2-840-1

23 Aug 2022 10:34:51 李周华 :

> Hi , guys
> 
> 
>    I have used the following openssl commands to create a key pair for 
> Android app signing.
> 
>    Following this link:
> https://source.android.com/docs/core/ota/sign_builds#manually-generating-keys
> 
> 
> # generate RSA key
> 
> openssl genrsa -3 -out temp.pem 2048
> Generating RSA private key, 2048 bit long modulus
> +++
> .+++
> e is 3 (0x3)
> 
> 
> # create a certificate with the public part of the key
> openssl req -new -x509 -key temp.pem -out releasekey.x509.pem -days 1 
> -subj '/C=US/ST=California/L=San Narciso/O=Yoyodyne, Inc./OU=Yoyodyne 
> Mobility/CN=Yoyodyne/emailAddress=yoyod...@example.com'
> 
> 
> # create a PKCS#8-formatted version of the private key
> openssl pkcs8 -in temp.pem -topk8 -outform DER -out releasekey.pk8 -nocrypt
> 
> 
> # securely delete the temp.pem file
> shred --remove temp.pem
> 
> 
> The key file was successfully generated, but when I compile the entire 
> project signature app, the following error is reported:   
> 
> java.security.NoSuchAlgorithmException: 1.2.840.113549.1.5.13 
> SecretKeyFactory not available
> at java.base/javax.crypto.SecretKeyFactory.(SecretKeyFactory.java:122)
> at 
> java.base/javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:168)
> at com.android.signapk.SignApk.decryptPrivateKey(SignApk.java:250)
> at com.android.signapk.SignApk.readPrivateKey(SignApk.java:272)
> at com.android.signapk.SignApk.main(SignApk.java:1210)
> 
> 
> My ubuntu version is 20.04.4 LTS 
> 
>   openjdk version is 11.0.15 2022-04-19
> 
>   openssl version is 1.1.1r-dev built on Mon Aug 22 11:19:51 2022 UTC
> 
> 
>  Any help is welcome.
> 
> 
> 
> 
> 
> 
> 
> **
> 
> Nubia Technology Co., Ltd., Base Framework Team - Li Zhouhua
> Phone: 18706866323
> Address: Room 101, Building A, ZTE Industrial Park, No. 10 Tangyan South Road, Hi-Tech Zone, Xi'an
> Email:0016003...@nubia.com
> **


Re: What is 'trusted certificate'

2022-07-16 Thread David von Oheimb
The below warning message looks a bit like it was produced by OpenSSL,
but it pretty surely comes from the FreeRADIUS server code, which
appears to use one of the OpenSSL certificate checking callback
mechanisms. So you should ask there what the exact intention of this
warning is and how to prevent it.

To me the below warnings look strange because usually at depth 0 and 1
of a cert chain (i.e., at the positions of the end-entity cert and any
subsequent intermediate cert) it is normal to have untrusted certs.
Usually only at the end of the chain do you have a trusted cert that
represents the trust anchor for the chain.

Some information on the OpenSSL view on trusted/untrusted certs can be
found at
https://beta.openssl.org/docs/manmaster/man1/openssl-verification-options.html

 David

On Fri, 2022-07-15 at 22:38 +0200, Kamil Jońca wrote:
> 
> I have freeradius server configured to use EAP-TLS
> (certificate baset authn)
> Since some time I have warning in logs:
> 
> --8<---cut here---start->8---
> Fri Jul 15 22:29:04 2022 : Warning: (TLS) untrusted certificate with
> depth [1] subject name
> /C=PL/ST=Mazowieckie/L=Warszawa/O=beta/OU=wifi/CN=beta-wifi-ca
> Fri Jul 15 22:29:04 2022 : Warning: (TLS) untrusted certificate with
> depth [0] subject name
> /C=PL/ST=Mazowieckie/O=beta/OU=wifi/CN=salamandra
> --8<---cut here---end--->8---
> 
> I took a look into code and it seems to be related to
> "X509_STORE_CTX_get0_untrusted(ctx)" function.
> I tried to search, but without success.
> Can anyone tell me when certificate is "trusted" in this context?
> (How to get rid this warning) or point to documentation/search keys
> 
> KJ
> 
> --
> http://wolnelektury.pl/wesprzyj/teraz/
> 


Re: error: wrong version number

2022-07-11 Thread David von Oheimb
Yes, the TLS diagnostics can be confusing:
"wrong version" is also reported when the peer is not using TLS (of any
version) at all.

 David

On Mon, 2022-07-11 at 00:16 -0400, Viktor Dukhovni wrote:
> On Sun, Jul 10, 2022 at 02:41:23PM +, loic nicolas wrote:
> 
> > I am trying to connect my client to my server but I always receive
> > an
> > error.(ssl3_get_record:wrong version
> > number:../ssl/record/ssl3_record.c:331)
> > 
> > How can I get more information about the error and fix it? (the
> > error
> > is probably in my client)
> 
> Indeed, the client's packet to the server is not a TLS Record.
> 
> > openssl s_server -accept 127.0.0.1:3000 -key server.key -cert
> > server.cert -msg
> > 
> > <<< ??? [length 0005]
> >     20 ae c0 2e d6
> 
> Whatever the client is doing, it isn't starting a TLS session with a
> TLS handshake record containing a TLS client HELLO.
> 


Re: OpenSSL 3 HTTP client C++ example?

2022-06-22 Thread David von Oheimb
Hi again Beni,

On Wed, 2022-06-22 at 08:29 +0200, Benedikt Hallinger wrote:
> Hi David and thank you for your advice and example.

my pleasure.
I was about to send a slightly improved version of my example code
regarding the use of proxies and the expected content type - see attached -
together with an extended sample invocation (of course, adapt "myproxy" as
needed):

https_proxy=myproxy ./http_client https://example.com && echo ok


> I tried to compile it, run onto errors tough.
> I just put the file into my openssl source tree, which is on commit:
> commit 9e86b3815719d29f7bde2294403f97c42ce82a16 (HEAD, 
> origin/openssl-3.0)

I've just tried myself using that commit (and default configuration)
and as expected everything works fine.

> $ gcc http_client.c -Iinclude -L. -lcrypto -lssl -o http_client
> /usr/bin/ld: ./libssl.a(libssl-lib-ssl_cert.o): in function 
> `add_uris_recursive':
> ssl_cert.c:(.text+0x116): undefined reference to `OSSL_STORE_open'
> /usr/bin/ld: ssl_cert.c:(.text+0x134): undefined reference to 
> `OSSL_STORE_eof'
> [...]

This issue is pretty surely unrelated to the example code itself
but most likely due to some general build issue you have, such as some
inconsistency with pre-installed OpenSSL versions.
Sorry that I do not have the time to provide further aid on such general
build issues.

 David

> 
> Am 2022-06-21 22:52, schrieb David von Oheimb:
> > Hallo Beni,
> > 
> > good that you ask.
> > 
> > Using the new HTTP client API with TLS (possibly via a proxy) indeed
> > is not easy so far.
> > I'm going to improve this by adding some high-level helper functions
> > and extending the docs.
> > 
> > A good starting point when looking for examples is, as usual, the
> > application code in apps/.
> > In this case, there is some pretty useful code in apps/lib/apps.c,
> > but it turns out that the adaptation of app_http_get_asn1() and
> > app_http_tls_cb()
> > 
> > for receiving plain text (rather than ASN.1 encoded data) from the
> > server
> > is not straightforward because OSSL_HTTP_get() may close the SSL
> > read
> > BIO prematurely.
> > Also the behavior of non-blocking BIOs makes things a little more
> > tricky than expected.
> > Meanwhile I got it working - see the example attached.
> > 
> > Example build and usage:
> > 
> > gcc http_client.c -Iinclude -L. -lcrypto -lssl -o http_client
> > 
> > ./http_client https://httpbin.org/ && echo ok
> > 
> > Regards,
> >  David
> > 
> > On 20.06.22 10:54, Benedikt Hallinger wrote:
> > 
> > > Hi there,
> > > I currently try to get my hands dirty with C++ and  the new HTTPs
> > > client
> > > introduced with OpenSSL 3.
> > > However I struggle to get started. My goal is to open a https
> > > secured
> > > website and download its contents into a std::string for further
> > > parsing.
> > > 
> > > Does someone on the list know of a small basic example?
> > > I imagine that I'm not the first person implementing a HTTPs
> > > website
> > > 
> > > connector with OpenSSL 3.
> > > 
> > > Thank you for your support,
> > > Beni
> 
#include <openssl/http.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

BIO *bio_err = NULL;

typedef struct app_http_tls_info_st {
const char *server;
const char *port;
int use_proxy;
long timeout;
SSL_CTX *ssl_ctx;
} APP_HTTP_TLS_INFO;

static const char *tls_error_hint(void)
{
unsigned long err = ERR_peek_error();

if (ERR_GET_LIB(err) != ERR_LIB_SSL)
err = ERR_peek_last_error();
if (ERR_GET_LIB(err) != ERR_LIB_SSL)
return NULL; /* likely no TLS error */

switch (ERR_GET_REASON(err)) {
case SSL_R_WRONG_VERSION_NUMBER:
return "The server does not support (a suitable version of) TLS";
case SSL_R_UNKNOWN_PROTOCOL:
return "The server does not support HTTPS";
case SSL_R_CERTIFICATE_VERIFY_FAILED:
return "Cannot authenticate server via its TLS certificate, likely due to mismatch with our trusted TLS certs or missing revocation status";
case SSL_AD_REASON_OFFSET + TLS1_AD_UNKNOWN_CA:
return "Server did not accept our TLS certificate, likely due to mismatch with server's trust anchor or missing revocation status";
case SSL_AD_REASON_OFFSET + SSL3_AD_HANDSHAKE_FAILURE:
return "TLS handshake failure. Possibly the server requires our TLS certificate but did not receive it";
default:
return NULL; /* no hint available for TLS error */
}
}

static BIO *app_http_tls_close(BIO *bio)
{
if (bio != NULL) {
BIO *cbio;
const char *hint = tls_error_hint();

if 

Re: How to convert .P12 Certificate (ECC crypted) to .PEMs

2022-05-27 Thread David von Oheimb
Hi Michael,

openssl pkcs12 -in "inCert.p12" -out "out.pem" -passin pass: -nodes

is sufficient to convert all credentials in the PKCS#12 file to a single
PEM file with the key being stored unencrypted.
Since OpenSSL 3.0, the outdated -nodes option has been deprecated, so
better use -noenc there.

To get the leaf cert only, your

openssl pkcs12 -in "inCert.p12" -clcerts -nokeys -out "outCert.pem" -
passin pass:

is adequate, while to get the related key only, it is sufficient to use

openssl pkcs12 -in "inCert.p12" -nocerts -noenc -out "outKey.pem" -
passin pass:


To decrypt any type of key, you can use e.g., 

openssl pkey -in "outTmpKey.pem" -out "outKey.pem" -passin pass:

All the commands mentioned above work regardless of the key type (RSA,
EC, etc.).
If you really need to handle (in this case: decrypt) specifically EC
keys, you can use, e.g.,

openssl ec -in "outTmpKey.pem" -out "outKey.pem" -passin pass:


On Wed, 2022-05-25 at 19:23 +, Lynch, Pat wrote:
> Try adding the following command line arguments:   -outform pem

This won't work because the openssl pkcs12 command does not have an
-outform option.
And for those commands that do have it, such as openssl x509, it is not
needed because PEM is the default.

Regards,
 David

>  
> From: openssl-users On Behalf Of
> Beilharz, Michael
> Sent: Wednesday, May 25, 2022 3:10 AM
> To: 'openssl-users@openssl.org' 
> Subject: How to convert .P12 Certificate (ECC crypted) to .PEMs
>  
> Hi OpenSSLCommunity,
>  
> actual I have to convert a .P12 certificate (RSA crypted/created) into
> .PEM certificates,
> I use the following commands:
> openssl pkcs12 -in "inCert.p12" -clcerts -nokeys -out "outCert.pem" -
> passin pass:
> openssl pkcs12 -in "outCert.pem" -nocerts -out "outTmpKey.pem" -passin
> pass: -passout pass:
> openssl rsa -in "ouTmpKey.pem" -out "outKey.pem" -passin pass:
>  
> I can’t say, if these 3 commands are the best way, but they still work
> fine and I can use the outCert.pem and the outKey.pem.
>  
> Now I have to convert a .P12 certificate, which is crypted/created
> with ECC.
>  
> The first command still works (I think so, ‘cause there are no
> errors):
> openssl pkcs12 -in "inCert.p12" -clcerts -nokeys -out "outCert.pem" -
> passin pass:
>  
> But not the rest of the commands. I tried to use the ec or ecparam
> parameter, but I couldn’t figure out how to use them correctly.
>  
> I am happy about any help or hint
>  
>  
> Regards
> Michael
>  


Re: How to create a SAN certificate

2022-05-21 Thread David von Oheimb
Since OpenSSL 3.0,
one can use the -copy_extensions option of openssl req to copy over any
SANs contained in the CSR to the cert being created,
or use -addext to directly specify extensions without the need to use a
config file,
or simply use the -x509 and -subj options to build a cert from scratch
(without using a CSR) and add extensions on the fly, e.g.,
 openssl req -x509 -subj "/CN=test" -key ../prepare2/ca.key -addext "subjectAltName = IP:1.2.3.4, DNS:test.com" -out ee.crt
or use the -new option of openssl x509 to build a cert from scratch
(without using a CSR) and add extensions on the fly, e.g.,
 openssl x509 -new -subj "/CN=test" -key ee.key -extfile <(printf "subjectAltName = IP:1.2.3.4, DNS:test.com") -out ee.crt

Otherwise, as mentioned in the first answer quoted below, the classical
way involves a config file - for details see the manual page.

Yet even with older OpenSSL versions (such as 1.1.1f) you can do without
a config file, e.g.,
 openssl x509 -req -signkey ee.key -in ee.req -extfile <(printf "subjectAltName = IP:1.2.3.4, DNS:test.com") -out ee.crt
or
 openssl req -x509 -new -key ee.key -subj "/CN=test" -addext "subjectAltName = IP:1.2.3.4, DNS:test.com" -out ee.crt

HTH,
 David

On Sat, 2022-05-21 at 06:45 -0400, Michael Richardson wrote:
> 
> Henning Svane  wrote:
>     > I am using OpenSSL 1.1.1f Is there a way to make a SAN
> certificate
>     > based on the CSR I have created in Exchange.  I need a self-
> signed
>     > certificate for testing.
> 
> I'm not exactly sure what you think a SAN certificate is.
> I guess one with a SubjectAltName extension.  Mostly, all certificates
> have
> that these days, but whether or not the Subject is entirely filled out
> is a
> different question.
> 
> To form a self-signed certificate from a CSR, use openssl req.
> You may need a configuration file, serial number, expiry and
> algorithm.
> You'll need access to the private key.
> 
> See: 
> https://datatracker.ietf.org/doc/html/draft-moskowitz-ecdsa-pki#section-4.2
> 
> Some of us maintain a document on generated test CAs for ECDSA and
> EDDSA
> key types at: 
> https://github.com/henkbirkholz/draft-moskowitz-ecdsa-pki
> while it is in the form of an IETF ID, it is not intended for
> publication.
> 
> --
> ]   Never tell me the odds! | ipv6 mesh
> networks [
> ]   Michael Richardson, Sandelman Software Works    | network
> architect  [
> ] m...@sandelman.ca  http://www.sandelman.ca/    |   ruby on
> rails    [
> 



Re: Bad exit code with pkeyutl -verify in 1.0.2f

2022-05-15 Thread David von Oheimb
Hi Philip,
I just had a look at the commit you referenced.
Indeed this bug got fixed there, apparently without this fact being mentioned 
in the commit message. The commit was part of OpenSSL_1_1_0-pre1, so presumably 
the fix was released with 1.1.0.


15 May 2022 06:14:14 Philip Prindeville :

> I know this is an ancient version, but I was wondering if this was a known 
> bug so I could figure out which release it was fixed in, as I have to disable 
> the check for the exit status in my regression tests:
> 
> [philipp@centos7 asterisk]$ openssl version
> OpenSSL 1.0.2k-fips  26 Jan 2017
> [philipp@centos7 asterisk]$ echo -n "Mary had a little lamb." | openssl dgst 
> -sha1 -binary > hash
> [philipp@centos7 asterisk]$ od -t x1 hash
> 000 4e 07 b8 c7 aa f2 a4 ed 4c e3 9e 76 f6 5d 2a 04
> 020 bd ef 57 00
> 024
> [philipp@centos7 asterisk]$ openssl pkeyutl -sign -inkey 
> tests/keys/rsa_key1.key -pkeyopt digest:sha1 < hash > signing
> [philipp@centos7 asterisk]$ echo $?
> 0
> [philipp@centos7 asterisk]$ od -t x1 signing
> 000 14 03 f6 e2 b5 62 fc a3 32 6c f3 a7 2b 65 ad fd
> 020 ae 32 41 d7 c5 29 37 51 cd a3 e6 e2 87 2d 6d f1
> 040 32 01 88 99 05 b2 7d 1c f4 88 ef 3a 1b 49 8b 1a
> 060 47 0a 6b 11 a0 21 ea d6 1d 52 38 3d cb f4 ad 8b
> 100 6e b1 ab bb f3 2e 7d 83 2a 9c 18 a9 6a 48 f6 52
> 120 dc 30 86 5d 07 07 8f 45 ad 56 c5 25 3b 9c ef c7
> 140 ce 40 dd 74 6a cc 3b c5 ea d8 54 b4 d2 d9 81 25
> 160 71 91 be 08 5a 78 33 7d d8 45 2d 45 da f8 08 e1
> 200
> [philipp@centos7 asterisk]$ openssl pkeyutl -verify -inkey 
> tests/keys/rsa_key1.pub -pubin -sigfile signing -pkeyopt digest:sha1 < hash
> Signature Verified Successfully
> [philipp@centos7 asterisk]$ echo $?
> 1
> [philipp@centos7 asterisk]$
> 
> 
> I'm unclear why it says "Signature Verified Successfully" but then exits with 
> 1.
> 
> It looks like it was fixed here:
> 
> https://github.com/openssl/openssl/commit/7e1b7485706c2b11091b5fa897fe496a2faa56cc#diff-91617164072ee6a7ebbae1d9aecf2916064cedf9623c56b3ae46b1d310a50963R296
> 
> although the commit doesn't mention an explicit bug.
> 
> Was 1.0.2 using "issues" in Github, or were bugs tracked somewhere else?  I 
> can't remember...
> 
> Thanks,
> 
> -Philip


Fwd: Utility of self-signed certs - Re: Questions about legacy apps/req.c code

2021-12-22 Thread David von Oheimb
Yeah, self-signed certs are absolutely useful - you just need to be very 
careful which ones you trust for what.


Such certs are widely used to provide trust anchor information, 
typically of root CAs,

but conceptually and pragmatically, as Jordan also stated below,
they can make much sense even for end entities, such as locally known 
and trusted servers or email users.


I spent quite some effort to get their (optional) acceptance re-enabled 
in Thunderbird:
https://bugzilla.mozilla.org/show_bug.cgi?id=1523130
but even one of their security(?) experts did not get my point and 
refused support.


    David

On 22.12.21 22:13, Jordan Brown wrote:

On 12/22/2021 1:08 PM, Philip Prindeville wrote:

I see there being limited application (utility) of self-signed certs, since 
they're pretty much useless from a security perspective, because they're 
unanchored in any root-of-trust.


They're OK once you take a leap of faith, check the fingerprint, or 
copy the certificate out of band.


In some senses they are *better* than a CA-based cert, because once 
established they are not vulnerable to CA compromise.

--
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris


PKCS#10 CSR generation and bulky crypto library - Re: Questions about legacy apps/req.c code

2021-12-22 Thread David von Oheimb

@Philip,

it should not be hard to copy the core code from apps/req.c and cut out 
all parts not needed for generating a PKCS#10 CSR (including its 
self-signature).
Yet beware that a general-purpose library function that has (at least) 
the flexibility offered by that app would need a non-trivial set of 
parameters.


I do not like to separate the code sections that handle the alternative 
case of generating a self-signed cert
because there are strong similarities with generating a PKCS#10 CSR, so 
a split would introduce quite some redundancy.
(The code would deserve some further cleanup, but this is a general 
issue that holds for many, if not all, those apps.)


@Kyle,

your comments regarding the (self-)signature key to be used for CSR 
signing vs. cert signing are not really to the point being asked.


Also your comments on OpenSSL library code size are a side topic here, 
though I fully agree that it would be great if
the crypto lib was relieved of much bulk (to which various people 
including myself have added quite a bit recently)
that would much better fit in a higher-level library. I suggested this 
4 years back (https://github.com/openssl/openssl/pull/4992), but so far 
the project members have not
found time for this. Later I re-phrased the issue as a major FR: 
https://github.com/openssl/openssl/issues/13440


Regards,

    David


On 22.12.21 19:58, Kyle Hamilton wrote:
From a conceptual perspective, I think "creating a CSR" should be 
different than "signing a CSR with a given keypair", and on that 
reason alone I'd separate them, allowing some small code duplication.


The difference between "signing with a certified key" and "signing 
with its own key" is really just a matter of determining the IssuerDN 
to put into the tbsCertificate, and that can be either an automatic 
process (a flag on the certificate generation call, an automatic 
verification that the signing key matches the key to be signed, the 
certificate generation call being provided a NULL certificate or DN to 
identify the signer, or something else) or a manual process (require 
library clients to know the lore that a self-signed key also needs to 
copy the SubjectDN to the IssuerDN).


But, "generate a certificate" isn't something I'd personally put into 
the basic SSL or crypto handling libraries. The reason is because 
OpenSSL is still used in many embedded systems that will never use 
that functionality, and putting code paths in place that will never be 
used is both a waste of code space and potentially an invitation for 
attackers to exploit their presence. (The same goes for key 
generation, to a degree, but the value of new key generation can at be 
either limited to Denial of Service or, at best, reset the device for 
a new deployment.)


I know it'll never happen, but I'd love to see another 
libcrypto/libssl client library (libx509, maybe?) be used for the more 
esoteric aspects of creating and verifying certificates.


-Kyle H

On Tue, Dec 21, 2021, 22:25 Philip Prindeville 
<mailto:philipp_s...@redfish-solutions.com>> wrote:


Hi,

I'm trying to add a library routine (or routines) to generate a
CSR and make that available to users of Openssl at the API level.

I'm thinking the shortest path might be to extract code from
apps/req.c as we know it's correct.

My only problem (so far) is dealing with the multiple places it
bifurcates based on gen_x509 (versus newreq) -- which David
pointed out to me in a separate mail thread back in mid-October.

What would be the downside to having two completely different code
paths for handling -x509 (and gen_x509) i.e. a self-signed
certificate versus generating a CSR?

The latter would allow me to move the CSR code into a library and
have the app exercise that API.

The only downside I can see is that the self-signed certificate
path might need to duplicate some of the library code.

Is that acceptable?

Thanks,

-Philip



Question About OpenSSL 3.0, FIPS and Solaris Support

2021-12-07 Thread David Dillard via openssl-users
Hi,

I'm hoping someone can shed some light on something that's confusing me.  In 
the blog post about the FIPS submission 
(https://www.openssl.org/blog/blog/2021/09/22/OpenSSL3-fips-submission/)
it states that one of the platforms being tested is "Oracle Solaris 
11.4 on Oracle SPARC M8-1".  However, the platform policy page 
(https://www.openssl.org/policies/platformpolicy.html) lists a number of 
Solaris platforms, all of which are currently "unadopted".  How should people 
interpret that?  That the initial release of OpenSSL 3.0 was supported on 
Solaris, but no releases after that are?  Or something else?


Thanks,

David



Re: Creating a CSR using OpenSSL v1.1.1

2021-10-12 Thread David von Oheimb

On 13.10.21 01:32, Philip Prindeville wrote:

Is there demo code for creating a CSR?

demos/x509/mkreq.c seems to have gone away a while ago...

Thanks!
What I generally take as demo/sample code is the OpenSSL apps 
implementation in apps/,
though that can be rather complicated due to the many options, which also 
holds for apps/req.c.
You can follow there the code sections starting with the call to 
X509_REQ_new_ex().


Sometimes interesting code snippets may also be found in test/, but not 
for CSR generation.
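
For the archive, a bare-bones sketch of what CSR generation boils down to with 
the plain 1.1.1 API (not taken from apps/req.c; error handling is omitted and 
the subject and key are illustrative):

#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

/* Minimal sketch: build and self-sign a PKCS#10 request for an existing key. */
static X509_REQ *make_csr(EVP_PKEY *pkey)
{
    X509_REQ *req = X509_REQ_new();
    X509_NAME *name = X509_REQ_get_subject_name(req);

    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char *)"test", -1, -1, 0);
    X509_REQ_set_pubkey(req, pkey);          /* public part of the key pair */
    X509_REQ_sign(req, pkey, EVP_sha256());  /* self-signature as PoP       */
    return req;
}

/* e.g.: PEM_write_X509_REQ(stdout, make_csr(pkey)); */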


    David



Re: Causes SSL_CTX_new to return NULL

2021-08-31 Thread David von Oheimb
Hello Hiroshi,

unfortunately the memory allocation failure reporting of OpenSSL is
still unsystematic;
see also https://github.com/openssl/openssl/issues/6251.

SSL_CTX_new() is pretty complex and can fail for many reasons.
In the case you quote below, its call of
EVP_get_digestbyname("ssl3-md5") fails for some reason.
Since you get this behavior only some of the time, it is clear that this
cannot be due to a statically determined
reason (such as the MD5 implementation not being available), so this
must be due to lack of memory.
It might also be due to some (other) issue with multi-threading, but
very likely it is not.
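
As a small illustrative addition (not part of the original exchange): dumping 
the whole error queue right at the point of failure helps to tell an 
out-of-memory condition apart from a genuinely missing algorithm.

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

static SSL_CTX *new_server_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());

    if (ctx == NULL) {
        /* Print every entry on the error queue, not just the last one;
         * an out-of-memory condition typically shows up as a
         * "malloc failure" reason somewhere in this output. */
        ERR_print_errors_fp(stderr);
    }
    return ctx;
}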

    David



On 31.08.21 03:19, 青木寛 / AOKI,HIROSHI wrote:
> I would like some advice as to why I am getting NULLs returned as a result of 
> calling SSL_CTX_new.
>
> The library I'm using is OpenSSL 1.1.1k.
> The argument to SSL_CTX_new is TLS_server_method().
> The message retrieved by ERR_get_error and ERR_error_string was the following.
>   "error:140A90F2:SSL routines:SSL_CTX_new:unable to load ssl3 md5 routines".
> The phenomenon does not always occur, but sometimes it does.
>
> In the environment where the problem occurred, many services were running and 
> memory was scarce, so I suspect that lack of memory was the cause.
> Are there any other possible causes?
> 
> Hiroshi Aoki
>


Re: OpenSSL API CRL Revoke Check: Coverage

2021-08-30 Thread David von Oheimb
Hello Dennis,

here are answers to your questions.

  * All CRL signatures are (by default) verified - otherwise status
    checking by CRLs would be insecure. The function used is
    def_crl_verify() in crypto/x509/x_crl.c
  * All CRLs are kept in the X509_STORE such that they can be reused for
    multiple cert verification calls, which typically have their own
    X509_STORE_CTX (see the sketch further below).
    When the cert chain has been built during verification of the target
    cert,
    the public keys of the intermediate (untrusted, but then verified)
    CA certs are used to verify the CRL signatures.
  * One needs to interpret "Untrusted objects should not be added in
    this way." in the context of the preceding sentence:
    "X509_STORE_add_cert() and X509_STORE_add_crl() add the respective
    object to the X509_STORE's local storage."
    Certs can be trusted or not, but CRLs are not trusted by themselves.
    So the above sentence is in fact a bit misleading
    and should better be re-phrased to: "Untrusted certificates should
    not be added in this way."
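
As a rough sketch of the setup described above (illustrative only; error 
handling is minimal, and the flags assume you want CRL checking for the whole 
chain):

#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Sketch: verify 'target' against the trusted certs + CRLs in 'store',
 * with 'untrusted' holding the intermediate CA certs. */
static int verify_with_crls(X509_STORE *store, X509 *target,
                            STACK_OF(X509) *untrusted)
{
    int ok;
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();

    /* Require CRL checking for the leaf and for every CA in the chain. */
    X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK
                                | X509_V_FLAG_CRL_CHECK_ALL);

    X509_STORE_CTX_init(ctx, store, target, untrusted);
    ok = X509_verify_cert(ctx);                    /* 1 means success */
    if (ok != 1)
        fprintf(stderr, "verify error: %s\n",
                X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));
    X509_STORE_CTX_free(ctx);
    return ok == 1;
}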

Regards,

    David

On 28.08.21 03:52, bl4ck ness wrote:
>
> Hello,
>
> I'm trying to use OpenSSL to validate a certificate chain with CRLs.
> To achieve this, I create a X509_STORE and add trusted (root)
> certificates into it via X509_STORE_add_cert(). I also add CRLs
> published by root and intermediate CAs into the store using
> X509_STORE_add_crl(). Then I create a X509_STORE_CTX for this store
> and using X509_STORE_CTX_init() function I set intermediate certs via
> its chain parameter and target (leaf) cert via its x509 parameter.
>
> When I verify cert chain using X509_verify_cert:
>
>   * Are these CRLs checked for a valid digital signature (both CRLs
> root & intermediate) ?
>   * Since store should only contain trusted root certificates why
> should I add CRLs published by intermediate certificates into the
> store but not to somewhere else (for example ctx)?
>   * Documentation for X509_STORE_add_crl "Untrusted objects should not
> be added in this way". What does this mean?
>
>
> Dennis K.


problems with too many ssl_read and ssl_write errors

2021-08-18 Thread David Bowers via openssl-users
  *   We have a server that has around 2025 clients connected at any instant.
  *   Our application creates a server/listener socket that is then converted 
into a secure socket using the OpenSSL library. This is compiled and built in a 
Windows x64 environment.  We also built OpenSSL for Windows. The listener 
socket is created with a default backlog of 500. The accept socket is a 
non-blocking socket and waits for connections.
  *   Every client makes a regular blocking connection to the server. The 
server accepts the connection, after which the client socket is converted to a 
secure socket using the OpenSSL library.
  *   The connections are coming in at a rate of about 10 connections/second?  
Not sure about this number.
  *   We are able to connect to all the clients in a few minutes and it stays 
like that for some time.  There is a constant exchange of messages between the 
server (COS) and the clients without issues.
  *   The application logic is to keep trying to connect every timeout.
  *   After maybe a few hours/days we see the clients dropping connections.  
The logs indicate that SSL_read or SSL_write on the server fails for a client 
with SSL error number 5 (SSL_ERROR_SYSCALL) and the equivalent Windows error of 
WSAETIMEDOUT.  We then observe WSAECONNRESET as the client closed the 
connection.  We see this behavior at multiple sites.
  *   The number of disconnected clients starts increasing and we see logs on 
the client side where the server refuses any more connections from clients 
(10061 - WSAECONNREFUSED). There is nothing to indicate this state in the 
server logs. Our theory is that the backlog is filled and the server is 
refusing further connections.
  *   We are trying to find out why we get the SSL_read/SSL_write error, as it 
is a blocking socket. We cannot switch to a non-blocking socket due to platform 
and application limitations.





Re: Parsing subject/issuer strings in X.509

2021-07-23 Thread David von Oheimb
What I use is

    X509_NAME *nname = parse_name(string, MBSTRING_ASC, 1, desc);

which is not an official API function but defined in apps/lib/apps.c:

/*
 * name is expected to be in the format /type0=value0/type1=value1/type2=...
 * where + can be used instead of / to form multi-valued RDNs if canmulti
 * and characters may be escaped by \
 */
X509_NAME *parse_name(const char *cp, int chtype, int canmulti, const
char *desc)

Would be good to have such a function as part of the X.509 API.
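
Until then, a rough sketch of doing it by hand with public API calls, for a
caller that already has the attribute/value pairs split out (splitting the
string form is left to the application, as Viktor notes below; the values
here are made up for illustration):

#include <stdio.h>
#include <openssl/x509.h>

/* Sketch: build the equivalent of "/C=US/O=Example/CN=host" one
 * already-split attribute/value pair at a time. */
static X509_NAME *build_name(void)
{
    X509_NAME *name = X509_NAME_new();

    X509_NAME_add_entry_by_txt(name, "C", MBSTRING_ASC,
                               (const unsigned char *)"US", -1, -1, 0);
    X509_NAME_add_entry_by_txt(name, "O", MBSTRING_ASC,
                               (const unsigned char *)"Example", -1, -1, 0);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char *)"host", -1, -1, 0);
    X509_NAME_print_ex_fp(stdout, name, 0, XN_FLAG_RFC2253);
    return name;
}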

    David

On 23.07.21 07:49, Viktor Dukhovni wrote:
>> On 22 Jul 2021, at 9:29 pm, Philip Prindeville 
>>  wrote:
>>
>> I'm wondering what the function is that takes a string and returns X509_NAME 
>> with the attribute/value pairs of the parsed DN.
> There is no such function in general, since the are many potential
> string forms of X.509 names, not all of which are unambiguously
> machine readable.
>
> There are various functions for augmenting a partially built name
> with an attribute-value pair, but the parsing of a string a list
> of such attribute-value pairs is up to you. :-(
>


Re: [openssl CMP with pkcs11 engine]

2021-07-12 Thread David von Oheimb
Hi Marc,

I just came across your message from March below, which arrived in a
somewhat garbled form (I removed duplicate text sections in the quote
below) and appears to be unanswered - sorry for that.

It has been a while since I last used engines, but the following variant

   -newkey org.openssl.engine:pkcs11:

should work, rather than

  -engine pkcs11 -keyform engine

because the latter pertains to all key options used, including -key,
which is not what you want.

HTH,

    David


On 25.03.21 18:56, mbalembo wrote:
>
> Hello all,
>
>
> I'm trying to do a CMP request using openssl with a private key inside
> a pkcs11 device (on linux).
> So i'm using opsenssl 3.0.0 alpha 13.
>
> I did compile fine (./config --prefix=/opt/openssl enable-deprecated
> --openssldir=/usr/local/ssl -Wl,-rpath=/opt/openssl/lib),
> but i ran into trouble when compiling libp11 to get my pkcs11 engine.
> (i had a similar issue while trying to use tpm2-tss-engine)
> I can't find a way to build openssl with ERR_put_error() symbol.
> I know it's deprecated so i changed the code in libp11 to use
> ERR_raise() instead, but again the symbol is also missing.
> I ended up removing the function call in the engine as a dirty fix,
> but i'd like to have a better solution.
>
>
> So, with everything compiled, I tried to use the engine only and
> create a CSR first.
>
> # /opt/openssl/bin/openssl req -new -engine pkcs11 -keyform engine
> -key
> 
> "pkcs11:model=SLB9670;manufacturer=Infineon;serial=;token=tpm2-token;id=%c1%b2%36%b2%eb%53%f0%4f%ea%24%1a%4d%01%ac%d1%9e%fe%11%19%6d;object=test;type=private;pin-value=00"
> -subj "" -out testpkcs11.csr
>
>
> and, everything works so far !
>
> but i get errors when trying to do a CMP request with the engine,
> thing is, i'm not so sure of the command.
>
> # /opt/openssl/bin/openssl cmp -cmd ir -engine pkcs11 -server  server>:8080 -path ejbca/publicweb/cmp/WKS-RA-Bootstrap_auth -cert
>  -key file: -keypass
> file: -keyform engine -newkey
> 
> "pkcs11:model=SLB9670;manufacturer=Infineon;serial=;token=tpm2-token;id=%c1%b2%36%b2%eb%53%f0%4f%ea%24%1a%4d%01%ac%d1%9e%fe%11%19%6d;object=test;type=private;pin-value=00"
> -subject '' -certout testcmppkcs11.pem -trusted <> my
> root CA> -reqexts san -config /opt/conf/openssl_reqext.cnf
>
>
> i get the following error :
>
> cmp_main:apps/cmp.c:2728:CMP info: using section(s) 'cmp' of
> OpenSSL configuration file '/opt/conf/openssl_reqext.cnf'
> cmp_main:apps/cmp.c:2737:CMP info: no [cmp] section found in
> config file '/opt/conf/openssl_reqext.cnf'; will thus use just
> [default] and unnamed section if
> present
> Engine "pkcs11" set.
> Format not recognized!
> The key ID is not a valid PKCS#11 URI
> The PKCS#11 URI format is defined by RFC7512
> The legacy ENGINE_pkcs11 ID format is also still accepted for now
> Format not recognized!
> The key ID is not a valid PKCS#11 URI
> The PKCS#11 URI format is defined by RFC7512
> The legacy ENGINE_pkcs11 ID format is also still accepted for now
> PKCS11_get_private_key returned NULL
> Could not read private key for CMP client certificate from
> org.openssl.engine:pkcs11:file:/foo/usine.boot.key.pem
> 00E01783A47F:error:1380:engine routines:ENGINE_load_private_key:failed
> loading private key:crypto/engine/eng_pkey.c:78:
> cmp_main:apps/cmp.c:2879:CMP error: cannot set up CMP context
>
>
> I'm quite confused about the PKCS11 error since I know from the req
> comman

Re: CMP mock server OldCertID check behavior

2021-07-12 Thread David von Oheimb
Hello Petr,

thank you for your message and filing the related issue at
https://github.com/openssl/openssl/issues/16041.
I very much appreciate such feedback on the new CMP implementation and
its tests.

You are right that the behavior of the mock server appears pretty
strange regarding the checks on the |oldCertID| in |kur| messages.
This is because for the HTTP-based OpenSSL-internal CMP test cases the
mock server deals, as you noticed, with just a single certificate.
For this reason, the short-circuit comparison given in |cmp_mock_srv.c|
is sufficient but at least would have deserved an explaining comment and
documentation.

In order to make the mock server more generally useful, I've extended it
in https://github.com/openssl/openssl/pull/16050
by giving the option -ref_cert to specify an independent reference
certificate to be used for the checks done for |kur| and |rr| messages.

Kind regards,

    David

On 08.07.21 13:17, Petr Gotthard wrote:
>
> Hello,
>
>  
>
> I am trying to renew a certificate via CMP and authenticate the
> request using the same cert.
>
>  
>
> I start the mock server:
>
> openssl cmp -port 8080 -srv_trusted test-ca-cert.pem \
>
>     -srv_key test-server-key.pem -srv_cert test-server-cert.pem \
>
>     -rsp_cert test-client-cert2.pem -rsp_capubs test-ca-cert.pem &
>
>  
>
> And run the client:
>
> openssl cmp -cmd kur -server localhost:8080/pkix/ -srvcert
> test-server-cert.pem \
>
>     -key test-client-key.pem -cert test-my-cert.pem \
>
>     -newkey test-client-key2.pem -certout test-my-cert2.pem
>
>  
>
> However, the CMP server(?) compares the serial number of the old
> client certificate with the serial of the new (enrolled) certificate
> and fails. (I can make the enrollment succeed if I force the old and
> the new certificate to have the same serial.)
>
>  
>
> CMP error: received error:PKIStatus: rejection; PKIFailureInfo:
> badRequest; StatusString: "wrong certid"; errorCode: 1DBD;
> errorDetails: CMP routines, wrong certid
>
>  
>
> What am I doing wrong, please? It is quite obvious the new certificate
> will have a different certid, isn’t it?
>
>  
>
>  
>
> Kind Regards,
>
> Petr
>


Re: OpenSSL CNG engine on GitHub

2021-07-02 Thread David von Oheimb
Hello Reinier,

around five years back I was looking for such an implementation as an
alternative to the rather limited CAPI engine, mostly because the
C(rypto )API does not support ECC.
The only thing I found at that time was
https://mta.openssl.org/pipermail/openssl-dev/2016-June/007362.html and
I do not know how it evolved since then.
So I am very pleased to see that meanwhile there is a way of using core
features of Windows CAPI Next Generation (CNG) from OpenSSL.

Many thanks to RTI for providing this as open-source development under
the Apache license.
I currently do not have the time for a closer look or even trying it
out, but this looks very good and well documented.
In particular,
https://openssl-cng-engine.readthedocs.io/en/latest/using/openssl_commands.html
gives a nice example how to use the Windows cert & key store.
Porting this to the new OpenSSL crypto provider interface will likely
lift the limitation regarding RSA-PSS support, which lacks just due to
the engine interface.

Cheers,

    David


On 01.07.21 19:49, Reinier Torenbeek wrote:
> Hi,
>
> For anyone interested in leveraging Windows CNG with OpenSSL 1.1.1,
> you may want to check out this new OpenSSL CNG Engine project on
> GitHub: https://github.com/rticommunity/openssl-cng-engine . The
> associated User's Manual is on
> ReadTheDocs: https://openssl-cng-engine.readthedocs.io/en/latest/index.html
> .
>
> The project implements the majority of the EVP interface, to leverage
> the BCrypt crypto implementations, as well as a subset of the STORE
> interface, for integration with the Windows Certificate and
> Keystore(s), via the NCrypt and Cert APIs. It has been tested with
> 1.1.1k on Windows 10, with Visual Studio 2017 and 2019. It is released
> under the Apache-2.0 license.
>
> Any feedback is welcome, please send it to me or open an issue on GitHub.
>
> Best regards,
> Reinier


Re: [EXTERNAL] Re: GNU Make erroring on makefile

2021-07-01 Thread David von Oheimb
On Thu, 01 Jul 2021 15:22:46 +0200, Joe Carroll wrote:

> I'm getting a "missing separator" error on line 56.
It would be good to add a note to the top of both Makefile and makefile saying
which flavor of make they are intended for,
and maybe we can add some check to them that gives a more to-the-point
hint if an unsuitable one is used.

>  I do not have access to nmake.exe.

Everyone who uses a VC-* configuration should have access to cl.exe and
nmake.exe.

    David


On 01.07.21 16:55, Joe Carroll wrote:
> Thanks Matt.  That clears it up.
>
>
>
> -Original Message-
> From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of 
> Matt Caswell
> Sent: Thursday, July 1, 2021 9:40 AM
> To: openssl-users@openssl.org
> Subject: Re: [EXTERNAL] Re: GNU Make erroring on makefile
>
>
> On 01/07/2021 15:06, Joe Carroll wrote:
>> Windows 10
>> perl Configure VC-WIN64A
> The VC-WIN64A target generates a Makefile suitable for consumption by 
> nmake. Hence it's not possible to use GNU make with it.
>
> [...]
>
>
> Matt
>
>
>
>
>> -Original Message-
>> From: Richard Levitte [mailto:levi...@openssl.org]
>> Sent: Thursday, July 1, 2021 8:25 AM
>> To: Joe Carroll 
>> Cc: openssl-users@openssl.org
>> Subject: [EXTERNAL] Re: GNU Make erroring on makefile
>>
>> How did you configure, and on what platform?
>>
>> On Thu, 01 Jul 2021 15:22:46 +0200,
>> Joe Carroll wrote:
>>> Has anyone successfully used GNU Make as part of the install process for 
>>> version 1.1.1k or later?
>>> I'm getting a "missing separator" error on line 56.  I do not have access 
>>> to nmake.exe.
>>>   
>>> !IF "$(DESTDIR)" != ""
>>>
>>>


Re: Compilation issues

2021-06-29 Thread david raingeard
s13.c:48

ssl/record/rec_layer_s3.c:1056

ssl/record/rec_layer_s3.c:1059

ssl/record/rec_layer_s3.c:1062

Sent Record

Header:

  Version = TLS 1.2 (0x303)

  Content Type = Alert (21)

  Length = 2

ssl/record/rec_layer_s3.c:1067 SSL_TREAT_AS_TLS13(s)=1
s->enc_write_ctx=0x

ssl/record/rec_layer_s3.c:1076

ssl/record/rec_layer_s3.c:1079

Level=fatal(2), description=bad record mac(20)


ssl/record/rec_layer_s3.c:1312

ssl/record/rec_layer_s3.c:1315

0:error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad
record mac:ssl/record/ssl3_record.c:698:

---

no peer certificate available

---

No client certificate CA names sent

Server Temp Key: X25519, 253 bits

---

SSL handshake has read 4796 bytes and written 241 bytes

Verification: OK

---

New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384

Secure Renegotiation IS NOT supported

Compression: NONE

Expansion: NONE

No ALPN negotiated

Early data was not sent

Verify return code: 0 (ok)

-

On Tue, 29 Jun 2021 at 18:06, Jan Just Keijser wrote:

> On 29/06/21 11:58, david raingeard wrote:
> > Hello,
> >
> > Technically, what prevents openssl 1.1.1g from compiling correctly on some
> > operating systems like Solaris 2.6, CentOS 7.8,... ?
> >
> >
> you will have to provide more details - openssl 1.1.1g compiles just
> fine on CentOS 7 (7.9 in my case).
>
> Can't talk about Solaris 2.6 , other than that it has been out of
> support since July 2006.
>
> HTH,
>
> JJK
>
>


Compilation issues

2021-06-29 Thread david raingeard
Hello,

Technically, what prevents openssl 1.1.1g from compiling correctly on some
operating systems like Solaris 2.6, CentOS 7.8,... ?

thank you !


openssl 1.1.1 debugging

2021-06-24 Thread david raingeard
hello

Is it possible to have some kind of debug server which will always use the
same data, so I can debug the code?

I mean, I have openssl working with TLS 1.3 and SSL3 on Ubuntu, so I
could compare the logs with the ones on the SPARC and find out where
it goes wrong?

thank you


openssl 1.1.1k on solaris 2.6 sparc

2021-06-24 Thread david raingeard
Hello,

I compiled it using sun compiler, with some modifications to the source
code.

However :)

 openssl s_client -connect google.com:443 -tls1_2

works fine !

But
  openssl s_client -connect google.com:443 -tls1_3

fails on CRYPTO_memcmp.

For easy debugging, I have made a copy of  CRYPTO_memcmp in gcm128,
called CRYPTO_gcm128_memcmp.

Here is what I get (I added some logging). As you can see, ctx->Xi.c and tag
don't match.

I have looked for hours to find why, with no luck yet.
Any idea how to debug this ? Some tests to run to check if everything is ok
?


crypto/modes/gcm128.c:1931 ctx->EK0.u[0]=a2e1d0203e9a02ca
crypto/modes/gcm128.c:1932 ctx->EK0.u[1]=9fc11c97afde22db
crypto/modes/gcm128.c:1933 ctx->Xi.u[0]=a22699a2cb77c69d
crypto/modes/gcm128.c:1934 ctx->Xi.u[1]=5af190e82eeffaf3
crypto/modes/gcm128.c:1937 after xor:
crypto/modes/gcm128.c:1938 ctx->Xi.u[0]=c74982f5edc457
crypto/modes/gcm128.c:1939 ctx->Xi.u[1]=c5308c7f8131d828
crypto/modes/gcm128.c:1941
crypto/modes/gcm128.c:1834 CRYPTO_gcm128_memcmp
len=16
00^a7
c7^c1
49^4d
82^51
f5^0b
ed^25
c4^ae
57^26
c5^d2
30^66
8c^33
7f^82
81^0f
31^75
d8^a4
28^e0
crypto/modes/gcm128.c:1842 CRYPTO_gcm128_memcmp
crypto/modes/gcm128.c:1957 ret = 255


Re: How to dump all certificates from a file?

2021-04-07 Thread David von Oheimb
I also had this problem several years back but did not find the nifty
though counter-intuitive workaround using crl2pkcs7 given below.

Since then I've been using a Perl script like this:

> #!/usr/bin/perl
> $/ = '-END CERTIFICATE-';
> while(<>) {
> if(m|$/|s) {
> print STDERR "## $ARGV ##\n";
> system "echo '$_' | openssl x509 -noout -text";
> }
> }

which unfortunately does not work with "TRUSTED CERTIFICATE".

I think the x509 command should be extended to print all certs.

David
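
A minimal C counterpart (not part of the thread) that iterates over every PEM 
certificate in a file using PEM_read_bio_X509_AUX(), which also accepts 
"TRUSTED CERTIFICATE" blocks:

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/err.h>

    /* Print the text form of every certificate found in a PEM file;
     * returns the number of certificates printed, or -1 on error. */
    static int dump_all_certs(const char *path)
    {
        int count = 0;
        X509 *cert;
        BIO *in = BIO_new_file(path, "r");

        if (in == NULL)
            return -1;
        while ((cert = PEM_read_bio_X509_AUX(in, NULL, NULL, NULL)) != NULL) {
            X509_print_fp(stdout, cert);
            X509_free(cert);
            count++;
        }
        ERR_clear_error();  /* discard the expected "no start line" error at EOF */
        BIO_free(in);
        return count;
    }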

On 7 April 2021 04:58:38 CEST, Nan Xiao  wrote:
> Hi Viktor,
> 
> > By "a file" you clearly mean a "PEM file" with one or more certificates
> enclosed in "-BEGIN ...".."-END ..." delimiters.
> 
> Yes, this is what I mean.
> 
> > openssl crl2pkcs7 -nocrl -certfile somefile.pem |
> openssl pkcs7 -print_certs -text
> 
> Works like a charm! Thanks very much for your time and quick response!
> 
> Best Regards
> Nan Xiao
> 
> On Wed, Apr 7, 2021 at 10:46 AM Viktor Dukhovni
>  wrote:
> >
> > On Wed, Apr 07, 2021 at 10:14:42AM +0800, Nan Xiao wrote:
> >
> > > Greetings from me! By default openssl-x509 can only dump one
> > > certificate from the file:
> >
> > By "a file" you clearly mean a "PEM file" with one or more certificates
> > enclosed in "-BEGIN ...".."-END ..." delimiters.  With that
> > proviso, the command in question is:
> >
> > openssl crl2pkcs7 -nocrl -certfile somefile.pem |
> > openssl pkcs7 -print_certs -text
> >
> > The output format can be tweaked slightly, though not quite as much as
> > will "openssl x509".  See the pkcs7(1) manpage for details.
> >
> > --
> > Viktor.
> 


OpenSSL chain build error diagnostics - Re: Why does OpenSSL report google's certificate is "self-signed"?

2021-04-03 Thread David von Oheimb
Hi Nan, Viktor, et al.,

From: openssl-users <https://mta.openssl.org/mailman/listinfo/openssl-users> On Behalf Of
Viktor Dukhovni, Sent: Wednesday, 31 March, 2021 10:31
> Most likely you haven't configured a suitable CAfile and/or CApath,
> which contains the root CA that ultimately issued Google's certificate.

Yeah, that is the usual reason.

> It looks like Google includes a self-signed root CA in the wire
> certificate chain,
>
Not really. @Viktor, see the diagnostic output of the alternative call

   openssl s_client -connect google.com:443

that Nan provided below (and which is easy to reproduce):

> ---
> Certificate chain
>  0 s:C = US, ST = California, L = Mountain View, O = Google LLC, CN =
> *.google.com
>i:C = US, O = Google Trust Services, CN = GTS CA 1O1
>  1 s:C = US, O = Google Trust Services, CN = GTS CA 1O1
>i:OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
> ---
This chain does not include the root cert (which would be by GlobalSign
in this case).

@all, contributing to the discussion that has spawned over the last couple of
days on whether the server should include the root of its chain:
IMO it should be advised not to include the root cert (i.e., the trust
anchor).
While the (needless) extra amount of data is usually not a problem,
the main problem that I see is that the receiver may be misled into
accepting the root cert as trusted although, when received this way, it is
not trustworthy.
Instead, when verifying the server chain, the receiver must already have
a trust store containing (root) certs that are considered trusted,
and for the chain received from the server there should be a suitable
trust anchor (which typically takes the form of a self-signed cert) in
that trust store.


> and if no match is found in the trust store,
> you'll get the reported error.
The reason must be something else. Note that the error was
X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT,
which means that the chain built contains only one element, and this
element is self-signed and not trusted.
So it cannot be the chain  *.google.com ->  GTS CA 1O1 -> GlobalSign.

@Nan, I find this error very unexpected - something pretty strange must
have happened in your application.
If no suitable trusted root is available in the trust store, the error
thrown should have been
20 ("unable to get local issuer certificate") =
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY.

BTW, many of those OpenSSL verify error codes are IMHO pretty hard to
(correctly) understand and therefore should be re-phrased for clarity.
And unfortunately OpenSSL by default does not give much further
diagnostics on cert verification errors.
I advise using `X509_STORE_CTX_print_verify_cb()` which I added last
year to the master as part of the CMP contribution.
This can be done simply as follows:

    X509_STORE_set_verify_cb(my_X509_STORE, X509_STORE_CTX_print_verify_cb);

On X509_verify_cert() error, this provides in the error queue not only
the error code and string, but also the cert for which the error occurred
as well as the set of untrusted certs and the set of trust anchor certs
that were available for chain building in the current X509_STORE_CTX.

Regards,

   David
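
A minimal sketch, not taken from the message above, of wiring such a diagnostic 
callback into a TLS context; it assumes an OpenSSL build that already contains 
X509_STORE_CTX_print_verify_cb (i.e. the current master, as described above):

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* Attach the diagnostic verify callback to the X509_STORE of an SSL_CTX,
     * so that failed X509_verify_cert() calls leave detailed information
     * (failing cert, untrusted and trusted certs) in the error queue. */
    static int enable_verify_diagnostics(SSL_CTX *ctx)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);

        if (store == NULL)
            return 0;
        X509_STORE_set_verify_cb(store, X509_STORE_CTX_print_verify_cb);
        return 1;
    }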


On 31.03.21 07:49, Nan Xiao wrote:
> Hi OpenSSL users,
>
> Greetings from me!
>
> I am using the master branch of OpenSSL and testing client-arg program
> (in demos/bio) with "google.com:443":
>
> # LD_LIBRARY_PATH=/root/openssl/build gdb --args ./client-arg -connect
> "google.com:443"
> ..
> (gdb)
> 91 if (BIO_do_connect(sbio) <= 0) {
> (gdb)
> 97 if (BIO_do_handshake(sbio) <= 0) {
> (gdb) p ssl->verify_result
> $1 = 18
>
> The connection is successful, but the ssl->verify_result is 18, i.e.,
> X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT. I am a little confused why
> OpenSSL reports google's certificate is "self-signed"? And it should
> be not. The following result is from "openssl s_client":
>
> # openssl s_client -connect google.com:443
> CONNECTED(0003)
> depth=2 OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
> verify return:1
> depth=1 C = US, O = Google Trust Services, CN = GTS CA 1O1
> verify return:1
> depth=0 C = US, ST = California, L = Mountain View, O = Google LLC, CN
> = *.google.com
> verify return:1
> ---
> Certificate chain
>  0 s:C = US, ST = California, L = Mountain View, O = Google LLC, CN =
> *.google.com
>i:C = US, O = Google Trust Services, CN = GTS CA 1O1
>  1 s:C = US, O = Google Trust Services, CN = GTS CA 1O1
>i:OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
> ---
>
> Anyone can give some clues? Thanks very much in advance!
>
> Best Regards
> Nan Xiao
>


Version compatibility issues - Re: openssl development work / paid

2021-03-26 Thread David von Oheimb
Embedded Devel,

my sympathy - I know this can be painful and frustrating.

From which old OpenSSL version to which target version do you need to
get the code updated?
And as info for whoever may be considering picking up this task: what is
your timeline for that?

Within OpenSSL we are currently discussing how to handle version
compatibility issues
with the upcoming version 3.0 at
https://github.com/openssl/openssl/issues/14628.

Can you give some concrete typical examples of which exact issues you are
facing?

    David

On 25.03.21 13:58, Floodeenjr, Thomas wrote:
> If your problem is the migration from 1.0.2 to 1.1.1, I have attached my 
> porting notes, if that helps.
>
> -Tom 
>
> -Original Message-
> From: openssl-users  On Behalf Of Embedded 
> Devel
> Sent: Wednesday, March 24, 2021 8:02 PM
> To: openssl-users@openssl.org
> Subject: openssl development work / paid
>
> I tried to get through this on my own, not being an openssl developer, made 
> progress but still no joy
>
> so we had an app that was written some 8-10 years ago, which worked fine for 
> client/server tls
>
> update to today, no longer functional, deprecations in openssl cause errors
>
> it is not a large app, and i believe if someone were to resolve the openssl 
> issues it would work again
>
> whos up for making some money ?
>
>
> Thanks
>


Hoping to get a working example of SFTP in PHP

2021-01-24 Thread David Spector
This question may be considered off-topic, since it is not directly about 
using the OpenSSL library. Let me know if you want me to delete this 
posting.


I have a question about uploading a file (text.txt) securely in PHP 
using the SFTP protocol and a public/private key pair. I have posted 
this question is several fora, but no one seems to know for sure how to 
do it. Although I've received much advice, none of the actual published 
examples are known to work, and do not work for me. I know my parameters 
are okay, since they work in CoreFTP and PuTTY, two manual applications 
that implement SFTP with private keys. Also, a short FTP insecure 
example works perfectly for me. There seem to be many subtle 
undocumented issues with getting any of the published example code to 
work. I'm inexperienced with SFTP in PHP and am hoping to locate a 
working example.


So far I have tried a number of examples using the ssh2 PHP extension 
and the cURL PHP extension. They fail, and the error messages, if any, 
are uninformative.


A little background:

I do website development work for two small companies on a Windows 
computer (Apache/PHP 7), uploading to a remote production server 
(VPS/cPanel/Centos). My local software versions are: Windows 10 
Home/2004, Bitnami WAMP/7.4.13-0, Apache/2.4.46 (Win64), OpenSSL/1.1.1h, 
PHP/7.4.13, libssh2/1.9.0.


Where I am now is that I used cPanel on the remote server to install an 
SSH/SFTP public key for a particular account. I then copied the 
generated encrypted x.key and x.ppk private key files to my local 
computer. Now I am ready to upload files securely, if only PHP made the 
task easy. Unfortunately, it does not.


I know that phpseclib is available, but it is a very large PHP source 
code library, so I'd rather use the php_ssh2.dll or cURL PHP extension, 
the OpenSSL PHP extension (if it can do SFTP), or something similar.


In summary, I'm looking for someone who has got SFTP file upload with 
private key working in PHP to share their code with me. I pledge to test 
the code and post it further to help others with this difficult but 
widely useful problem.


Re: Parsing and generating CBOR certificates?

2021-01-21 Thread David von Oheimb
I'd welcome support for CBOR(-encoded) certificates since they can save
a lot of space
for both the data itself and the code handling it, which may be vital
for IoT scenarios, for instance.
It looks like the standardization of their definition got pretty far
already.

Although it is certainly possible to convert between DER-encoded ASN.1
(or at least its subset needed for X.509 certs) and CBOR,
this is not strictly needed since there is a definition of natively
signed CBOR certs.
Thus all the ASN.1 fuzz, which is bulky and error-prone to implement and
use, can be avoided then.

https://tools.ietf.org/html/draft-mattsson-cose-cbor-cert-compress writes:

   The use of natively signed CBOR certificates removes the need for
   ASN.1 encoding, which is a rich source of security vulnerabilities.


It may also be worth noting in this context that due to its sheer size
the OpenSSL code itself is not suited for constrained systems.
Yet even then it would make sense if OpenSSL supported CBOR certs
because they could be used by TLS peers on constrained systems.
Moreover, when using only natively signed CBOR certs it should be possible
(though likely hard to achieve with the current strongly ASN.1 entangled
libcrypto code)
to build OpenSSL without any ASN.1 support, which should reduce code
size drastically.

I suggest opening a feature request at
https://github.com/openssl/openssl/issues

Regards,
    David

On 21.01.21 02:07, Blumenthal, Uri - 0553 - MITLL wrote:
> On 1/20/21, 19:42, "Benjamin Kaduk"  wrote:
>>And again, where do you believe such a conversion is specified?
> What do you mean "specified"? There's an ASN.1 "specification" of the 
> certificate format, which theoretically can be encoded into whatever - DER, 
> PER, OER, etc. One such tool (https://github.com/mouse07410/asn1c.git that I 
> use) generates from ASN.1 file codecs for many encoding formats, and is able 
> to convert between them.
>
> Unfortunately, there's no ASN.1 -> CBOR codec generator, AFAIK, which is why 
> I'm asking here.
>
>>   The IETF internet-draft I reference is a way to do so, but it is (to 
>> repeat)
>>   very much a work in progress.
> Understood. Do you know if there's any code behind it? Or just the "theory"?
>
> Thanks!
>
> On Thu, Jan 21, 2021 at 12:35:24AM +, Blumenthal, Uri - 0553 - MITLL 
> wrote:
>> I meant not "CBOR protocol" (which,  in all likelihood, doesn't and 
>> shouldn't exist) but CBOR encoding of X.509 certificates (which, hopefully, 
>> does exists).
>>
>> At least, I'm looking for a tool that would convert between these two 
>> encodings (DER and CBOR) for specific objects (X.509-conformant 
>> certificates).
>>
>> Thanks
>>
>> Regards,
>> Uri
>>
>>> On Jan 20, 2021, at 19:26, Kaduk, Ben  wrote:
>>>
>>> No.  OpenSSL does not include any CBOR protocol support.
>>> I'm also not sure what you mean by "CBOR-encoded certificate"; I don't
>>> know of any such thing other than
>>> https://datatracker.ietf.org/doc/draft-mattsson-cose-cbor-cert-compress/
>>> which is very much still a work in progress.
>>>
>>> -Ben
>>>
>>> 
>>> From: Blumenthal, Uri - 0553 - MITLL 
>>> Sent: Wednesday, January 20, 2021 4:22 PM
>>> To: openssl-users
>>> Subject: Parsing and generating CBOR certificates?
>>>
>>> I need to work with CBOR-encoded certificates. Is there any way to use 
>>> OpenSSL to parse and/or generate certs in CBOR encoding?
>>>
>>> Thanks
>>>
>>> Regards,
>>> Uri


Re: Directly trusted self-issued end-entity certs - Re: How to rotate cert when only first matching cert been verified

2021-01-01 Thread David von Oheimb
On 01.01.21 08:07, 定平袁 wrote:
> @David von Oheimb <mailto:d...@ddvo.net>
> Thank you so much for your deep investigation!
My pleasure!

> With subjectKeyIdentifier and authorityKeyIdentifier extensions, it
> works like a charm!
Good to hear.
I've meanwhile submitted a pull request that fixed the behavior also  in
case no SKID and AKID are included in the certs
and briefly mentioned your use case there:
https://github.com/openssl/openssl/pull/13748

> So, the former statements I found on this page
> <https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_load_verify_locations.html>
> only apply to CA certs, not EE certs.
> How to pick up a cert from the trust store (or cert container, as you say)
> is decided by the different implementations themselves, do I understand
> correctly?
It looks like my explanations were a bit mistakable.
Although self-signed (and more generally, self-issued) EE certs are out
of scope of RFC 5280, OpenSSL still tries to build a cert chain for them
and then to verify it.
Please also note that I did not write "cert container", but that these
certs are essentially just a convenient container /for a public key/.
In other words, they have the /format/ of an X.509 certificate, but the
only thing that really matters in such a cert is the public key.
Yet since they look like a certificate, they can be used where a
certificate is expected, e.g., in TLS handshake and in trust stores.

> Since GnuTls and golang could pick up the right cert in this kind of
> scenario,
> they must implement their own logic to pick up the right cert, do you
> think OpenSSL
> will implement this logic too? Or is it a more appropriate approach to
> just
> use the extensions you suggested?
With the fix mentioned above, chain building and verification will
always succeed,
regardless of what the cert looks like, because in this case it is sufficient
to find the target certificate in the trust store,
without having to check any further data that may be included in it.
Although not required by RFC 5280 for such a cert, OpenSSL does check
for its expiration
(and may check policy restrictions etc.) because this is helpful in most
application scenarios.

Regards,

    David


> David von Oheimb <d...@ddvo.net> wrote on Saturday, 26 December 2020
> at 5:17 PM:
>
> On 25.12.20 00:35, 定平袁 wrote:
>> @David von Oheimb <mailto:d...@ddvo.net> I will update to a new
>> version and try again.
>
> Good. Ideally try also a current 3.0.0 alpha release because there
> have been some changes to cert chain building and verification
> recently.
>
>> To append cert is to make sure new cert and old cert both exist
>> in trust store, thus when server switches cert, it can be trusted
>> by client.
> Understood, but my point was on a different aspect:
> The chain building will take the first matching cert, so if you
> want to prefer the new cert, it must be in the list *before* the
> old one -
> in other words, prepend the new cert to the list rather than
> appending to it.
>
>> @Jochen actually, the certs have different SN, which indeed is
>> not consistent with the man doc
>
> Different certs with the same issuer indeed *must* have different
> SNs (except in the special case I mention below).
> See also RFC 5280 section 4.1.2.2
> https://tools.ietf.org/html/rfc5280#section-4.1.2.2:
>
>   It MUST be unique for each certificate issued by a given CA
>  (i.e., the issuer name and serial number identify a unique 
> certificate). 
>
>
> Yet there is a different inconsistency in what you write:
>
>> The thing that confuses me is that CURL (compiled with gnutls)
>> and Golang works.
>> below is my ca.crt file, I am not sure where it went wrong, maybe
>> just my wrong behavior?
>> You refer to them as CA certs, but they are not: they do not have a
> basicConstraints field with the cA bit set.
> And as far as I understand your scenario, they are not used to
> issue other certs but by some (TLS) server,
> so they really are end-entity (EE) certs, not CA certs, and it
> looks like this is correct in your application scenario.
>
> Directly trusted self-issued EE certs (which may be self-signed or
> not) are a special situation.
> This has been clarified in RFC 6818 (which updates RFC 5280)
> https://tools.ietf.org/html/rfc6818#section-2:
>
> | Consistent with Section 3.4.61 
> <https://tools.ietf.org/html/rfc6818#section-3.4.61> of X.509 (11/2008) 
> [X.509 <https://tools.ietf.org/html/rfc6818#ref-X.509>], we note
> | that use of self-issued certificates and self-signed certificates
> | issued by entities other than CAs are outs

Directly trusted self-issued end-entity certs - Re: How to rotate cert when only first matching cert been verified

2020-12-26 Thread David von Oheimb
On 25.12.20 00:35, 定平袁 wrote:
> @David von Oheimb <mailto:d...@ddvo.net> I will update to a new version
> and try again.

Good. Ideally try also a current 3.0.0 alpha release because there have
been some changes to cert chain building and verification recently.

> To append cert is to make sure new cert and old cert both exist in
> trust store, thus when server switches cert, it can be trusted by client.
Understood, but my point was on a different aspect:
The chain building will take the first matching cert, so if you want to
prefer the new cert, it must be in the list *before* the old one -
in other words, prepend the new cert to the list rather than appending
to it.

> @Jochen actually, the certs have different SN, which indeed is not
> consistent with the man doc

Different certs with the same issuer indeed *must* have different SNs
(except in the special case I mention below).
See also RFC 5280 section 4.1.2.2
https://tools.ietf.org/html/rfc5280#section-4.1.2.2:

  It MUST be unique for each certificate issued by a given CA
 (i.e., the issuer name and serial number identify a unique certificate). 


Yet there is a different inconsistency in what you write:

> The thing that confuses me is that CURL (compiled with gnutls) and
> Golang works.
> below is my ca.crt file, I am not sure where it went wrong, maybe just
> my wrong behavior?
You refer to them as CA certs, but they are not: they do not have a
basicConstraints field with the cA bit set.
And as far as I understand your scenario, they are not used to issue
other certs but by some (TLS) server,
so they really are end-entity (EE) certs, not CA certs, and it looks
like this is correct in your application scenario.

Directly trusted self-issued EE certs (which may be self-signed or not)
are a special situation.
This has been clarified in RFC 6818 (which updates RFC 5280)
https://tools.ietf.org/html/rfc6818#section-2:

| Consistent with Section 3.4.61 
<https://tools.ietf.org/html/rfc6818#section-3.4.61> of X.509 (11/2008) [X.509 
<https://tools.ietf.org/html/rfc6818#ref-X.509>], we note
| that use of self-issued certificates and self-signed certificates
| issued by entities other than CAs are outside the scope of this
| specification.  Thus, for example, a web server or client might
| generate a self-signed certificate to identify itself.  These
| certificates and how a relying party uses them to authenticate
| asserted identities are both outside the scope of RFC 5280 
<https://tools.ietf.org/html/rfc5280>.

So the path building and verification, as well as other checks defined in
RFC 5280, do not apply to them at all!
They are essentially just a convenient container for a public key, where
it is optional to check expiration etc.


Unfortunately, when using such certs for TLS connections etc., verification
is still done on them, which may fail.
After renaming your ca.crt file to ee.crt for clarity and extracting the
first cert in ee1.crt and the second one in ee2.crt,
when verifying these directly trusted certs one gets the problem you
reported:

openssl verify -x509_strict -trusted ee.crt ee1.crt
ee1.crt: OK

openssl verify -x509_strict -trusted ee.crt ee2.crt
C = US, ST = CA, L = Palo Alto, O = VMware, CN = nsxmanager.pks.vmware.local
error 18 at 0 depth lookup: self signed certificate
error ee2.crt: verification failed

So as I wrote before, unfortunately the path building picks up the first
matching cert from ee.crt,
which is the one in ee1.crt (i.e., your old one), and does not try the
second one (i.e., your new one).
This happens also with the latest OpenSSL pre-3.0.0 master.


A solution is to add both the subjectKeyIdentifier and
authorityKeyIdentifier extensions to your certs,
for instance like this:

echo >ee.cnf "
prompt = no
distinguished_name = my_server
x509_extensions = my_exts
[my_server]
commonName = test
[my_exts]
basicConstraints = CA:false
subjectKeyIdentifier=hash
authorityKeyIdentifier = keyid"

openssl req -config ee.cnf -new -x509 -out ee1.crt -nodes -keyout ee1.pem
openssl req -config ee.cnf -new -x509 -out ee2.crt -nodes -keyout ee2.pem
cat ee1.crt ee2.crt >ee.crt

The subjectKeyIdentifier and authorityKeyIdentifier extensions are
generally recommended
(and actually required for certs that are RFC 5280 compliant)
because they help with correct chain building, and indeed also in this
case they do:

openssl verify -x509_strict -trusted ee.crt ee1.crt
ee1.crt: OK
openssl verify -x509_strict -trusted ee.crt ee2.crt
ee2.crt: OK

Regards,

    David
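
For completeness, the same kind of check can be done programmatically; below is 
a minimal sketch (not from the thread) that loads a file of directly trusted 
certs into an X509_STORE and verifies a candidate cert against it, using the 
file names from the example above:

    #include <stdio.h>
    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>

    /* Returns 1 if 'cert' verifies against the certs in 'trusted_file'
     * (e.g. "ee.crt"), 0 on verification failure, -1 on setup error. */
    static int verify_against_trusted(X509 *cert, const char *trusted_file)
    {
        int ok = -1;
        X509_STORE *store = X509_STORE_new();
        X509_STORE_CTX *ctx = X509_STORE_CTX_new();

        if (store == NULL || ctx == NULL)
            goto end;
        if (!X509_STORE_load_locations(store, trusted_file, NULL))
            goto end;
        if (!X509_STORE_CTX_init(ctx, store, cert, NULL))
            goto end;
        ok = X509_verify_cert(ctx) == 1;
        if (!ok)
            fprintf(stderr, "verify error: %s\n",
                    X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));
    end:
        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);
        return ok;
    }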




Re: How to rotate cert when only first matching cert been verified

2020-12-23 Thread David von Oheimb
定平袁 you are welcome.

The OpenSSL version you are using is way too old!
Do not use version 1.1.0, 1.0.x, and anything older - those versions are
unsupported and must be considered insecure.

Yet since both your old and new server cert are not expired and have the
same subject, keyIdentifier, and serial number,
and you appended the new server cert to your list, it is no surprise
that the certificate chain building algorithm will pick up the old one.
For efficiency reasons, no other (equally applicable) certificates will
be tried.
I've just clarified this and some further details in
https://github.com/openssl/openssl/pull/13735.

I think Michael Wojcik already gave the right hint to solve your problem
two days before:

> Why are you appending it to the file containing the existing certificate?

So I suggest you better prepend the new certificate to that file rather
than appending it,
or even better, remove the old (non-matching) certificate from that file.

Hope this helps,

    David


P.S.: I will be unavailable for several days, too.

On 23.12.20 04:15, 定平袁 wrote:
> @David Thanks for you help!
> This is my openssl version, and the self compiled curl backend
> ```
> $ openssl version
> OpenSSL 1.0.2g  1 Mar 2016
>
> $ ldd /usr/bin/openssl  |grep ssl
> libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0
> (0x7f3099799000)
>
> $ ldd ./lib/.libs/libcurl.so |grep ssl
> libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0
> (0x7f8720fd4000)
> ```
> the system built-in curl binary:
> ```
> $ ldd /usr/bin/curl  |grep tls
> libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
> (0x7f4b7fa07000)
> libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30
> (0x7f4b7e851000)
> ```
> Actually, the old cert and new cert both are not expired yet, just the
> old cert is not consistent with server side. The new cert has the same
> content as the cert imported on the server side (after the replacement).
>
> David von Oheimb <d...@ddvo.net> wrote on Tuesday, 22 December 2020
> at 10:27 PM:
>
> @定平袁, which version of OpenSSL are you using?
>
> I've just checked: since OpenSSL 1.1.0, expired certificates are
> effectively not used for chain building.
>
>     David
>
> On 20.12.20 02:02, 定平袁 wrote:
>> the exact behavior:
>>
>> When looking up CA certificates, the OpenSSL library will first
>> search the certificates in *CAfile*, then those in *CApath*.
>> Certificate matching is done based on the subject name, the key
>> identifier (if present), and the serial number as taken from the
>> certificate to be verified. If these data do not match, the next
>> certificate will be tried. If a first certificate matching the
>> parameters is found, the verification process will be performed;
>> no other certificates for the same parameters will be searched in
>> case of failure.
>>
>> why no other certificates for the same parameters will be searched?
>>
>> 定平袁 <pkudingp...@gmail.com>
>> wrote on Sunday, 20 December 2020 at 8:59 AM:
>>
>> Hello everyone,
>>
>> Recently I am trying to rotate a cert, and the client uses
>> python requests lib, which leverages openssl. Here are my steps:
>>
>> 1. Generate a new cert, and append it to the cert file(at
>> this point, there are 2 certs in the file, first is old cert,
>> second is new, they have the same Subject), restart client
>> side process, (no problem here, because first cert matching
>> server side cert, and it verifies successfully)
>> 2. Replace server side with new cert.
>>
>> As soon as I issue step #2, the client side process starts to
>> show error “certificate verify failed”. This would cause
>> downtime to my apps. I am new to this, not sure if there is
>> anything wrong regarding my usage or understanding. But I
>> found this page
>> 
>> https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_load_verify_locations.html,
>> it says the exact behavior like my test:
>>
>> If several CA certificates matching the name, key identifier,
>> and serial number condition are available, only the first one
>> will be examined. This may lead to unexpected results if the
>> same CA certificate is available with different expiration
>> dates. If a "certificate expired" verification error occurs,
>> no other certificate will be searched. Make sure to not have
>> expired certificates mixed with valid ones.
>>
>> So I am wondering how to rotate cert in such a case? It would
>> be very helpful if anyone could help on this. Thanks.
>>
>> BTW, I tested the same cert file with CURL (compiled with
>> gnutls), it works fine.
>>
>> Regards
>> Dingping
>>


Re: Cert hot-reloading

2020-08-31 Thread David Arnold
An SSL_CTX API seems like a good idea to provide additional guarantees to
applications.

Maybe Openssl - used as a library - can return to the other legacy
applications that the certificate is "deemed not valid any more" whenever
they try to use an outdated pointer?

This ought to be a transparent scenario for a legacy application which *at
the same time* also does frequent cert rolling.

Would it be appropriate to record some excerpts of this discussion in
a GitHub gist? I can be the secretary, if that would be uncontroversial.

On Monday, 31 August 2020, Viktor Dukhovni
wrote:

> On Mon, Aug 31, 2020 at 11:00:31PM -0500, David Arnold wrote:
>
> > 1. Construct symlinks to current certs in a folder (old or new / file by
> file)
> > 2. Symlink that folder
> > 3. Rename the current symlink to that new symlink atomically.
>
> This is fine, but does not provide atomicity of access across files in
> that directory.  It just lets you prepare the new directory with
> non-atomic operations on the list of published files or file content.
>
> But if clients need to see consistent content across files, this does
> not solve the problem, a client might read one file before the symlink
> is updated and another file after.  To get actual atomicity, the client
> would need to be sure to open a directory file descriptor, and then
> openat(2) to read each file relative to the directory in question.
>
> Most application code is not written that way, but conceivably OpenSSL
> could have an interface for loading a key and certchain from two (or
> perhaps even more for the cert chain) files relative to a given
> directory.  I know how to do this on modern Unix systems, no idea
> whether something similar is possible on Windows.
>
> The above is *complicated*.  Requiring a single file for both key and
> cert is far simpler.  Either PEM with key + cert or perhaps (under
> duress) even PKCS#12.
>
>
> > Does it look like we are actually getting somewhere here?
>
> So far, not much, just some rough notes on the obvious obstacles.
> There's a lot more to do to design a usable framework for always fresh
> keys.  Keeping it portable between Windows and Unix (assuming MacOS will
> be sufficiently Unix-like) and gracefully handling processes that drop
> privs will be challenging.
>
> Not all applications will want the same approach, so there'd need to be
> various knobs to set to choose one of the supported modes.  Perhaps
> the sanest approach (but one that does nothing for legacy applications)
> is to provide an API that returns the *latest* SSL_CTX via some new
> handle that under the covers constructs a new SSL_CTX as needed.
>
> SSL_CTX *SSL_Factory_get1_CTX(SSL_CTX_FACTORY *);
>
> This would yield a reference-counted SSL_CTX that each caller must
> ultimately release via SSL_CTX_free() to avoid a leak.
>
> ... factory construction API calls ...
> ctx = SSL_Factory_get1_CTX(factory);-- ctx ref count >= 1
> SSL *ssl = SSL_CTX_new(ctx);-- ctx ref count >= 2
> ...
> SSL_free(ssl);  -- ctx ref count >= 1
> SSL_CTX_free(ctx);  -- ctx may be freed here
>
> To address the needs of legacy clients is harder, because they
> expect an SSL_CTX "in hand" to be valid indefinitely, but now
> we want to be able age out and free old contexts, so we want
> some mechanism by which it becomes safe to free old contexts
> that we're sure no thread is still using.  This is difficult
> to do right, because some thread may be blocked for a long
> time, before becoming active again and using an already known
> SSL_CTX pointer.
>
> It is not exactly clear how multi-threaded unmodified legacy software
> can be ensured crash free without memory leaks while behind the scenes
> we're constantly mutating the SSL_CTX.  Once a pointer to an SSL_CTX
> has been read, it might be squirreled away in all kinds of places, and
> here's just no way to know that it won't be used indefinitely.
>
> --
> Viktor.
>
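
A rough sketch of the reference-counting idea for applications that can be 
modified (an illustration only, not the proposed factory API; the function 
names and the plain pthread lock are assumptions): keep one published SSL_CTX 
and let each user take its own reference with SSL_CTX_up_ref() before use.

    #include <pthread.h>
    #include <openssl/ssl.h>

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
    static SSL_CTX *current_ctx;        /* holds one reference */

    /* Called by the reload logic after building a fresh SSL_CTX;
     * takes ownership of 'fresh'. */
    void publish_ctx(SSL_CTX *fresh)
    {
        SSL_CTX *old;

        pthread_mutex_lock(&ctx_lock);
        old = current_ctx;
        current_ctx = fresh;
        pthread_mutex_unlock(&ctx_lock);
        SSL_CTX_free(old);              /* drops our reference; actually freed
                                           only once no SSL object uses it */
    }

    /* Called per connection; the caller must SSL_CTX_free() the result. */
    SSL_CTX *get1_current_ctx(void)
    {
        SSL_CTX *ctx;

        pthread_mutex_lock(&ctx_lock);
        ctx = current_ctx;
        if (ctx != NULL)
            SSL_CTX_up_ref(ctx);
        pthread_mutex_unlock(&ctx_lock);
        return ctx;
    }

A new connection would then do ctx = get1_current_ctx(); ssl = SSL_new(ctx); 
SSL_CTX_free(ctx); since SSL_new() keeps its own reference, existing SSL 
objects keep the old context alive until they are freed.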


Re: Cert hot-reloading

2020-08-31 Thread David Arnold
1. Construct symlinks to current certs in a folder (old or new / file by
file)
2. Symlink that folder
3. Rename the current symlink to that new symlink atomically.

On the OpenSSL side, stat() would have to follow the symlinks - if it
doesn't already do so.

This is more or less how Kubernetes atomically provisions config maps and secrets to
pods.

So there is a precedence for applications to follow this pattern.

I totally agree that those constraints shall be put on applications in
order to have the freedom to focus on a sound design.

If openssl really wanted to make it easy it would provide an independent
helper that would do exactly this operation on behalf of non-complying
applications.

Does it look like we are actually getting somewhere here?

I'd still like to better understand why atomic pointer swaps can be difficult and
how this can be mitigated. I'm sensing a bold move towards sounder
certificate consumption is possible there too (with potential upsides
further down). Do I sense right?


On Monday, 31 August 2020, Viktor Dukhovni
wrote:

> > On Aug 31, 2020, at 10:57 PM, Jakob Bohm via openssl-users <
> openssl-users@openssl.org> wrote:
> >
> > Given the practical impossibility of managing atomic changes to a single
> > POSIX file of variable-length data, it will often be more practical to
> > create a complete replacement file, then replace the filename with the
> > "mv -f" command or rename(3) function.  This would obviously only work
> > if the directory remains accessible to the application, after it drops
> > privileges and/or enters a chroot jail, as will already be the case
> > for hashed certificate/crl directories.
>
> There is no such "impossibility", indeed that's what the rename(2) system
> call is for.  It atomically replaces files.  Note that mv(1) can hide
> non-atomic copies across file-system boundaries and should be used with
> care.
>
> And this is why I mentioned retaining an open directory handle, openat(2),
> ...
>
> There's room here to design a robust process, if one is willing to impose
> reasonable constraints on the external agents that orchestrate new cert
> chains.
>
> As for updating two files in a particular order, and reacting only to
> changes in the one that's updated second, this behaves poorly when
> updates are racing an application cold start.  The single file approach,
> by being more restrictive, is in fact more robust in ways that are not
> easy to emulate with multiple files.
>
> If someone implements a robust design with multiple files, great.  I for
> one don't know of an in principle decent way to do that without various
> races, other than somewhat kludgey retry loops in the application (or
> library) when it finds a mismatch between the cert and the key.
>
> --
> Viktor.
>
>
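
As a concrete illustration of the openat(2)/directory-descriptor idea under 
POSIX assumptions (a sketch only, not an agreed design): open the certificate 
directory once while still privileged, then poll a file's modification time 
through that descriptor and reload when it changes.

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <time.h>

    /* Open the cert directory once, while still privileged. */
    int open_cert_dir(const char *path)
    {
        return open(path, O_RDONLY | O_DIRECTORY);
    }

    /* Return 1 if the file's mtime changed since *last (and update *last),
     * 0 if unchanged, -1 on error (e.g. file temporarily missing). */
    int cert_file_changed(int dirfd, const char *name, struct timespec *last)
    {
        struct stat st;

        if (fstatat(dirfd, name, &st, 0) != 0)
            return -1;
        if (st.st_mtim.tv_sec != last->tv_sec ||
            st.st_mtim.tv_nsec != last->tv_nsec) {
            *last = st.st_mtim;
            return 1;   /* reload via openat(dirfd, name, O_RDONLY) */
        }
        return 0;
    }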


Re: Cert hot-reloading

2020-08-30 Thread David Arnold
Should aspects of an implementation be configurable behavior with a 
sane default? I'd guess so...


Hot-plugging the pointer seems to force atomicity considerations 
down-stream, which might be
educationally a good thing for openssl to press for. It also addresses 
Jordan's use case, however
application-specific it might be. For compat reasons, a "legacy" mode 
which creates a new context
for *new* connections might be the necessary "bridge" into that 
transformation.


For change detection: I think "on next authentication" has enough (or 
even better) guarantees over a periodic loop.


For file read atomicity: What are the options to keep letsencrypt & co 
at comfort? Although the hereditary
"right (expectation) for comfort" is somewhat offset by a huge gain in 
functionality. It still feels like a convincing deal.


- add a staleness check on every change detection? (maybe costly?)
- consume a tar if clients want those guarantees? (opt-in or opt-out?)




On Sun, Aug 30, 2020 at 19:54, Kyle Hamilton  wrote:
I'm not sure I can follow the "in all cases it's important to keep 
the key and cert in the same file" argument, particularly in line 
with openat() usage on the cert file after privilege to open the key 
file has been dropped.  I agree that key/cert staleness is important 
to address in some manner, but I don't think it's necessarily 
appropriate here.


I also don't think it's necessarily okay to add a new requirement 
that e.g. letsencrypt clients reconcatenate their keys and certs, 
and that all of the Apache-style configuration guides be rewritten to 
consolidate the key and cert files. On a simple certificate renewal 
without a rekey, the best current practice is sufficient.  (As well, 
a letsencrypt client would possibly need to run privileged in that 
scenario to reread the private key file in order to reconcatenate it, 
which is not currently actually necessary.  Increasing the privileges 
required for any non-OS service for any purpose that isn't related to 
OS kernel privilege requirements feels a bit disingenuous.)


Of course, if you want to alter the conditions which led to the best 
current practice (and impose retraining on everyone), that's a 
different matter.  But I still think increasing privilege 
requirements would be a bad thing, under the least-privilege 
principle.


-Kyle H

On Sun, Aug 30, 2020, 18:36 Viktor Dukhovni
<openssl-us...@dukhovni.org>
wrote:

On Sun, Aug 30, 2020 at 05:45:41PM -0500, David Arnold wrote:

  > If you prefer this mailing list over github issues, I still want to ask
  > for comments on:
  >
  > Certificate hot-reloading #12753
  > <https://github.com/openssl/openssl/issues/12753>
  >
  > Specifically, my impression is that this topic has died down a bit and
  > from the linked mailing list threads, in my eye, no concrete conclusion
  > was drawn.
  >
  > I'm not sure how to rank this motion in the context of OpenSSL
  > development, but I guess OpenSSL is used to producing ripple effects,
  > so the man-hour argument might be a genuinely valid one.
  >
  > Please inform my research about this issue with your comments!

 This is a worthwhile topic.  It has a few interesting aspects:

 1.  Automatic key+cert reloads upon updates of key+cert chain PEM
 files.  This can be tricky when processes start privileged,
 load the certs and then drop privs, and are no longer able
 to reopen the key + cert chain file.

 - Here, for POSIX systems I'd go with an approach where
   it is the containing directory that is restricted to
   root or similar, and the actual cert files are group
   and or world readable.  The process can then keep
   the directory file descriptor open, and then openat(2)
   to periodically check the cert file, reloading when
   the metadata changes.

 - With non-POSIX systems, or applications that don't
   drop privs, the openat(2) is not needed, and one
   just checks the cert chain periodically.

 - Another option is to use passphrase-protected keys,
   and load the secret passphrase at process start from
   a separate read-protected file, while the actual
   private key + cert chain file is world readable,
   with the access control via protecting the passphrase
   file.

 - In all cases, it is important to keep both the private
   key and the cert in the same file, and open it just
   once to read both, avoiding races in which the key
   and cert are read in a way that results in one or
   the other being stale.

 2.  Having somehow obtained a new key + cert chain, one
 now wants to non-disruptively apply them to running
 servers.  Here there are two potential approaches:

 - Hot plug a

Cert hot-reloading

2020-08-30 Thread David Arnold

Hi,

If you prefer this mailing list over github issues, I still want to ask 
for comments on:


Certificate hot-reloading #12753
<https://github.com/openssl/openssl/issues/12753>

Specifically, my impression is that this topic has died down a bit and 
from the linked mailing list threads, in my eye, no concrete conclusion 
was drawn.


I'm not sure how to rank this motion in the context of OpenSSL 
development, but I guess OpenSSL is used to producing ripple effects, 
so the man-hour

argument might be a genuinely valid one.

Please inform my research about this issue with your comments!

BR, David A



NASM virus issues.

2020-06-27 Thread David Harris
I normally compile OpenSSL with "no-asm", but this time I thought I'd try 
installing NASM and seeing what difference, if any, it actually made.

I downloaded NASM from the official site (which I believe to be 
http://www.nasm.us) and, as I always do with anything I source from outside my 
firewall, ran it through virustotal 
(https://www.virustotal.com/gui/home/upload).

It reports 11 different scanners out of 72 finding malware in the file 
(nasm-2.15.01-installer-x86.exe). Now, one or two reports from Virustotal is 
normal - there are so many scanners out there that there are bound to be 
occasional false-positives... But 11 is more than I have ever seen on something 
that supposedly wasn't infected. Interestingly, VirusTotal did not have cached 
results for this file, meaning that nobody else has tested it in the last month 
or 
so.

Google didn't reveal any insight, and the NASM project doesn't have any contact 
options that don't involve registration or mailing lists or I'd report this to 
them. 
There is no mention of anything like this in their forum.

11 reports is too many for me to feel safe using this product, so for now I'll 
keep 
using no-asm, and hope that it's not going to get more deprecated than it 
apparently is at present (based on the comments in INSTALL).

If anyone on the list has a NASM account or knows any of the maintainers, 
could they pass this on? They really should be aware of it.

Cheers!

-- David --



Re: OpenSSL 1.1.1g test failures

2020-06-26 Thread David Harris
On 26 Jun 2020 at 11:55, Matt Caswell wrote:

> No - this is not normal output. We would expect the self tests to pass
> on Windows

> >  The ONLY
> > non-standard thing I do is change the /MD switch (link to the DLL
> > versions of the runtime libraries) to /MT (static link the runtimes)
> > because I don't want to have external dependencies in my production
> > environments
> 
> How exactly do you make this change? By editing the Makefile? Have you
> tried it without doing this? My guess is that this is exactly the
> cause of the problem. AppLink is all about dealing with differences in
> MS runtimes.

Assumption, as they say, is the mother of all fu??ups...

In this case, the failed assumption was that a non-standard modification I had 
been making for many years would continue to work simply because it had in 
the past.

Matt is, of course, quite right. When I changed the "/MT" back to "/MD" in 
CNF_CFLAGS and rebuilt everything, it all worked like clockwork.

My thanks to you Matt - you've solved my problem.

Is there a standard (i.e., approved) way of using the static RTLs instead of the 
DLL ones? Or is my only option to modify the applink code so that it checks its 
environment in a different way? The problem with the dynamic RTLs is that my 
application is often used in environments where the user may not have 
sufficient 
rights to install the redistributables - whereas, if I use the static versions, 
the 
code is a little bigger, but there's no redistributable installation required 
and I 
never run into rights issues.

Again, thank you for the assistance, Matt - I appreciate it.

Cheers!

-- David --
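
As a side note on the OPENSSL_Applink failures shown above: the documented way 
for a Windows application to provide the Applink shim is to compile applink.c 
into exactly one of its own source files. A minimal sketch (illustrative only; 
depending on the installation, applink.c may instead have to be taken from the 
ms/ subdirectory of the source tree):

    /* One (and only one) source file of the application includes the Applink
     * shim shipped with OpenSSL, so that OPENSSL_Uplink can resolve the
     * application's C runtime I/O functions at run time. */
    #include <openssl/applink.c>

    #include <openssl/pem.h>

    int main(void)
    {
        /* ... normal OpenSSL usage; stdio-based calls such as PEM_read_*()
         * on FILE pointers now work across runtime boundaries ... */
        return 0;
    }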



OpenSSL 1.1.1g test failures

2020-06-26 Thread David Harris
Environment: Windows 7 (I know, I know - I just hate Windows 10).
Compiler: Visual Studio, have tried both VS2008 Pro and VS2019 Pro
OpenSSL Build: 1.1.1g, retrieved from OpenSSL.org last night

I've been attempting to build OpenSSL 1.1.x since it came out, but each time I 
do so, 
I find that, while it compiles and links cleanly, it fails about 50% of its 
self tests when 
I perform "nmake test". It has been this way for several releases. By "fail" I 
mean 
that there's a stream of "dubious..." outputs that look like this excerpt:

-- Cut here 
...
test\recipes\03-test_internal_siphash.t . ok
test\recipes\03-test_internal_sm2.t . ok
test\recipes\03-test_internal_sm4.t . ok
test\recipes\03-test_internal_ssl_cert_table.t .. 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests 
test\recipes\03-test_internal_x509.t  ok
test\recipes\03-test_ui.t ... ok
test\recipes\04-test_asn1_decode.t .. 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests 
test\recipes\04-test_asn1_encode.t .. 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests 
...
-- Cut here 

Each time I went through the process, I saw the long string of self-test 
failures and 
decided I'd put off migrating to 1.1.1 until it was sorted out, but it's been 
the same for 
at least four releases now. I finally decided I needed to track down what was 
going 
on, so I extrapolated how to run the failing tests manually with more verbose 
output 
from the OpenSSL wiki pages (which are just a little out of date).

It appears that for at least the first twenty or thirty of these failures, the 
reason is 
because the test application has been compiled without including the required 
Applink code - a verbose output typically looks like this:

-- Cut here 
O:\ >perl test\recipes\05-test_idea.t
1..1
OPENSSL_Uplink(5C790330,08): no OPENSSL_Applink
..\ideatest.exe => 1
not ok 1 - running ideatest
#   Failed test 'running ideatest'
#   at util/perl/OpenSSL/Test/Simple.pm line 77.
# Looks like you failed 1 test of 1.
-- Cut here 

Is this just the way it is? I would have thought that 50% self-test failure would
be ringing alarm bells everywhere if it were common, so I can only conclude that
there's something odd about my environment, or that I'm doing something wrong,
but this is about as vanilla a build process as I can possibly make it. I follow
the steps for Win32 in INSTALL, and as I said at the start of this message, the
nmake process goes cleanly, not a single warning or error. The ONLY non-standard
thing I do is change the /MD switch (link to the DLL versions of the runtime
libraries) to /MT (static link the runtimes) because I don't want to have
external dependencies in my production environments (I lived in "DLL Hell" for
so long that I'm now quite paranoid about that). This change has never caused
problems in the past, and doesn't seem to be relevant to the problems I'm seeing.

I've been building OpenSSL myself for a number of years, most recently with the
end-of-life v1.0.2 builds, which always go without a hitch. As I remarked, I've
been putting off moving to v1.1.1 because I'm so uneasy about these self-test
failures, but I can't continue doing that any longer as TLS 1.3 comes on stream.

Anyone have any insights into what I'm doing wrong, or what I can do about this?
I'm very reluctant to use the software in production if it can't pass its own
self-test regime, even if it appears to work normally otherwise.

Comments most welcome.

Cheers!

-- David --



[openssl-users] Passing custom CFLAGS,LDFLAGS to configure ?

2017-10-27 Thread David Barishev
Hello,
I am building a custom script for building OpenSSL for Android, and I want
to use the unified headers, which are enabled by default with NDK r15+.
For this I need to pass custom CFLAGS and LDFLAGS; I was able to compile
OpenSSL successfully by patching the Makefile myself.
How can I do this directly from Configure?
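
For reference, a sketch of one approach (the target, API level and paths below
are illustrative placeholders, not taken from any particular setup): both the
1.0.2 and 1.1.x Configure scripts append options they do not recognise as their
own (-D..., -I..., -L..., -l..., -Wl,... and so on) to the compiler and linker
flags, so extra CFLAGS/LDFLAGS can be given straight on the Configure command
line:

export CROSS_COMPILE=arm-linux-androideabi-
./Configure android-armv7 -D__ANDROID_API__=21 \
    -I"$ANDROID_NDK/sysroot/usr/include" \
    -I"$ANDROID_NDK/sysroot/usr/include/arm-linux-androideabi" \
    --prefix=/opt/openssl-android

Whether this covers everything the unified headers need depends on the NDK
version, so treat it as a starting point rather than a recipe.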

Thanks all !
-- 
*Have a nice day   David Barishev.*
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Self signed cert issue

2017-09-15 Thread David H. Madden
On 15-Sep-2017 06:24, Richard Olsen wrote:
> When i click on advanced i see
> 
> "host.local.com uses and invalid security certificate. The certificate is
> not trusted because the issuer certificate is unknown. The server might not
> be sending the appropriate intermediate certficates. An addistional root
> certificate may need to be imported.

This is what you should expect to see.  Your browser is telling you that
your self-signed server certificate isn't part of a chain, where the top
of the chain is some CA that the browser trusts (because the top-level
CA is in a configuration file somewhere).

You may be able to import the self-signed server certificate into the
browser as a trusted root, but the slightly-better option is to set up
your own top-level CA (whose certificate you import into the browser),
and then use that CA to create your server and client certificates.

It's a bit more work, but also more useful if you ever want to issue
certificates for a different server, different client, or issue a new
certificate after one expires (and not have to update all the
self-signed stuff.)
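
If it helps, a minimal sketch of that setup with the openssl command line
(names, subjects and lifetimes are placeholders to adapt):

# 1. create the private CA: key plus self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
    -keyout ca.key -out ca.crt -subj "/CN=My Private CA"

# 2. create a key and a certificate signing request for the server
openssl req -newkey rsa:2048 -nodes -keyout server.key \
    -out server.csr -subj "/CN=host.local.com"

# 3. sign the request with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 825

Then import ca.crt (only) into the browser as a trusted authority; the server
gets server.key and server.crt.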

Regards,
-- 
Mersenne Law ·  www.mersenne.com  · +1-503-679-1671
Small Business, Startup & Intellectual Property Law
9600 S.W. Oak Street Suite 500 Tigard, Oregon 97223



-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Introduce a TLS application library - a proposal on the overall OpenSSL code structure

2017-09-05 Thread David von Oheimb
Back on 13 May 2016 I had proposed by email to a couple of people
including Rich Salz
a third library level (on top of crypto and ssl) with more high-level,
application-oriented code.
His response was:
> That is a really interesting idea.  Please bring this up on openssl-dev 
> mailing list.

Then I posted that by mistake unfortunately not in the right forum but at:
https://groups.google.com/forum/#!topic/mailing.openssl.dev/FOL2afc3cb8

I quote my post here for convenience:
> So far, the OpenSSL code has essentially a three-level structure with
> a hierarchy of two libraries and a command-line application at the top:
>
> apps/openssl
> libssl
> libcrypto
>
> In the apps/ directory there is various generally useful code like
> handling crypto-related files and messages, general TLS client/server
> and CA functionality, implementing parts of protocols like S/MIME,
> CRL, and OCSP, and certainly more to come.
>
> While this code serves as a model for using the libraries and it can
> be used in a limited way by invoking the openssl application binary,
> it cannot be re-used directly. Other applications that need similar
> functionality need to copy/re-implement and then maintain portions of
> that code.
>
> On the other hand, the libraries contain some code that is actually
> too high-level for them, for instance the minimal HTTP client as part
> of the crypto library (crypto/ocsp/ocsp_ht.c).
>
> It would be very helpful to introduce a further level in the hierarchy
> consisting of a more application-oriented library:
>
> apps/openssl
> libtlsapps <-- new (with tentative name here)
> libssl
> libcrypto
>
> Then all more high-level and application support functionality will go
> there. This would make much of the generally useful code that so far
> resides in the apps/ folder directly accessible to other
>  applications at the programming level, i.e., in the form of a
> library/API, with all the re-usability advantages that this brings. It
> would also relieve libcrypto from more application-/high-level topics
> like HTTP.
>
> This library would also form an ideal condensation point for further
> high-level uses of TLS that may in the future get integrated with
> OpenSSL, like CMP and EST implementations. 

I recently learned that LibreSSL 
already/meanwhile has something in this direction:

  * libtls : a new TLS library,
designed to make it easier to write foolproof applications

I believe this would be of great benefit also for OpenSSL itself.

-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] [openssl-dev] How to use BIO_do_connect(), blocking and non-blocking with timeout, coping with errors

2017-09-05 Thread David von Oheimb
/[ Further below I quote my first two messages including my original questions
and tentative code, since Cc'ing to openssl-users did not work when I tried
first. In this way I hope to get further, more detailed responses by people
with specific experience on the issues I mentioned, possibly even concrete
feedback how to enhance my code or where to find a better solution. ]/


On 09/01/2017 06:32 PM, Salz, Rich via openssl-users wrote:
>
> FWIW, there’s a ‘libtls’ library from the libre folks that might be
> worth looking at.
>
This looks very nice. Yet is this of any practical benefit when using
OpenSSL?

> If you come up with useful snippets we can start by posting them to
> the wiki, for example
>
Which wiki do you mean? I could not find anything related on
https://wiki.openssl.org/

Anyway, most people (including me) would not search through wikis for finding
useful code, and it would be much more useful if code like the bio_connect()
function I mentioned below was readily available within the OpenSSL libraries,
in an official high-level API.
/[ Since this is worth a topic of its own, I'll write more on this in my
next email. ]/

More low-level code that is already used by the crypto lib itself (e.g., using
select() in rand_unix.c) would be better packaged into abstractions within that
library, for instance the socket_wait() function for waiting on a socket with a
timeout that I proposed below. I'd contribute pull requests for those I'm aware
of.


On 29.08.2017 16:15, Salz, Rich via openssl-dev wrote:
>> Getting the client connect right appears surprisingly messy when one
>> needs to cope with all kinds of network error situations including
>> domain name resolution issues and temporarily unreachable servers.
>> Both indefinitely blocking and non-blocking behavior (i.e., connection
>> attempts with and without a timeout) should be supported.
> It is a complicated issue and hard to get right for all definitions of right 
> for all applications ☺
Hmm - on the one hand, good to get confirmation that I did not just miss
a simple way out of the maze, ...
> A set of API’s that set up all the TLS “metadata”, and took a connected 
> socket might be a way through the maze.  For example:
> SSL *SSL_connection(int socket, const char *servername, …whatever…)
... on the other hand, it's a pity that such a high-level API does not
(yet) exist, at least not in OpenSSL.

How about adding at least some clearly useful abstractions like the
socket_wait() function below, which would reduce code duplication in the
OpenSSL crypto lib and apps and help application developers?

Maybe other OpenSSL users have specific experience on error and timeout
handling for BIO_do_connect() etc. and can comment in more detail on the
(approximate) solution, bio_connect(), that I gave below?

On 28.08.2017 13:46, David von Oheimb wrote:
> Hi all,
>
> I'm currently enhancing HTTP(S) clients based on OpenSSL in several
> flavors, in particular a CMP client, which in turn uses simple HTTP
> clients for contacting CRL distribution points or OCSP responders.
>
> Getting the client connect right appears surprisingly messy when one
> needs to cope with all kinds of network error situations including
> domain name resolution issues and temporarily unreachable servers.
> Both indefinitely blocking and non-blocking behavior (i.e., connection
> attempts with and without a timeout) should be supported.
>
> Since these are pretty general problems I wonder why there there is
> rather limited support via generic higher-level OpenSSL or C library
> functions, or at least I was unable to find it. Instead, the OpenSSL
> apps contain code that calls BIO_do_connect directly (or the equivalent
> BIO_do_handshake), in particular query_responder() in apps/ocsp.c.
> (The situation is similar for the subsequent exchange of data via the
> BIO, optionally with a timeout).
>
> So I constructed my own abstraction, called bio_connect, which took
> quite some effort testing network error situations. Please see below its
> code including comments on some strange behavior I experienced and my
> workarounds for that. Does this code make sense, or do I miss anything?
>
> How about adding such a function for instance to crypto/bio/bio_lib.c?
>
> BTW, my code uses a handy generic helper function, socket_wait, for
> waiting for read/write form/to a socket, with a given timeout. Since
> several instances of that pretty common code pattern using select() are
> spread over the OpenSSL apps (and crypto lib), I suggest adding this
> function to the library. Where would be a good place to put it?
>
> Thanks,
>   David
>> /* returns -1 on error, 0 on timeout, 1 on success */
>> int bio_connect(BIO *bio, int timeout) {
>> int blocking;
>>   

[openssl-users] Compiling OpenSSL 1.1.0e with AF_ALG engine

2017-02-22 Thread David Oberhollenzer
Hi,

I'm trying to compile OpenSSL 1.1.0e with the afalg engine on a
recent CentOS 7. I removed the kernel version check for the
afalg engine from the Configure script since AFAIK the CentOS
kernel should have all of that back ported. I ran the following
configure command:

$ ./Configure linux-x86_64 shared enable-engine enable-dso \
  enable-afalgeng --prefix=/opt/openssl --openssldir=/opt/openssl


After make, I get an afalg.so in the output, but after installing
it and running openssl speed I get complaints about bind_engine
not being exported:


$ /opt/openssl/bin/openssl speed -evp aes-128-cbc -engine afalg
invalid engine "afalg"
140034190133056:error:2506406A:DSO support
routines:dlfcn_bind_func:could not bind to the requested symbol
name:crypto/dso/dso_dlfcn.c:178:symname(bind_engine):
/opt/openssl/lib/engines-1.1/afalg.so: undefined symbol: bind_engine
140034190133056:error:2506C06A:DSO support routines:DSO_bind_func:could
not bind to the requested symbol name:crypto/dso/dso_lib.c:185:
140034190133056:error:260B6068:engine routines:dynamic_load:DSO
failure:crypto/engine/eng_dyn.c:427:
140034190133056:error:2606A074:engine routines:ENGINE_by_id:no such
engine:crypto/engine/eng_list.c:339:id=afalg
140034190133056:error:25066067:DSO support routines:dlfcn_load:could not
load the shared
library:crypto/dso/dso_dlfcn.c:113:filename(libafalg.so): libafalg.so:
cannot open shared object file: No such file or directory
140034190133056:error:25070067:DSO support routines:DSO_load:could not
load the shared library:crypto/dso/dso_lib.c:161:
140034190133056:error:260B6084:engine routines:dynamic_load:dso not
found:crypto/engine/eng_dyn.c:414:
...


Running readelf on afalg.so confirms that the symbol is indeed not
in the binary. Am I missing some magic configure options or is there
some other problem?


Thanks,

David
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Alert number 43

2016-11-02 Thread David Li
Hi Jeff,
I am not sure I can post the entire cert here. Is there any part  in
particular that would be useful to debug the Alert Number 43 problem?

David

On Tue, Nov 1, 2016 at 8:07 PM, Jeffrey Walton <noloa...@gmail.com> wrote:
>> When I tested a remote server using s_client, it responded with:
>>
>> verify return:1
>>
>> 139790582232992:error:14094413:SSL routines:SSL3_READ_BYTES:sslv3
>> alert unsupported certificate:s3_pkt.c:1259:SSL alert number 43
>>
>> 139790582232992:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl
>> handshake failure:s3_pkt.c:598:
>>
>>
>> I found the the following URL about this:
>>
>> http://stackoverflow.com/questions/14435839/ssl-alert-43-when-doing-client-authentication-in-ssl?answertab=oldest#tab-top
>>
>> My question: Does this indicate something wrong with server side
>> certificate like the URL said?
>
> Netscape Cert Type was recently removed, IIRC.
>
> OpenSSL servers [used to?] have a bug where they can't use the EC key
> pair they generated for use with an EC-based certificate. Also see
> http://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography#Named_Curves.
>
> Post the certificate. Use `openssl s_client -connect <host>:<port>
> -tls1 -servername <host> | openssl x509 -text -noout`
>
> Jeff
> --
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Alert number 43

2016-11-01 Thread David Li
Hi,

When I tested a remote server using s_client, it responded with:

verify return:1

139790582232992:error:14094413:SSL routines:SSL3_READ_BYTES:sslv3
alert unsupported certificate:s3_pkt.c:1259:SSL alert number 43

139790582232992:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl
handshake failure:s3_pkt.c:598:


I found the following URL about this:

http://stackoverflow.com/questions/14435839/ssl-alert-43-when-doing-client-authentication-in-ssl?answertab=oldest#tab-top

My question: Does this indicate something wrong with server side
certificate like the URL said?

Thanks.

David
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Programmatically determine latest versions

2016-10-14 Thread David Turner
Hi,

Is there a straightforward way to programmatically determine the current set of 
latest released versions of OpenSSL?

The context is that we perform automatic security audits of some of our systems 
and one of the tickboxes is "uses the latest version of OpenSSL". At the moment 
we check the website (and some mirrors) and do a bit of munging of HTML to try 
and extract the latest version number, but this is not terribly pleasant. It 
would be awesome if there were a JSON document (or similar) that contained 
roughly the same information as the HTML table at 
https://www.openssl.org/source/. Is there such a document? For instance, is the 
list of tags in Github appropriately reliable?

If not, could such a document be created?
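
In the meantime, a low-tech sketch that avoids scraping HTML (this is not an
official interface, just the public git mirror; 'sort -V' needs GNU sort):

# list release tags for a branch and print the highest one
git ls-remote --tags https://github.com/openssl/openssl.git 'OpenSSL_1_0_2*' \
    | grep -v '\^{}' | sed 's@.*refs/tags/@@' | sort -V | tail -n 1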

Many thanks,



-- 
David Turner 
Principal Developer 
Operations & Planning Systems Division 
Tracsis 

Tracsis Operations and Planning Systems Division is a Division of Tracsis plc 
and comprises Tracsis plc (05019106), Tracsis Rail Consultancy Limited 
05047148), Safety Information Systems Limited trading as COMPASS (02588404) and 
Datasys Limited (04225250), all subsidiaries of Tracsis plc with a registered 
office at Leeds Innovation Centre,103 Clarendon Road, Leeds, LS2 9DF. VAT 
Registration No: 945 7876 61.




-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Openssl 1_1_0 compatibility question

2016-09-06 Thread david

At 09:25 AM 9/5/2016, you wrote:

david wrote:

> On the client:
> openssl enc -salt -a -A -aes128 -pass pass:123
>
> On the server:
> openssl enc -d -salt -a -A -aes128 -pass pass:123
>
> When the ENCRYPTING software is 1_0_2h and the
> decrypting software is 1_0_1e on Linux or 1_0_2h on Windows,
> the decryption successfully recovers the value "abcde".
>
> When the encrypting software is 1_1_0 and the
> decrypting software is 1_0_1e on Linux or 1_0_2h on Windows,
> it fails with the message:
>
> bad decrypt
> 139701985818440:error:06065064:digital envelope routines:
> EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:596:
>


Reason:
v1.1.0 is using the wrong key(from pass) to decrypt.

 v1.0.x: md5 is default digest
 v1.1.0: sha256 is default digest

Solution:
Specify the digest used to create the key.

 Add '-md md5' to the v1.1.0 encryption command line,
 or add '-md sha256' to the v1.0.x decryption command line.
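
In command form (a sketch using the cipher and passphrase from the example
above):

# encrypt with 1.1.0 but force the old key derivation:
openssl enc -salt -a -A -aes128 -md md5 -pass pass:123
# or decrypt with 1.0.x but force the new derivation:
openssl enc -d -salt -a -A -aes128 -md sha256 -pass pass:123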



Thanks for this.  I must have missed the change in default-digest 
algorithm in the release notes.
David 


--
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Openssl 1_1_0 compatibility question

2016-09-03 Thread david

Folks

In the home-grown application I have, data is encrypted on Windows 
clients and decrypted on Centos servers, all with OpenSSL, using a 
shared symmetric password.


My clients have been running OpenSSL versions 1.0.* with each new 
version being installed on Windows (using 
https://slproweb.com/download/Win64OpenSSL...) with no compatibility 
issues, EXCEPT when I switched from 1.0.2h to 1.1.0.


My servers are running whichever is supported by Centos systems -- 
currently 1.0.1e-fips.


My methods do the following, with my real values replaced by fixed 
values in this example:


On the client: Encrypt the value "abcde" with a password "123" with salt
  Windows command: echo abcde | openssl enc -salt -a -A -aes128 -pass pass:123

On the server: Decrypt the salted message with the password "123",
and recover the value "abcde".
  Linux command: echo (the output of the above) | openssl enc -d 
-salt -a -A -aes128 -pass pass:123


When the ENCRYPTING software is 1_0_2h and the decrypting software is 
1_0_1e on Linux or 1_0_2h on Windows, the decryption successfully 
recovers the value "abcde".
When the encrypting software is 1_1_0 and the decrypting software is 
1_0_1e on Linux or 1_0_2h on Windows, it fails with the message:


bad decrypt
139701985818440:error:06065064:digital envelope 
routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:596:


Or, in summary
When both the encrypting and decrypting software are both 1_1_0, or 
both 1_0_2(e..h), the decryption succeeded.  If the versions were 
different, it failed.


Is this a feature or a bug?  Is there some setting I should have different?

Thanks in advance

David



--
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Obtaining PKCS7 data length

2016-09-02 Thread David
On 02/09/2016 16:39, Dr. Stephen Henson wrote:
> On Tue, Aug 30, 2016, David wrote:
>> How can I obtain the length of the overall sequence which contains PKCS7
>> signed data?  This is important because the length I already have may be
>> longer than the actual PKCS7 data.
>>
> I'm curious: why do you want that information?

I am loading PKCS7 data from Windows Portable Executable files which is
used for code signing ("Authenticode").

The file structure itself gives a length for the relevant data that I
pass to d2i_PKCS7().  However there may be trailing data which does not
relate to the PKCS7 structure.

My requirement for the length is to spot errors or abuse by comparing
the length parsed by OpenSSL to the PE specific headers, e.g. to detect
issues like MS13-098 [1].

> If you want the entire length of the parsed data you can use d2i_PKCS7() to
> parse the buffer: the passed pointer is then incremented to immediately follow
> the PKCS7 structure. You can then get the length by subtracting the
> start of the buffer.

Thank you - this works fine.

David

1 - https://technet.microsoft.com/en-gb/library/security/2915720
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Obtaining PKCS7 data length

2016-08-30 Thread David
Hi,

I have some PKCS7 data which I can read like this with OpenSSL:

$ openssl asn1parse -i -inform der -in data.dat
0:d=0  hl=4 l=16208 cons: SEQUENCE
4:d=1  hl=2 l=9 prim:  OBJECT:pkcs7-signedData
.. more ..

I can load it in code like so:

// buf contains the raw data, len the length
BIO *bio = BIO_new_mem_buf(buf, len);

PKCS7 *pkcs7 = d2i_PKCS7_bio(bio, NULL);
if (!pkcs7) {
// die
}
printf("Success!");

This works fine and I can successfully obtain signer information etc.
However I'd like to obtain the length value as parsed from the input
data. In my example this was 16208, seen in the second line of the ASN1
output.

I noticed there is a length attribute to the PKCS7 structure (see
include/openssl/pkcs7.h) but pkcs7->length is always zero when I print it.

How can I obtain the length of the overall sequence which contains PKCS7
signed data?  This is important because the length I already have may be
longer than the actual PKCS7 data.
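
For a quick manual check, the asn1parse output above already contains the
answer (a sketch, using the same data.dat as in the example): the total DER
size is the header length plus the content length.

$ openssl asn1parse -i -inform der -in data.dat | head -n 1
    0:d=0  hl=4 l=16208 cons: SEQUENCE
# total size = hl + l = 4 + 16208 = 16212 bytes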

David
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Minimum openssl configuration for ssl/tls smtp email support?

2016-07-11 Thread David F.
Hi,

What configuration parameters (NO-XXX) should be passed for the
openssl library to be built to support standard TLS/SSL required for
sending emails through the public smtp servers but at the least amount
of code needed. I have it working (it only calls a few BIO_ and/or
SSL_ functions) but it adds 1MiB+ to the program .exe size. I'd like to
get that down to less than 200K (other libraries claim to do it under
50K).
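
A possible starting point, as a sketch only -- which no-* options are
acceptable depends on what the SMTP servers you talk to negotiate, and the
final size depends mostly on how much of libcrypto the linker pulls in when
you link statically:

perl Configure VC-WIN32 no-shared no-engine no-dso no-hw no-comp \
    no-ssl2 no-ssl3 no-srp no-psk no-idea no-camellia no-seed \
    no-md4 no-mdc2 no-whirlpool no-rc2 no-rc5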

Thanks.
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Firefox problems with two way SSL auth

2016-02-23 Thread David Balažic
Apparently it is OpenSSL bug/ticket number 2288.
Hopefully fixed sometime...

Regards,
David

On 12 February 2016 at 18:09, David Balažic <xerces9+...@gmail.com> wrote:
> Hi!
>
> Tomcat released version 8.0.32 which bundles OpenSSL 1.0.2e (see below)
> The issue remains (with the change that now IE can not connect at all,
> it complains about some TLS stuff, did not look into it).
>
> Any hints how to tackle this problem are welcome.
>
> Version details (from tomcat startup log):
> Loaded APR based Apache Tomcat Native library 1.2.4 using APR version 1.5.1.
> OpenSSL successfully initialized (OpenSSL 1.0.2e 3 Dec 2015)
>
> Regards,
> David
>
>
> On 8 January 2016 at 17:02, David Balažic <xerces9+...@gmail.com> wrote:
>> Hi!
>>
>> I encounter this issue when using Firefox to access tomcat (that is
>> using openssl) with client cert authentication.
>>
>> After a certain timeout, the web application does not "see" the
>> clients certificate in requests.
>>
>> The problem happens on different operating systems (Window,s Linux)
>> and browsers.
>>
>> I reported it to tomcat and Firefox, with not much response.
>>
>> There is a simple test case in comment 1 of the tomcat bug (see below).
>>
>> Could someone assist in finding the cause of the problem?
>> I also have pcap traces (somewhere) of working and non working network 
>> traffic.
>>
>>
>> Latest tested configuration:
>> tomcat 8.0.30, using OpenSSL 1.0.1m 19 Mar 2015
>> Firefox 43.0.4
>> OS: Windows 7 Pro SP1 64bit
>>
>> The tomcat bug with much details:
>>
>> https://bz.apache.org/bugzilla/show_bug.cgi?id=58244
>>
>> Firefox bug report (not much details):
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1231406
>>
>> Regards,
>> David Balažic
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Firefox problems with two way SSL auth

2016-02-12 Thread David Balažic
Hi!

Tomcat released version 8.0.32 which bundles OpenSSL 1.0.2e (see below)
The issue remains (with the change that now IE can not connect at all,
it complains about some TLS stuff, did not look into it).

Any hints how to tackle this problem are welcome.

Version details (from tomcat startup log):
Loaded APR based Apache Tomcat Native library 1.2.4 using APR version 1.5.1.
OpenSSL successfully initialized (OpenSSL 1.0.2e 3 Dec 2015)

Regards,
David


On 8 January 2016 at 17:02, David Balažic <xerces9+...@gmail.com> wrote:
> Hi!
>
> I encounter this issue when using Firefox to access tomcat (that is
> using openssl) with client cert authentication.
>
> After a certain timeout, the web application does not "see" the
> clients certificate in requests.
>
> The problem happens on different operating systems (Window,s Linux)
> and browsers.
>
> I reported it to tomcat and Firefox, with not much response.
>
> There is a simple test case in comment 1 of the tomcat bug (see below).
>
> Could someone assist in finding the cause of the problem?
> I also have pcap traces (somewhere) of working and non working network 
> traffic.
>
>
> Latest tested configuration:
> tomcat 8.0.30, using OpenSSL 1.0.1m 19 Mar 2015
> Firefox 43.0.4
> OS: Windows 7 Pro SP1 64bit
>
> The tomcat bug with much details:
>
> https://bz.apache.org/bugzilla/show_bug.cgi?id=58244
>
> Firefox bug report (not much details):
> https://bugzilla.mozilla.org/show_bug.cgi?id=1231406
>
> Regards,
> David Balažic
-- 
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Firefox problems with two way SSL auth

2016-01-08 Thread David Balažic
Hi!

I encounter this issue when using Firefox to access tomcat (that is
using openssl) with client cert authentication.

After a certain timeout, the web application does not "see" the
clients certificate in requests.

The problem happens on different operating systems (Window,s Linux)
and browsers.

I reported it to tomcat and Firefox, with not much response.

There is a simple test case in comment 1 of the tomcat bug (see below).

Could someone assist in finding the cause of the problem?
I also have pcap traces (somewhere) of working and non working network traffic.


Latest tested configuration:
tomcat 8.0.30, using OpenSSL 1.0.1m 19 Mar 2015
Firefox 43.0.4
OS: Windows 7 Pro SP1 64bit
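
One way to take the browser out of the picture while testing (a sketch; host,
port and file names are placeholders) is to drive the handshake with s_client
and watch whether the client certificate is still sent after the timeout:

openssl s_client -connect host.example.com:8443 \
    -cert client-cert.pem -key client-key.pem -CAfile ca.pem -state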

The tomcat bug with much details:

https://bz.apache.org/bugzilla/show_bug.cgi?id=58244

Firefox bug report (not much details):
https://bugzilla.mozilla.org/show_bug.cgi?id=1231406

Regards,
David Balažic
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] v1.1.0-pre1 - Trouble compiling with (1) no-threads and (2) no-psk no-srp

2015-12-26 Thread David Boulding
v1.1.0-pre1 on linux


(1) Compiling with "no-threads " gives error on lines 173 and 379 in async.c.

possible cause: async_fibre_makecontext() function

async_posix.h @ line 57: #if defined(OPENSSL_SYS_UNIX) && 
defined(OPENSSL_THREADS)

seems threads is required?



(2) Compiling with no-psk and no-srp gives error on line 1692 in statem_clnt.c

possible cause: with no-psk and no-srp nop'd out by #ifndef's above, line 1692 
starts with "else if" instead of "if"



Neither of these errors occurred in 1.0.2e.
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-25 Thread David García
Exactly, that's my point, I have to integrate with a third party API, so I
can't do anything else but to send the ciphered text as expected.

Anyway, thanks for your explanation on this issue, I'll take it into
account and try to contact third party support team.


Thanks.

2015-11-25 11:23 GMT+01:00 Viktor Dukhovni <openssl-us...@dukhovni.org>:

> On Wed, Nov 25, 2015 at 11:14:48AM +0100, David García wrote:
>
> > Viktor, you pointed me to the right way. I was missing the -nopad flag in
> > the openssl command.
>
> Not using padding is fragile and can lead to subtle data corruption.
> Perhaps not padding is safe and correct in your case, but I am
> skeptical and you should be too.  If you're constrained to interoperate
> with existing code that is not padding, that code is questionable,
> but you may have no choice but to follow suite.  If you're free to
> choose formats, you should probably pad.
>
> --
> Viktor.
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>



-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-25 Thread David García
Viktor, you pointed me to the right way. I was missing the -nopad flag in
the openssl command.

I don't need to do the padding through the cipher algorithm because I do
the 0 padding manually before executing the ciphering.

Now it matches. This is the command I am using (for this manual example I
am providing an already multiple of 8 string, so I have removed the first
char of the input string for testing):

echo -n 05863330 | openssl enc -e -des-ede3-cbc -K
'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv 0000000000000000 -nopad |
openssl enc -base64


Thanks Viktor.

2015-11-25 10:39 GMT+01:00 Viktor Dukhovni <openssl-us...@dukhovni.org>:

> On Wed, Nov 25, 2015 at 09:18:15AM +0100, David García wrote:
>
> > H6cr2yN8oWV6AUY/JlknQw==
>
> Decrypting in ECB mode you get:
>
> $ echo H6cr2yN8oWV6AUY/JlknQw== |
> openssl base64 -d |
> openssl enc -d -des-ede3 -K
> 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -nopad |
> hexdump -ve '/1 "%02x"'; echo
> 30303538363333332fa02cdc247ba662
>
> > but is not exactly the same result I get for the same input in my Java
> and
> > PHP examples. In those ones I get:
> >
> > H6cr2yN8oWUVY3a6/Vaaow==
>
> Decrypting in ECB mode you get:
>
> $ echo H6cr2yN8oWUVY3a6/Vaaow== |
> openssl base64 -d |
> openssl enc -d -des-ede3 -K
> 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -nopad |
> hexdump -ve '/1 "%02x"'; echo
> 30303538363333332fa72bdb237ca165
>
> The initial 8-byte blocks are identical, but the trailing blocks
> differ subtly.  The hexdump of the OpenSSL ciphertext is:
>
> $ echo H6cr2yN8oWV6AUY/JlknQw== |
> openssl base64 -d |
> hexdump -ve '/1 "%02x"'; echo
> 1fa72bdb237ca1657a01463f26592743
>
> If you XOR the common first block of ciphertext into each of the
> second decrypted blocks you get:
>
> $ perl -le '
> for ( (0x2fa02cdc247ba662, 0x2fa72bdb237ca165) ) {
> printf "%016x\n", ($_ ^ 0x1fa72bdb237ca165)
> }'
> 3007070707070707
> 3000000000000000
>
> What you see is the effect of PKCS#5 padding in the case of OpenSSL,
> and zero-padding (which is not reversible and not suitable for
> encrypting ciphertext that is a not a multiple of 8 bytes in length)
> in Java.  You've failed to configure the correct padding mode.
>
> --
> Viktor.
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>



-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-25 Thread David García
Thanks, you are right. I did a test with

echo -n 005863330

and

echo 005863330

and the last one adds the new line character.
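
A quick way to see that byte, as a sketch, is to hex-dump both variants:

echo -n 005863330 | od -c    # just the nine characters
echo 005863330 | od -c       # same, plus a trailing \n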

I also checked that openssl is not adding this new line character. Now with
this command:

echo -n 005863330 | openssl enc -e -des-ede3-cbc -K
'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv 0000000000000000 -nosalt |
openssl enc -base64

I get:

H6cr2yN8oWV6AUY/JlknQw==

but is not exactly the same result I get for the same input in my Java and
PHP examples. In those ones I get:

H6cr2yN8oWUVY3a6/Vaaow==


In the Java and PHP examples the input data is hardcoded in text:

text "005863330"
key "b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b"


Regards.

2015-11-24 18:19 GMT+01:00 Jay Foster <jayf0s...@roadrunner.com>:

> It is very likely that your text file also contains a newline at the end,
> so getting the same result as with the echo command would be expected.  If
> it is indeed the newline that is making the difference, you could try using
> the echo command with the '-n' option to suppress it.
>
> Jay
>
>
> On 11/24/2015 9:12 AM, David García wrote:
>
> Sorry, still not getting the same result, now with the command:
>
> echo 005863330 | openssl enc -e -des-ede3-cbc -K
> 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv  -nosalt |
> openssl enc -base64
>
> I get:
>
> H6cr2yN8oWXn2RxiDqnXLg==
>
> but I should get:
>
> H6cr2yN8oWUVY3a6/Vaaow==
>
>
> BTW I get the same result if the text in the echo is between '' or is read
> from a text file.
>
> 2015-11-24 18:07 GMT+01:00 David García <garcia.narb...@gmail.com>:
>
>> You are right Viktor, that was my problem.
>>
>> Thank you very much for your help Viktor and Michael.
>>
>> 2015-11-24 18:00 GMT+01:00 Viktor Dukhovni < <openssl-us...@dukhovni.org>
>> openssl-us...@dukhovni.org>:
>>
>>> On Tue, Nov 24, 2015 at 05:55:42PM +0100, David García wrote:
>>>
>>> > openssl enc -e -des-ede3-cbc -in myfile.txt -k
>>> > 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv 
>>> -nosalt |
>>> > openssl enc -base64
>>>
>>> Please read Michael's message carefully.  Note the comment about
>>> "-k" vs. "-K" (upper-case).
>>>
>>> --
>>> Viktor.
>>> ___
>>> openssl-users mailing list
>>> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>>>
>>
>>
>>
>> --
>> David
>>
>
>
>
> --
> David
>
>
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>
>
>
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>
>


-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-24 Thread David García
You are right Viktor, that was my problem.

Thank you very much for your help Viktor and Michael.

2015-11-24 18:00 GMT+01:00 Viktor Dukhovni <openssl-us...@dukhovni.org>:

> On Tue, Nov 24, 2015 at 05:55:42PM +0100, David García wrote:
>
> > openssl enc -e -des-ede3-cbc -in myfile.txt -k
> > 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv  -nosalt |
> > openssl enc -base64
>
> Please read Michael's message carefully.  Note the comment about
> "-k" vs. "-K" (upper-case).
>
> --
> Viktor.
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>



-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-24 Thread David García
Sorry, still not getting the same result, now with the command:

echo 005863330 | openssl enc -e -des-ede3-cbc -K
'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv 0000000000000000 -nosalt |
openssl enc -base64

I get:

H6cr2yN8oWXn2RxiDqnXLg==

but I should get:

H6cr2yN8oWUVY3a6/Vaaow==


BTW I get the same result if the text in the echo is between '' or is read
from a text file.

2015-11-24 18:07 GMT+01:00 David García <garcia.narb...@gmail.com>:

> You are right Viktor, that was my problem.
>
> Thank you very much for your help Viktor and Michael.
>
> 2015-11-24 18:00 GMT+01:00 Viktor Dukhovni <openssl-us...@dukhovni.org>:
>
>> On Tue, Nov 24, 2015 at 05:55:42PM +0100, David García wrote:
>>
>> > openssl enc -e -des-ede3-cbc -in myfile.txt -k
>> > 'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv  -nosalt
>> |
>> > openssl enc -base64
>>
>> Please read Michael's message carefully.  Note the comment about
>> "-k" vs. "-K" (upper-case).
>>
>> --
>> Viktor.
>> _______
>> openssl-users mailing list
>> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>>
>
>
>
> --
> David
>



-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-24 Thread David García
I am sorry, I pasted an invalid key I was playing with to check some other
things. Next, the real key and now reading the value from a file instead
from echo (BTW I am using a linux terminal):

openssl enc -e -des-ede3-cbc -in myfile.txt -k
'b2aec78eb50e05f2a60b9efa20b82c903e6cad4f3bd2027b' -iv 0000000000000000 -nosalt |
openssl enc -base64

myfile.txt (edited with vim) contains the string:

005863330

The value I get is:

SYqzNH5u8ExzyakWO3Cj/A==

meanwhile the one I am getting from Java and PHP examples is:

H6cr2yN8oWUVY3a6/Vaaow==


Regards.

2015-11-24 16:28 GMT+01:00 Michael Wojcik <michael.woj...@microfocus.com>:

>
> > echo 'text_to_cypher' | openssl enc -e -des-ede3-cbc -k
> 'b2aec78eb50e04f2a60b9efa20b82c903e3cad4f3bd2027g' -iv  -nosalt |
> openssl enc -base64
>
> That echo command will append a LF (x'0a') byte (if this is a conventional
> UNIX or Linux system, or Cygwin, etc, and you're running under one of the
> standard shells). Do you have that byte in the value of your "cleartext"
> variable in the Java code? You failed to supply that. (Also, the
> single-quote characters are unnecessary, unless you're running a very odd
> shell.)
>
> The value of the -k argument you're passing to "openssl enc" ends with
> "g", which is not a hexadecimal digit; the rest of the value appears to be
> hexadecimal. But it's not clear why you're using -k anyway. Perhaps you
> mean to use -K (uppercase K, with an actual hexadecimal argument)?
>
>
> --
> Michael Wojcik
> Technology Specialist, Micro Focus
>
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>



-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] openssl des-ede3-cbc does not match with Java one

2015-11-24 Thread David García
Hi,

I am trying to use openssl command line tool for des-ede3-cbc encryption,
but it does not mach with the one I have in Java (and that I know that
works ok). I try to generate a des-ede3-cbc encryption with an IV =
0,0,0,0,0,0,0,0. Then I launch following command:


echo 'text_to_cypher' | openssl enc -e -des-ede3-cbc -k
'b2aec78eb50e04f2a60b9efa20b82c903e3cad4f3bd2027g' -iv 0000000000000000 -nosalt |
openssl enc -base64


But I don't get the same result as the one I get in Java using Cipher:

private final byte [] IV = {0, 0, 0, 0, 0, 0, 0, 0};
.
DESedeKeySpec desKeySpec = new DESedeKeySpec(toByteArray(hexKey));
SecretKey desKey = new SecretKeySpec(desKeySpec.getKey(), "DESede");
Cipher desCipher = Cipher.getInstance("DESede/CBC/NoPadding");
desCipher.init(Cipher.ENCRYPT_MODE, desKey, new IvParameterSpec(IV));

// text zero-padded to make it a multiple of 8

byte[] ciphertext = desCipher.doFinal(cleartext);
new String(Base64.encodeBase64(ciphertext), "UTF-8");



Could anyone point me to what I am doing worng in this command line call?

Thanks in advance.
-- 
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] openssl-users Digest, Vol 11, Issue 5

2015-10-12 Thread David Lobron
...when I call d2i_RSAPrivateKey on the _der object, it fails.  Here is the code that throws the exception:

// validate; throws exception if key invalid
- (void)validate
{
const unsigned char *p = (unsigned char *)[_der bytes];
RSA *r = d2i_RSAPrivateKey(0, &p, [_der length]);
int n;
if (r == 0)
[NSException raise:X509CertificateExcInvalidPrivateKey format:@"cannot 
decode RSA private key"];
NS_DURING {
switch (n = RSA_check_key(r)) {
case 1: // ok
break;
default:
[NSException raise:X509CertificateExcInvalidPrivateKey 
format:@"RSA_check_key() returned %d", n];
}
} NS_HANDLER {
RSA_free(r);
[localException raise];
} NS_ENDHANDLER
RSA_free(r);

}

Thanks for any help you can give here!

--David




___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] d2i_RSAPrivateKey not working on a private key

2015-10-09 Thread David Lobron
Hello openssl people,

I am trying to read a private key of a certificate into memory using 
d2i_RSAPrivateKey.  I'm able to read the certificate without a problem, but 
when I pass the private key to d2i_RSAPrivateKey, it fails to parse.  I do not 
see an error message or errno being set - d2i_RSAPrivateKey simply returns 
NULL.  I've generated a self-signed cert which reproduces the problem, and I've 
attached it to this message (this is a throwaway cert, not in use for anything, 
so I'm knowingly sending the private key).  The command I used to generate this 
cert and its key was:

openssl req -x509 -newkey rsa:1024 -keyout key.pem -out cert.pem -days 36500 
-nodes -outform PEM

I have another cert where the private key *is* parseable by d2i_RSAPrivateKey.  
I printed out both certs from the command line, and compared them.  They appear 
almost identical.  The only difference I see is that when I print the attached 
unparseable cert, the Signature Algorithm section has 8 lines of hex.  In the 
parseable cert, I see 15 lines of hex.  Both certs use sha1WithRSAEncryption as 
the algorithm, with 1024 bits.

Can anyone help me understand why the private key in the attached cert is not 
readable by d2i_RSAPrivateKey?  I'm running these tests on a Mac, but the same 
thing happens on Ubuntu Linux.
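
One thing that may be worth checking (a guess, not a confirmed diagnosis; file
names below are placeholders): d2i_RSAPrivateKey() only parses the traditional
PKCS#1 RSAPrivateKey encoding, while "openssl req ... -nodes" on 1.0.0 and
later writes an unencrypted PKCS#8 "BEGIN PRIVATE KEY" block by default, which
d2i_RSAPrivateKey() rejects. The PEM header shows which form you have, and
"openssl rsa" converts to the form d2i_RSAPrivateKey() expects:

head -n 1 key.pem
# "-----BEGIN RSA PRIVATE KEY-----"  -> traditional PKCS#1
# "-----BEGIN PRIVATE KEY-----"      -> PKCS#8

# convert to traditional RSA DER:
openssl rsa -in key.pem -outform DER -out key-rsa.der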

Thank you,

David

Printout of the attached cert, which fails to parse with d2i_RSAPrivateKey:

MacBook-Air:self_signed dlobron$ openssl x509 -in cert.1024.combined -text 
-noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 17702003413458844255 (0xf5aa2650b7f77a5f)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=US, ST=Massachusetts, L=Cambridge, O=Akamai Technologies, 
OU=KMI, 
CN=akamai.normandy_authority.client_gateway_ca.1/emailAddress=dlob...@akamai.com
Validity
Not Before: Oct  8 15:47:30 2015 GMT
Not After : Jan 16 15:47:30 2016 GMT
Subject: C=US, ST=Massachusetts, L=Cambridge, O=Akamai Technologies, 
OU=KMI, 
CN=akamai.normandy_authority.client_gateway_ca.1/emailAddress=dlob...@akamai.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (1024 bit)
Modulus:
00:c2:33:df:d8:cb:c9:6e:a4:98:f0:b7:b1:06:51:
77:f8:6c:36:4b:f3:ab:fc:09:ab:98:13:d5:0a:03:
63:31:c4:ce:6f:02:12:b5:c4:4c:83:17:39:c2:b8:
27:89:a5:80:56:36:72:19:8b:9a:dd:e5:e2:22:60:
53:96:f9:4d:c0:f1:c6:06:5f:1b:95:de:b7:8e:d2:
ef:e8:ff:84:81:73:45:c9:a5:52:6d:af:8e:6a:16:
bf:23:97:66:5e:d8:1f:0e:e9:1b:d3:03:e3:cd:4c:
02:2f:68:f0:a5:70:a3:90:f5:19:8d:f5:6b:d1:87:
e7:82:39:f9:09:1b:ee:56:f9
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier: 
2F:D9:17:38:F0:9E:03:2C:57:E5:FF:20:24:BC:F1:AA:2C:35:AB:D5
X509v3 Authority Key Identifier: 

keyid:2F:D9:17:38:F0:9E:03:2C:57:E5:FF:20:24:BC:F1:AA:2C:35:AB:D5

X509v3 Basic Constraints: 
CA:TRUE
Signature Algorithm: sha1WithRSAEncryption
 5d:5c:c0:10:c3:60:10:c5:d4:30:cf:90:41:32:d9:73:1f:03:
 66:a5:3b:ca:e2:99:2f:89:10:0e:4d:d6:b3:1d:97:ae:0a:54:
 46:0b:a8:51:02:97:c6:41:32:16:db:7c:77:28:e8:df:73:70:
 a0:01:73:b6:84:90:b5:a8:b7:54:53:7d:a9:cd:81:33:35:6d:
 58:5e:ba:e2:7d:34:7a:32:c9:fd:4f:07:18:75:a7:53:3d:61:
 1b:98:7a:e6:92:5b:74:39:e1:ab:b2:6a:51:4a:56:c5:99:1e:
 d7:7a:7a:b6:32:e8:ca:f2:33:bc:3f:d5:3c:3f:87:2a:9f:ab:
 37:c8




___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Best way to combine ControlPersist and ProxyCommand?

2015-09-11 Thread David Coppit
Hi all,

What's the best way to set up a persistent master connection, along with a
proxy jump host? Ideally the persistent master would speed up connections
to machines behind the proxy, not just the connection to the proxy.

Is this okay?

Host jumpbox
User   jumpboxuser
IdentityFile   jumpbox_key
ControlMaster  auto
ControlPath~/.ssh/controlmaster-%r@%h:%p
ControlPersist 5m

Host internal1 internal2 internal3 internal4
User   internaluser
IdentityFile   internal_key
ProxyCommand   ssh -W %h:%p -F ssh.config jumpbox
ControlMaster  auto
ControlPath~/.ssh/controlmaster-%r@%h:%p
ControlPersist 5m

I was worried that the internal[1234] controlmaster connections would be
multiplexed through the jumpbox one, but I stopped the jumpbox master with
"-O stop", verified that the socket file was gone, and the internal[1234]
controlmaster connections seemed to keep working.

Thanks,
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Strange problem in using verify command

2015-09-10 Thread David Li
Hi,

I am using "openssl verify -CAfile  " to verify the
certificate. It's been running as expected.

Recently I started to run this command on a different x86 platform.
What I found is that the first few times I always got:

error 9 at 1 depth lookup:certificate is not yet valid

Then I waited 10 min and reran the same cmd and got "OK".

I am puzzled by this. Is this some timing issue?

My openssl version is:

OpenSSL 1.0.1e-fips 11 Feb 2013

David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Strange problem in using verify command

2015-09-10 Thread David Li
Hi Jakob,

The computer has been up and running for quite a while. I wonder if it
really needs NTP to take that long to sync up.
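
A quick way to confirm whether it is a clock issue (a sketch; the file name is
a placeholder) is to compare the CA certificate's validity window with the
box's idea of the current time:

openssl x509 -in cacert.pem -noout -startdate -enddate
date -u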

David

On Thu, Sep 10, 2015 at 7:20 PM, Jakob Bohm <jb-open...@wisemo.com> wrote:
> On 11/09/2015 02:13, David Li wrote:
>>
>> Hi,
>>
>> I am using "openssl verify -CAfile  " to verify the
>> certificate. It's been running as expected.
>>
>> Recently I started to run this command on a different x86 platform.
>> What I found is the the first few times I always got:
>>
>> error 9 at 1 depth lookup:certificate is not yet valid
>>
>> Then I waited 10 min and reran the same cmd and got "OK".
>>
>> I am puzzled by this. Is this a some timing issue?
>
> Chances are that the clock on the computer was wrong until
> NTP corrected it from the network.
>>
>>
>> My openssl version is:
>>
>> OpenSSL 1.0.1e-fips 11 Feb 2013
>
> That's kind of old.
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>
> ___
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] OPENSSL_SYS_VOS meaning

2015-08-25 Thread David Luengo López
Thank you Wim,

I'll take a look to the notes you linked. And talking about the entropy
pool, do you know where in the code is the entropy pool filled with more
entropy? I'm doing some searches looking for /dev/u?random (I still need to
invest more time on this though), but it would be great if you can put me
on the way.

Thank you,

Best regards,


On Mon, Aug 24, 2015 at 10:34 PM, Wim Lewis w...@omnigroup.com wrote:

 On Aug 24, 2015, at 11:33 AM, David Luengo López dlue...@rti.com wrote:
  439 #define DUMMY_SEED  /* at least
 MD_DIGEST_LENGTH */
  440 /* Note that the seed does not matter, it's just that
  441  * ssleay_rand_add expects to have something to hash. */
  442 ssleay_rand_add(DUMMY_SEED, MD_DIGEST_LENGTH, 0.0);
 
  I don't know why the 0.0 parameter, since we are not adding anything
 here I never get more entropy in the pool. Any explanation for this 0.0?

 Because there is actually no entropy in DUMMY_SEED --- it's a constant.
 This piece of code is stirring the pool; it doesn't increase the amount
 of entropy (unpredictability) in the pool, it just makes sure that all the
 bits of the pool are equally unpredictable. Actual entropy must be added by
 some other piece of code.

  Anyone knows what does OPENSSL_SYS_VOS macro means?

 The notes from the patch from Paul Green adding randomness support for VOS
 might have useful information for you:

 https://rt.openssl.org/Ticket/Display.html?id=2563&user=guest&pass=guest

 (I do not know enough about VxWorks or VOS to say whether defining
 OPENSSL_SYS_VOS safely solves your problem, though it seems plausible)


 ___
 openssl-users mailing list
 To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users




-- 

David Luengo López
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] OPENSSL_SYS_VOS meaning

2015-08-24 Thread David Luengo López
Hello openssl-ers,

Anyone knows what does OPENSSL_SYS_VOS macro means?

I'm building OpenSSL (1.0.1l) for Vxworks and I when I ran some tests I got
some errors while generating a new session ID because it seems it is
running out of entropy in the entropy pool. I've been looking into the code
and I have observed that, in the end, my entropy pool is being filled by
calling the ssleay_rand_add() function in file crypto/rand/md_rand.c:

435 #if MD_DIGEST_LENGTH < 20
436 # error Please adjust DUMMY_SEED.
437 #endif
438
439 #define DUMMY_SEED  /* at least MD_DIGEST_LENGTH */
440 /* Note that the seed does not matter, it's just that
441  * ssleay_rand_add expects to have something to hash. */
442 ssleay_rand_add(DUMMY_SEED, MD_DIGEST_LENGTH, 0.0);

I don't know why the 0.0 parameter, since we are not adding anything here I
never get more entropy in the pool. Any explanation for this 0.0?

I realized then the RAND_poll() function in the crypto/rand/rand_unix.c
file:

426 #if defined(OPENSSL_SYS_VXWORKS)
427 int RAND_poll(void)
428 {
429 return 0;
430 }
431 #endif

I'm confused about this, but I also realized that, surrounded by #if
defined(OPENSSL_SYS_VOS) there is a nice implementation of the RAND_poll(),
so I built my library using it and now it seems to work. So, going back to
the main question, does anybody knows what is the OPENSSL_SYS_VOS and for
what is it used? And for some extra points, why that RAND_poll for
vxworks...

I'll keep investigating in all this.

Thank you in advance,

Best regards,


-- 

David Luengo López
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] SSL_CTX_load_verify_locations only with CAPath

2015-07-06 Thread David Thompson
 From: openssl-users On Behalf Of Dr. Roger Cuypers
 Sent: Monday, July 06, 2015 10:43

 Follow up:

 For some reason, the X509_NAME_hash function calculates a very different
 hash for the server certificate:

 5ad8a5d6

 Renaming the certificate to 5ad8a5d6.0 causes it to be found, but I wonder
 where the difference in the hashes lies.

[reformatted]
 openssl x509 -in D:\certs\-.wikipedia.org.crt -out D:\certs\-.wikipedia.org.der -outform DER
 openssl x509 -in D:\certs\-.wikipedia.org.der -inform DER -out D:\certs\-.wikipedia.org.pem -outform PEM

Aside: those first two steps accomplish nothing; -.wikipedia.org.crt was
already PEM (we know it worked in CAfile). 'x509' reads PEM by default.

 openssl x509 -in D:\certs\-.wikipedia.org.pem -noout -subject_hash
 690deae8

 Then in D:\certs:

 D:\certs> mklink /h 690deae8.0 -.wikipedia.org.pem

snip

I bet you put the entire cert *chain* in the -.wikipedia.org.crt file.

The leaf cert (currently) used by wikipedia, with
subject= /C=US/ST=California/L=San Francisco/O=Wikimedia Foundation, 
Inc./CN=*.wikipedia.org
issuer= /C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA - 
SHA256 - G2
serial=1121972E32A5E5B2E29D472DFEDB72D6276E
notBefore=Dec 16 21:24:03 2014 GMT
notAfter=Feb 19 12:00:00 2017 GMT
has subject hash 690deae8.
This cert is sent from the server. It is not looked up in the truststore
and does not need to be in the truststore; if it is that copy is ignored.

The *root* cert for that wikipedia chain is
subject= issuer= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
serial=0401154B5AC394
notBefore=Sep  1 12:00:00 1998 GMT
notAfter=Jan 28 12:00:00 2028 GMT
and this has subject hash 5ad8a5d6. This is the only cert that needs to be
or is looked up in the truststore, and thus for CApath needs correct hash.

I thought, as the doc has (always? long?) said, that CApath must have
each cert (or CRL) in a separate file. But on checking I see that by_dir.c
actually calls X509_load_{cert,crl}_file from by_file.c, which for PEM
loads all certs (or crls) in a file to the working context. Thus a hashlink
to only the 3rd cert in a file, where that 3rd cert is the only one you need,
actually works even though not documented and I'm not sure intended.





___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Certificate serialnumber?

2015-07-06 Thread David Thompson
 From: openssl-users On Behalf Of Salz, Rich
 Sent: Sunday, July 05, 2015 11:56
[in response to message about 'ca']
   the question: where does the serial number for this certificate come
 from?
   is it random by default when nothing is said about it?

 It will be random if (a) the serial file does not exist; and (b) you specify
 the -create_serial flag.  Otherwise it opens the file, reads the number (defaulting
 to zero if not exists) and increments it, updates the file, and uses that as the
 new serial number.

One point I didn't notice until you pointed me at:

FOR 'ca': If the serial file exists, the current value is read (ERROR if none or bad,
not zero), THAT value is used, and then the incremented value is written back.
If the file doesn't exist and you specify create, a random value is used, then
the incremented value written. If the file doesn't exist and you don't
specify create, error.

FOR 'x509' with -set_serial, that is used and the serial file is ignored. Otherwise
same as above, except the value is incremented BEFORE it is used -- and
the create option is spelled -CAcreateserial instead of -create_serial.

In short, 'ca' is like N++ in C but 'x509' is like ++N . Yikes!




___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] SSL_CTX_load_verify_locations only with CAPath

2015-07-05 Thread David Thompson
From: openssl-users On Behalf Of Dr. Roger Cuypers
Sent: Friday, July 03, 2015 11:01
 I'm trying to do peer client verification using the 
 SSL_CTX_load_verify_locations function
snip: CAfile works
 However, setting only CAPath will not: snip
 This will result in a X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error.
 The cert directory D:\\certs looks like this:
 -.wikipedia.org.crt
 ca_client.jks
 ca_server.jks
 My expectation would be that the library uses -.wikipedia.org.crt
 As it is the only certificate available or am I doing something wrong?

A truststore generally can contain many certs, not just one. OpenSSL needs to
find the correct one(s) for each connection and uses special names for this.

From manpage 
https://www.openssl.org/docs/ssl/SSL_CTX_load_verify_locations.html

If CApath is not NULL, it points to a directory containing CA certificates in 
PEM format.
The files each contain one CA certificate. The files are looked up by the
CA subject name hash value, which must hence be available. If more than
one CA certificate with the same name hash value exist, the extension
must be different (e.g. 9d66eef0.0, 9d66eef0.1 etc). The search is performed
in the ordering of the extension number, regardless of other properties
of the certificates. Use the c_rehash utility to create the necessary links.

The semantically similar https://www.openssl.org/docs/apps/verify.html
-CApath option is slightly briefer:

A directory of trusted certificates. The certificates should have names of 
the form:
hash.0 or have symbolic links to them of this form (hash is the hashed 
certificate
subject name: see the -hash option of the x509 utility). Under Unix the c_rehash
script will automatically create symbolic links to a directory of certificates.

Note c_rehash only works on Unix, or quasi-Unix like Cygwin/Windows.
For native Windows, one or two certs is easiest done by hand:
  openssl x509 -in certfile.pem -noout -subject_hash
  (outputs a hex number; call it HASH)
  mklink /h HASH.0 certfile.pem
To do it automatically you need a trick to capture the value, something like
  for %f in (*.pem) do for /f %h in ('openssl x509 -in %f -noout -subject_hash') ^
  do mklink /h %h.0 %f
except that doesn't handle errors or collisions intelligently. (And if you want
to make it a .bat file, remember all local % must be doubled in .bat.)

Note this *method* hasn't changed for at least a decade, but the *hash*
used for -subject_hash did change between 0.9.8 and 1.0.0. If you create
hashlinks with 0.9.8 they won't work for 1.0.0 and up, and vice versa.

And note only *one* cert per file in CApath is used. If your wikipedia.crt
file has multiple certs, using it as CAfile puts them *all* in the truststore
and uses them if needed, but for CApath you must split out a separate file
for each needed cert, each with a hashlink (or name) as above.

But the server I get for wikipedia.org:443 (208.80.154.224) (as it should)
provides full chain up to but excluding GlobalSign Root CA, so that root
is the only cert you should need regardless of store format.
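On the API side, the two store styles map onto the two arguments of
SSL_CTX_load_verify_locations(); a minimal sketch (the paths are placeholders,
and real code should check the return values):

    #include <openssl/ssl.h>

    static int setup_truststore(SSL_CTX *ctx)
    {
        /* CAfile style: one file, possibly containing many PEM certs;
         * everything in it becomes available to the verifier. */
        if (SSL_CTX_load_verify_locations(ctx, "/etc/ssl/cacert.pem", NULL) != 1)
            return 0;

        /* CApath style: a directory of hash-named files (e.g. 5ad8a5d6.0);
         * certs are looked up lazily by subject-name hash as described above. */
        if (SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/certs") != 1)
            return 0;

        return 1;
    }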




___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Certificate serialnumber?

2015-07-05 Thread David Thompson
 From: openssl-users On Behalf Of Walter H.
 Sent: Sunday, July 05, 2015 06:49

snip: CentOS default
 openssl req -new -newkey rsa:2048 -subj '/CN=Squid SSL-Bump
 CA/C=/O=/OU=/' -sha256 -days 365 -nodes -x509 -keyout ./squidCA.pem
 -out ./squidCA.pem

 the question: where does the serial number for this certificate come from?
 is it random by default when nothing is said about it?

Quoting the man page for req(1) -- although depending on the packaging
which I don't know for CentOS it may be a different section like 1s or 1ssl --
and also on the web https://www.openssl.org/docs/apps/req.html

-x509
this option outputs a self signed certificate instead of a certificate 
request.
This is typically used to generate a test certificate or a self signed root CA.
The extensions added to the certificate (if any) are specified in the
configuration file. Unless specified using the set_serial option,
a large random number will be used for the serial number.

 would this be also an option when using openssl like this:

 openssl ca -batch -config any.cnf -name any_ca -md sha256 -startdate
 ...  -enddate ... 

'ca' always uses the value currently in a 'serial' file configured in the
configuration file, and increments it, thus using sequential numbers
when you issue more than one cert. 'ca' also records issued certs
in a 'database' file usually named index.txt (a VERY SIMPLE db,
just a file with text lines and columns) which makes sequential
numbers convenient. If you want nonsequential numbers
you can edit the serial file before each or any execution of 'ca'.
This is mostly described on the man page for ca(1ssl), although
on checking I see it isn't actually stated that serial values are
incremented; you're supposed to infer that from the usual
meaning of the word, although the X.509 meaning has diverged.

OpenSSL's other, simpler but less capable way to issue a child
cert is 'openssl x509' with '-req' and '-CA', plus '-CAkey' unless
the key is in the (CA)cert file, and other options as needed.
In this method you may specify '-set_serial' as an option;
else it uses the serial-file method like 'ca' except the filename
may be an option or defaults to the (CA)cert file name with
.pem or other suffix changed to .srl. And 'x509 -req -CA' does
NOT record the index.txt 'database'. Now, where do you think
documentation of 'x509' might be?
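As an illustration of the other branch (the large random serial that 'req -x509'
uses when no -set_serial is given), here is a rough C sketch; the 64-bit size is
just illustrative, and real code would check each return value individually:

    #include <openssl/bn.h>
    #include <openssl/x509.h>

    static int set_random_serial(X509 *cert)
    {
        int ok = 0;
        BIGNUM *bn = BN_new();
        ASN1_INTEGER *serial = ASN1_INTEGER_new();

        if (bn != NULL && serial != NULL
            && BN_rand(bn, 64, 0, 0)                 /* a 64-bit random value */
            && BN_to_ASN1_INTEGER(bn, serial) != NULL
            && X509_set_serialNumber(cert, serial))  /* copies 'serial'       */
            ok = 1;

        ASN1_INTEGER_free(serial);
        BN_free(bn);
        return ok;
    }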





___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Certificate serialnumber?

2015-07-05 Thread David Thompson
 From: openssl-users On Behalf Of Ben Humpert
 Sent: Sunday, July 05, 2015 07:58

 Take a look in your openssl.cnf and you should see the option serial
 with a path / file specified. The serial number is taken from that
 file. If the file doesn't exist or is empty when the very first
 certificate is created then 01 is used as a serial for it.

That's for 'ca', not for 'req -new -x509'. See my answer.

snip details for 'ca' from Ristic




___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] How to verify a cert chain using Openssl command line?

2015-06-30 Thread David Li
Ben,

I think you are right. My verify test is okay now if I match the
subjectAltName to the nameConstraints defined by the subCA.
Thanks.

David
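
For anyone reproducing this in code rather than on the command line, a minimal C
sketch of the same check that `openssl verify -CAfile caChain.crt clientCert.crt`
performs (paths are placeholders and error handling is stripped down); a
nameConstraints violation shows up here as error 47, X509_V_ERR_PERMITTED_VIOLATION:

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509_vfy.h>

    static int verify_leaf(const char *chain_pem, const char *leaf_pem)
    {
        X509_STORE *store = X509_STORE_new();
        X509_STORE_load_locations(store, chain_pem, NULL);   /* root + subCA */

        FILE *fp = fopen(leaf_pem, "r");
        X509 *leaf = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);

        X509_STORE_CTX *csc = X509_STORE_CTX_new();
        X509_STORE_CTX_init(csc, store, leaf, NULL);

        int ok = X509_verify_cert(csc);                      /* 1 on success */
        if (ok != 1) {
            int err = X509_STORE_CTX_get_error(csc);
            printf("error %d: %s\n", err, X509_verify_cert_error_string(err));
        }

        X509_STORE_CTX_free(csc);
        X509_free(leaf);
        X509_STORE_free(store);
        return ok == 1;
    }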


On Mon, Jun 29, 2015 at 6:23 PM, Ben Humpert b...@an3k.de wrote:
 Yes, because nameConstraints are inherited.

 I don't know exactly where the bug lies but I strongly advise NOT to
 use nameConstraints because while there is a standard nobody has
 implemented full or correctly working support for it. I ran various
 tests some weeks ago and the result was horrible. See
 https://mta.openssl.org/pipermail/openssl-users/2015-May/001387.html
 and https://mta.openssl.org/pipermail/openssl-users/2015-May/001388.html

 2015-06-29 23:58 GMT+02:00 David Li dlipub...@gmail.com:
 The subCA  has nameConstraints in the subCA configuration file:

 [name_constraints]
 permitted;DNS.0 = example.com

 client configuration file has subjectAltName:
 subjectAltName = DNS: www.cs.com

 So is this a mismatch? How come s_client/s_server test was okay?





 On Mon, Jun 29, 2015 at 2:12 PM, Ben Humpert b...@an3k.de wrote:
 Do you use nameConstraints or have specified IP in subjectAltName?
 Because OpenSSL can't handle that correctly.

 2015-06-29 22:51 GMT+02:00 David Li dlipub...@gmail.com:
 Hi,

 As a test, I have created a rootCA, a subCA (signed by the rootCA) and
 a client cert (signed by the subCA). Now I want to use verify,
 s_client and s_server to test them together.

 However I searched and tried a number of times but still unsure about
 the correct syntax format in verify command. This is what I did:

 cat rootCA.crt subCA.crt > caChain.crt

 openssl -verbose -verify -CAflie caChain.crt clientCert.crt

 openssl verify -CAfile caChain.crt client/clientCert.crt
 client/clientCert.crt: C = US, ST = California, O = David's company,
 CN = David's client cert, emailAddress = david...@example.com
 error 47 at 0 depth lookup:permitted subtree violation


 However it seems my s_client and s_server test is OK:

 I created a caChain by cancatenating rootCA and subCA together:

 Server:
 openssl s_server -cert server/serverComb.crt -www -CAfile caChain.crt 
 -verify 3

 where serverComb.crt = cat of serverCert and server key

 Client:
 openssl s_client -CAfile caChina.crt -cert client/clientComb.crt

 where clientComb is  = cat of clientCert and clientKey


 Anyone has any idea why verify command failed?

 Thanks.
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] How to verify a cert chain using Openssl command line?

2015-06-29 Thread David Li
The subCA  has nameConstraints in the subCA configuration file:

[name_constraints]
permitted;DNS.0 = example.com

client configuration file has subjectAltName:
subjectAltName = DNS: www.cs.com

So is this a mismatch? How come s_client/s_server test was okay?





On Mon, Jun 29, 2015 at 2:12 PM, Ben Humpert b...@an3k.de wrote:
 Do you use nameConstraints or have specified IP in subjectAltName?
 Because OpenSSL can't handle that correctly.

 2015-06-29 22:51 GMT+02:00 David Li dlipub...@gmail.com:
 Hi,

 As a test, I have created a rootCA, a subCA (signed by the rootCA) and
 a client cert (signed by the subCA). Now I want to use verify,
 s_client and s_server to test them together.

 However I searched and tried a number of times but still unsure about
 the correct syntax format in verify command. This is what I did:

 cat rootCA.crt subCA.crt > caChain.crt

 openssl -verbose -verify -CAflie caChain.crt clientCert.crt

 openssl verify -CAfile caChain.crt client/clientCert.crt
 client/clientCert.crt: C = US, ST = California, O = David's company,
 CN = David's client cert, emailAddress = david...@example.com
 error 47 at 0 depth lookup:permitted subtree violation


 However it seems my s_client and s_server test is OK:

 I created a caChain by cancatenating rootCA and subCA together:

 Server:
 openssl s_server -cert server/serverComb.crt -www -CAfile caChain.crt 
 -verify 3

 where serverComb.crt = cat of serverCert and server key

 Client:
 openssl s_client -CAfile caChina.crt -cert client/clientComb.crt

 where clientComb is  = cat of clientCert and clientKey


 Anyone has any idea why verify command failed?

 Thanks.
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] FIPS mode and AES_set_encrypt_key/AES_set_decrypt_key

2015-05-12 Thread David Weidenkopf
Can anyone shed light on why these APIs are disabled in FIPS mode? Is it that they
involve operations that must be implemented within the boundary of the FIPS
crypto module? Or is disabling them intended to prevent mistakes by developers
trying to write their own AES mode implementations?


Thanks in advance for any insight...
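
For context, the usual guidance when running with the FIPS module is to go through
the EVP interface rather than the low-level AES_* calls. A minimal, hedged sketch
of that route (key/IV management and error reporting are omitted, and the output
buffer must allow for one extra block of CBC padding):

    #include <openssl/evp.h>

    static int aes256_cbc_encrypt(const unsigned char *key, const unsigned char *iv,
                                  const unsigned char *in, int inlen,
                                  unsigned char *out, int *outlen)
    {
        int len = 0, total = 0, ok = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (ctx == NULL)
            return 0;

        if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv)
            && EVP_EncryptUpdate(ctx, out, &len, in, inlen)) {
            total = len;
            if (EVP_EncryptFinal_ex(ctx, out + total, &len)) {
                *outlen = total + len;
                ok = 1;
            }
        }
        EVP_CIPHER_CTX_free(ctx);
        return ok;
    }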
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous DES-CBC3-SHA

2015-04-07 Thread David Rueter
 You're confusing SSLv3 the protocol, with SSLv3 ciphersuites.
Yes, I admit I am not distinguishing between these.  However, !SSLv3  in the
cipher list does evidently disable the SSLv3 protocol as well--as evidenced
by testing with https://www.ssllabs.com/ssltest

Since I don't have source for the application I can only control OpenSSL's
behavior through the cipher list.  I guess I will have to choose between
leaving SSLv3 enabled and breaking Android and IE on XP users (that require
TLSv1).

From the symptoms, it sure seems like OpenSSL mistakenly uses the string
DES-CBC3-SHA to refer to both TLS and SSL3 (see
https://www.openssl.org/docs/apps/ciphers.html )  Is this really
intentional?  In other words, is the SSLv3 cipher
SSL_RSA_WITH_3DES_EDE_CBC_SHA actually the same as the TLS cipher
TLS_RSA_WITH_DES_CBC_SHA?



-Original Message-
From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of
Viktor Dukhovni
Sent: Monday, April 06, 2015 7:44 PM
To: openssl-users@openssl.org
Subject: Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous
DES-CBC3-SHA

On Mon, Apr 06, 2015 at 05:11:22PM -0700, David Rueter wrote:

 I would like to disable SSL3 (to prevent POODLE attacks), but I would 
 like to leave TLS1 enabled (particularly DES-CBC3-SHA, AES128-SHA and 
 AES256-SHA).

You're confusing SSLv3 the protocol, with SSLv3 ciphersuites.  To disable
the protocol set SSL_OP_NO_SSLv3 via SSL_CTX_set_options().

 Is there no way to disable SSL3 while leaving 
 TLS_RSA_WITH_3DES_EDE_CBC_SHA enabled?

Yes, disable the protocol, not the ciphers.

-- 
Viktor.
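
For code that does own the SSL_CTX, a minimal sketch of what Viktor suggests:
disable the protocol with an option and keep the wanted ciphers in the cipher
string (the function name and cipher list here are illustrative):

    #include <openssl/ssl.h>

    static SSL_CTX *make_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());   /* negotiate best version */
        if (ctx == NULL)
            return NULL;

        /* Turn off the SSLv3 (and SSLv2) *protocol* ... */
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);

        /* ... while still offering the desired cipher suites for TLS. */
        SSL_CTX_set_cipher_list(ctx, "DES-CBC3-SHA:AES128-SHA:AES256-SHA");
        return ctx;
    }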
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous DES-CBC3-SHA

2015-04-07 Thread David Rueter
Got it!  Thanks for the detailed explanation.  I did not realize that the
same ciphers were used by both SSL3 and TLS1.  The behavior now makes all
the sense in the world.

Thanks!

-Original Message-
From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of
Viktor Dukhovni
Sent: Tuesday, April 07, 2015 8:32 AM
To: openssl-users@openssl.org
Subject: Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous
DES-CBC3-SHA

On Tue, Apr 07, 2015 at 08:09:31AM -0700, David Rueter wrote:

  You're confusing SSLv3 the protocol, with SSLv3 ciphersuites.

 Yes, I admit I am not distinguishing between these.  However, !SSLv3  
 in the cipher list does evidently disable the SSLv3 protocol as 
 well--as evidenced by testing with https://www.ssllabs.com/ssltest

When there are no SSLv3 ciphers left, the protocol is not offered, but this
also disables TLSv1 and TLSv1.1 as they use the same set of ciphers.

 From the symptoms, it sure seems like OpenSSL mistakenly uses the 
 string DES-CBC3-SHA to refer to both TLS and SSL3 (see 
 https://www.openssl.org/docs/apps/ciphers.html )

There is no mistake.  The same cipher-suite:

DES-CBC3-SHA    SSLv3 Kx=RSA  Au=RSA  Enc=3DES(168) Mac=SHA1

applies to SSLv3, TLSv1, TLSv1.1 and TLSv1.2.


 intentional?  In other words, is the SSLv3 cipher 
 SSL_RSA_WITH_3DES_EDE_CBC_SHA actually the same as the TLS cipher 
 TLS_RSA_WITH_[3]DES_[EDE_]CBC_SHA?

Yes, they are one and the same (SSL 3.0, TLS 1.0, TLS 1.1, TLS 1.2):

RFC 6101: CipherSuite SSL_RSA_WITH_3DES_EDE_CBC_SHA = { 0x00,0x0A };
RFC 2246: CipherSuite TLS_RSA_WITH_3DES_EDE_CBC_SHA = { 0x00,0x0A };
RFC 4346: CipherSuite TLS_RSA_WITH_3DES_EDE_CBC_SHA = { 0x00,0x0A };
RFC 5246: CipherSuite TLS_RSA_WITH_3DES_EDE_CBC_SHA = { 0x00,0x0A };

As for:

CipherSuite TLS_RSA_WITH_DES_CBC_SHA   = { 0x00,0x09 };

it is not triple DES, it is single-DES, and corresponds (RFC 6101) to:

CipherSuite SSL_RSA_WITH_DES_CBC_SHA   = { 0x00,0x09 };

which OpenSSL calls:

DES-CBC-SHA SSLv3 Kx=RSA  Au=RSA  Enc=DES(56)   Mac=SHA1

-- 
Viktor.
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous DES-CBC3-SHA

2015-04-07 Thread David Rueter
 Is OpenSSL in its own DLL/DLLs?  

 

Yes, the OpenSSL DLLs libeay32.dll and ssleay32.dll are used, and in fact I
have updated them to 1.0.2a

 

Yes, performing my own build of these DLLs is an option, and I may pursue
it.  I just need to get a Windows dev environment set up to build these.

 

 

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of
Jakob Bohm
Sent: Tuesday, April 07, 2015 9:57 AM
To: openssl-users@openssl.org
Subject: Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous
DES-CBC3-SHA

 

On 07/04/2015 17:09, David Rueter wrote:

You're confusing SSLv3 the protocol, with SSLv3 ciphersuites.

Yes, I admit I am not distinguishing between these.  However, !SSLv3  in the
cipher list does evidently disable the SSLv3 protocol as well--as evidenced
by testing with https://www.ssllabs.com/ssltest
 
Since I don't have source for the application I can only control OpenSSL's
behavior through the cypher list.  I guess I will have to choose between
leaving SSLv3 enabled and breaking Android and IE on XP users (that require
TLSv1).

Is OpenSSL in its own DLL/DLLs?  If so, could you simply
recompile OpenSSL (at latest patchlevel) without the SSL3
protocol?

This would also provide all the other security fixes that
have been added to OpenSSL since someone gave you the
program. 




Enjoy
 
Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous DES-CBC3-SHA

2015-04-06 Thread David Rueter
James, thanks for the reply.

 

At this point I am using compiled Windows binaries, and am running a compiled 
Windows application that uses the SSL DLLs.  The Windows application does let 
me specify a cipher list, but I do not have source to that application to 
re-build.

 

I don’t think that in this situation I am able to call SSL_CTX_set_options.

 

I guess I might be stuck if I can’t use the cipher list to disable SSL3 while 
leaving TLS1 enabled.  Not the end of the world, but not ideal.

 

Sincerely,

 

David Rueter

 

 

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of 
James
Sent: Monday, April 06, 2015 6:45 PM
To: openssl-users@openssl.org
Subject: Re: [openssl-users] Disable SSL3 and enable TLS1? / Ambiguous 
DES-CBC3-SHA

 

Hi, 

Can you try this option

SSL_CTX_set_options(m_SslCtx, SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3);

instead of disabling using the cipher list

 

 

 

regards,

James Arivazhagan

 

 

On Tue, Apr 7, 2015 at 5:41 AM, David Rueter drue...@assyst.com wrote:

I would like to disable SSL3 (to prevent POODLE attacks), but I would like to 
leave TLS1 enabled (particularly DES-CBC3-SHA, AES128-SHA and AES256-SHA).

 

However disabling SSL3 with !SSLv3 disables TLSv1 also.  Furthermore, disabling 
SSL3 with -SSLv3 then adding in individual ciphers such as +DES-CBC3-SHA seems 
to re-enable SSLv3.

 

In looking at https://www.openssl.org/docs/apps/ciphers.html it looks like 
SSL_RSA_WITH_3DES_EDE_CBC_SHA and TLS_RSA_WITH_3DES_EDE_CBC_SHA are both 
referred to as DES-CBC3-SHA.

 

Is this intentional? Are not SSL_RSA_WITH_3DES_EDE_CBC_SHA and 
TLS_RSA_WITH_3DES_EDE_CBC_SHA different ciphers?

 

Is there no way to disable SSL3 while leaving TLS_RSA_WITH_3DES_EDE_CBC_SHA 
enabled?

 


___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users

 

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Disable SSL3 and enable TLS1? / Ambiguous DES-CBC3-SHA

2015-04-06 Thread David Rueter
I would like to disable SSL3 (to prevent POODLE attacks), but I would like
to leave TLS1 enabled (particularly DES-CBC3-SHA, AES128-SHA and
AES256-SHA).

 

However disabling SSL3 with !SSLv3 disables TLSv1 also.  Furthermore,
disabling SSL3 with -SSLv3 then adding in individual ciphers such as
+DES-CBC3-SHA seems to re-enable SSLv3.

 

In looking at https://www.openssl.org/docs/apps/ciphers.html it looks like
SSL_RSA_WITH_3DES_EDE_CBC_SHA and TLS_RSA_WITH_3DES_EDE_CBC_SHA are both
referred to as DES-CBC3-SHA.

 

Is this intentional? Are not SSL_RSA_WITH_3DES_EDE_CBC_SHA and
TLS_RSA_WITH_3DES_EDE_CBC_SHA different ciphers?

 

Is there no way to disable SSL3 while leaving TLS_RSA_WITH_3DES_EDE_CBC_SHA
enabled?

 

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] ecc negotiation

2015-04-06 Thread David Rufino
Great, that works, thank you. Is this the default behavior when using the C
API?

Thanks,
David
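
For the C API side of this: s_server's -named_curve default corresponds to
explicitly selecting P-256, and an application using the library makes the
equivalent choice itself. A minimal sketch of the 1.0.2-era calls (the curve
name is illustrative):

    #include <openssl/ssl.h>

    static void configure_ecdhe(SSL_CTX *ctx)
    {
        /* Restrict the curves offered/used for ECDHE. */
        SSL_CTX_set1_curves_list(ctx, "secp224r1");

        /* And let the library pick a shared curve automatically,
         * the rough equivalent of `-named_curve auto`. */
        SSL_CTX_set_ecdh_auto(ctx, 1);
    }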

On Sunday, April 5, 2015, Matt Caswell m...@openssl.org wrote:



 On 05/04/15 23:42, Matt Caswell wrote:
 
 
  On 05/04/15 22:04, David Rufino wrote:
  Hello,
 
  It's possible I'm doing something wrong here, but I can't seem to
  negotiate ecdhe with an elliptic curve other than P-256. To reproduce
  the issue, using openssl 1.0.2
 
  openssl s_server  -key server.key -cert server.crt -msg -debug -dhparam
  dhparam.pem  -cipher ECDHE-RSA-AES128-SHA -tls1_2
 
  gnutls-cli 127.0.0.1 -p 4433 -d 4 --insecure
  --priority=NORMAL:-KX-ALL:+ECDHE-RSA:-CURVE-ALL:+CURVE-SECP224R1
 
  which gives the error
 
  :SSL routines:ssl3_get_client_hello:no shared cipher:s3_srvr.c:1366:
 
  changing to p256r1 succeeds. is there a particular reason why the negotiation
  would fail with p224? my understanding is that openssl supports all the
  nist curves.
 
 
  Try adding -named_curve secp224r1 to your s_server arguments. This
  specifies the curve to use for ECDHE keys. The default if you don't
  specify a named curve is P-256 which is why it works when you are using
  that curve.

 BTW, you can also use -named_curve auto, which will just pick an
 appropriate curve.

 Matt

 ___
 openssl-users mailing list
 To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] ecc negotiation

2015-04-05 Thread David Rufino
Hello,

It's possible I'm doing something wrong here, but I can't seem to negotiate
ecdhe with an elliptic curve other than P-256. To reproduce the issue,
using openssl 1.0.2

openssl s_server  -key server.key -cert server.crt -msg -debug -dhparam
dhparam.pem  -cipher ECDHE-RSA-AES128-SHA -tls1_2

gnutls-cli 127.0.0.1 -p 4433 -d 4 --insecure --priority=NORMAL:-KX-ALL:+ECDHE-RSA:-CURVE-ALL:+CURVE-SECP224R1

which gives the error

:SSL routines:ssl3_get_client_hello:no shared cipher:s3_srvr.c:1366:

changing to p256r1 succeeds. is there a particular reason why the negotiation would
fail with p224? my understanding is that openssl supports all the nist
curves.

Regards,
David
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


version question

2014-11-20 Thread David Flatley

  I am trying to build Openssh 6.7p1 on a Red Hat 5.6 x86_64 system
with Red Hat openssl-0.9.8e-31, which is the latest Red Hat openssl
version. The Openssh build checks openssl versions and requires 0.9.8f.
Is there a workaround for this?
Thanks.

David Flatley

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-24 Thread David Li
I am still a little unclear about what exactly the TLS_FALLBACK_SCSV option would
do.

What if the server only supports SSLv3 + TLSv1 and client only  connects
with SSLv3? Without the patch, both would agree to SSLv3. So this is a
problem.

What happens with the patch only on the server? And what happens with both
server and client patched?



On Fri, Oct 24, 2014 at 9:30 AM, Jakob Bohm jb-open...@wisemo.com wrote:

  On 24/10/2014 18:19, Aditya Kumar wrote:

  Thanks Jakob for correcting my understanding. In short, can I conclude
 the following about FALLBACK flag.

  1. Whenever client is sending the FALLBACK flag in its request, an
 updated Server will interpret it that this client supports a higher version
 but since that higher version protocol request was refused, its trying to
 connect using a lower version protocol.
 2. The FALLBACK flag should only be set to communicate to those extremely
 rare old SSLv3 servers which completely fail to accept a request for (SSLv3
 or TLSv1+, the best client have). In that case, first client should attempt
 to connect with SSLAUTONEGOTIATE and if it fails, then connect with SSLV3
 FALLBACK enabled.

 Much simpler: The FALLBACK flag should be set only to communicate that
 the client has activated its manual fall back code (if any).  If the
 client doesn't contain manual fallback code, it doesn't need to do
 anything.

  3. Points 2 holds true even for the cases where clients connecting using
 TLS 1.2 fail and then client need to connect using TLS 1.1, TLS1.0 or
 SSLV3.0. Then client should attempt the next connections using FALLBACK
 flag set.

 Yes, SSLv3 is just an example, which happens to be important right now
 because of poodle.


  Hope this will clear all the confusions.

  -Aditya

  On Fri, Oct 24, 2014 at 5:35 PM, Jakob Bohm jb-open...@wisemo.com
 wrote:

 On 24/10/2014 13:33, Aditya Kumar wrote:

 Hi All,

 Thanks for your detailed responses, specially Florian Weimer and Matt
 Caswell. For the benefit of everyone and me, I am summarizing the thoughts
 which I have understood through all your replies. Please correct me
 wherever I am wrong.

 To summarize:
1.  Best way to prevent POODLE attack is to disable SSLV3 on both
 client and server side.
 2.  If for some reason, you cannot disable SSLv3 on server side even if
 Server support TLS 1.0 or higher(e.g server having SSLV23 set), Server
 definitely need to be patched to prevent fallback. Once server is patched,
 it will prevent updated clients from fallback attack.
 3.  After server is patched with OpenSSL FALLBACK flag fix, Server’s
 behavior will not change for the clients which do not send FALLBACK flag in
 their clienthello request. Server will continue to work with older client
 as usual. Only if an updated client sends FALLBACK flag into its
 clienthello request, server will be able to prevent fallback.
 4.  If for some reason, client has to keep SSLV3 enable even if it
 supports TLS 1.0 or higher version, client need to patch itself and set
 FALLBACK flag so that it does not come under fallback attack.

  WRONG, See below

 5.  Clients should never set protocol as SSLV23 to support both SSL3.0
 and TLS Servers. Clients should always explicitly first try to connect
 using its highest supported version(TLS1.0 or higher) and if the server
 rejects the connection, then clients should explicitly try to connect using
 next supported lower version protocol.

  WRONG, If client simply calls the SSL23_ (aka SSLAUTONEGOTIATE_) with
  options to allow both SSLv3 and higher TLSvX.XX, it is already secure
  and will never need to send the fallback flag.

 6.  While connecting to server using higher protocol like TLS1 or
 higher, client should set FALLBACK flag so that server do not allow
 automatically downgrade to a lower version protocol.

  WRONG, Client should always try its full range of enabled SSL/TLS
  versions in one attempt, in which case the protocols themselves
  (even without the latest patch) will automatically detect and
  prevent a fallback MiTM attack.

  However if client needs to work around some (extremely rare) old
  SSLv3 servers which completely fail to accept a request for (SSLv3
  or TLSv1+, the best you have), that client may use a workaround of:

  Step 6.1: Attempt to connect with SSLAUTONEGOTIATE_(SSLv3 up to
  TLSv1.2).  Do not set/send FALLBACK flag.

  Step 6.2: If Step 6.1 fails (either because of old broken server or
  because of new fallback MiTM attack), try again with SSLV3ONLY_(),
  and set the FALLBACK flag to tell the server that the maximum
  version specified in this call is not the true maximum version of
  the client (in case it is not an old server, but a MiTM attack
  trying to trick this fallback code).

  Step 6.3: Step 6.2 could be extended to do retries with TLSv1.1,
  then TLSv1.0, then SSLv3 etc. all of which would need the FALLBACK
  flag because the client would actually have wanted TLSv1.2 if it
  could get it.
  **
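
A rough client-side sketch of steps 6.1 and 6.2 above, showing where
SSL_MODE_SEND_FALLBACK_SCSV belongs. This assumes the client owns its SSL_CTX,
that the library was built with SSLv3 support, and that reconnect_to_server()
is a hypothetical helper returning a freshly connected socket; cleanup and
error handling are trimmed:

    #include <openssl/ssl.h>

    /* Hypothetical helper: returns a freshly connected TCP socket. */
    extern int reconnect_to_server(void);

    static SSL *connect_with_manual_fallback(void)
    {
        /* Step 6.1: one attempt over the full SSLv3..TLSv1.2 range;
         * no fallback flag is needed here. */
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, reconnect_to_server());
        if (SSL_connect(ssl) == 1)
            return ssl;                        /* best shared version won */
        SSL_free(ssl);
        SSL_CTX_free(ctx);

        /* Step 6.2: retry capped at SSLv3, and tell the server this is a
         * fallback so a patched server can refuse it if it actually
         * supports something better. */
        ctx = SSL_CTX_new(SSLv3_client_method());
        ssl = SSL_new(ctx);
        SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV);
        SSL_set_fd(ssl, reconnect_to_server());
        if (SSL_connect(ssl) == 1)
            return ssl;
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return NULL;
    }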

 Few questions which still 

Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-24 Thread David Li
On Fri, Oct 24, 2014 at 11:18 AM, Richard Könning 
richard.koenn...@ts.fujitsu.com wrote:

 At 24.10.2014 19:03, David Li wrote:

 I am still a little unclear by what exactly TLS_FALLBACK_SCSV option
 would do.

 What if the server only supports SSLv3 + TLSv1 and client only  connects
 with SSLv3? Without the patch, both would agree to SSLv3. So this is a
 problem.


Well I thought it's the CBC cipher mode used by SSLv3 that has the problem.
So we should avoid SSLv3 altogether.

Maybe this is is my confusion. Will the SSLv3 alone cause the attack? Or
will a downgrade process from TLS 1.0 or later to the SSLv3 expose the
vulnerability?




 Where is the problem? When the client only supports SSLv3 and therefore
 right away starts with SSLv3, then you get an SSLv3 connection as wanted.
 When you don't want any SSLv3 connections, remove the SSLv3 support in the
 server and enhance the client so it speaks also TLSv1 or better.

  What happens with the patch only on the server? And what happens with
 the both server and client patched?


 First question: nothing, because the client can't say to the server that
 the second handshake with SSLv3 is a fallback of a previous handshake
 announcing the availability of TLSv1 or better.

 Second question: When the client starts, due to a MITM attack, a second
 handshake announcing SSLv3 and setting the TLS_FALLBACK_SCSV option, then the
 server knows that the client supports something better than SSLv3 and quits
 the handshake.

 Best regards,
 Richard




 On Fri, Oct 24, 2014 at 9:30 AM, Jakob Bohm jb-open...@wisemo.com wrote:

 On 24/10/2014 18:19, Aditya Kumar wrote:

 Thanks Jakob for correcting my understanding. In short, can I
 conclude the following about FALLBACK flag.

 1. Whenever client is sending the FALLBACK flag in its request, an
 updated Server will interpret it that this client supports a
 higher version but since that higher version protocol request was
 refused, its trying to connect using a lower version protocol.
 2. The FALLBACK flag should only be set to communicate to those
 extremely rare old SSLv3 servers which completely fail to accept a
 request for (SSLv3 or TLSv1+, the best client have). In that case,
 first client should attempt to connect with SSLAUTONEGOTIATE and
 if it fails, then connect with SSLV3 FALLBACK enabled.

 Much simpler: The FALLBACK flag should be set only to communicate that
 the client has activated its manual fall back code (if any).  If the
 client doesn't contain manual fallback code, it doesn't need to do
 anything.

 3. Points 2 holds true even for the cases where clients connecting
 using TLS 1.2 fail and then client need to connect using TLS 1.1,
 TLS1.0 or SSLV3.0. Then client should attempt the next connections
 using FALLBACK flag set.

 Yes, SSLv3 is just an example, which happens to be important right now
 because of poodle.


 Hope this will clear all the confusions.

 -Aditya

 On Fri, Oct 24, 2014 at 5:35 PM, Jakob Bohm jb-open...@wisemo.com wrote:


 On 24/10/2014 13:33, Aditya Kumar wrote:

 Hi All,

 Thanks for your detailed responses, specially Florian
 Weimer and Matt Caswell. For the benefit of everyone and
 me, I am summarizing the thoughts which I have understood
 through all your replies. Please correct me wherever I am
 wrong.

 To summarize:
1.  Best way to prevent POODLE attack is to disable
 SSLV3 on both client and server side.
 2.  If for some reason, you cannot disable SSLv3 on server
 side even if Server support TLS 1.0 or higher(e.g server
 having SSLV23 set), Server definitely need to be patched
 to prevent fallback. Once server is patched, it will
 prevent updated clients from fallback attack.
 3.  After server is patched with OpenSSL FALLBACK flag
 fix, Server’s behavior will not change for the clients
 which do not send FALLBACK flag in their clienthello
 request. Server will continue to work with older client as
 usual. Only if an updated client sends FALLBACK flag into
 its clienthello request, server will be able to prevent
 fallback.
 4.  If for some reason, client has to keep SSLV3 enable
 even if it supports TLS 1.0 or higher version, client need
 to patch itself and set FALLBACK flag so that it does not
 come under fallback attack.

 WRONG, See below

 5.  Clients should never set protocol as SSLV23 to support
 both SSL3.0 and TLS Servers. Clients should always
 explicitly first try to connect using its highest
 supported version(TLS1.0 or higher

Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-24 Thread David Li
On Fri, Oct 24, 2014 at 1:28 PM, Richard Könning 
richard.koenn...@ts.fujitsu.com wrote:

 Am 24.10.2014 20:47, schrieb David Li:



 On Fri, Oct 24, 2014 at 11:18 AM, Richard Könning richard.koenn...@ts.fujitsu.com wrote:

 At 24.10.2014 19:03, David Li wrote:

 I am still a little unclear by what exactly TLS_FALLBACK_SCSV
 option
 would do.

 What if the server only supports SSLv3 + TLSv1 and client only
 connects
 with SSLv3? Without the patch, both would agree to SSLv3. So
 this is a
 problem.


 Well I thought it's the CBC cipher mode used by SSLv3 that has the
 problem. So we should avoid SSLv3 all together.


 Exactly. But when you have a client which speaks only SSLv3 as in your
 example below you have to decide: Don't use the client or enhance it so it
 speaks at least TLSv1 or use SSLv3 despite the problems with SSLv3.

  Maybe this is is my confusion. Will the SSLv3 alone cause the attack? Or
 will a downgrade process from TLS 1.0 or later to the SSLv3 expose the
 vulnerability?


 SSLv3 alone is vulnerable. When you decide that this vulnerability is so
 large that you don't want to use SSLv3 in any case, then life is easy:
 deactivate the usage of SSLv3 in all clients and servers and you do not have
 to think about falling back to SSLv3.

 But when your opinion is that an SSLv3 connection is better than no
 connection, then you may have to fall back to SSLv3 sometimes. The
 TLS_FALLBACK_SCSV helps you to ensure that the fallback is done only when
 SSLv3 is really the highest SSL/TLS protocol shared by client and server.




So is it true that in this case TLS_FALLBACK_SCSV still can't prevent the
attack, since this is a totally legitimate, honest fallback to SSLv3? In
other words, the MITM attacker doesn't have to tamper with the handshake at
all.





 Best regards,
 Richard



 Where is the problem? When the client only supports SSLv3 and
 therefore right away starts with SSLv3, then you get an SSLv3
 connection as wanted. When you don't want any SSLv3 connections,
 remove the SSLv3 support in the server and enhance the client so it
 speaks also TLSv1 or better.

 What happens with the patch only on the server? And what happens
 with
 the both server and client patched?


 First question: nothing, because the client can't say to the server
 that the second handshake with SSLv3 is a fallback of a previous
 handshake announcing the availability of TLSv1 or better.

 Second question: When the client starts due to a MITM attack a
 second handshake announcing SSLv3 and setting TLS_FALLBACK_SCSV
 option than the server knows that the client supports something
 better than SSLv3 and quits the handshake.

 Best regards,
 Richard




  On Fri, Oct 24, 2014 at 9:30 AM, Jakob Bohm jb-open...@wisemo.com wrote:

  On 24/10/2014 18:19, Aditya Kumar wrote:

  Thanks Jakob for correcting my understanding. In short,
 can I
  conclude the following about FALLBACK flag.

  1. Whenever client is sending the FALLBACK flag in its
 request, an
  updated Server will interpret it that this client
 supports a
  higher version but since that higher version protocol
 request was
  refused, its trying to connect using a lower version
 protocol.
  2. The FALLBACK flag should only be set to communicate
 to those
  extremely rare old SSLv3 servers which completely fail
 to accept a
  request for (SSLv3 or TLSv1+, the best client have). In
 that case,
  first client should attempt to connect with
 SSLAUTONEGOTIATE and
  if it fails, then connect with SSLV3 FALLBACK enabled.

  Much simpler: The FALLBACK flag should be set only to
 communicate that
  the client has activated its manual fall back code (if
 any).  If the
  client doesn't contain manual fallback code, it doesn't
 need to do
  anything.

  3. Points 2 holds true even for the cases where clients
 connecting
  using TLS 1.2 fail and then client need to connect
 using TLS 1.1,
  TLS1.0 or SSLV3.0. Then client should attempt the next
 connections
  using FALLBACK flag set.

  Yes, SSLv3 is just an example, which happens to be
 important right now
  because of poodle.


  Hope this will clear all the confusions.

  -Aditya

  On Fri, Oct 24
