Re: [openssl-users] base64 decode in C

2015-03-18 Thread Jakob Bohm

Please refer to Dave Thompson's answer, it describes your problem.
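
For reference: since openssl enc -base64 -d -A decodes it (the -A flag
means single-line input), the C-side equivalent is the base64 BIO's
no-newline flag.  A minimal sketch, assuming that is indeed the issue
Dave described (the function and buffer names below are mine):

    #include <openssl/bio.h>
    #include <openssl/evp.h>
    #include <string.h>

    static int b64_decode_oneline(const char *in, unsigned char *out, int outlen)
    {
        BIO *b64 = BIO_new(BIO_f_base64());
        BIO *mem = BIO_new_mem_buf((void *)in, (int)strlen(in));
        int n;

        /* Without this flag the BIO expects newline-broken input and
         * returns 0 bytes for a single-line SSH pubkey body. */
        BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);
        mem = BIO_push(b64, mem);
        n = BIO_read(mem, out, outlen);
        BIO_free_all(mem);
        return n;   /* decoded byte count, or <= 0 on error */
    }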

On 18/03/2015 16:08, Prashant Bapat wrote:

Hi Dave and Walter,

Thanks for your reply.

I'm not doing anything different for the ssh pubkey. I'm able to 
decode it using the openssl enc -base64 -d -A command. But not using 
the C program.


Attaching my entire code here. After base64-decoding I'm 
calculating the MD5 sum and printing it. This works for a regular 
string but not for an SSH pubkey.


Thanks again.

--Prashant


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] question about resigning a certificate

2015-03-17 Thread Jakob Bohm

On 16/03/2015 02:46, Alex Samad - Yieldbroker wrote:

Hi

I had a sha1 signed CA and I issued other identity and CA certificates from 
this CA.

With the deprecation of sha1 coming, I resigned my original CA (self signed) as 
sha512, with the same creation and expiry dates. I believe the only thing 
changed was the signature and serial number.

But when I go to verify older certs that were signed by the original CA (the 
sha1 signed one), they are no longer valid.

I thought if I used the same private and public key I should be okay. I thought 
the only relevant issue was the issuer field and that the CA keys were the 
same. Was I wrong?

Alex

Run: openssl x509 -noout -text -in OneOfYourIssuedCerts.pem | more

Look at what aspects of your CA are mentioned.  For example,
does it include the X509v3 Authority Key Identifier
extension, and if so, which fields from the CA cert are
included?
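
As an illustration (with hypothetical values), an Authority Key
Identifier that pins more than the key looks like this in that output:

    X509v3 Authority Key Identifier:
        keyid:DE:AD:BE:EF:...
        DirName:/C=US/O=Example CA
        serial:0A:1B:2C

If the serial field is present, verification is bound to the old CA
certificate's serial number, which is exactly what changed when you
re-signed.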


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] SSL_ERROR_WANT_READ, SSL_ERROR_WANT_WRITE

2015-03-11 Thread Jakob Bohm

On 10/03/2015 20:10, Serj Rakitov wrote:

Hi, Jakob. Thanks for reply.

Now I have looked at the OpenSSL code and some things have become clear to me.

WANT_READ/WANT_WRITE is just an implementation of WOULDBLOCK: a non-fatal 
error for non-blocking IO. So, for example, for a socket on Windows it's just 
WSAEWOULDBLOCK returned by WSAGetLastError, handled by 
BIO_sock_should_retry/BIO_sock_non_fatal_error in sock_read/sock_write.

There was some confusion for me because I forgot that SSL_read/SSL_write 
can perform a handshake if it didn't happen before. This is the key, because once 
the handshake has taken place, SSL_write will never want to read (to my mind), 
because it just performs a writesocket (send) operation.

But rehandshaking (renegotiation) still confuses me... I don't know 
why there is such silence about it here and on the net!

I have read Eric Rescorla's old (January 10, 2002) article, and there he talks 
about rehandshaking on the server and on the client, so it is possible with 
OpenSSL; but maybe in newer versions of OpenSSL it is not possible?

Jakob, can you tell me: is it possible to renegotiate a connection in OpenSSL? 
And if yes how to do it right?


There is lots of mention of renegotiation (what you call
rehandshaking) in the OpenSSL documents and discussions,
so I am reasonably sure it can be done.  It also seems
there are secure and insecure ways to do it.  I don't
know the details though.
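
The classic entry points appear to be these; a sketch only, since as
said I have not verified the details or the secure-renegotiation
requirements:

    SSL_renegotiate(ssl);     /* queue a HelloRequest / ClientHello */
    SSL_do_handshake(ssl);    /* drive the new handshake */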

This implies that the general rule, that applications
using non-blocking sockets must always handle the
possibility of WANT_READ/WANT_WRITE, might be
invoked by renegotiation scenarios at any time.  Because
the rule says "at any time", there is no specific
discussion of applying it at specific times
(such as during renegotiation).
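
A minimal sketch of that rule, assuming a non-blocking socket fd and
abbreviated error handling:

    int n;
    fd_set fds;
    for (;;) {
        n = SSL_write(ssl, buf, len);
        if (n > 0)
            break;                        /* done */
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:         /* e.g. mid-renegotiation */
            select(fd + 1, &fds, NULL, NULL, NULL);
            break;
        case SSL_ERROR_WANT_WRITE:
            select(fd + 1, NULL, &fds, NULL, NULL);
            break;
        default:
            return -1;                    /* fatal: see ERR_get_error() */
        }
    }

The same pattern applies to SSL_read with the directions reversed.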



10.03.2015, 19:06, Jakob Bohm jb-open...@wisemo.com:

Not having tested or read the relevant OpenSSL code, I
presume that SSL_write could want a read if it has sent
a handshake message, but not yet received the reply, thus
it cannot (encrypt and) send user data until it has
received and acted on the handshake reply message.

Maybe the easier scenarios are at the start of a session,
where the initial handshake has not yet completed, as
happens in a HTTPS client (always writes a request before
the first read) or a simple SMTPS server (always writes a
banner line before the first read of client commands,
except in some servers that do an early read to check if
a broken/spammer client is trying to send before receiving
the banner).

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Getting info on the ciphers supported by a client

2015-03-10 Thread Jakob Bohm

On 09/03/2015 14:13, Waldin wrote:

Am 08.03.2015 um 09:14 schrieb Waldin:


Now, I also want to check ciphers enabled in (mobile) mail clients.
I've tried to make OpenSSL listen on port 110 (for POP with TLS) and
redirected the client to the OpenSSL server.  But when trying to pull
mail I can't see any handshake information:

FTR, I've now managed to check my mobile mail client.  The hint was the
argument to the s_client command's -starttls option, which made me
realize that for handshaking with starttls a minimum understanding of
the protocol is needed.  OpenSSL probably doesn't include a POP or IMAP
server.  But it works without starttls, when listening on port 993:


openssl s_server -cert public.pem -key ca-key.pem -accept 993

Enter pass phrase for ca-key.pem:
Loading 'screen' into random state - done
Using default temp DH parameters
ACCEPT
-BEGIN SSL SESSION PARAMETERS-
MFUCAQECAgMBBAIAOQQABDAM5TDoa/9vlS6pUsqtlPWpgpMc1L7bvwCS5UGiIhut
13A4uf0Zm8T2/q3ULkxnkPKhBgIEVP2ataIEAgIBLKQGBAQB
-END SSL SESSION PARAMETERS-
Shared ciphers:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:IDEA-CBC-SHA:RC4-SHA
CIPHER is DHE-RSA-AES256-SHA
Secure Renegotiation IS NOT supported
~A1 LOGIN MYLOGIN MYPASSWORD
ERROR
shutting down SSL
CONNECTION CLOSED
ACCEPT

Hurray!  But wait, a plain text password?  And no server certificate
pinning?  Oh, no!

Yep, that is essentially what the e-mail protocols allow in
most real-world scenarios.  Note however that the password
is SSL/TLS encrypted, which is why some mail clients and
servers are quite insistent on having that enabled.  For
example, in the usual configuration of the e-mail servers
recommended by the Debian distribution (exim SMTP and courier
POP3/IMAP), the server does not even ask for a password until
SSL/TLS is active; the only thing a client can do in plaintext
is to ask for STARTTLS, or deliver remote mail (which obviously
doesn't require a password).

But at least the client should refuse if the certificate does
not match the DNS name or IP address it was trying to contact
(not to be confused with whatever name the server returns in
protocol messages such as the SMTP banner).
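
In OpenSSL 1.0.2 and later that check is available directly; a minimal
sketch, assuming a 1.0.2 build (cert and the host name are placeholders):

    #include <openssl/x509v3.h>

    /* 1 = name matches, 0 = no match, negative = internal error */
    int ok = X509_check_host(cert, "mail.example.com", 0, 0, NULL);

Older clients have to extract and compare the subjectAltName/CN
entries themselves.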

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] SSL_ERROR_WANT_READ, SSL_ERROR_WANT_WRITE

2015-03-10 Thread Jakob Bohm

On 09/03/2015 13:21, Serj Rakitov wrote:

I have to open discussion again.

I want to test the situations where SSL_read gives WANT_WRITE and SSL_write 
gives WANT_READ. But I can't do this. SSL_read never wants write and SSL_write 
never wants read!

I don't know how to catch these situations. I don't know how to rehandshake. I 
tried, after connect and handshake, to send data simultaneously both to the 
server and to the client, and never got one of those situations: SSL_read only 
wanted to read and SSL_write only wanted to write, and all data was received 
by both client and server.

I don't even understand how SSL_write can want to read? In what cases?
I can understand when SSL_read wants to write, for example when the client got 
a HelloRequest or the server got a new ClientHello while reading data. But I 
can't test it, because I don't know how to start the handshake again, how to 
perform a rehandshake (renegotiation).

Can anybody help me? How to test these situations or how to perform a 
rehandshake?

Not having tested or read the relevant OpenSSL code, I
presume that SSL_write could want a read if it has sent
a handshake message, but not yet received the reply, thus
it cannot (encrypt and) send user data until it has
received and acted on the handshake reply message.

Maybe the easier scenarios are at the start of a session,
where the initial handshake has not yet completed, as
happens in a HTTPS client (always writes a request before
the first read) or a simple SMTPS server (always writes a
banner line before the first read of client commands,
except in some servers that do an early read to check if
a broken/spammer client is trying to send before receiving
the banner).

--

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] End of the line for the OpenSSL FIPS Object Module?

2015-02-26 Thread Jakob Bohm

I think it was clear enough:

NIST/NSA/CMVP is demanding that OpenSSL change the
definition of *already* validated platforms before they
will allow OpenSSL to add new platforms.

But changing those definitions would invalidate existing
government contracts for delivery of products that used
the OpenSSL FIPS module on those platforms, so the users
that actually need the FIPS validation will be hurt
either way.

For example, if company X has an existing contract where
they promise that the foo system they delivered to some
US Government agency uses only crypto code which was
validated for "Red Hat Enterprise Linux 7.0 (x86_64)
running on VmWare ESX", then if OpenSSL obeys the
demand to change the definition to read "Red Hat
Enterprise Linux 7.0 (x86_64) running on VmWare ESX
5.1", then company X would suddenly be unable to
fulfill their contract, which may even be a criminal
offense.  But if OpenSSL refuses to change the
definition, OpenSSL cannot deliver to company X a
new validation for "Apple OS/X 10.8 (x86_64) running
on raw Apple hardware", so company X cannot get a new
government contract to deliver for that platform, and
neither can anybody else.

So currently, OpenSSL's realistic options are:

A. Wait for someone to convince the US Government to
  drop this retroactive requirement.
B. Start over with a new validation for a new FIPS
  module version 3, which would have to be modified
  to meet other new demands which didn't exist when
  FIPS module version 2 was originally submitted.
  This would involve 2 variants of the FIPS interface
  code in OpenSSL 1.0.3, lots of new code and a very,
  very expensive bill to get the code validated all
  over again.

On 27/02/2015 03:24, Jeffrey Walton wrote:

Hi Steve,

I read the 'The FIPS 140-2 Hostage Issue' page. It's not clear to me
what the problem is, or how OpenSSL is a hostage.

I was looking under 'The New Requirement' for a statement of the
problem (assuming the new requirement is causing the problem), but it's
escaping me (forgive my ignorance). I think the 'The New Requirement'
section is bogged down with some background information, which is
masking out the statement being made about the problem.

If it's "... the change that is being demanded is that we supply
explicit version numbers for the hypervisor based platforms, so for
instance an existing platform ...", then why is that a problem?

How is virtualization a problem? (I know real problems exist in
virtualized environments, so PRNGs can suffer. We had one appliance
vendor tell us to do the link /dev/random to /dev/urandom trick
(sigh...)).

Can't that be worked around by having vendors provide real iron? (Most
validated platforms appear to be real iron, so it seems nothing has
changed to me).

Is it a problem on mobile platforms?

How is it holding OpenSSL hostage?

Can you provide the executive summary here?

Jeff

On Wed, Feb 25, 2015 at 9:08 AM, Steve Marquess marqu...@openssl.com wrote:

As always, if you don't know or care what FIPS 140-2 is count yourself
very, very lucky and move on.

The open source based OpenSSL FIPS module validations now date back over
a decade, a period during which we've encountered many challenges.
We have recently hit an issue that is apparently inconsequential on its
face, but which threatens to bring an end to the era of the open source
validated module. This is a situation that reminds me of the old "for
want of a nail..." ditty (https://en.wikipedia.org/wiki/For_Want_of_a_Nail).

Tedious details can be found here:

   http://openssl.com/fips/hostage.html

The short take is that for now at least the OpenSSL FIPS Object Module
v2.0, certificate #1747, can no longer be updated to include new
platforms. This development also wrecks the already marginal economics
of tentative plans for a new open source based validation to succeed the
current #1747. So, the #1747 validation may be the last of the
collaborative open source FIPS modules.

If you are a stakeholder currently using the OpenSSL FIPS module, or
with a desire to use it or successor modules (either directly or as the
basis for a private label validation), this is the time to speak up.
Feel free to contact me directly for specific suggestions or to
coordinate with other stakeholders.

-Steve M.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] [openssl-dev] Proposed cipher changes for post-1.0.2

2015-02-11 Thread Jakob Bohm

On 11/02/2015 16:46, Salz, Rich wrote:

I agree with Viktor. His suggestion (keep RC4 in MEDIUM, suppress it
explicitly in DEFAULT) is a good one that maintains important backward
compatibility while providing the desired removal of RC4 by default. There's
no advantage to moving RC4 to LOW.

Sure there is:  it's an accurate description of the quality of protection 
provided by the algorithm. :)

Is the security level (i.e. log2 of the cost of the best
known direct attack) 40 bits (the historical EXPORT label), 56
bits (the historical LOW label), 112 to 127 bits (the historical
MEDIUM label), or somewhere in between?

This is the real question that should guide its
classification, not if it is lower than what is currently
recommended.

Given the currently available ciphers, it may make sense to
add new groups: HIGH192, HIGH256 and larger ones still. As
time progresses, the default can then move from HIGH to
HIGH192, to HIGH256 as the state of the art changes,
without redefining semantics.
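
(To see what any such keyword currently expands to, one can run, for
example:

    openssl ciphers -v 'HIGH'

which lists each matching suite with its protocol, key exchange,
authentication and encryption strength.)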

Furthermore, the various attacks on SSL3/TLS1.0 padding and
IV logic create an ongoing need to have a widely supported,
TLS1.0 compatible stream-or-otherwise-unpadded cipher choice
as an emergency fallback as other protections are being
rolled out by all kinds of vendors.

For example RC4 is immune to POODLE as well as a previous
widely publicized attack, simply because it uses neither the
flawed SSL3 padding nor the sometimes problematic TLS1.0 IV
selection.  No other cipher provides this protection when
talking to older peers that cannot or will not upgrade to
the latest-and-greatest TLS standards.


It's also compatible with our documentation, which as was pointed out, always uses the 
word "currently" to describe the magic keywords.

And it's also planned for the next version which won't be available until near 
the end of the year.

And it's also compliant with the expected publication of the IETF RFC's that 
talk about TLS configuration and attacks.

Postfix can lay the groundwork to be future-compliant by changing its 
default configuration to be HIGH:MEDIUM:RC4.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


[openssl-users] Changelog inconsistency between 1.0.1l and 1.0.2

2015-02-11 Thread Jakob Bohm

The changelog (file CHANGES) in the 1.0.2 tarball contains
some confusing differences from the one in 1.0.1l.

Specifically:

The 1.0.2 changelog seems to indicate that a few bugs that
were fixed in the 1.0.1 branch were not fixed in the 1.0.2
branch (dtls1_get_record segmentation fault,
dtls1_buffer_record memory leak, no-ssl3 NULL issue,
unverified DH client certificates, BN_sqr bug).

The 1.0.2 changelog also contains a spurious copy of an
incomplete draft of the 1.0.1j changes, sandwiched between
1.0.1h and 1.0.1i.

Please clarify if the missing entries are actually fixed
in 1.0.2 anyway.

Also please clean up any differences that are just typos
before the future 1.0.2a release.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] The evolution of the 'master' branch

2015-02-09 Thread Jakob Bohm

On 07/02/2015 12:12, Michael Felt wrote:
From someone who does NOT understand the ins and outs of what people 
(developers and users) have been using OpenSSL for.
My first reaction is: have developers been using OpenSSL, or has it 
gone over to abusing it?
For the sake of argument, let's say it is just used as it has always been 
intended.



Fundamentally, since its inception by EAY years ago, OpenSSL
has provided two things to other software developers: A very
popular implementation of the SSL protocols defined by
Netscape/Mozilla/IETF, and an equally popular library of
fundamental cryptographic building blocks such as large
numbers and various types of encryption and decryption.

My criticism of the OpenSSL changes in the future version
1.1.0 is that they are removing the most flexible building
blocks from the part that is intended to be used.

Many technologies - especially those related to security - whether it be a 
big log through 'something', to skeleton keys, to digital keys, etc. - 
we want to be able to trust our locks. When the lock technology is no 
longer trustworthy - whether it be packaging (which is what the 
discussion sounds like atm) or unrepairable concerns with the 
technology as-is - we change our locks.



2014 saw some widely published problems with various SSL
variants.

  Heartbleed was a programming error found *only* in
the OpenSSL SSL code and did not affect the handful of
competing SSL implementations (such as the NSS one by
Mozilla and the SChannel one by Microsoft).  Essentially,
heartbleed allowed people to put a hook through the
keyhole and steal the key from behind the locked door.

  Poodle was a new way to attack a known weakness in
the old version 3.0 of the SSL protocol, affecting all
implementations, combined with a weakness in how Web
Browsers work around bad SSL libraries that refuse to
even reply to requests for protocol version 3.1 (TLS
1.0).  On top of that, it turned out that a few minor
competing SSL implementations (not OpenSSL, NSS and
SChannel) never implemented the TLS 1.0 protection
against the known weakness, leading to a rumor that
poodle affected all TLS 1.0 implementations, and
not just the few broken ones.

Not everyone changes locks at the same moment in time. Urgency depends 
on need, i.e., what is at risk.


I started following these discussions because I am concerned (remember 
I am not really interested in the inner workings). I just think my 
locks are broken and am wondering if it is time to change to something 
that maybe can do less - but what it does, it does better than what 
I have now.


Regardless of the choices made by openssl - people outside openssl 
have needs and are looking at alternatives. To someone like me it is 
obvious something must change - even if technically it is cosmetic - 
because (open)SSL is losing the trust of its users.


As a user - I need an alternative. And just as I stopped using 
telnet/ftp/rsh/etc. - because I could not entrust the integrity of my 
systems when those doors were open - so are my concerns re: (open)SSL. 
In short, is SSL still secure? And, very simply, as an 
un-knowledgeable user - given the choice of a library that does 
something well - and that's it - versus something else that does that 
but leaves room for 'experiments' - not on my systems. Experiment in 
experiment-land.


My two bits.

On Fri, Feb 6, 2015 at 9:59 PM, Matt Caswell m...@openssl.org wrote:




On 06/02/15 16:03, Jakob Bohm wrote:
 I believe you have made the mistake of discussing only amongst
 yourselves, thus gradually convincing each other of the
 righteousness of a flawed decision.


...and, Rich said in a previous email (in response to your comment):
 I fear that this is an indication that you will be killing
 off all the other non-EVP entrypoints in libcrypto

 Yes there is a good chance of that happening.

I'd like to stress that there has been no decision. In fact we're not
even close to a decision on that at the moment.

Whilst this has certainly been discussed I don't believe we are near to
a consensus view at the moment. So whilst there is a good chance of that
happening there's also a very good chance of it not. It is still
under discussion.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] How to load local certificate folder on windows

2015-02-09 Thread Jakob Bohm

On 06/02/2015 20:19, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf
Of Dave Thompson
Sent: Friday, February 06, 2015 12:04

* Windows beginning (as I recall) XP or maybe NT does support links on NTFS,
but they're not easy to use and not well known, and I think I saw a recent
bug report that they don't even work for OpenSSL, at least sometimes.

In modern versions of Windows, NTFS supports three sorts of link-like objects: 
file symbolic links, directory symbolic links, and junctions, all of which are 
types of reparse points. Older versions of NTFS only support junctions. These 
can be created with the mklink command. Prior to Vista, there was no command in 
the base OS for this purpose, and you needed something like linkd from the 
Windows Server Resource Kit to manipulate links.
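
Concretely, from an elevated Command Prompt on Vista or later (all
paths here are hypothetical):

    mklink C:\certs\link.pem C:\real\target.pem     file symbolic link
    mklink /D C:\certs C:\real\certs                directory symbolic link
    mklink /J C:\certsjunc C:\real\certs            junction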

Actually, there are a 4th and a 5th form of NTFS native
symbolic links: POSIX subsystem symbolic links, which
have all the expected semantics but may not work with
Win32 programs such as OpenSSL; and DFS junctions, which
have special semantics for the SMB/CIFS file sharing
protocol.
I just did a bit of testing with openssl.exe from OpenSSL 1.0.1k. It appears to 
work correctly with all three.

Windows also supports shortcuts, but those are a Windows Explorer artifact. 
They're just files that have a particular extension and particular sort of contents. 
OpenSSL doesn't support them, but then neither do most programs. Shortcuts were invented 
for Windows 95 to overcome some of the limitations of the FAT32 filesystem. They're 
rubbish.

Actually, shortcuts are really desktop/start menu entries,
which store a command line, startup directory, menu icon
and launch options.  They work like the .desktop files
in modern Linux desktop environments and should never have
been confused with symbolic links.  They were created as
a more flexible replacement for the Windows 3.x program
manager icon group files.

And Cygwin provides both hard and symbolic UNIX-style links for NTFS. Hard 
links can only be to files. I'm not sure how Cygwin implements them, but they 
seem to work fine with OpenSSL.

All versions of NTFS support hard links natively, though
there is no command in the base OS to create them, and
prior to Windows 2000, they could only be created via
an undocumented API and/or by using the POSIX subsystem
(which did include a working ln command though).  When
you run chkdsk (fsck) on an NTFS file system, you will see
inodes referred to as "Files" in the Master File Table
and directories as "Indexes".

Cygwin supports multiple implementations of symbolic links; see 
https://cygwin.com/cygwin-ug-net/using.html#pathnames-symlinks. Default 
symbolic links are ordinary files recognized by the Cygwin library as special, 
so they aren't handled by (non-Cygwin) OpenSSL. Shortcut-style symlinks are 
shortcuts, so per above they do not work. Native symlinks are Windows symlinks 
and should work fine with OpenSSL. The native implementation can be selected by 
setting the CYGWIN environment variable appropriately, so (contrary to recent 
messages on the list) there's no reason to rewrite c_rehash for use on Windows.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] custom name attributes not sent with certificate

2015-02-06 Thread Jakob Bohm

On 06/02/2015 00:21, Florence, Jacques wrote:


I created a client certificate with custom name attributes:

In the openssl.cnf file, I added under section [ new_oids ] the line: 
myattribute=1.2.3.4


And under [ req_distinguished_name ] I added the line: myattribute = hello

If I use the openssl tool x509, I see that my new attribute appears in 
the name, after the email address.


However, when the certificate is sent to the server, the server cannot 
read this attribute.


I used wireshark and saw that my custom attribute is not sent with 
the rest of the name.


Why is that ?



Are you sure this is what is really happening?

If any byte in the signed part of the certificate (and this
most certainly includes the name) is changed, the certificate
should completely fail to verify.

So are you sure the name isn't sent?  Maybe it is just the
utility you use to display the sent certificate which fails
to display unknown name components.

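One way to check, assuming you can export the certificate Wireshark
captured to a file (here hypothetically sent_cert.pem):

    openssl asn1parse -in sent_cert.pem

This dumps every DER element, including name components that a display
utility might not recognize.
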
P.S.

I presume that for any real use, you would use an officially
allocated OID to avoid clashing with what other people use.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] using openssl to create PKCS#7/CMS on windows

2015-02-06 Thread Jakob Bohm

On 05/02/2015 14:30, Srinivas Rao wrote:

Hi All,

Is there a way to use openssl to sign data using a private key (on USB
token) and produce PKCS7 output on win32, if:

a) the data to be signed has not been touched yet and goes as input
to the solution to this problem, OR

b) the signature is already generated, i.e. the message is hashed and signed,
and only needs to be encoded in PKCS7.

If yes, for which of the above cases, and how (please give some pointers
on how to go about it)?

Thanks
Srinivas


Are you trying to get us to help with a school assignment?
This looks a lot like how a teacher would ask a question to
his students to find out how much they have understood
themselves.

That said, the main pointers I can give you are these:

Very little in OpenSSL differs between Win32 and other
systems.  However there is one part in the question that
will usually be slightly different on Win32.  If you
understand the question and OpenSSL general features, you
should be able to recognize which part that is.

One of the alternatives is going to be more difficult than
the other because it is a less common task, but it may still
be doable with some ingenuity.

The task (either one if both are doable) can be performed
using almost no APIs and interfaces other than those
provided by OpenSSL and ANSI C.  If you are tempted to use
other tools, look closer at the OpenSSL feature lists and
available options.

In your code below you forgot to use two of the items your
teacher gave you, which is probably the problem.


On 1/30/15, Srinivas Rao srir...@gmail.com wrote:

All,

Please let me know if my below mentioned usage of PKCS7_sign()+adding
signer info is wrong and how.

Really appreciate your response.

cheers and regards
Srinivas

On 1/29/15, Srinivas Rao srir...@gmail.com wrote:

OpenSSL experts,

Here the intention is to get the signed data (raw signature obtained
by PKCS11 APIs like C_Sign) to be packed in PKCS7 format (attached -
with certificate, content and signer info) using openssl.

I am using USB token (smart card) for signing.

Here's the code snippet.

PKCS7* p7 = PKCS7_new();
PKCS7_set_type(p7, NID_pkcs7_signed);
//PKCS7_SIGNER_INFO* pSI = PKCS7_SIGNER_INFO_new();
//PKCS7_SIGNER_INFO_set(pSI, pX509, pX509->cert_info->key->pkey,
EVP_sha256());
//PKCS7_add_signer(p7, pSI);
PKCS7_SIGNER_INFO* pSI = PKCS7_add_signature(p7, pX509,
pX509->cert_info->key->pkey, EVP_sha256());  // <== core dumps here
 :
 :
where pX509 is a correctly obtained X509* node, using d2i_X509() on the
value obtained from PKCS11 functions like C_GetAttributeValue() etc.

I believe the set of commented lines is the alternate way to call this
add-signature function - that one also dumps core, in the
PKCS7_SIGNER_INFO_set() function.

I have no clue as to what am I doing wrong here.

Appreciate your help.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] The evolution of the 'master' branch

2015-02-06 Thread Jakob Bohm

On 05/02/2015 00:42, Salz, Rich wrote:

Not much on that page so far, not even a kill list of
intended victims except an admission that EAY's popular DES
library can no longer be accessed via the copy in OpenSSL.

Yup.  Pretty empty.  Over the coming year there will be more.

I fear that this is an indication that you will be killing
off all the other non-EVP entrypoints in libcrypto

Yes there is a good chance of that happening.

, making
it much harder to use the library with experimental or
non-standard algorithms and methods.

Well, not really much harder.  I think that what DOES get harder is binary 
distributions of such things, as opposed to custom OpenSSL builds that have 
these new features.  Don't forget, *you have the source*  Hack away.  Do what 
you want.  And having a standard API that any of your consumers use will 
benefit your consumers greatly.  And by making things like the EVP structure 
hidden from your consumers, then you can add a new experimental crypto 
mechanism and -- this is an important major benefit:  THEIR CODE DOES NOT HAVE 
TO RECOMPILE.   As an implementor, your work is a bit harder.  For your users, 
it is much easier.  Imagine being able to put an OID in a config file and 
applications can almost transparently use any crypto available without change.  
(Emphasis on ALMOST :)  To us, this is clearly the right trade-off to make.

You seem to misunderstand the scenario:

My scenario is:

1. Load an unchanged shared libcrypto.so.1.1 provided by an
  OS distribution.

2. Implement / use / experiment with non-standard methods
  (such as new modes of operation or new zero-knowledge
  proofs) by calling the functions that are exported by
  libcrypto out of the box.  The higher level libssl is
  not used for anything.

Your scenario is:

1. Extend the higher level stuff (TLS1.2, CMS etc.) with non-
  standard methods (such as new modes of operation or new
  signature forms).

2. Inject this new functionality into existing applications
  that use OpenSSL in generic ways, such as wget and WebKit
  browsers.

My scenario got great advantages from the large number of
fundamental interfaces exported from libcrypto.so.1.0 and
will automatically benefit when a new patch release of
OpenSSL is installed on the system (e.g. to fix a timing
attack on one of the algorithm implementations).  Having
to create a custom libnotquitecrypto.so.1.1 would not do
this, and distributing such a library as source patches
became much harder when you reformatted the source.

Looking at the reverse dependencies in Debian, the
following projects probably use low level libcrypto
interfaces to do interesting things: afflib, asterisk,
bind, clamav, crda (WiFi), crtmpserver, encfs, ewf-tools,
faifa (Ethernet over Power), gfsd, gnugk, gnupg-pkcs11-scd,
hfsprogs, hostapd (WiFi), ipppd, KAME IPSEC, OpenBSD IPSEC,
ldnsutils, apache mod-auth-cas, apache mod-auth-openid,
perl's Crypt::OpenSSL::Bignum, libh323, liblasso, barada,
StrongSWAN, unbound, libxmlsec, libzrtpcpp, nsd, opensc,
openssh, rdd, rdesktop, rsyncrypto, samdump, tor,
tpm-tools, trousers, wpasupplicant (WiFi), yate, zfs-fuse.
*This is only a rough list based on features claimed by
OpenSSL-depending packages*


Should everyone not doing just TLS1.2 move to a different library now, such as 
crypto++ ?

Use whatever works best for you, and your consumers/customers.

Not everyone will agree with this direction. Some people will be inconvenienced 
and maybe even completely drop using OpenSSL. We discussed this pretty 
thoroughly, and while we respect that some may disagree, please give us credit 
for not doing this arbitrarily, or on a whim. 

I believe you have made the mistake of discussing only amongst
yourselves, thus gradually convincing each other of the
righteousness of a flawed decision.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Handle Leaks - shmem-win32.c shmem.c - OpenSSL 1.0.1l

2015-02-04 Thread Jakob Bohm

Following up on this somewhat old thread just to correct some
misunderstandings about the nature of the Windows APIs.

On 25/01/2015 19:49, Michel SALES wrote:

Hi Avery,


In the code I sent over before, I was calling CloseHandle on the thread:
HANDLE h1=CreateThread(0,0,thread1,0,0,&t1);  if(h1==0) { return

0; } CloseHandle(h1);

Yes, but you were trying to close the handle of a thread which was still
running!
I have not checked what happens in this case.

Just FYI: On Windows (unlike the fork-like semantics of POSIX
pthreads), the handle to a thread is just one of N references
to the thread object, another of which is the actual fact of
the thread still running.  As long as at least one such
reference exists, the kernel resources associated with the
thread (an ETHREAD structure, the thread identifier etc.)
remain in use.  The act of waiting for and/or closing the
handle has no direct relationship to thread exit.  So closing
an unwanted thread handle right after thread creation is a
normal and correct resource saving coding technique.
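
A minimal sketch of that pattern (worker is a placeholder thread
function):

    #include <windows.h>

    DWORD WINAPI worker(LPVOID arg) { return 0; }

    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, &tid);
    if (h != NULL)
        CloseHandle(h);   /* drops only our reference; the running
                             thread keeps the kernel object alive */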

I am not sure I fully understand what you are doing now, but with the
modified version I've sent to you, _CrtDumpMemoryLeaks() doesn't report any
problem on my Windows 7 64 bits machine.

Note that _Crt functions only check internal state in the
per-compiler C runtime, they cannot check for leaks of OS
level objects, that requires OS tools, such as those
available via Task manager (taskmgr.exe) and the OS level
debuggers (WinDbg.exe, GFlags.exe etc.).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] i2d_X509_SIG() in FIPS mode

2015-02-04 Thread Jakob Bohm

On 03/02/2015 06:26, Gayathri Manoj wrote:

Hi Steve, Viktor,

I have tried with len also, but this also causes a seg fault.
My requirement is to store max 2048 bit keys. Hence I used the length as
512 + 1.

Currently I am getting len value = 28514.

X509_SIG sig;
X509_ALGOR algor;
ASN1_OCTET_STRING digest;
ASN1_TYPE parameter;
   ASN1_item_digest() // to get digest details
   sig.algor = &algor;
sig.algor->algorithm=OBJ_nid2obj(NID_md5);

There is the problem!  FIPS does not allow use of MD5,
probably never has.  Have you checked if this returned
NULL to indicate an error finding the MD5 OID?

parameter.type=V_ASN1_NULL;
parameter.value.ptr=NULL;
sig.algor->parameter = &parameter;
sig.digest = &digest;
sig.digest->data=(unsigned char*)msg;
sig.digest->length=datalen;
len = i2d_X509_SIG(&sig,NULL);

Have you checked if this returns a negative value to
indicate an error?
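
A minimal sketch of both checks, switching to a FIPS-approved digest
(SHA-256) since MD5 is out; the error paths are abbreviated:

    sig.algor->algorithm = OBJ_nid2obj(NID_sha256);
    if (sig.algor->algorithm == NULL)
        return -1;      /* OID lookup failed */

    len = i2d_X509_SIG(&sig, NULL);
    if (len < 0)
        return -1;      /* encoding failed: do not malloc/copy */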


On Mon, Feb 2, 2015 at 9:31 PM, Viktor Dukhovni openssl-us...@dukhovni.org wrote:


On Mon, Feb 02, 2015 at 07:15:12PM +0530, Gayathri Manoj wrote:

 unsigned char *ptr, *tmp=NULL;
 X509_SIG sig;
 

How is sig initialized?

 len=i2d_X509_SIG(&sig,NULL);
 tmp = (unsigned char*) malloc(513);

Why 513 and not len?  What is the value of len?

 ptr=tmp;
 i2d_X509_SIG(&sig, &ptr);  // here causing problem.


Note to OpenSSL documentation team: The documentation for
the OpenSSL X509_SIG data type is circular at best, and
refers to PKCS#1 only by name, not by its currently
available location (one or more RFCs).   Also there are
apparently no documented functions using X509_SIG other
than to read/write/encode/decode the structure, the closest
I could find were some undocumented functions in pkcs12.h .


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] Certificate verification fails with latest commits (ECDSA)

2015-02-04 Thread Jakob Bohm

Summary of thread so far: The latest security update enforces
that any inherently unsigned BIGNUM must be encoded as a non-
negative DER INTEGER (which has a leading 0 byte if the most
significant bit of the first byte would otherwise be set).

It is a well known historic bug that some other ASN.1
libraries incorrectly treat the DER/BER INTEGER type as
unsigned when encoding and decoding inherently unsigned
numbers, and that such libraries will thus accept the correct
encoding (leading 0 byte) as a non-canonical BER encoding
(and thankfully forget to normalize it to the wrong form),
while producing an incorrect encoding without the leading 0
byte.

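(Worked example: the unsigned value 0x80 must be DER-encoded as the
INTEGER 02 02 00 80; the buggy libraries emit 02 01 80, which actually
decodes as -128.)
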
Historically, OpenSSL (and probably some other ASN.1 libraries
too) have intentionally tolerated this specific incorrect
encoding, but the new security update now consistently rejects
it.  Would it reintroduce the related security issue to
explicitly tolerate the alternative encoding, perhaps by
allowing the EC code to accept negative numbers as their
unsigned encoding equivalents, while preserving the signed
form when round-tripping BER to BN to DER?  (This of course
would still fail if the most significant 9 bits were all 1,
e.g. 0xFF8, but that would still be 256 times rarer.)

I am assuming without checking, that i2d_ASN1_INTEGER
already handles negative values.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] The evolution of the 'master' branch

2015-02-04 Thread Jakob Bohm

On 03/02/2015 23:02, Rich Salz wrote:

As we've already said, we are moving to making most OpenSSL data
structures opaque. We deliberately used a non-specific term. :)
As of Matt's commit of the other day, this is starting to happen
now.  We know this will inconvenience people as some applications
no longer build.  We want to work with maintainers to help them
migrate, as we head down this path.

We have a wiki page to discuss this effort.  It will eventually include
tips on migration, application and code updates, and anything else the
community finds useful.  Please visit:

http://wiki.openssl.org/index.php/1.1_API_Changes

Thanks.
___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users

Not much on that page so far, not even a kill list of
intended victims, except an admission that EAY's popular DES
library can no longer be accessed via the copy in OpenSSL.

I fear that this is an indication that you will be killing
off all the other non-EVP entrypoints in libcrypto, making
it much harder to use the library with experimental or
non-standard algorithms and methods.

Just consider how hard it would now be to use libcrypto to
implement stuff like AES-GCM (if it was not already in the
library) or any of the block modes that were proposed in
the FIPS process, but not standardised by NIST (and thus
not included in EVP).

With the classic non-EVP API, it is trivial to wrap a custom
mode around the basic DES/AES/IDEA/... block functions.

And this is just one example of the flexibility provided by
not going through the more rigid EVP API.
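
For illustration, a sketch of the kind of wrapper that is trivial
today; a toy CTR-like construction on the raw block call, not a vetted
mode of operation:

    #include <openssl/aes.h>
    #include <stddef.h>

    void toy_ctr(const unsigned char key[16], unsigned char iv[16],
                 const unsigned char *in, unsigned char *out, size_t len)
    {
        AES_KEY aes;
        unsigned char ks[16];
        size_t i, j;

        AES_set_encrypt_key(key, 128, &aes);
        for (i = 0; i < len; i += 16) {
            AES_encrypt(iv, ks, &aes);          /* raw block primitive */
            for (j = 0; j < 16 && i + j < len; j++)
                out[i + j] = in[i + j] ^ ks[j];
            for (j = 16; j-- > 0; )             /* bump the counter */
                if (++iv[j] != 0)
                    break;
        }
    }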

Should everyone not doing just TLS1.2 move to a different
library now, such as crypto++?

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] How to construct certificate chain when missing intermediate CA

2015-01-09 Thread Jakob Bohm

On 09/01/2015 03:45, Jerry OELoo wrote:

Hi All:
I am using X509_STORE_CTX_get1_chain() to get web site's full certificate chain.
Now I am encountering an issue where some web sites do not return the
intermediate CA certificate but only the web site's leaf certificate.

For example. https://globaltrade.usbank.com

Below is certificate I get.

Subject: /C=US/ST=Minnesota/L=St. Paul/O=U.S.
Bank/OU=ISS/CN=globaltrade.usbank.com
Issuer: /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of
use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure
Server CA - G3

My environment is missing the VeriSign Class 3 Secure Server CA - G3 certificate.

When opening the web site in a browser (Chrome on Windows), I can see the
certificate chain is built successfully. I think this is because the
browser recognizes VeriSign Class 3 Secure Server CA - G3 as an
intermediate CA and automatically installs the crt into the system.

So my question is: how can I achieve the same as browsers with openssl?
With openssl I get the error info below. But where can my program
download the VeriSign G3 certificate and install it automatically, so I
can build the full certificate chain?

Peer cert subject[/C=US/ST=Minnesota/L=St. Paul/O=U.S.
Bank/OU=ISS/CN=globaltrade.usbank.com] depth[0] error[20]
Peer cert subject[/C=US/ST=Minnesota/L=St. Paul/O=U.S.
Bank/OU=ISS/CN=globaltrade.usbank.com] depth[0] error[27]
Peer cert subject[/C=US/ST=Minnesota/L=St. Paul/O=U.S.
Bank/OU=ISS/CN=globaltrade.usbank.com] depth[0] error[21]



The trick is that many (not all) certificates now include an
Authority Information Access (AIA) extension which
(optionally) gives a download URL for the next certificate
in the chain in case the browser does not have a local copy.
This is the same extension which also (in another optional
field) provides the URL of an OCSP revocation checking
server.

So in some clients (at least Internet Explorer 9+), the
procedure for each certificate is:

1. Using the full Issuer DN (which is a complex ASN.1
structure), put them in the same form (already done
because that part of the certificate has to be in the
strict DER format), then do a binary compare for
identity against the full Subject DN in all the
certificates received from the other end.

2. If this fails, do the same against all the
certificates in your local catalog of trusted root CAs.

3. If this fails, do the same against all the certificates
in your local catalog of known Intermediary CAs.

4. If this fails, do the same against all the certificates
in your local cache of previously downloaded certificates.

5. If this fails, look for an AIA extension in the cert
and check if that extension includes a certificate
download URL, then download from that URL to an in memory
variable.  If the validation ultimately succeeds, save
that downloaded certificate from memory to your local
cache.

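For reference, the AIA URL from step 5 can be inspected from the
command line with, e.g.:

    openssl x509 -in leaf.pem -noout -text

looking for the Authority Information Access block with a
"CA Issuers - URI:" entry (leaf.pem is a hypothetical file name).
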
OpenSSL 1.0.1 and older include functions to do steps 1
(if the other end sent the certificates in the order
needed) and 2.  That code may be coerced into doing steps
3 and 4 by putting the intermediary certificates into the
root store and checking if a certificate is self-signed
to decide if it is trusted or just a potentially
unverified intermediary.

OpenSSL 1.0.2 beta apparently includes better code for
most of these steps than 1.0.1.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] [openssl-announce] OpenSSL version 1.0.1k released

2015-01-09 Thread Jakob Bohm

On 09/01/2015 04:32, Dongsheng Song wrote:

test failure on custom build:

perl Configure ^
 no-comp no-dso no-idea no-ssl2 no-ssl3 no-psk no-srp ^
 --prefix=D:/var/pool/openssl-win32 ^
 VC-WIN32
...

D:\var\tmp\openssl-1.0.1knmake -f ms\ntdll.mak
...

D:\var\tmp\openssl-1.0.1knmake -f ms\nt.mak test

Microsoft (R) Program Maintenance Utility Version 12.00.21005.1
Copyright (C) Microsoft Corporation.  All rights reserved.

 cd out32
 ..\ms\test
rsa_test
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
PKCS #1 v1.5 encryption/decryption ok
OAEP encryption/decryption ok
destest
Doing cbcm
Doing ecb
Doing ede ecb
Doing cbc
Doing desx cbc
Doing ede cbc
Doing pcbc
Doing cfb8 cfb16 cfb32 cfb48 cfb64 cfb64() ede_cfb64() done
Doing ofb
Doing ofb64
Doing ede_ofb64
Doing cbc_cksum
Doing quad_cksum
input word alignment test 0 1 2 3
output word alignment test 0 1 2 3
fast crypt test
ideatest
'ideatest' is not recognized as an internal or external command,
operable program or batch file.
problems.

I guess it then has no idea what that test is (hint, hint).

The Windows build scripts (unlike the POSIX build scripts)
are not completely adapted to the configured build options,
and thus include tests for features you have decided not
to build.

This is because the syntax differences between POSIX make
and Microsoft nmake are too big to automatically do a
complete translation inside the Configure.pl script.  So
the script cheats and reads out the primary lists of
files to compile and then just outputs simplistic nmake
makefiles (such as nt.mak and ntdll.mak) based on those
lists.
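
A practical workaround, if you keep the no-idea build, is to edit
ms\test.bat and remove the ideatest invocation before re-running
nmake -f ms\nt.mak test; I have not verified whether other
disabled-feature tests appear there as well.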

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-users


Re: [openssl-users] OpenSSL Release Strategy and Blog

2015-01-07 Thread Jakob Bohm

On 28/12/2014 12:26, Kurt Roeckx wrote:

On Sun, Dec 28, 2014 at 01:31:38AM +0100, Jakob Bohm wrote:

3. The 1.0.x binary compatibility promise seems to not have been
  completely kept, as recently as just this December. As a practical
  example: I had an OS upgrade partially fail due to the presence of
  a self-compiled up to date 1.0.1* library that conflicted with an
  OS supplied 1.0.x library that was frozen earlier while relying on
  your promise.

Can you give more details about this?  Please note the binary
compatibility will only work if you used the same options to build
the library.  (This is one of the reasons to make more structures
opaque.)

Yep, I presume the distribution packagers used different compile
options for the 1.0.x installed in /usr/lib, than I had used for the
1.0.1 installed in /usr/local/lib.

   Also the 1.0.1 changelog includes at least one change of binary
  flag values to get a different compatibility behavior than
  previous 1.0.1 releases, thus there is not even binary
compatibility within 1.0.1.

I assume you're talking about SSL_OP_NO_TLSv1_1?  It's unfortunate
that SSL_OP_ALL already contained that in 1.0.0 while 1.0.0
doesn't even know anything about TLS 1.1.  But that only affects
people who compiled with 1.0.1 or 1.0.1a headers.

Yes, that's exactly the one.



  must choose one of the stabilized 1.0.x releases (1.0.0 or 1.0.1)
  as the new LTS release, and you need to deal with the fact that
  since the 0.9.8 end-of-life announcement, you have been unclear
  which of the two existing 1.0.x releases would be LTS-security,
  causing some projects (not mine, fortunately) to irreversibly
  pick a different one than you did.

I think people should stop using both 0.9.8 and 1.0.0 as soon as
possible since they do not support TLS 1.2.  You really want to use
TLS 1.2.

I agree, that is why I had a locally compiled 1.0.1 on that particular
system.



   The best you can do is to split libcrypto into two or three well
  defined layers that can be mixed in applications but do not break
  their layering internally.  These could be: rawalgs (non-opaque
  software primitives, bignums etc. with CPU acceleration but
  nothing more fancy)

I don't think we intend to remove functions like AES_* yet but
might deprecate them in favour of APIs that exist for a very long
time.  Please note that for instance using the AES functions you
do not have AESNI acceleration but by using the EVP interface you
do.

Thing is that the AES_, DES_ etc. functions have long been a key part
of SSLeay/OpenSSL.  Notably, the DES library had previously been
distributed separately, and the large integer library is a key
component for people implementing new crypto algorithms and
methods not yet (or ever) in OpenSSL.

And unlike higher level mechanisms, they tend not to bloat statically
linked applications with lots of unused code.  (Though the use of EVP
inside the RNG forced me to do some patching to avoid pulling in the
EVP RSA code).
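
A minimal sketch (mine, not from the thread) of the difference being
discussed: the same AES-128 block encrypted through the low-level
AES_* API and through EVP, where only the latter can transparently
pick up AES-NI or an engine:

#include <openssl/aes.h>
#include <openssl/evp.h>

/* Low-level API: always the built-in software implementation. */
static void one_block_raw(const unsigned char key[16],
                          const unsigned char in[16],
                          unsigned char out[16])
{
    AES_KEY ks;                    /* non-opaque key schedule */

    AES_set_encrypt_key(key, 128, &ks);
    AES_encrypt(in, out, &ks);
}

/* EVP API: goes through the engine layer, so AES-NI can be used. */
static int one_block_evp(const unsigned char key[16],
                         const unsigned char in[16],
                         unsigned char out[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int outl = 0, ok = 0;

    if (ctx != NULL
        && EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL)
        && EVP_CIPHER_CTX_set_padding(ctx, 0)  /* one raw block */
        && EVP_EncryptUpdate(ctx, out, &outl, in, 16))
        ok = 1;
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}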

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] OpenSSL Release Strategy and Blog

2015-01-07 Thread Jakob Bohm

On 29/12/2014 01:37, Matt Caswell wrote:

On 28/12/14 00:31, Jakob Bohm wrote:

On 24-12-2014 00:49, Matt Caswell wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

You will have noticed that the OpenSSL 1.0.0 End Of Life Announcement
contained a link to the recently published OpenSSL Release Strategy that
is available here:
https://www.openssl.org/about/releasestrat.html

I have put up a blog post on the thinking behind this strategy on the
newly created OpenSSL Blog that you may (or may not!) find interesting.
It can be found here:
https://www.openssl.org/blog/blog/2014/12/23/the-new-release-strategy/

I am afraid that this is a somewhat rushed decision, with insufficient
consideration for the impact on others:

Not at all. This decision has been under consideration for some
considerable period of time with much discussion of the impacts.

Discussing this only amongst yourselves has probably blinded you
to the needs of outsiders, leading to a bad decision.

But since your minds are mostly made up, let me rephrase the key
community needs as I see them:

1. The ability, on any given day, to know which of the currently
available OpenSSL releases is going to receive back-portable
security patches with binary compatibility for at least 3 to 5
years into the future from that day.  A given community member
(such as a Linux distro or a closed source product) will use
this on one of the days near the end of their development cycle,
after which they will intend to provide only small drop-in
patches (shared libraries, small programs, binary diffs) for the
lifetime of their product.

2. The ability to use libcrypt as the basis for non-SSL code, such
as OpenSSH or the SRP reference implementation (you should coordinate
changes in low level APIs with at least those two teams).  Also
there is the need to use subsets of libcrypt without the rest, e.g.
in bootloaders or kernels (I don't know if any of the kernel
crypto in Linux or BSD uses OpenSSL code).  And then there is all
the fun security researchers are having with the code.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] openssl, opencrypto integration

2015-01-07 Thread Jakob Bohm

(Resend from correct account)

On 06/01/2015 19:52, Chris Eltervoog wrote:


I have implemented a H/W encryption driver and have integrated it with 
cryptodev.  In eng_cryptodev.c there is an array digests[].  In that 
array it defines CRYPTO_MD5 to have a keylen of 16.  In cryptodev, the 
xform.c file defines MD5 to have a keylen of 0.  Why is the keylen 
not zero for the MD5 entry in this table?


I presume that this particular array uses the field name keylen to 
really refer to the hashlen (the size of the digest value), since 
digests generally don't have keys (otherwise they would be considered 
MAC algorithms).  The code in xform.c then probably refers to a more 
general definition, which encompasses both digests and MACs, and uses a 
0 MAC keylen to indicate that this is a digest, not a MAC.


Cryptodev also defines the keylen in a structure.  The keylen is zero 
there.  A comparison happens on session creation.  This difference 
causes a check in the session creation to fail and cryptodev always 
selects the s/w crypto engine. If I change the eng_cryptodev digests[] 
entry for CRYPTO_MD5 to have a keylen of zero the MD5 hashing works, 
however Key generation output changes.  If you run the openssl test 
case for key generation it will fail.  It seems that the files 
generated are smaller. I don't see how this change has this side 
effect with key generation.


If my previous presumption is right, the correct change would be to keep 
both tables as they are, but change the comparison to compare values 
that are actually supposed to be the same, such as MAC key length to MAC 
key length (implicit 0 in the digests[] array), and result length to 
result length (named keylen in the digests[] array).



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] OpenSSL Release Strategy and Blog

2014-12-27 Thread Jakob Bohm

On 24-12-2014 00:49, Matt Caswell wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

You will have noticed that the OpenSSL 1.0.0 End Of Life Announcement
contained a link to the recently published OpenSSL Release Strategy that
is available here:
https://www.openssl.org/about/releasestrat.html

I have put up a blog post on the thinking behind this strategy on the
newly created OpenSSL Blog that you may (or may not!) find interesting.
It can be found here:
https://www.openssl.org/blog/blog/2014/12/23/the-new-release-strategy/


I am afraid that this is a somewhat rushed decision, with insufficient
consideration for the impact on others:

1. Because this was announced during the Xmas/New year holidays, many
 parties will not see this until the beginning of 2015.

2. The decision that 1.0.0 and 1.0.1 should both be classed as STS
 releases seems new, and is unlikely to have been anticipated by many
 users.  Many will have naturally assumed that 1.0.0 would last as
 long as 0.9.8 lasted.

  So announcing the imminent death of 1.0.0 at the same time as 0.9.8
 is going to be a nasty surprise to anyone who already froze their
 projects on the 1.0.0 series rather than the new and more unstable
 1.0.1 series.

3. The 1.0.x binary compatibility promise seems to not have been
 completely kept.  As recently as just this December, as a practical
 example, I had an OS upgrade partially fail due to the presence of
 a self-compiled up to date 1.0.1* library that conflicted with an
 OS supplied 1.0.x library that was frozen earlier while relying on
 your promise.

  Also the 1.0.1 changelog includes at least one change of binary
 flag values to get a different compatibility behavior than
 previous 1.0.1 releases, thus there is not even binary
 compatibility within 1.0.1 .

4. LTS release periods have an absolute need to overlap, such that
 at any given date, there is at least one active LTS release known
 not to end within the next many years, otherwise the whole concept
 is useless.  On any given day of the year (except, perhaps, holidays),
 a project somewhere is going to look at available tool and library
 releases and decide which one is most likely to be supportable for
 the next N years, then irreversibly freeze on that version.  So if
 OpenSSL has no active long term release with at least 3 to 5 years
 left, then OpenSSL is not viable or projects will have to incur the
 cost of having security-novices backport security fixes manually
 to an unsupported version for the remainder of the needed N years.

  Accordingly the policy should be that there will always be at least
 one LTS release which is at least one year old and has at least 5
 years left before security support ends.  For comparison, Microsoft
 usually promises security fixes for 10 years after release, non-
 critical fixes for only 5, and people still complain loudly when the
 10 year period is up for e.g. NT 4.0 and XP.

  Since you have already announced the upcoming end of 0.9.8, you
 must choose one of the stabilized 1.0.x releases (1.0.0 or 1.0.1)
 as the new LTS release, and you need to deal with the fact that
 since the 0.9.8 end-of-life announcement, you have been unclear
 which of the two existing 1.0.x releases would be LTS-security,
 causing some projects (not mine, fortunately) to irreversibly
 pick a different one than you did.

5. Since its original release as part of SSLeay, libcrypt has become
 the dominant BSD-licensed library of raw crypto primitives for all
 sorts of uses, such as (but not at all limited to), openssh, the
 SRP reference implementation, the NTP cryptographic support etc.

  Limiting the capabilities, transparency or other aspects at this
 point in time is way, way too late.  It is as futile as when the
 ANSI/ISO C committee tried to remove the UNIX-like file handle APIs
 (io.h) in favor of the FILE* API (stdio.h) at a time when the C
 language was about the current age of OpenSSL.

  The best you can do is to split libcrypt into two or three well
 defined layers, that can be mixed in applications but do not break
 their layering internally.  These could be: rawalgs (non-opaque
 software primitives, bignums etc.  with CPU acceleration but
 nothing more fancy), EVP-api (opaque structures with hidden heap
 allocations, engine and FIPS support, but no forced loading of
 unused algorithms, except for the all-or-nothing-ness of fips and
 other engine blobs), and NID-API (algorithms referable by numeric
 IDs, large bundles of algorithms loaded by master-init functions,
 automatic X.509 checking/use based on embedded algorithm OIDs etc.).

  Ideally, the rawalgs level should never make its own heap
 allocations, except in compatibility functions for older APIs that
 did, and should be usable in deeply embedded systems, such as OS
 loaders and door locks.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.

Re: [openssl-users] How to display root certificate in command line

2014-12-22 Thread Jakob Bohm

On 22/12/2014 11:52, Jerry OELoo wrote:

Hi All:
I have used openssl command line to get some website's certificate
chain. Now, I want to show root certificate information. but I do not
find any command argument to do it.

openssl s_client -showcerts -CApath /etc/ssl/certs -connect
studentexclusives.hsbc.co.uk:443

I use -CApath to set root certificate path.

 From below, I can get full certificate path. 3 certificates

CONNECTED(0003)
depth=2 C = US, O = VeriSign, Inc., OU = VeriSign Trust Network, OU
= (c) 2006 VeriSign, Inc. - For authorized use only, CN = VeriSign
Class 3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = VeriSign, Inc., OU = VeriSign Trust Network, OU
= Terms of use at https://www.verisign.com/rpa (c)10, CN = VeriSign
Class 3 Secure Server CA - G3
verify return:1
depth=0 C = GB, ST = London, L = London, O = HSBC Holdings plc, OU =
HTSE, CN = studentexclusives.hsbc.co.uk
verify return:1


But in certificate chain, I only get 2 certificates information (I
think this two are return by website.)

---
Certificate chain
  0 s:/C=GB/ST=London/L=London/O=HSBC Holdings
plc/OU=HTSE/CN=studentexclusives.hsbc.co.uk
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use
at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure
Server CA - G3
-BEGIN CERTIFICATE-
...
-END CERTIFICATE-
  1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use
at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure
Server CA - G3
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
-BEGIN CERTIFICATE-
...
-END CERTIFICATE-
---

Now I want to also display root certificate VeriSign Class 3 Public
Primary Certification Authority - G5 information, How can I show it?

Thanks!


This means the web server did not send it, but expects your
client/browser to find it (by name) in your local root certificates
store, such as /etc/ssl/certs.

Look in that directory for /C=US/O=VeriSign, Inc./OU=VeriSign Trust
Network/OU=(c) 2006 VeriSign, Inc. - For authorized use
only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
and dump that file directly with

  openssl x509 -text -in /etc/ssl/certs/somefile.pem
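
On systems where /etc/ssl/certs is maintained as an OpenSSL-style
hashed directory, one way (a sketch; file name illustrative) to find
the right file is via the issuer hash of the last certificate the
server did send:

  openssl x509 -in intermediate.pem -noout -issuer_hash

which prints the 8-hex-digit hash that the matching root is stored
(or symlinked) under, e.g. /etc/ssl/certs/<hash>.0.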

Unfortunately no currently released version of s_client knows how to
dump out the constructed verification chain; there is only an option
to dump the server-supplied certificates (regardless of whether those were
used by the client or not).  Hopefully some future version will have
options to dump either or both lists.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] Differences in openssl 0.9.8 and 1.0.1x for private pem key file

2014-12-22 Thread Jakob Bohm

On 22/12/2014 13:57, Dave Thompson wrote:


At least for now; there is another thread started just a few days ago
about all PEM formats used by OpenSSL suggesting the traditional
privatekey forms are obsolete and maybe should be deleted!

Please don't do that until 5+ years after 0.9.8 end-of-life.  Because
private keys written by 0.9.8 to securely stored offline media will
be using the old format and need to be usable down the line.  Most
certificates expire after 5 years or less, though a few private keys
may be needed much later:

1. Decryption certificates/keys may be needed to decrypt data long
 after the certificate expired (in fact, as long as the data remains
 relevant; think 30+ years for mortgage contracts, 50+ years for
 life insurance, and 140+ years for copyright disputes).

2. A few certs (e.g. CA roots and Android developer certs) have very
 long (30+ years) certificate lifetimes, but those tend to be used
 regularly over that period, giving plenty of opportunity to convert
 the private key files.
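
For keys that do get touched regularly, the conversion itself is a
one-liner (file names illustrative); e.g. to move a traditional-format
private key to the newer encrypted PKCS#8 form:

  openssl pkcs8 -topk8 -in old-key.pem -out new-key.pem

This prompts for a pass phrase and rewrites the key, which is exactly
the kind of rewrite that keys on offline media never receive.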

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] OpenSSL performance issue

2014-12-19 Thread Jakob Bohm

On 19/12/2014 00:10, Prabhat Puroshottam wrote:

I am trying to summarize the problem again, since the previous
mail seems confusing to some of you. It might help you quickly understand
the problem I am facing:

We have a product, where Client connects to Server (Proxy Server in my
earlier mail). Client is implemented in C and uses OpenSSL, while Server is
implemented using Java code with BufferedInputStream and
BufferedOutputStream. The following are my observations:

1. There is inordinate delay during connection establishment.
2. Using ssldump it was found that SSL handshake response from Server is
 taking most of the time. Rest of the application data transfer and
 processing hardly takes a fraction of a second. The response from SSL
 handshake by Server comes after anywhere between 2 to 13 seconds
 after initial response sent by Client.
3. Subsequent analysis of the code showed that it was the first Buffered
Read/Write which was taking an inordinate amount of time.
4. Understanding that first Buffered Read/Write was hung on SSL connection
 completion, I introduced SSLConnect::startHandshake() so that I can
 explicitly see where is the problem. It was observed that now
 startHandshake() blocked for as much time as first Read/Write did.
 Further none of the Read/Write calls block, and returned data almost
 immediately.

I would like to understand why startHandshake() is taking so long. I
understand that it is an asynchronous call, but still the time delay is too much
IMO. Is it something to do with the socket configuration/cipher/encryption
used? Using ssldump I found there was absolutely no data transfer
between the sending of client's hello request and subsequent response
from server, so apparently all the time startHandshake() is busy doing
something or may be nothing - what I have no idea. FWIW, this is not a
network latency issue, 1) all the boxes are on the same network, 2) all
other data transfers combined takes less than 0.4s.

Can somebody kindly suggest what might be wrong or what can be done to
fix this? Could it be some Socket or SSL setting, encryption/cipher used, or
something else?

From the traces in your previous questions, and the answers you have
already given, I guess this is what happens:

1. The difference is in how long the Java code spends during the
 initial key exchange.

2. The SSL code in the proxy (but not the one in your own server) is
 configured to support Ephemeral Diffie-Hellman (DHE) handshake, which
 is safer, but potentially slower.  The slowness of DHE happens only
 during the handshake, because the data transmission part is the same.
 For example RSA_AES256_SHA256 and DHE_RSA_AES_SHA256 use the same
 transmission phase, but different handshakes.  The safety of DHE is
 that it protects you if someone records the encrypted connection and
 later steals the private key of the proxy/server.

3. The slowest part of doing a DHE exchange is choosing a (non-secret)
 prime, which can be used again and again for many connections.  This
 is only done by the server end of a TLS/SSL connection.  The prime
 (and a few related numbers) is known as the DH group parameters.

4. If you were to enable DHE in an OpenSSL based server/proxy, the
 standard solution is to choose the non-secret prime during server
 startup, before any connection arrives.  Some programs even choose it
 while configuring the server program, storing the prime in a file
 (see the sketch after this list).

5. From the long time spent by the Java code generating its ServerHello, I
 suspect it is generating the prime during the handshake, and choosing a
 new prime for each connection, thus wasting a lot of time.

6. Maybe there is a way to tell the Java SSL library to generate the DH
 group parameters for needed key lengths (1024, 2048 or whatever you
 need) during proxy startup, so it is ready by the time the client
 connects.

7. If you upgrade to OpenSSL 1.0.1 or later (remember to only use the
 latest letter-suffix security update of whatever version), you could
 also use an ECDHE_RSA_xxx crypto mode; these don't currently allow the
 server/proxy to generate their own group parameters, but force you
 to choose from a short list of parameters generated by professional
 spying agencies such as the NSA (the NIST curves) or someone else
 (the X9.62 curves, the SECG curves and the WTLS curves).  So
 your computers don't spend time generating the parameters, and
 you just have to trust the professionals who chose them for you.
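
A minimal sketch of the practice described in point 4, for an
OpenSSL-based server (file name and error handling illustrative, not
from the original thread).  The group parameters can be generated once
with something like "openssl dhparam -out dhparams.pem 2048" and then
loaded at startup:

#include <openssl/dh.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>
#include <stdio.h>

/* Load pre-generated DH group parameters into an SSL_CTX at server
 * startup, so no prime needs to be chosen per connection. */
static int load_dh_params(SSL_CTX *ctx, const char *path)
{
    FILE *fp = fopen(path, "r");
    DH *dh;
    int ok = 0;

    if (fp == NULL)
        return 0;
    dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
    fclose(fp);
    if (dh != NULL) {
        ok = (SSL_CTX_set_tmp_dh(ctx, dh) == 1);  /* ctx keeps a copy */
        DH_free(dh);
    }
    return ok;
}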


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] OpenSSL performance issue

2014-12-19 Thread Jakob Bohm

On 19/12/2014 12:11, Jakob Bohm wrote:

On 19/12/2014 00:10, Prabhat Puroshottam wrote:

I am trying to summarize the problem again, since the previous
mail seems confusing to some of you. It might help you quickly understand
the problem I am facing:

We have a product, where Client connects to Server (Proxy Server in my
earlier mail). Client is implemented in C and uses OpenSSL, while Server is
implemented using Java code with BufferedInputStream and
BufferedOutputStream. The following are my observations:

1. There is inordinate delay during connection establishment.
2. Using ssldump it was found that SSL handshake response from Server is
 taking most of the time. Rest of the application data transfer and
 processing hardly takes a fraction of a second. The response from SSL
 handshake by Server comes after anywhere between 2 to 13 seconds
 after initial response sent by Client.
3. Subsequent analysis of the code showed that it was the first Buffered
Read/Write which was taking an inordinate amount of time.
4. Understanding that first Buffered Read/Write was hung on SSL connection
 completion, I introduced SSLConnect::startHandshake() so that I can
 explicitly see where is the problem. It was observed that now
 startHandshake() blocked for as much time as first Read/Write did.
 Further none of the Read/Write calls block, and returned data almost
 immediately.

I would like to understand why startHandshake() is taking so long. I
understand that it is an asynchronous call, but still the time delay is too much
IMO. Is it something to do with the socket configuration/cipher/encryption
used? Using ssldump I found there was absolutely no data transfer
between the sending of client's hello request and subsequent response
from server, so apparently all the time startHandshake() is busy doing
something or may be nothing - what I have no idea. FWIW, this is not a
network latency issue, 1) all the boxes are on the same network, 2) all
other data transfers combined takes less than 0.4s.

Can somebody kindly suggest what might be wrong or what can be done to
fix this? Could it be some Socket or SSL setting, encryption/cipher used, or
something else?

From the traces in your previous questions, and the answers you have
already given, I guess this is what happens:

1. The difference is in how long the Java code spends during the
 initial key exchange.

2. The SSL code in the proxy (but not the one in your own server) is
 configured to support Ephemeral Diffie-Hellman (DHE) handshake, which
 is safer, but potentially slower.  The slowness of DHE happens only
 during the handshake, because the data transmission part is the same.
 For example RSA_AES256_SHA256 and DHE_RSA_AES_SHA256 use the same
 transmission phase, but different handshakes.  The safety of DHE is
 that it protects you if someone records the encrypted connection and
 later steals the private key of the proxy/server.

3. The slowest part of doing a DHE exchange is choosing a (non-secret)
 prime, which can be used again and again for many connections.  This
 is only done by the server end of a TLS/SSL connection.  The prime
 (and a few related numbers) is known as the DH group parameters.

4. If you were to enable DHE in an OpenSSL based server/proxy, the
 standard solution is to choose the non-secret prime during server
 startup, before any connection arrives.  Some programs even choose it
 while configuring the server program, storing the prime in a file.

5. From the long time spent by the Java code generating its ServerHello, I
 suspect it is generating the prime during the handshake, and choosing a
 new prime for each connection, thus wasting a lot of time.

Dave Thompson (who knows more than I do) pointed out that if this is the
SSL library included with Oracle Java, then it doesn't do that, but it does
waste time on another operation (random number generator setup),
which is the same for all handshake methods.


6. Maybe there is a way to tell the Java SSL library to generate the DH
 group parameters for needed key lengths (1024, 2048 or whatever you
 need) during proxy startup, so it is ready by the time the client
 connects.


If the problem is really initializing the Java secure random number
generator, you could probably force it to initialize earlier by simply
adding Java code that asks for one byte of cryptographically strong
bits, then throws it away, thus forcing the Java runtime to initialize
its random number library at that time (before the connection arrives).

7. If you upgrade to OpenSSL 1.0.1 or later (remember to only use the
 latest letter-suffix security update of whatever version), you could also
 use an ECDHE_RSA_xxx crypto mode; these don't currently allow the
 server/proxy to generate their own group parameters, but force you
 to choose from a short list of parameters generated by professional
 spying agencies such as the NSA (the NIST curves) or someone else

Re: [openssl-users] Creating a Certificate with CA=TRUE

2014-12-19 Thread Jakob Bohm

On 19/12/2014 13:13, Benjamin wrote:

Hello everyone!
I am quite new to two things: this mailing list and making and working 
with certificates.


I want to run a small owncloud on my raspberry pi and tried to make a 
crt which I can also use with my mobile devices. Here is the problem:

When i make a certificate either with this instruction:
http://wiki.ubuntuusers.de/CA
or this one:
https://www.prshanmu.com/2009/03/generating-ssl-certificates-with-x509v3-extensions.html

i have the problem that the cacert has basicConstraints CA=TRUE but 
when i make a cert by request i got a new cert (as far as i knew, that 
which i should use for my nginx webserver) which has CA=FALSE. This is 
no problem normally but my Android phone only accepts certs with 
CA=TRUE and actually i don't know how to make such a certificate... Of 
course, i could use the cacert itself but isn't this insecure and 
inadequate?



I very much doubt that Android only accepts certificates with CA=TRUE.

Unless of course you are accidentally using an Android command to
install the public certificate of a CA, rather than a command
to install the private key+public certificate of a certificate
for the Android itself.  I seem to recall that the Android user
interfaces for these things are a bit confusingly named.

It should be perfectly safe (for the CA) to install the public
certificate (with CA=TRUE) of the CA on your phone, PC, posted
on your Google+ profile and any other place you think of, since
this is the whole point (notice how the big names go to extreme
lengths to get theirs included in every browser, OS, Phone etc.
sold).  Only the matching private key of your mini-CA needs to
be kept in a very secret and locked down place, such as on a
separate CA boot-SD that you only boot from when issuing new
certificates or refreshing your CRL.
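
For the server certificate itself, the usual way to get CA=FALSE plus
sensible server extensions is an X.509v3 extension section in the
openssl configuration used when signing the request; a sketch (the
section and file names are illustrative):

  [ v3_server ]
  basicConstraints = CA:FALSE
  keyUsage = digitalSignature, keyEncipherment
  extendedKeyUsage = serverAuth

used as e.g. "openssl x509 -req ... -extfile myext.cnf -extensions
v3_server" when the mini-CA signs the web server's request.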


Thanks, best Benjamin!


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] CVE-2014- and OpenSSL?

2014-12-15 Thread Jakob Bohm

On 12-12-2014 21:31, Jeffrey Walton wrote:

On Fri, Dec 12, 2014 at 5:23 AM, Jakob Bohm jb-open...@wisemo.com wrote:

On 09/12/2014 21:46, Jeffrey Walton wrote:

On Tue, Dec 9, 2014 at 2:07 PM, Amarendra Godbole
amarendra.godb...@gmail.com wrote:

So Adam Langley writes SSLv3 decoding function was used with TLS,
then the POODLE attack would work, even against TLS connections. on
his the latest POODLE affecting TLS 1.x.
(https://www.imperialviolet.org/).

I also received a notification from Symantec's DeepSight, that states:
OpenSSL CVE-2014-8730 Man In The Middle Information Disclosure
Vulnerability.

However, I could not find more information on OpenSSL's web-site about
POODLE-biting-again. Did I miss any notification? Thanks.

Here's some more reading:
https://community.qualys.com/blogs/securitylabs/2014/12/08/poodle-bites-tls

There's nothing specific to OpenSSL. Its a design defect in the
protocols (its been well known that TLS 1.0 had the same oracle as
SSLv3 since only the IV changed between them).

Its not surprising that a PoC demonstrates it against TLS 1.0. Many
have been been waiting for it.

It looks like Ubuntu is going to have to enable TLS 1.1 and 1.2 in
12.04 LTS for clients.
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1256576
.
___

Stop spreading FUD and lies.  This is NOT a protocol weakness in any
TLS version, it is an implementation *bug* affecting multiple TLS
implementations, specifically those that don't implement the *required*
checks of the padding during decryption.

The cryptographers would disagree with you. The various attacks
against the design defects appear to offer proof by counter example.

Here's the analysis by Krawczyk: The Order of Encryption and
Authentication for Protecting Communications,
http://www.iacr.org/archive/crypto2001/21390309.pdf.

Here's his recent remarks on the TLS WG mailing list where he
revisited his conclusions, and called out SSL/TLS as being
unconditionally insecure (due to a misunderstanding in the way padding
was applied). From
http://www.ietf.org/mail-archive/web/tls/current/msg13677.html:

 So the math in the paper is correct - the
 conclusion that TLS does it right is wrong.
 It doesn't.

He is saying exactly what I said (padding before mac is safe, TLS with
CBC does that wrong).  The only thing I said was right was the SSL case
with no padding at all (stream ciphers, in case there was a good one in
SSL 3).

Now the POODLE against TLS 1.0 is NOT about all that.  It is about
*broken* TLS 1.0 implementations that fail to implement the indirect
protection of the padding specified in the TLS 1.0 RFC.  Specifically,
those implementations fail to implement that only a single padding
content value is authentic for each given padding size, and at most 32
padding size/value pairs are valid for any given authenticated message
size.
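
A minimal sketch (not OpenSSL's actual code) of the strict check just
described: the last byte of the decrypted plaintext gives the pad
length, and every padding byte must equal that value, leaving exactly
one authentic padding content per padding size:

#include <stddef.h>

static int tls10_padding_ok(const unsigned char *plain, size_t len)
{
    size_t pad, i;

    if (len == 0)
        return 0;
    pad = plain[len - 1];        /* pad length, excluding this byte */
    if (pad + 1 > len)
        return 0;
    for (i = 0; i < pad; i++)    /* every pad byte must equal pad */
        if (plain[len - 2 - i] != pad)
            return 0;
    return 1;                    /* the MAC check still follows */
}

(A production implementation must additionally run in constant time to
avoid becoming a timing oracle.)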

This indirect protection in TLS 1.0 greatly reduces the power of the
padding oracle, since the chance that an interesting plaintext snippet
matches one of the 32 permitted values is a lot less than the chance of
matching one of the 2**61 permitted values in SSL 3 padding.  These
numbers are for 64 bit block size; for 128 bit block size, the numbers
are 16 vs 2**124.  Variations in how the attacker detects acceptance
as padding could change the numbers to 256 or 1 for *correct* TLS 1.0
pad checks versus 2**64 or 2**56 for SSL 3.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] CVE-2014- and OpenSSL?

2014-12-12 Thread Jakob Bohm

On 09/12/2014 21:46, Jeffrey Walton wrote:

On Tue, Dec 9, 2014 at 2:07 PM, Amarendra Godbole
amarendra.godb...@gmail.com wrote:

So Adam Langley writes SSLv3 decoding function was used with TLS,
then the POODLE attack would work, even against TLS connections. on
his the latest POODLE affecting TLS 1.x.
(https://www.imperialviolet.org/).

I also received a notification from Symantec's DeepSight, that states:
OpenSSL CVE-2014-8730 Man In The Middle Information Disclosure
Vulnerability.

However, I could not find more information on OpenSSL's web-site about
POODLE-biting-again. Did I miss any notification? Thanks.

Here's some more reading:
https://community.qualys.com/blogs/securitylabs/2014/12/08/poodle-bites-tls

There's nothing specific to OpenSSL. Its a design defect in the
protocols (its been well known that TLS 1.0 had the same oracle as
SSLv3 since only the IV changed between them).

Its not surprising that a PoC demonstrates it against TLS 1.0. Many
have been been waiting for it.

It looks like Ubuntu is going to have to enable TLS 1.1 and 1.2 in
12.04 LTS for clients.
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1256576.
___
Stop spreading FUD and lies.  This is NOT a protocol weakness in any
TLS version, it is an implementation *bug* affecting multiple TLS
implementations, specifically those that don't implement the *required*
checks of the padding during decryption.

So far, there have been public reports about which server side TLS
implementations have this particular bug: OpenSSL is in the clear, some
(which?) NSS versions are insecure, and F5 and A10 load balancing
devices need a recently released patch for this specific issue.

I have seen no reports on which client side TLS implementations have
the bug.


P.S.

Also, Mr. Langley seems to blindly reiterate an over-interpretation
of the well known dangers of bad mac-then-pad-then-encrypt to ban all
mac-before-encrypt schemes in favor of the much more fragile
Authenticated Encryption modes such as GCM.


If you read the original paper that warned against mac-then-encrypt,
its proofs explicitly depended on the possibility that multiple related
encrypted strings would decrypt to the same mac+data and be accepted as
identical, thus providing the oracle used in the POODLE and BEAST
attacks, amongst others.

Thus any mac-then-encrypt scheme which guarantees that any change in
the encrypted value will cause the mac check to fail (within the
strength of the mac) should in my opinion remain at least as safe as
encrypt-then-mac, and in my opinion be even safer, since the attacker
can no longer observe and probe the mac protection independently of
the encryption itself.  SSLv3 and TLS have always done this right for
stream ciphers, it's just that the specific stream cipher RC4 has its
own (unrelated) problems.

One scheme, that I personally like, is to do
pad-then-macprefix-then-encrypt.  By including the padding in the mac
calculation (as if part of the data), the classic oracle vulnerability
goes away.  By putting the high entropy keyed mac first, the
predictable IV problem in CBC cipher modes (and similar) is solved in
a stronger way than by sending an unencrypted IV (like TLS 1.2 does).
I suspect (but have not done the math yet) that this is as strong as
AuthEnc modes with perfect block ciphers, and stronger than those in
the eventuality of either the cipher or mac having unexpected
weaknesses.  Despite these improvements, this scheme has the same
implementation and size costs as the broken SSLv3 scheme, using the
same implementation functions.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: [openssl-users] Error: A call to SSPI failed ...

2014-12-12 Thread Jakob Bohm

On 11/12/2014 13:45, Richard Moore wrote:
On 11 December 2014 at 10:20, Thirumal, Karthikeyan 
kthiru...@inautix.co.in mailto:kthiru...@inautix.co.in wrote:


Dear team,

Can someone tell me why the error is happening as SSPI failed ? Am
seeing this new today and when I searched the internet – it says
whenever there is a BAD formed request or when there is no client
certificate – we may get this error. Can someone shed more light
here ?

12/11/2014  12:50:06.161

ByteCount: 69

SSL Authentication failed A call to SSPI failed, see inner exception.



Since this is an error from .net you're asking the wrong place. This 
list is for users of openssl.



More specifically, this is an error from Microsoft SCHANNEL,
which is their SSL library, called via their SSPI API (a
GSSAPI variant).

"See inner exception" means you should scroll down in the
error report from .NET to see the real error code.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

___
openssl-users mailing list
openssl-users@openssl.org
https://mta.opensslfoundation.net/mailman/listinfo/openssl-users


Re: Small memory leak on multithreaded server

2014-11-23 Thread Jakob Bohm

On 21-11-2014 23:23, Viktor Dukhovni wrote:

On Fri, Nov 21, 2014 at 04:13:58PM -0500, Jeffrey Walton wrote:

A fixed amount of memory that is not deallocated and is independent
of the number of operations performed, is NOT a memory leak.

Languages like Java and C# can continuously load and unload a library.
You will see a growth in memory usage because the memory is not
reclaimed.

Unloading of shared libraries is generally unsafe.  Loading and
unloading of pure Java packages may work well enough, but I
wouldn't expect a Java runtime that unloads native libraries to stay
running for very long.


That is horribly outdated information and an assumption that no
competent library author should make or rely on others to make.

On modern systems, unloading of shared libraries that are used
as plugins, and by extension any shared libraries that might be
referenced by plugins without being referenced by the plugin-using
application core, is a normal and frequent operation supported
by the core shared library loader and most shared libraries.

If a library contains code that needs to be automatically called
when it is loaded or unloaded without that being an exposed API
level init/cleanup function, then the library porter needs to do
the target specific gymnastics to get called by the (C) runtime
at the appropriate times, and it needs to deal with common
restrictions on what such calls from the (C) runtime are not
allowed to do (one of which is recursive calls to the dynamic
loader API).  For libraries written in C++, the static constructor
and destructor language mechanisms are treated this way
automatically and thus subject to the same limitations on
permitted operations.
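
A minimal sketch of such load/unload hooks for one common target
(GCC/Clang on ELF systems; Windows uses DllMain instead; the function
names are illustrative):

/* Called by the C runtime when the shared library is loaded. */
__attribute__((constructor))
static void mylib_on_load(void)
{
    /* set up globals and locks; must not call dlopen() from here */
}

/* Called by the C runtime when the library is unloaded again. */
__attribute__((destructor))
static void mylib_on_unload(void)
{
    /* release everything allocated in mylib_on_load() */
}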

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Small memory leak on multithreaded server

2014-11-21 Thread Jakob Bohm

On 21/11/2014 15:26, Barbe, Charles wrote:

Thanks for the response... here is the code that runs when my connection closes:

void OpensslConnection::cleanup()
{
 if(ssl != NULL)
 {
 if(isConnected)
 {
while(SSL_shutdown(ssl) == 0)
;
 }
 SSL_free(ssl);
 ERR_remove_state(0);
 ssl = NULL;
 }

 isConnected = false;
}

And here is the code that runs to shut down my SSL library:

static void
openSslShutdown ()
{
CONF_modules_free();
ERR_remove_state(0);
CONF_modules_unload(1);
ERR_free_strings();
EVP_cleanup();
CRYPTO_cleanup_all_ex_data();

if (opensslLocks != NULL)
 {
 for(int i = 0; i < CRYPTO_num_locks(); i++)
 {
 PAL_mutexDestroy (opensslLocks[i]);
 }

 IST_FREE (opensslLocks);
 }
}

Also, I have numerous worker threads handling connections and they all do the 
following before they exit:

   ERR_remove_thread_state(0);

From the response by Dr. Henson, maybe you need code to unload your
server certificate and its certificate chain (a STACK of
certificates).
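
If some of those 253 bytes turn out to be the well-known compression
methods leak mentioned below, the usual (admittedly hacky) workaround
for the 1.0.1 series is to free that global stack once at shutdown:

  sk_SSL_COMP_free(SSL_COMP_get_compression_methods());

(a sketch; call it exactly once, after all SSL use is finished).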


From: owner-openssl-us...@openssl.org [owner-openssl-us...@openssl.org] on 
behalf of Jeffrey Walton [noloa...@gmail.com]
Sent: Thursday, November 20, 2014 6:03 PM
To: OpenSSL Users List
Subject: Re: Small memory leak on multithreaded server


Any help would be appreciated.

This could be one of two problems. First, it could be an issue with
your code and the way you handle cleanup. To help diagnose this,
please show us your cleanup code.

Second, it could be the memory leak from the compression methods. This
is a well known problem dating back a few years that has not been
fixed. See, for example,
http://rt.openssl.org/Ticket/Display.html?id=2561&user=guest&pass=guest.

On Thu, Nov 20, 2014 at 5:19 PM, Barbe, Charles
charles.ba...@allworx.com wrote:

Hello,

I have noticed a small and consistent memory leak in my multithreaded openssl 
server and am wondering if somebody can help me figure out what I need to do to 
free it when my application closes. I am on OpenSSL version 1.0.1j. Here's how 
I reproduce the leak:

1) Start up my server
2) Load my homepage with Chrome
3) Load my homepage with IE
4) Load my homepage with Firefox

I can do any combination of steps 2,3 and 4 above (ie. leave some of them out) 
and I always get the same amount of memory left over after I shut down my 
application. I believe this means that this is some sort of global information 
that OpenSSL is hanging on to and not something in my SSL connection structure.

Specifically I get 20 blocks totaling 253 bytes. I have stack traces of where 
each block is allocated but I cannot figure out how this memory should be 
cleaned up. Each of the 20 blocks filter down to 1 of 5 root stack traces. The 
stack traces are:

Repeated 6 times:



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Schanner secu

2014-11-20 Thread Jakob Bohm

On 19/11/2014 22:37, Gilles Vollant wrote:

On https://support.microsoft.com/kb/2992611 we can read
Some customers have reported an issue that is related to the changes 
in this release. These changes added the following new cipher suites 
to Windows Server 2008 R2 and Windows Server 2012. In order to give 
customers more control over whether these cipher suites are used in 
the short term, we are removing them from the default cipher suite 
priority list in the registry.

TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256

In other words, they disabled the stronger suites rather than
fixing the actual compatibility issue (which was the removal of
an unnecessary supported points format extension, which was
sent in previous versions).

So if Mr. Idrassi was right AND if OpenSSL 1.0.0/1.0.0a/1.0.0b
were the only affected clients, then this is not the best
possible fix.

On the other hand, if some other SSL library would fail if
presented with the new suites (the GCM suites without
ECDSA certs), then their fix is correct and just helps the
old OpenSSL versions by chance.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: 1.0.1j on Windows32 shows error C2027: use of undefined type 'in6_addr'

2014-11-06 Thread Jakob Bohm

On 05/11/2014 20:55, neil carter wrote:

Okay, so what magic script is this?  Is it available in MS VS 6.0?
Sorry, not a developer so part of what everyone is saying is beyond me.

I ran the VCVARS32.bat script.  Previously that's all I've had to do to
prepare the environment for installing OpenSSL.



That would be a script included with the relevant old platform
SDK (usually named SetEnv.Bat), or a set of options set in the
registry when installing that SDK.


Again, this all worked with 1.0.1g and that also included IPv6 support,
didn't it?  I'm trying to understand this.

Thanks!


On 11/5/2014 1:23 PM, Jakob Bohm wrote:

Maybe you forgot to run the batch file that sets the
INCLUDE and LIB environment variables to prepend later
VC 6.0 compatible SDK headers, such as those in
the July 2002 Platform SDK.

The copyright message quoted by Walter H. is just that,
a copyright message acknowledging that some of the
lines in that file were obtained from an old BSD.
That acknowledgement is still present in the June 2014
version of winsock2.h.  Just like the SSLeay copyright
message in most OpenSSL files refers to that original
SSL2/3 library by EAY and TJH.

On 05/11/2014 19:53, neil carter wrote:

So then why was 1.0.1g able to compile without these errors?



On 11/5/2014 12:48 PM, Walter H. wrote:

On 05.11.2014 19:27, neil carter wrote:

Sorry, typo - s/b 'VCVARS32.bat'

So are you implying that MS Visual Studio 6.0 might be the issue in
that it might not have built-in code with IPv6 headers?

yes, definitely

WINSOCK2.H contains this:

/*
 * Constants and structures defined by the internet system,
 * Per RFC 790, September 1981, taken from the BSD file netinet/in.h.
 */

by the way: Visual C++ is from 1998, also an old ancient compiler
we have 2014 ;-)






Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded






Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Why public key SHA1 is not same as Subject key Identifier

2014-11-05 Thread Jakob Bohm

On 05/11/2014 09:11, Jerry OELoo wrote:

Hi All:
As I know, when calculating the SHA1 of the public key in a
certificate, the value is equal to the Subject Key Identifier in the
certificate. I verified this and found that some websites follow this.

But when I go to www.google.com website, I find the leaf certificate
and intermediate certificate is ok, but root CA certificate (GeoTrust
Global CA) is not.

For Geo Trust Global CA certificate.
Public key:
30 82 01 0a 02 82 01 01 00 da cc 18 63 30 fd f4 17 23 1a 56 7e 5b df
3c 6c 38 e4 71 b7 78 91 d4 bc a1 d8 4c f8 a8 43 b6 03 e9 4d 21 07 08
88 da 58 2f 66 39 29 bd 05 78 8b 9d 38 e8 05 b7 6a 7e 71 a4 e6 c4 60
a6 b0 ef 80 e4 89 28 0f 9e 25 d6 ed 83 f3 ad a6 91 c7 98 c9 42 18 35
14 9d ad 98 46 92 2e 4f ca f1 87 43 c1 16 95 57 2d 50 ef 89 2d 80 7a
57 ad f2 ee 5f 6b d2 00 8d b9 14 f8 14 15 35 d9 c0 46 a3 7b 72 c8 91
bf c9 55 2b cd d0 97 3e 9c 26 64 cc df ce 83 19 71 ca 4e e6 d4 d5 7b
a9 19 cd 55 de c8 ec d2 5e 38 53 e5 5c 4f 8c 2d fe 50 23 36 fc 66 e6
cb 8e a4 39 19 00 b7 95 02 39 91 0b 0e fe 38 2e d1 1d 05 9a f6 4d 3e
6f 0f 07 1d af 2c 1e 8f 60 39 e2 fa 36 53 13 39 d4 5e 26 2b db 3d a8
14 bd 32 eb 18 03 28 52 04 71 e5 ab 33 3d e1 38 bb 07 36 84 62 9c 79
ea 16 30 f4 5f c0 2b e8 71 6b e4 f9 02 03 01 00 01

Public Key SHA1: 00:f9:2a:c3:41:91:b6:c9:c2:b8:3e:55:f2:c0:97:11:13:a0:07:20

Subject Key Identifier: c0 7a 98 68 8d 89 fb ab 05 64 0c 11 7d aa 7d
65 b8 ca cc 4e

As you can see above, the Public Key SHA1 is not the same as the Subject Key Identifier.

What's wrong here? Thanks a lot!

The subject key identifier is any short value that the CA can come
up with to use as a kind of alternative serial number of the
certificate.  It could be a checksum of the public key (using any
algorithm), or it could just be a reference to an internal CA
database.  The only important thing is that in some cases, the
certificate may be referenced by this number and not the full
subject distinguished name.

Using SHA1(public key) used to be a common practice, but as use of
SHA1 is being phased out in favor of new hash algorithms with longer
values, CAs are going to start to use other formulas for making up
unique key identifiers, and most of them are not going to reveal
their chosen formula.

One formula that should work far into the future could be
AES-encrypt(some-unpublished-key, concat(sequential CA id,
sequential database ID)); this will fit nicely in just 16 bytes
(128 bits) yet be guaranteed unique within a CA company
regardless of hash collisions.  Cracking that AES key would gain
an attacker very little (except perhaps a way to enumerate
certificates using lookup mechanisms that require knowledge of
the SKI as proof of need to know).
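
A sketch of that formula (my illustration of the idea, not something
any CA is known to use): one AES block exactly fits a 64-bit CA id
concatenated with a 64-bit database id:

#include <openssl/aes.h>

/* ski_out receives 16 bytes usable as a subject key identifier */
static void make_ski(const unsigned char secret_key[16],
                     unsigned long long ca_id,
                     unsigned long long db_id,
                     unsigned char ski_out[16])
{
    unsigned char block[16];
    AES_KEY ks;
    int i;

    for (i = 0; i < 8; i++) {  /* big-endian concat(ca_id, db_id) */
        block[i]     = (unsigned char)(ca_id >> (56 - 8 * i));
        block[8 + i] = (unsigned char)(db_id >> (56 - 8 * i));
    }
    AES_set_encrypt_key(secret_key, 128, &ks);
    AES_encrypt(block, ski_out, &ks);
}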


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: 1.0.1j on Windows32 shows error C2027: use of undefined type 'in6_addr'

2014-11-05 Thread Jakob Bohm

Maybe you forgot to run the batch file that sets the
INCLUDE and LIB environment variables to prepend later
VC 6.0 compatible SDK headers, such as those in
the July 2002 Platform SDK.

The copyright message quoted by Walter H. is just that,
a copyright message acknowledging that some of the
lines in that file were obtained from an old BSD.
That acknowledgement is still present in the June 2014
version of winsock2.h.  Just like the SSLeay copyright
message in most OpenSSL files refers to that original
SSL2/3 library by EAY and TJH.

On 05/11/2014 19:53, neil carter wrote:

So then why was 1.0.1g able to compile without these errors?



On 11/5/2014 12:48 PM, Walter H. wrote:

On 05.11.2014 19:27, neil carter wrote:

Sorry, typo - s/b 'VCVARS32.bat'

So are you implying that MS Visual Studio 6.0 might be the issue in 
that it might not have built-in code with IPv6 headers?

yes, definitely

WINSOCK2.H contains this:

/*
 * Constants and structures defined by the internet system,
 * Per RFC 790, September 1981, taken from the BSD file netinet/in.h.
 */

by the way: Visual C++ is from 1998, also an old ancient compiler
we have 2014 ;-)






Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: 1.0.1j on Windows32 shows error C2027: use of undefined type 'in6_addr'

2014-11-05 Thread Jakob Bohm

(Let's keep this on list)

The headers that shipped with Visual Studio 6.0 did not cover the
IPv6 parts of Winsock2.  They were however included in the Visual
Studio 6.0 compatible platform SDKs released later, such as the
ones from at least July 2002 to sometime in 2003 or 2004.  The
April 2005 platform SDK officially had limited support for Visual
Studio 6.0, although the problems were not that large.  Later
SDKs were even less compatible with Visual Studio 6.0.

Additionally, the inclusion of Visual J++ with Visual Studio 6.0
meant that Microsoft had to remove it from all distribution
channels due to the settlement with Sun over the Java
incompatibilities in the Microsoft Java VM.

So if you have any need for Visual C++ 6.0 (e.g. to compile NT 4.0
compatible device drivers), then you should keep your copy safe
as you can't easily get a new one.

Conclusion:

If you are compiling with Visual C++ 6.0, then you need to add a
later platform SDK to the INCLUDE and (possibly) LIB paths in the
environment before compiling OpenSSL.  Chances are that you
probably have one of those SDKs lying around already.
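
A sketch of what that looks like in the build window (the SDK install
path is illustrative, it depends on where the Platform SDK was put;
the OpenSSL steps are the usual ones):

  call VCVARS32.BAT
  set INCLUDE=C:\Program Files\Microsoft SDK\include;%INCLUDE%
  set LIB=C:\Program Files\Microsoft SDK\lib;%LIB%
  perl Configure VC-WIN32 --prefix=c:\openssl-1.0.1j
  ms\do_nasm
  nmake -f ms\ntdll.mak

The SDK directories must come first, so that its newer IPv6-capable
Winsock headers win over the compiler's bundled 1998 copies.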

On 05/11/2014 19:27, neil carter wrote:

Sorry, typo - s/b 'VCVARS32.bat'

So are you implying that MS Visual Studio 6.0 might be the issue in 
that it might not have built-in code with IPv6 headers? Haven't the 
IPv6 pieces of the OpenSSL code been around for a while?  I know I saw 
posts regarding it from several years back in the list archive.


Thanks!




On 11/5/2014 12:13 PM, Walter H. wrote:

On 05.11.2014 18:47, neil carter wrote:
I'm trying to install the 1.0.1j version on a Windows 2003 server 
(32-bit), with MS Visual Studio 6.0, nasm 2.11.05, and ActiveState 
perl v5.16.3.


Steps involved include running the VCVARS21.BAT script, ' perl 
Configure VC-WIN32 --prefix=c:\openssl-1.0.1j', 'ms\do_nasm.bat', 
and finally 'nmake -f ms\ntdll.mak'. Everything looks normal/good 
until the last step, which ends in the following:



VCVARS21.BAT = Visual C++ 2.1?
if yes, you should throw away the old ancient compiler of the early 
beginning of WinNT ... as of 1994;

and get the new actual Platform SDK from Microsoft ...

 .\apps\s_cb.c(803) : error C2027: use of undefined type 'in6_addr'
 .\apps\s_cb.c(803) : see declaration of 'in6_addr'
 .\apps\s_cb.c(836) : error C2027: use of undefined type 'in6_addr'
 .\apps\s_cb.c(836) : see declaration of 'in6_addr'
 .\apps\s_cb.c(884) : error C2027: use of undefined type 'in6_addr'
 .\apps\s_cb.c(884) : see declaration of 'in6_addr'
 .\apps\s_cb.c(917) : error C2027: use of undefined type 'in6_addr'
 .\apps\s_cb.c(917) : see declaration of 'in6_addr'
 NMAKE : fatal error U1077: 'cl' : return code '0x2'
 Stop.

it seems that you are including ancient SDK headers that are not
capable of IPv6 at all ...







--
Jakob Bohm, CIO, partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark. direct: +45 31 13 16 10

This message is only for its intended recipient, delete if misaddressed.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: OpenSSL Team Keys

2014-11-04 Thread Jakob Bohm

On 04/11/2014 11:30, Matt Caswell wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all

I recently noticed a GPG key on the public key servers supposedly for
my name and email address that I did not create or control (key id
9708D9A2). As I sometimes sign OpenSSL releases I thought it was worth
reminding everyone that the only keys that should be trusted when
verifying an OpenSSL release are those linked to from the website:

https://www.openssl.org/about/

Matt
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUWKq1AAoJENnE0m0OYESR+yUH/irTjzn6PUooKHL760y/UTgB
65/R16Ap2n6LfmBGg3acBjaTodZRMSzHM5nkL8ya3fxI89KYP7/F5fQDqfwpjQHx
K7s4EImchoRirYmfhKrQzNYfo42gq76EDCIsOvtRAtcvU3ojC5gxTPj+1F61Azdb
e4AQCgG1BvfYNslED4IAAtv9qAZB9sPHUpj1ZIMEyh+mquHkYFzNYGDL1dIwlOx2
dZZ9ddWBHdsHyPqKcFsvAbn43C+DGxYm/ZJXE4NP/yQe6UMAPFXCZcTuyNgIgmw0
kzoNbPvlg9t/CrhzDnHhy3umUysKjTWBFVRLuU68a0uJb2VSqZqM8k6TuQPs/E8=
=x8a5
-END PGP SIGNATURE-

I feared something like that was going to happen ever since I
noticed how sloppy you were getting with the choice of signing keys.

Noted weaknesses in current signing procedures:

1. The list of applicable signing keys included in the tarballs and
elsewhere only lists the fingerprints, not the full key blobs,
making it a lot trickier to get and check the keys without getting
random other keys from the keyservers.

2. The list seems kind of long; are all these people really
authorized to decide which release tarballs are real?

3. The list contains a lot of old MD5 fingerprints and seemingly
no fingerprint with modern algorithms, such as Whirlpool or SHA-2.
The list on the about page doesn't even provide full fingerprints
for most keys, and the links point to key server searches, not
actual local key blobs.

4. Some releases are signed with keys not on the list in the
previous tarball, breaking the chain of trust.

5. As an SSL/X.509/SMIME library, you have a strange preference
for PGP/GPG keys.

6. The SSL certificate for www.openssl.org is of the lowest trust
grade available (domain validated only).  Surely you are in a
position to get a certificate backed by much more thorough
identity checks, given your position in the SSL pecking order.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Expected results for testing Poodlebug using OpenSSL CLI

2014-10-30 Thread Jakob Bohm

On 29/10/2014 21:14, Paul Konen wrote:


Hi, I found on the web a way to use your tool to test for the new 
vulnerability called Poodlebug.


The command is: openssl s_client -connect ip:port -ssl3

I feel that I have tomcat configured to use TLS only and this is the 
response back.


When I execute this against a box that isn’t restricted to TLS, I see 
the certificate information returned.


Is the above window showing that it was NOT able to make a SSLv3
connection?




You are making a very fundamental mistake here:  Refusing SSLv3 is
not the only way to secure a server against the POODLE attack (not
poodlebug, it is not a bug but an attack against known old bugs).

There are at least 3 ways:

A. Simply turning off SSLv3 connections, and lose support for
  older clients that cannot be upgraded to support TLS.  This is
  what you are testing for.

B. Support SSLv3, but implement the TLS_FALLBACK_SCSV system to
  ensure that up to date web browsers cannot be forced to use a
  lower SSL/TLS version than necessary.  This protects against
  the first half of the POODLE attack except when talking to old
  browsers that lack the new security features.

C. Support SSLv3, but limit it to RC4 only.  Continue to support
  better ciphers when the connection uses higher TLS versions
  that don't use the old RSADSI BSAFE padding that was part of
  SSLv3.  This is vulnerable to the cryptographic weakness of
  RC4, but not to any of the attacks against the SSLv3 ways of
  using block ciphers.

Currently, OpenSSL apparently has no obvious way to configure it
to do something like solution C, but servers using other SSL/TLS
implementations might do this, so any test tool needs to accept
it as a solution.
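For reference, a minimal sketch of solution A with the OpenSSL API
(an illustration under assumed context setup, not taken from any
poster's code; error handling omitted):

  #include <openssl/ssl.h>

  /* Version-flexible server context that refuses SSLv2 and SSLv3
   * handshakes, so only TLSv1.0 and newer are accepted (option A). */
  SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
  if (ctx != NULL)
      SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);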

By the way, I have yet to hear of any other SSL implementation
doing anything to release fixes that enable solution B.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-24 Thread Jakob Bohm

On 24/10/2014 13:33, Aditya Kumar wrote:

Hi All,

Thanks for your detailed responses, specially Florian Weimer and Matt 
Caswell. For the benefit of everyone and me, I am summarizing the 
thoughts which I have understood through all your replies. Please 
correct me wherever I am wrong.


To summarize:
   1.  Best way to prevent POODLE attack is to disable SSLV3 on both client 
and server side.
2.  If for some reason, you cannot disable SSLv3 on server side even if Server 
support TLS 1.0 or higher(e.g server having SSLV23 set), Server definitely need 
to be patched to prevent fallback. Once server is patched, it will prevent 
updated clients from fallback attack.
3.  After server is patched with OpenSSL FALLBACK flag fix, Server’s behavior 
will not change for the clients which do not send FALLBACK flag in their 
clienthello request. Server will continue to work with older client as usual. 
Only if an updated client sends FALLBACK flag into its clienthello request, 
server will be able to prevent fallback.
4.  If for some reason, client has to keep SSLV3 enable even if it supports TLS 
1.0 or higher version, client need to patch itself and set FALLBACK flag so 
that it does not come under fallback attack.

WRONG, See below

5.  Clients should never set protocol as SSLV23 to support both SSL3.0 and TLS 
Servers. Clients should always explicitly first try to connect using its 
highest supported version(TLS1.0 or higher) and if the server rejects the 
connection, then clients should explicitly try to connect using next supported 
lower version protocol.

WRONG, If client simply calls the SSL23_ (aka SSLAUTONEGOTIATE_) with
options to allow both SSLv3 and higher TLSvX.XX, it is already secure
and will never need to send the fallback flag.

6.  While connecting to server using higher protocol like TLS1 or higher, 
client should set FALLBACK flag so that server do not allow automatically 
downgrade to a lower version protocol.

WRONG, Client should always try its full range of enabled SSL/TLS
versions in one attempt, in which case the protocols themselves
(even without the latest patch) will automatically detect and
prevent a fallback MiTM attack.

However if client needs to work around some (extremely rare) old
SSLv3 servers which completely fail to accept a request for (SSLv3
or TLSv1+, the best you have), that client may use a workaround of:

Step 6.1: Attempt to connect with SSLAUTONEGOTIATE_(SSLv3 up to
TLSv1.2).  Do not set/send FALLBACK flag.

Step 6.2: If Step 6.1 fails (either because of old broken server or
because of new fallback MiTM attack), try again with SSLV3ONLY_(),
and set the FALLBACK flag to tell the server that the maximum
version specified in this call is not the true maximum version of
the client (in case it is not an old server, but a MiTM attack
trying to trick this fallback code).

Step 6.3: Step 6.2 could be extended to do retries with TLSv1.1,
then TLSv1.0, then SSLv3 etc. all of which would need the FALLBACK
flag because the client would actually have wanted TLSv1.2 if it
could get it.
**

Few questions which still remains in my mind are:
As part of my question’s reply, Florian replied that following:
*Unconditionally setting SSL_MODE_SEND_FALLBACK_SCSV (if by default or after 
user configuration) is a time bomb—your client application will break once the 
server implements TLS 1.3 (or any newer TLS version than what is supported by 
the OpenSSL version you use).  Extremely few applications have to deal with 
SSL_MODE_SEND_FALLBACK_SCSV.*
Why will the client application break if the FALLBACK flag is set and the
server is upgraded to TLS 1.3 or a higher version? Isn't it the server that
should take care of this flag when it is updated with a higher version protocol?

Note: If client calls with SSLAUTONEGOTIATE_(SSLvX up to TLSv1.1)
and sets the FALLBACK flag, then a server which understands
TLSv1.2 will read this as "I know this call says I only understand
up to TLSv1.1, but that is only because I think you refused my
attempt to use TLSv1.2 or higher", and therefore the server will
REJECT the connection as if a MiTM attack is in progress.

Note 2: If a client calls with SSLAUTONEGOTIATE_(SSLvX up to
TLSv1.2) and sets the FALLBACK flag, then a server which understands
TLSv1.3 will read this as "I know this call says I only understand
up to TLSv1.2, but that is only because I think you refused my
attempt to use TLSv1.3 or higher", and therefore the server will
REJECT the connection as if a MiTM attack is in progress.

Please let me know your opinion on this.
Once again thanks everyone for your response.
-Aditya


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-24 Thread Jakob Bohm

On 24/10/2014 18:19, Aditya Kumar wrote:
Thanks Jakob for correcting my understanding. In short, can I conclude 
the following about FALLBACK flag.


1. Whenever client is sending the FALLBACK flag in its request, an 
updated Server will interpret it that this client supports a higher 
version but since that higher version protocol request was refused, 
it's trying to connect using a lower version protocol.
2. The FALLBACK flag should only be set to communicate to those 
extremely rare old SSLv3 servers which completely fail to accept a 
request for (SSLv3 or TLSv1+, the best client have). In that case, 
first client should attempt to connect with SSLAUTONEGOTIATE and if it 
fails, then connect with SSLV3 FALLBACK enabled.

Much simpler: The FALLBACK flag should be set only to communicate that
the client has activated its manual fall back code (if any).  If the
client doesn't contain manual fallback code, it doesn't need to do
anything.
3. Point 2 holds true even for the cases where a client connecting
using TLS 1.2 fails and then needs to connect using TLS 1.1,
TLS 1.0 or SSLv3.0. Then the client should attempt the next connections
with the FALLBACK flag set.

Yes, SSLv3 is just an example, which happens to be important right now
because of poodle.


Hope this will clear all the confusions.

-Aditya

On Fri, Oct 24, 2014 at 5:35 PM, Jakob Bohm jb-open...@wisemo.com wrote:


On 24/10/2014 13:33, Aditya Kumar wrote:

Hi All,

Thanks for your detailed responses, specially Florian Weimer
and Matt Caswell. For the benefit of everyone and me, I am
summarizing the thoughts which I have understood through all
your replies. Please correct me wherever I am wrong.

To summarize:
   1.  Best way to prevent POODLE attack is to disable
SSLV3 on both client and server side.
2.  If for some reason, you cannot disable SSLv3 on server
side even if Server support TLS 1.0 or higher(e.g server
having SSLV23 set), Server definitely need to be patched to
prevent fallback. Once server is patched, it will prevent
updated clients from fallback attack.
3.  After server is patched with OpenSSL FALLBACK flag fix,
Server’s behavior will not change for the clients which do not
send FALLBACK flag in their clienthello request. Server will
continue to work with older client as usual. Only if an
updated client sends FALLBACK flag into its clienthello
request, server will be able to prevent fallback.
4.  If for some reason, client has to keep SSLV3 enable even
if it supports TLS 1.0 or higher version, client need to patch
itself and set FALLBACK flag so that it does not come under
fallback attack.

WRONG, See below

5.  Clients should never set protocol as SSLV23 to support
both SSL3.0 and TLS Servers. Clients should always explicitly
first try to connect using its highest supported
version(TLS1.0 or higher) and if the server rejects the
connection, then clients should explicitly try to connect
using next supported lower version protocol.

WRONG, If client simply calls the SSL23_ (aka SSLAUTONEGOTIATE_) with
options to allow both SSLv3 and higher TLSvX.XX, it is already secure
and will never need to send the fallback flag.

6.  While connecting to server using higher protocol like TLS1
or higher, client should set FALLBACK flag so that server do
not allow automatically downgrade to a lower version protocol.

WRONG, Client should always try its full range of enabled SSL/TLS
versions in one attempt, in which case the protocols themselves
(even without the latest patch) will automatically detect and
prevent a fallback MiTM attack.

However if client needs to work around some (extremely rare) old
SSLv3 servers which completely fail to accept a request for (SSLv3
or TLSv1+, the best you have), that client may use a workaround of:

Step 6.1: Attempt to connect with SSLAUTONEGOTIATE_(SSLv3 up to
TLSv1.2).  Do not set/send FALLBACK flag.

Step 6.2: If Step 6.1 fails (either because of old broken server or
because of new fallback MiTM attack), try again with SSLV3ONLY_(),
and set the FALLBACK flag to tell the server that the maximum
version specified in this call is not the true maximum version of
the client (in case it is not an old server, but a MiTM attack
trying to trick this fallback code).

Step 6.3: Step 6.2 could be extended to do retries with TLSv1.1,
then TLSv1.0, then SSLv3 etc. all of which would need the FALLBACK
flag because the client would actually have wanted TLSv1.2 if it
could get it.
**

Few questions which still remains in my mind are:
As part of my question’s reply, Florian replied that following:
*Unconditionally setting

Re: openssl SSL3 vulnerability

2014-10-24 Thread Jakob Bohm

On 24/10/2014 15:53, Pradeep Gudepu wrote:

To my earlier code, I have added these extra flags for client:

SSL_CTX_set_options(ctx, SSL_OP_ALL | SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);

And server also has these same flags set, so that no way client and server can 
communicate on sslv2, sslv3.

But again in logs I see SSL3 is negotiated:

[2014-10-24 18:00:17.063, Info   proxysrv:10684] SSLConfig::Init: SSL 
initiated (OpenSSL 1.0.1j 15 Oct 2014 built on: Mon Oct 20 15:08:32 2014).
[2014-10-24 18:02:11.640, Info   proxysrv:10684] SSLSocket::Callback: 
Handshake done: AES256-SHA  SSLv3 Kx=RSA  Au=RSA  Enc=AES(256)  
Mac=SHA1

Does this really mean SSLv3.0 protocol negotiated?

Or does it just mean SSLv3.x (which includes TLSv1.x)?

Or perhaps SSLv3 compatible cipher suite (which also includes TLSv1.x)?


On server, I have these ciphers set:

::SSL_CTX_set_cipher_list(ctx,
    "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM");

Is there something wrong with these ciphers? What is the best cipher argument
for TLSv1-only communication? I think I need not set ciphers on the client side.

Thanks – Pradeep reddy.
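To see which protocol version was actually negotiated, rather than
the protocol tag printed as part of the cipher suite description, a
minimal sketch (assuming an established SSL *ssl handle; the handle
name is an assumption) is:

  /* SSL_CIPHER_description() labels a suite with the protocol that
   * first defined it (SSLv3 for AES256-SHA), even when TLSv1.x was
   * negotiated.  SSL_get_version() reports the actual protocol. */
  printf("protocol: %s\n", SSL_get_version(ssl));
  printf("cipher:   %s\n", SSL_get_cipher_name(ssl));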


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: SSL_MODE_SEND_FALLBACK_SCSV option

2014-10-22 Thread Jakob Bohm

On 21/10/2014 16:05, Florian Weimer wrote:

* Jakob Bohm:


The purpose of the option is to make totally broken applications a
bit less secure (when they happen to certain servers).  From my

I meant “a bit less insecure”, as Bodo pointed out.

OK, point already taken.



point of view, there is only one really good reason to have this
client-side option—so that you can test the server-side
support. That's why I implemented it for OpenJDK as well.
Application should *never* use it because it does not really solve
anything. If you have fallback code, your application is still
insecure.

No the purpose is to make them more secure by preventing their
(rarely needed) fallback code from being abused by MITM attackers,
but the extra protection only works if the server contains the
corresponding patch.  Basically, if a (patched) server sees that

The key word here is “patched”, a broken-server-supporting application
gets only protection for well-maintained servers—after the Powers That
Be forced server operators to add a patch to better support such
broken-server-supporting applications.  No one will be forced to fix
their insecure, version-intolerant servers, and it is unlikely that
those will ever implement TLS_FALLBACK_SCSV.  It's a bit like telling
people to wear gas masks, instead of taking measures against air
pollution.

I wouldn't be so harsh.  I would say it is like telling people who
still carry cash how to tell the difference between a legitimate old
cash-only business and a fraudulent check-out clerk trying to cheat
them into paying cash that the (credit card accepting) modern shop
will never see.

With the combination of the server and client patches, the broken-
server-supporting code will no longer constitute a risk except when
actually talking to broken servers run by the (certificate verified)
legitimate owners of the requested domain name.

That is a huge reduction of the associated risk, soon the fallback
code will no longer endanger a visit to the (sort of) well run places
that people trust the most.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Please document the new SSL_MODE_SEND_FALLBACK_SCSV

2014-10-19 Thread Jakob Bohm

According to the discussions in other threads, some applications
contain that kind of code, some don't.

Since a lot of people are getting confused, I am asking that the
documentation clarify whether only applications with that kind of code
need to do anything at all.

On 17-10-2014 18:25, Nou Dadoun wrote:


Since this is the users list (as opposed to the dev list) I’m a little 
confused about point 2 there; my understanding from the sketchy 
descriptions I’ve read is that the fallback to a lower version is 
automatically done by openssl on connect failure as opposed to 
something similar to the code snippet below being present in 
application code. (i.e. I’m not sure whether you intend the “client” 
in that description to be openssl library code or user application 
code which calls into the library).  Thanks … N


*From:*owner-openssl-us...@openssl.org 
[mailto:owner-openssl-us...@openssl.org] *On Behalf Of *Jakob Bohm

*Sent:* October-17-14 7:59 AM
*To:* openssl-users@openssl.org
*Subject:* Please document the new SSL_MODE_SEND_FALLBACK_SCSV

The new SSL_MODE_SEND_FALLBACK_SCSV option is badly documented in
the wiki and man pages, which is going to cause a lot of problems
when people everywhere rush to incorporate the security fixes into
their code.

In particular, I find the following to be fully undocumented (except
by trying to read the code):

1. SSL_MODE_SEND_FALLBACK_SCSV was introduced in versions 1.0.1j,
  1.0.0o and 0.9.8zc, not way back when SSL_CTX_set_mode() itself
  was introduced.  The information at the bottom of that manpage
  needs to say that, like it already does for SSL_MODE_AUTO_RETRY.

2. [ THIS IS A GUESS ]
   SSL_MODE_SEND_FALLBACK_SCSV should only be set if the client
  contains code like the following:

  /* pseudo code */
  SSL_try_connect_(supporting versions x..y)
  if (failed) {
 SSL_try_connect_(supporting versions x..y-1)
 if (failed) {
SSL_try_connect_(supporting versions x..y-2)
... (etc.)
 }
  }

  In which case that code needs to change to

  /* pseudo code */
  SSL_try_connect_(supporting versions x..y)
  /* No SSL_MODE_SEND_FALLBACK_SCSV when trying with highest
attempted y */
  if (failed) {
 SSL_try_connect_(supporting versions x..y-1, 
SSL_MODE_SEND_FALLBACK_SCSV)

 if (failed) {
SSL_try_connect_(supporting versions x..y-2, 
SSL_MODE_SEND_FALLBACK_SCSV)

... (etc.)
 }
  }

  (Note: The Internet draft says (in very technical terms) when an
  SSL client should send the message, not when an application should
  tell any given SSL library to do so, because that answer is expected
  to differ between OpenSSL, Mozilla NSS, Microsoft SCHANNEL,
  MatrixSSL and other SSL libraries).

3. Unlike the other SSL_MODE_ options, SSL_MODE_SEND_FALLBACK_SCSV
 is not about an internal API behavior.

4. Why this isn't SSL_OPTION_SEND_FALLBACK_SCSV (there is probably
 a good reason, but it isn't documented).



--
Jakob Bohm, CIO, partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark. direct: +45 31 13 16 10

This message is only for its intended recipient, delete if misaddressed.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Please document the new SSL_MODE_SEND_FALLBACK_SCSV

2014-10-17 Thread Jakob Bohm

The new SSL_MODE_SEND_FALLBACK_SCSV option is badly documented in
the wiki and man pages, which is going to cause a lot of problems
when people everywhere rush to incorporate the security fixes into
their code.

In particular, I find the following to be fully undocumented (except
by trying to read the code):

1. SSL_MODE_SEND_FALLBACK_SCSV was introduced in versions 1.0.1j,
  1.0.0o and 0.9.8zc, not way back when SSL_CTX_set_mode() itself
  was introduced.  The information at the bottom of that manpage
  needs to say that, like it already does for SSL_MODE_AUTO_RETRY.

2. [ THIS IS A GUESS ]
   SSL_MODE_SEND_FALLBACK_SCSV should only be set if the client
  contains code like the following:

  /* pseudo code */
  SSL_try_connect_(supporting versions x..y)
  if (failed) {
 SSL_try_connect_(supporting versions x..y-1)
 if (failed) {
SSL_try_connect_(supporting versions x..y-2)
... (etc.)
 }
  }

  In which case that code needs to change to

  /* pseudo code */
  SSL_try_connect_(supporting versions x..y)
  /* No SSL_MODE_SEND_FALLBACK_SCSV when trying with highest
     attempted y */
  if (failed) {
 SSL_try_connect_(supporting versions x..y-1, 
SSL_MODE_SEND_FALLBACK_SCSV)

 if (failed) {
SSL_try_connect_(supporting versions x..y-2, 
SSL_MODE_SEND_FALLBACK_SCSV)

... (etc.)
 }
  }

  (Note: The Internet draft says (in very technical terms) when an
  SSL client should send the message, not when an application should
  tell any given SSL library to do so, because that answer is expected
  to differ between OpenSSL, Mozilla NSS, Microsoft SCHANNEL,
  MatrixSSL and other SSL libraries).

3. Unlike the other SSL_MODE_ options, SSL_MODE_SEND_FALLBACK_SCSV
 is not about an internal API behavior.

4. Why this isn't SSL_OPTION_SEND_FALLBACK_SCSV (there is probably
 a good reason, but it isn't documented).
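To make guess 2 concrete for OpenSSL 1.0.1j and later: a minimal
sketch of the retry attempt only (the first, full-strength attempt
must not set the mode; this caps one retry at TLSv1.1 purely as an
illustration, it is not a complete fallback loop):

  /* Retry after the TLSv1.2 attempt failed: cap the offered version
   * below the client's true maximum, and signal that via
   * TLS_FALLBACK_SCSV so a patched server can detect a forced
   * downgrade. */
  SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
  SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_TLSv1_2);
  SSL_CTX_set_mode(ctx, SSL_MODE_SEND_FALLBACK_SCSV);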

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Quick question about the poodle fix

2014-10-17 Thread Jakob Bohm

On 17/10/2014 16:37, dol o wrote:


Dear Devs,

Here is the blogpost of the HTTPS breakdown: 
http://www.moserware.com/2009/06/first-few-milliseconds-of-https.html
From what I understand, the Client hello is the first part of the ssl 
handshake that is not encrypted/HMAC’d


According to https://www.openssl.org/~bodo/ssl-poodle.pdf they recommend
that clients (Client Hello) send the value 0x56, 0x00 (TLS_FALLBACK_SCSV)
and the servers should accept the value 0x56, 0x00 (TLS_FALLBACK_SCSV),
but this stuff is transmitted in plaintext which can potentially be
modified by an attacker. Can the vulnerable SSL connection still occur if
an attacker removes the TLS_FALLBACK value set by the client? Let me know
what you think when you get a chance.



No, while not encrypted, the Client Hello message will be
signed/hashed later in the handshake, ensuring that the connection
will fail if it is modified; otherwise much worse could be done
(such as removing all the strong ciphers from that same list, thus
downgrading the connection to 40-bit encryption).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Wiki bug: Documentation_Index generated wrongly

2014-10-17 Thread Jakob Bohm

The code/script which generates
http://wiki.openssl.org/index.php/Documentation_Index
from the manpages looks like it contains two bugs:

1. If a manpage lists another function under "see also", that other
  function is listed as a subitem of that first manpage, causing
  lots of duplicate entries in the list.

2. If a function is listed in the NAME section of a manpage (as it
  should), but not in any see also of other manpages, it is not
  listed at all.  This affects functions such as SSL_set_mode(),
  because all the other manpages only does a see also for the
  combined SSL_CTX_set_mode() and SSL_set_mode() manpage.

In other words, the script looks like it is indexing the SEE ALSO
section, rather than the TITLES and NAME sections (which is what
man(1) on *n*x does).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Browsers do not import chained certificate.

2014-10-09 Thread Jakob Bohm

On 09/10/2014 08:17, dE wrote:

Hi!

I'm trying to make a certificate chain using the following commands --

openssl genpkey -out issuer.key -algorithm rsa
openssl genpkey -out intermediate.key -algorithm rsa
openssl req -new -key issuer.key -out issuer.csr
openssl req -new -key intermediate.key -out intermediate.csr
openssl x509 -req -days 365 -in issuer.csr -signkey issuer.key -out 
issuer.pem
openssl x509 -req -days 360 -in intermediate.csr -CA issuer.pem -CAkey 
issuer.key -CAcreateserial -out intermediate.pem


After importing issuer.key to chrome/FF when I try to import 
intermediate.pem, I get errors. Namely --


This is not a certificate authority certificate, so it can't be 
imported into the certificate authority list. from FF and 
intermediate: Not a Certification Authority from Chrome.


Other intermediate certificates as provided by websites work fine.

Make sure your intermediary certificate is marked as a CA in the
x509 properties signed by the issuer.  Otherwise, you have just
created an ordinary (non-CA) certificate issued directly by issuer.

To check this look at the output from

   openssl x509 -noout -text -in intermediate.pem

and compare to the result from an intermediary certificate that
works.  The important lines are those that say "CA" or
"Certificate" in their text.

For example, here are some values from an intermediary certificate
from GlobalSign (omitting specifics and using example URLs):

X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 CRL Distribution Points:
URI:http://c.example.com/crl/issuer.crl
Authority Information Access:
OCSP - URI:http://ocsp.example.com/issuerCA
Netscape Cert Type:
SSL CA
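With the openssl x509 -req commands above, such extensions are
normally supplied via -extfile.  If you are driving the library from
C instead, a minimal sketch of marking a certificate under
construction as a CA (function name and pathlen are illustrative
assumptions) could be:

  #include <openssl/x509v3.h>

  /* Add a critical basicConstraints CA:TRUE extension, so browsers
   * will accept the result as an intermediary CA certificate. */
  int mark_as_ca(X509 *cert, X509 *cacert)
  {
      X509V3_CTX ctx;
      X509_EXTENSION *ex;

      X509V3_set_ctx(&ctx, cacert, cert, NULL, NULL, 0);
      ex = X509V3_EXT_conf_nid(NULL, &ctx, NID_basic_constraints,
                               "critical,CA:TRUE,pathlen:0");
      if (ex == NULL)
          return 0;
      X509_add_ext(cert, ex, -1);
      X509_EXTENSION_free(ex);
      return 1;
  }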


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Know Extended Key Usage

2014-10-08 Thread Jakob Bohm

I think you can safely omit the middle openssl command.
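For a purely programmatic check, without any shell pipeline, a
minimal C sketch could also be used (the helper name is an
assumption; cert is an X509 * already obtained, e.g. from
SSL_get_peer_certificate()):

  #include <stdio.h>
  #include <openssl/x509v3.h>
  #include <openssl/objects.h>

  /* Print the Extended Key Usage OIDs of a certificate, if any. */
  void print_eku(X509 *cert)
  {
      EXTENDED_KEY_USAGE *eku;
      char buf[128];
      int i;

      eku = X509_get_ext_d2i(cert, NID_ext_key_usage, NULL, NULL);
      if (eku == NULL)
          return;  /* no Extended Key Usage extension present */
      for (i = 0; i < sk_ASN1_OBJECT_num(eku); i++) {
          OBJ_obj2txt(buf, sizeof(buf), sk_ASN1_OBJECT_value(eku, i), 0);
          printf("%s\n", buf);
      }
      sk_ASN1_OBJECT_pop_free(eku, ASN1_OBJECT_free);
  }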

On 08/10/2014 09:28, Akash Jain wrote:

Thanks Lewis !

I also used -

openssl s_client -showcerts -connect google.com:443 </dev/null | \
openssl x509 -outform PEM | openssl x509 -noout -text | \
grep -A1 "X509v3 Extended Key Usage"


On Tue, Oct 7, 2014 at 11:40 PM, Lewis Rosenthal
lgrosent...@2rosenthals.com wrote:


Hi, Akash...

On 10/08/2014 01:40 AM, Akash Jain wrote:

HI,

How can I know the Extended Key Usage parameters of a remote
SSL enabled site using OpenSSL ?

Does this help:

https://www.madboa.com/geek/openssl/#cert-retrieve

You could modify the one script there to something like:

#!/bin/sh
#
for CERT in \
  www.somesite.tld:443
do
  echo |\
  openssl s_client -connect ${CERT} 2>/dev/null |\
  sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' |\
  openssl x509 -noout -text
done

and filter the output of the -text param.

It's interesting that I can't seem to hit on a specific option to
dump just the extended key usage data.

Actually, as I was drafting this, I thought that perl might be a
more elegant way to go. Perhaps have a look at:


http://cpansearch.perl.org/src/MIKEM/Net-SSLeay-1.47/examples/x509_cert_details.pl

Anyone else have a suggestion?

Cheers

-- 
Lewis

-
Lewis G Rosenthal, CNA, CLP, CLE, CWTS, EA
Rosenthal & Rosenthal, LLC  www.2rosenthals.com
visit my IT blog  www.2rosenthals.net/wordpress
IRS Circular 230 Disclosure applies  see www.2rosenthals.com
-







Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Know Extended Key Usage

2014-10-08 Thread Jakob Bohm

Yep, middle of 3 openssl commands in the pipeline...

On 08/10/2014 16:56, Lewis Rosenthal wrote:

Hi, all...

Actually, Jakob, I think it's the second one (the first one after the 
pipe) which can come out, i.e.:


openssl s_client -showcerts -connect google.com:443 </dev/null | \
openssl x509 -noout -text | grep -A1 "X509v3 Extended Key Usage"


which seems to produce a little less noise, but it's still not down to 
a single line of output. Still, it's more elegant than what I cited, I 
think.


Cheers

On 10/08/2014 08:43 AM, Jakob Bohm wrote:

I think you can safely omit the middle openssl command.

On 08/10/2014 09:28, Akash Jain wrote:

Thanks Lewis !

I also used -

openssl s_client -showcerts -connect google.com:443 </dev/null | \
openssl x509 -outform PEM | openssl x509 -noout -text | \
grep -A1 "X509v3 Extended Key Usage"


On Tue, Oct 7, 2014 at 11:40 PM, Lewis Rosenthal
lgrosent...@2rosenthals.com wrote:


Hi, Akash...

On 10/08/2014 01:40 AM, Akash Jain wrote:

HI,

How can I know the Extended Key Usage parameters of a remote
SSL enabled site using OpenSSL ?

Does this help:

https://www.madboa.com/geek/openssl/#cert-retrieve

You could modify the one script there to something like:

#!/bin/sh
#
for CERT in \
  www.somesite.tld:443
do
  echo |\
  openssl s_client -connect ${CERT} 2>/dev/null |\
  sed -ne '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' |\
  openssl x509 -noout -text
done

and filter the output of the -text param.

It's interesting that I can't seem to hit on a specific option to
dump just the extended key usage data.

Actually, as I was drafting this, I thought that perl might be a
more elegant way to go. Perhaps have a look at:

http://cpansearch.perl.org/src/MIKEM/Net-SSLeay-1.47/examples/x509_cert_details.pl 



Anyone else have a suggestion?

Cheers

-- Lewis
-
Lewis G Rosenthal, CNA, CLP, CLE, CWTS, EA
Rosenthal & Rosenthal, LLC  www.2rosenthals.com
visit my IT blog  www.2rosenthals.net/wordpress
IRS Circular 230 Disclosure applies  see www.2rosenthals.com
-







Enjoy

Jakob





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: best practice for creating a CA cert?

2014-09-29 Thread Jakob Bohm

Out of general interest,

Assuming a low e (such as e=65537) RSA public key, how big is the
cost of going from a 2048 bit to a 4096 bit modulus for an
intermediary CA, given that verifications will significantly
outnumber signings for a CA key?

On 29/09/2014 09:26, Kyle Hamilton wrote:
Generally, a client doesn't bother checking a certificate that's in 
its local trust store. The idea is, if it's in its trusted store, 
there's no need to verify its integrity, because the administrator 
already performed that verification.


Where this might have an impact is if your new certificate is 
cross-certified by another organization's root. You'll have to judge 
for yourself how likely this scenario might be for your environment.


On September 28, 2014 11:59:29 PM PDT, Jason Haar 
jason_h...@trimble.com wrote:


Hi there

Due to the upcoming Google instigated phasing out of SHA-1, I'm looking
at creating a new enterprise CA (ie internal only)

If I just click through the defaults of openssl ca, I'd probably end
up with a 2048bit RSA, SHA-2 (256) cert. So my question is, should I
future proof that by making it 4096bit and maybe SHA-2 (512)? (ie I want
the CA to be viable for 10 years, not 5 years). What is the performance
impact of increasing these values of the CA cert itself? I'd expect to
still only sign 2048-bit, SHA-256 server/client certs - but is there a
real performance downside to making the CA cert itself stronger? I don't
care if the CA takes 30 seconds longer to sign a cert - but I'd really
care if it made a web browser hang when talking to the resultant server
cert ;-)



--
Jakob Bohm, CIO, partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark. direct: +45 31 13 16 10

This message is only for its intended recipient, delete if misaddressed.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Verifying authenticode signature using openssl API

2014-09-22 Thread Jakob Bohm

Ok, look in the SignerInfo structure of the secondary signature.
 There is a separate field (digestEncryptionAlgorithm) indicating
the OID of the signature algorithm.  Look at this and see if it is
different from the value in the outer signature, and look up the
value online to see what it means.
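As a hedged illustration of that kind of check (sig, siglen and
pubkey stand in for values already extracted from the SignerInfo and
the signer's certificate; real code needs full error handling and a
proper DER parse of the result):

  /* For a PKCS#1 v1.5 RSA signature this recovers the signed
   * payload: normally a DER-encoded DigestInfo (digest OID plus
   * hash).  A bare 20-byte result means only the SHA-1 hash was
   * signed, without the DigestInfo wrapper; a PSS signature will
   * not decode this way at all. */
  unsigned char out[512];
  int n = RSA_public_decrypt(siglen, sig, out, pubkey,
                             RSA_PKCS1_PADDING);
  if (n == 35) {
      /* DER DigestInfo with SHA-1: check the OID, then the hash */
  } else if (n == 20) {
      /* raw SHA-1 digest, no DigestInfo wrapper */
  }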

On 22/09/2014 10:24, Prasad Dabak wrote:

Well, I am bit confused here.

I am decrypting the signature using RSA_public_decrypt function 
passing it a public key with RSA_PKCS1_PADDING option.


For the primary signature, I get back a 35-byte value which is inclusive
of the digestAlgorithm. It is in the v1.5 format that you mention.
For the secondary signature, I get back a 20-byte value which matches
byte-for-byte with the digest (SHA1 hash of the signed attributes
si->auth_attr). But it doesn't include the digestAlgorithm.


I understand that the primary and secondary signatures are generated by
different computers belonging to different companies. However, the
fact that the decrypted signature matches the SHA1 hash of the
signed attributes makes me believe that it's probably not a different
algorithm (DSS/DSA and ECDSA) issue, and it doesn't look like a PSS
format issue either.


Thanks.
-Prasad


On Sep 19, 2014, at 10:24 AM, Jakob Bohm jb-open...@wisemo.com wrote:


On 19/09/2014 09:14, Prasad Dabak wrote:
The RFC links helped.
   
I am able to decrypt the encrypted digest and match it with the
DigestInfo as explained in rfc2315.
DigestInfo ::= SEQUENCE {
digestAlgorithm DigestAlgorithmIdentifier,
digest Digest }
   
Digest ::= OCTET STRING
   
I typically get back a 35-byte decrypted digest which matches
the sequence above.

I am also able to validate counterSignatures in similar fashion.
   
Now I am trying this with various Authenticode executables and one
small issue that I found is: For some authenticode executables, the
counterSignature encryption only considers the bytes of the Digest
OCTET_STRING, i.e. it does not consider the digestAlgorithm field.
Because of this, the decrypted counterSignature is 20 bytes long
(size of sha1 hash) instead of the 35 bytes mentioned earlier. It
does match the bytes of the Digest OCTET_STRING.

Is this expected behavior? How do I programmatically check this
behavior? If the size of the decrypted counterSignature is equal to
the size of the hash, assume that the digestAlgorithm field is not
considered?

   
Decrypting an RSA signature should produce a byte string almost
as long as the RSA key length, e.g. 127 bytes for 1024 bits, 255
bytes for 2048 bits etc.

Next step is to check if those 127/255/... bytes are formatted
according to the appropriate portion of PKCS#1, which specifies
TWO different formats, the old v1.5 format which is mostly the
DigestAlgorithm OID and the digest packed into a simple ASN.1
structure and then padded, and the new PSS format, where the
hash is combined with a random value using a formula which you
can only reverse if you know what the digest should be.

I suspect you may be encountering both formats, since the
countersignature and the primary signature are generated by
different computers belonging to different companies (the
countersignature is generated by a server owned and run be the CA,
the primary signature is generated by the manufacturer and/or
Symantec).

You also need to consider that other signature algorithms such as
DSS/DSA and ECDSA might be used, as specified in the certificates
used for the signatures.

Note: For RSA signatures, PKCS#1 == RFC3447.

Thanks.
-Prasad
   
   
On Sep 16, 2014, at 10:51 AM, Jakob Bohm jb-open...@wisemo.com wrote:

   
On 16/09/2014 12:22, Prasad Dabak wrote:
Hello,
   
I am currently focusing on matching various digests that we
talked about earlier in the thread.

1. Computing the hash of the executable (excluding the areas as
defined by MS) and matching it with the value stored in
spcIndirectData. This is straightforward and figured out.

2. Computing the hash of spcIndirectData and matching it with the
messageDigest stored in AuthenticatedAttributes. I realized that
the sequence and length bytes need to be skipped before computing
the hash of the spcIndirectData? Is this documented anywhere?

This is specified in the PKCS#7 standard (RFC2315), in particular,
PKCS#7 specifies that when there is a non-empty contentInfo field
in the PKCS#7

Re: issuer_hash

2014-09-11 Thread Jakob Bohm

On 11/09/2014 09:40, Steven Madwin wrote:


I see that the x509 command used with the –issuer_hash option returns 
a four byte digest value. Is there any method using OpenSSL to procure 
the 20-byte SHA-1 digest value of the issuer name?



use -fingerprint

(-subject_hash and -issuer_hash are used to look up CAs in a disk-based
 database, as used by the -CApath option to various other OpenSSL commands.
 Basically, each CA is listed under its own -subject_hash, and calling
 -issuer_hash on a certificate then tells where to look for the CA
 certificate).
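If what is wanted is the 20-byte SHA-1 over the DER-encoded issuer
name itself (rather than a fingerprint of the whole certificate), a
minimal library-side sketch, assuming a loaded X509 *cert, would be:

  #include <openssl/x509.h>
  #include <openssl/evp.h>

  unsigned char md[EVP_MAX_MD_SIZE];
  unsigned int mdlen;

  /* SHA-1 digest of the DER encoding of the issuer name. */
  X509_NAME_digest(X509_get_issuer_name(cert), EVP_sha1(), md, &mdlen);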


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Verifying authenticode signature using openssl API

2014-09-09 Thread Jakob Bohm
messageDigest with
the computed digest? OR do we also need to decrypt the encryptedDigest
using the company public key and match that as well?
2. What does PKCS7_Verify exactly do? I looked at 
https://www.openssl.org/docs/crypto/PKCS7_verify.html and I
understand  that it verifies certificate chain. However, it's not 
clear to me as to what exactly it does with respect to signature 
verification?
3. I am assuming that I require to do both (1) and (2) in order to 
verify the authenticode signature?
4. What is the best way to verify if the executable is signed by 
specific company using that company's public key?


Any inputs will be greatly appreciated!





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Value of DEFAULT cipher suite

2014-09-09 Thread Jakob Bohm

On 09/09/2014 00:42, Salz, Rich wrote:

We are considering removing weak cryptography from the value of DEFAULT.  That is, append 
:!LOW:!EXPORT

It is currently defined as this in include/openssl/ssl.h:
#define SSL_DEFAULT_CIPHER_LIST   "ALL:!aNULL:!eNULL:!SSLv2"

Please let us know if you have strong objections to this.


In addition to removing the very-weak (less than 70 bits security)
ciphers from the default list, this would be a good opportunity to
reorder the default list (either via the define, or better via whatever
internal priorities guide the interpretation of a similar user-provided
list), to maximize security, similar to what is checked e.g. by the
online ssllabs checker.

Basically: Prefer PFS suites to non-PFS suites (i.e. prefer EDH/ECDH to
bare RSA) at each nominal security level (256 bits, 192 bits, 128 bits,
...), also enable 0/n splitting (and/or prefer a stream cipher) for CBC
encryption with older TLS protocol versions whenever the send timing
makes them otherwise subject to BEAST.
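As a hedged illustration only (the exact string is an assumption;
EECDH/EDH are the 1.0.x cipher-string aliases for the ephemeral,
forward-secret key exchanges), such an ordering could be expressed
as:

  /* Prefer ephemeral ECDH, then ephemeral DH, then the remaining
   * strong suites; drop anonymous, null, low and export grades. */
  SSL_CTX_set_cipher_list(ctx,
      "EECDH+HIGH:EDH+HIGH:HIGH:!aNULL:!eNULL:!LOW:!EXP");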

The latter is, by the way, the reason many systems have *recently* been
configured to explicitly prefer RC4 as the only unbroken cipher
compatible with servers or clients that don't protect against BEAST in
other ways.

To protect from the known RC4 repeated-plaintext vulnerability, one
might consider adding rate limiting to some SSL/TLS protocol steps
whenever RC4 is actually used.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: On 2K keys and SHA-256

2014-09-09 Thread Jakob Bohm

On 09/09/2014 14:18, Salz, Rich wrote:

May I suggest 4096 bit with SHA-256.

I think the next step after 2K-RSA is ECC, and that 4K RSA isn't going to see 
much deployment because of the computational cost.  At least, that's how we see 
things at my employer.

There was (some years ago) a heated debate between Certicom/NSA and
the rest of the crypto community regarding the RSA sizes needed to
match a given symmetric key security level.  I don't know if it was
ever resolved, but I guess some are going to use the old Certicom
list to choose their RSA key sizes.

Another, related problem is the large amount of patent FUD (and
maybe real issues too) regarding the ECC patent situation, causing
many applications to only use traditional RSA, DSA and DH, rather
than their ECC counterparts.  Until this problem is truly resolved
for everybody (not just the OpenSSL project and the US Government),
supporting even painfully slow RSA, DSA and DH key lengths is a
practical necessity.  Note that the only public guidance I have
found on this was written by the NSA, which affects it credibility
in the current international political climate.

One problem which I recently encountered when using stunnel for a
dedicated long running session is that OpenSSL 1.0.1 apparently
rejects large client keys with "SSL_accept: 1408E098: error:
1408E098:SSL routines:SSL3_GET_MESSAGE:excessive message size",
which forced me to downgrade from 6K RSA to 4K RSA for the client
auth.  But this was for a dedicated link where the CPU overhead
was acceptable.


And Chrome+Firefox still happily uses MD5 to sign SPKAC after offering you
to create Low (512), Medium (1024) or High (2048) grade encryption keys
(patch available for ages BTW) ...

If you can point me to patches, email, or whatever I can try to make sure those 
links get seen by folks in charge.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Value of DEFAULT cipher suite

2014-09-09 Thread Jakob Bohm

On 09/09/2014 19:20, Salz, Rich wrote:

In addition to removing the very-weak (less than 70 bits security) ciphers
from the default list,this would be a good opportunity to reorder the default

I'd prefer to wait until TLS 1.3 is implemented, which has some definite (and 
rather strong :) feelings on the subject.  Doing things like putting PFS first 
would greatly increase the computation load on servers and doesn't seem like 
the right thing to do as a quiet change.  (But yes, moving RC4 down to LOW does 
seem to me like the right thing to do. :)

You conveniently snipped the part of my post which explained why RC4 is
currently the *strongest* available cipher when talking to some clients,
being (in those situations) effectively stronger than AES-256 CBC, despite
its known weaknesses.

To protect from the known RC4 repeated-plaintext vulnerability, one might
consider adding rate limiting to some SSL/TLS protocol steps whenever RC4 is
actually used.

The TLS WG looked at adding arbitrary padding as a record type.  I hope it 
comes back.  Maybe the fact that the next TLS WG interim meeting will be at 
INRIA, home of the triple-handshake attack and the padding proposal, will have 
some effect :)

That arbitrary padding (or any other future TLS feature) will do nothing
to mitigate the problem that interoperating with some widely deployed real
world clients leaves the choice between CBC with no mitigation and RC4 with
limited key lifespan (e.g. max 2**?? bytes encrypted with any given key).

You really should look at the extensive research done by SSL Labs before
blindly deprecating stuff.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Verifying authenticode signature using openssl API

2014-09-07 Thread Jakob Bohm
https://www.openssl.org/docs/crypto/PKCS7_verify.html and I understand
that it verifies the certificate chain. However, it's not clear to me as
to what exactly it does with respect to signature verification?

I think it also verifies that the signerinfo refers to a certificate
that can be traced via the included certificates collection to one of
the loaded certificate authority root certificates.  And that the
encryptedDigest actually matches a locally computed digest of the
authenticatedAttributes and that the messageDigest
authenticatedAttribute matches a locally computed digest of the
contentInfo.  It doesn't know about spcIndirectDataContext and the
attributes therein, so it won't check that.  Also, it currently
doesn't know about time stamping counterSignatures, so you have to
implement that yourself, including the possible use of a different
time during the certificate validations.

It is critical that you pass the verification functions both the CA
certificates and up to date CRLs for both the CAs and for any
intermediary CAs found in the PKCS#7 certificates collection.
Otherwise you may end up believing certificates made with stolen
keys, such as the stolen key that was used to sign the StuxNet
malware.  The OpenSSL code will happily verify signatures without a
CRL, but the result may be catastrophically wrong.
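A hedged sketch of wiring that up (the file name and content BIO
are assumptions; error checking trimmed):

  #include <stdio.h>
  #include <openssl/pkcs7.h>
  #include <openssl/x509_vfy.h>

  X509_STORE *store = X509_STORE_new();
  /* Load trusted roots and their CRLs (concatenated PEM). */
  X509_STORE_load_locations(store, "roots-and-crls.pem", NULL);
  /* Require CRL checking for the whole chain, not just the leaf. */
  X509_STORE_set_flags(store,
                       X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
  if (PKCS7_verify(p7, NULL, store, content_bio, NULL, 0) != 1)
      fprintf(stderr, "signature or chain validation failed\n");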

3. I am assuming that I require to do both (1) and (2) in order to 
verify the authenticode signature?

Yes.
4. What is the best way to verify if the executable is signed by 
specific company using that company's public key?



Perform all of the above checks, and also check that the Subject of
the certificate referenced in the signerInfo you check is that
company and that that particular certificate was issued by the
certificate authority known to be used by that company (if you happen
to know which one they prefer to use, for example Microsoft uses
certificates issued by their own CA, at least recently).

Any inputs will be greatly appreciated!


Additionally, note that many Microsoft files, and many device
drivers, are signed in a slightly different way.  There is (in a
specific subdirectory of the Windows dir) a .cat file which
contains a list of file hashes (each computed in the same way
as a hash in an spcIndirectDataContext) for unspecified files.
That .cat file is itself signed by either Microsoft or the
manufacturer in the same way as described above, but with
details indicating that this is a signature for the .cat file.
For each valid .cat file, you need to add the listed hashes
to your own list of file hashes that indicate that a file with
any of those hashes should be considered signed by the company
that signed the .cat file, even if it doesn't contain such a
signature itself.

Also note that for drivers, it is often the case that the driver
files themselves are signed by the company that made them, but
the .cat file is signed by the Microsoft department that manages
the list of certified compatible drivers.  This is called
WHQL signing.

Signatures on installed kernel mode .SYS, .DLL and other such files
often contain an extra cross certificate in their certificates
collection, issued by Microsoft Code Verification Root to the
actual CA that issued the company certificate.  This is because
the signature checking code in Microsoft's bootloader only knows
about that Microsoft CA.

Enjoy and good luck

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded




Re: Why does OpenSSL own all the prefixes in the world?

2014-09-07 Thread Jakob Bohm

And how would you do that without breaking compatibility with every
program (in C, C++ or any other language) that already uses openssl and
depends on the current API names?

Providing the API, semantics and portability of the original SSLeay
library is the second-most important feature of OpenSSL (right after
actually being a secure SSL/TLS implementation when used correctly).

On 08/09/2014 01:15, Pierre DELAAGE wrote:

Hmm...
Switch strongly and definitely to C++
Not for fancy object programming, but for more practical syntaxES for 
things like this.


And I am an old C fan programmer...
Pierre Delaage



Le 08/09/2014 00:04, Kyle Hamilton a écrit :
The reason is legacy. Eric Young was not conscious of namespace 
pollution when he implemented SSLeay; since then, even after the 
migration to the OpenSSL name and team, the focus has been more on 
maintaining source compatibility than in creating new 
interoperability opportunities.


To meet the goal of interoperability while enabling an alternate 
symbolic namespace, what would you suggest?


-Kyle H

On September 7, 2014 1:30:11 PM PST, Iñaki Baz Castillo 
i...@aliax.net wrote:


Hi,

RAND_xxx
CRYPTO_xxx
ERR_xxx
ENGINE_xxx
EVP_xxx
sk_xxx
X509_xxx
BIGNUM_xxx
RSA_xxx
BN_xxx
ASN1_xxx
EC_xxx

etc etc etc.

May I understand why it was decided that OpenSSL can own all the
prefixes or namespaces in the world? How is it possible that OpenSSL
owns the ERR_ prefix (for example ERR_free_strings() and others)?

OpenSSL is a library. I should be able to integrate OpenSSL into my
own code and define my own prefixes without worrying about creating
conflicts with the near 200 prefixes that OpenSSL owns.


An example of a well designed C library is libuv [*], in which:

* Public API functions and structs begin with uv_.
* Private API functions begin with uv__.
* Public macros begin UV_.

That's a good design!


PS: In my project I use both openssl and libsrtp. In which of them
do
you expect the following macro is defined?:

   SRTP_PROTECTION_PROFILE




[*]https://github.com/joyent/libuv/



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded




https://www.openssl.org/news/state.html is stale

2014-09-07 Thread Jakob Bohm

The page https://www.openssl.org/news/state.html, which is supposed
to indicate what the current/next version numbers are, is out of date.
Specifically, it was not updated for the August 6 security updates,
so it still claims that the versions released on that day have not
yet been released.

Please update the page.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 



Re: Signing .JAR files using OpenSSL for Windows

2014-09-03 Thread Jakob Bohm

On 01/09/2014 16:17, AUser ZUser wrote:




Hello
Can someone please help me with the following question.
I have a code signing certicate in my X509 store LocalMachine\My which I can 
use for signing PowerShell scripts for example
Set-AuthenticodeSignature ./MyScript.ps1 -certificate ( gci 
cert:\LocalMachine\My -CodeSigning)
No worries there
From the information I have re AthentiCode as above, the only file formats 
it currently supports are

 * .cab files
 * .cat files
 * .ctl files
 * .dll files
 * .exe files
 * .ocx and

Now the UNIX guys also need their .JAR files signed (they do not have the code 
signing cert)
So I was thinking along the following lines but need some help please
I downloaded OpenSSL for Windows and Install
What I want to do is use OpenSSL from the Windows command line to sign a .jar file
I do not want to expose the code signing certificate by having it as a flat 
file (e.g. CodeSigningCert.pfx) on the file system; rather I would prefer to 
keep it in the X509 store (whereby the private key is not exportable) and refer 
to the cert on the OpenSSL command line when signing the .jar file.
Is this possible? Can anyone please show me a few command line examples? If 
this is not possible, is there another utility I can use to achieve the above?
Thanks All
AAnotherUser__






Note: I have successfully signed jar files (actually apk files,
which are jar files with different contents) using the openssl
command line, plus some scripting.

Basically, jar files are zip files containing extra files
describing the signature.  There is a specification on Oracle's
site, but fundamentally:

META-INF/MANIFEST.MF   contains hashes of all non-signature files
   in the zip file, this is generated when you
   sign the jar with any certificate (even an
   unimportant dummy key). This is a text file.

META-INF/$signaturename.SF  contains hashes of various parts of
MANIFEST.MF.  This too is generated
when you sign the jar with any
certificate, even though there is one
copy of this file for each signature.
This is a text file.

META-INF/$signaturename.RSA is the output from running the following
command (this is a binary file):

openssl cms -sign -outform DER -noattr -md $hashname \
   -signer $whatever.pem $engineorprivkeyoptions \
   < $signaturename.SF > $signaturename.RSA

META-INF/$signaturename.DSA is the same as the .RSA file if your
certificate happens to use a DSA public key.

So one way (there are more advanced ways) is to sign with a dummy
(unimportant, no security) key using jarsigner, then extract
META-INF/$signaturename.SF, pass it to openssl with appropriate
engine options, then use a generic ZIP program to replace the
dummy $signaturename.RSA with the real one.
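
For reference, roughly the same signing step can be done from C with the
CMS API.  This is only a sketch under my own assumptions (the helper
name, the file paths, and the already-loaded signer certificate and key
are mine, and the library's default digest is used instead of an
explicit -md):

    /* Sketch: produce META-INF/<name>.RSA from <name>.SF, mirroring the
     * "openssl cms -sign -outform DER -noattr" command above. */
    #include <openssl/bio.h>
    #include <openssl/cms.h>
    #include <openssl/x509.h>

    static int sign_sf(X509 *signer, EVP_PKEY *key,
                       const char *sf_path, const char *rsa_path)
    {
        BIO *in = BIO_new_file(sf_path, "rb");
        BIO *out = BIO_new_file(rsa_path, "wb");
        CMS_ContentInfo *cms = NULL;
        int ok = 0;

        if (in != NULL && out != NULL) {
            /* detached signature, no signed attributes, no MIME
             * text canonicalization */
            cms = CMS_sign(signer, key, NULL, in,
                           CMS_DETACHED | CMS_NOATTR | CMS_BINARY);
            if (cms != NULL) {
                ok = i2d_CMS_bio(out, cms);  /* DER, as -outform DER */
                CMS_ContentInfo_free(cms);
            }
        }
        BIO_free(in);
        BIO_free(out);
        return ok;
    }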

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded




Re: OpenSSL engine support in OpenSSL FIPS Object Module

2014-07-06 Thread Jakob Bohm

On 7/6/2014 10:44 AM, Kyle Hamilton wrote:


On 7/5/2014 10:51 AM, Jayalakshmi bhat wrote:

Thanks a lot for the explanation. We have range of products that
provides network connectivity.

1.  On these  we would be using TPM to provide additional security.

2.  On the products that are bit slow in software cryptographic
operation, we also would be using hardware acceleration chips, that
would do crypto operations.


I'm going to guess that you are grouping these into class 1 (related
to the TPM) and class 2 (related to offloading).  Since you already
have a thread for class 1, I'll only respond to your class 2
questions here.

For background, FIPS is basically a specific mode of operation for US
Federal agencies, and is targeted specifically to Federal procurement
mandates.  In government systems which are actually required to use FIPS
mode, you are not allowed to use any crypto services (whether from
OpenSSL or from any other device) that don't use an approved FIPS mode
of operation.  No other people actually *need* FIPS mode.  (I tend to
use it whenever I can because it tends to reduce crypto container
information leakage, and also makes it more likely that the cryptography
is correct and interoperable.)


(In the case of OpenSSL, this actually wins you very little).

Let me try to approach this from a different angle.

LEGALLY:

If you have the luxury of having more than one FIPS validated device
available to you, you probably (ask a lawyer to be absolutely sure)
can use all of them together.  However to claim FIPS compliance of the
resulting application, you must not do any cryptography outside those
devices, and it must be impossible for the FIPS-mode variant of your
application to fall back to any non-validated implementations in case
of errors etc.  Additionally you may or may not (really ask a lawyer)
be legally (not technically) required to treat any keys, passwords
etc. handed from one device to another AS IF those keys were traveling
over an insecure connection even though they never leave your process
address space on an EAL-whatever-level certified operating system on an
EAL-whatever-level certified computer.

TECHNICALLY:

If you want to combine the use of multiple FIPS validated devices,
one of which happens to be the OpenSSL FIPS cannister, and another
one a piece of hardware accessed using an OpenSSL Engine, it is an
open technical question if the FIPS-enabled OpenSSL (which is legally
outside both devices and /can/ be changed) will correctly combine use
of the OpenSSL FIPS canister with the ENGINE for accessing the hardware
device, or if it will somehow fail to do so.

For instance I am unsure what happens if the ENGINE plugin for the
FIPS validated hardware device calls back to OpenSSL for cryptographic
operations outside the scope of that device (it might do that because
that piece of hardware is also used outside USGov and the ENGINE code
was written for that case).  Will OpenSSL pass the calls to the FIPS
canister (if in FIPS mode) or use the non-validated software
implementations?

I am also unsure whether the FIPS-enabled OpenSSL library allows use of
Engines when (runtime) configured in FIPS mode.

Finally /if/ it is legally required to go through additional
gymnastics when transporting parameters from one FIPS device to
another, I am unsure if the FIPS-enabled OpenSSL library will do so
when the transport is internal to OpenSSL and its ENGINE plugins.




To see the requirements of FIPS 140-2, I recommend you download the five
pieces of the specification itself from
http://csrc.nist.gov/publications/PubsFIPS.html .  It is written in
bureaucratese, and you'll likely need several servings of alcohol to get
through it.  You should also read FIPS 200, which describes the minimum
security requirements for federal information and the systems used to
process federal information.  You'll probably want to budget several
servings of alcohol for this one, too.  Once you read these, you'll have
a much stronger understanding of how incredibly foreign the US federal
government's policy on cryptography is to the rest of society.

And remember: for US federal procurement, these are law, and the law
cannot be ignored or violated just because it would make things faster
or easier.  US government doesn't really care about how long it takes,
US government cares that it is done correctly.

-Kyle H


Both posts look similar. I apologize; I should have clearly mentioned
these 2 posts are in different contexts.

Thanks a lot.

Regards
Jayalakshmi






Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Re: help with error

2014-07-04 Thread Jakob Bohm

On 7/3/2014 8:52 PM, Michael Sierchio wrote:

My Windoze knowledge is hazy, and from the distant past, but if you're
running this in a CMD window, you may simply need to increase the
available memory from the default for that process.



Too hazy, I am afraid.  Those memory settings are/were only for the
virtual MS-DOS machine when opening a 16 bit command prompt to run 16
bit DOS programs. They have no effect (and are not even shown) if the
programs run in the command window are all compiled for POSIX, OS/2 or
Windows.

(The POSIX and OS/2 cases apply only to NT-based x86_32 editions of
Windows, such as Windows 7.  The old MS-DOS based Windows 3.2 and older
do not allow Windows programs either in a command prompt, except via
3rd party hacks).

My memory comments were directed to the possibility he was running the
command from a very memory-constrained environment embedded into a
Cisco router.

Anyway Laksha found it was a bug in the openssl binary.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: OpenSSL roadmap

2014-07-03 Thread Jakob Bohm

On 7/3/2014 2:25 PM, Salz, Rich wrote:

Would the project consider moving to C99


Yes, we are.  We're trying to figure out platform and toolchain issues.  
(Platform is the operating system and hardware, and toolchain is like gcc or 
clang, for those who don't know.)

I think moving to c99 is an obvious thing to do :)

/r$




Please be aware that several target platforms do not have working
C99 compilers, and some probably never will.  Microsoft platforms
probably have the least up to date compilers amongst major
platforms, while minor platforms may be stuck with whatever gcc
or CPU vendor compiler was current at its time of inception.

It will probably be at least another 5 to 10 years before widely
used libraries such as OpenSSL can switch to C99 without losing
users by the wayside.  This is because many compiler vendors
(including gcc) are only now (in 201x) approaching working C99
support in their bleeding edge compilers, and there is a typical
5 to 10 year delay between that happening and the majority of
platforms using an equivalent compiler as their system compiler.

The previous posters claims about initializing all variables is
equally possible in C90.  However his dead-code elimination
assumption will probably only be true for major CPU/platform
target combinations, because minor platforms often suffer from
missing or buggy optimizers as a rule of thumb.  Thus, once again,
relying on compilers being state of the art is itself a portability
hazard.

Also note that the need to link actual application code to OpenSSL
(or any other portable library) significantly limits the choice
of compiler to those that produce compatible .o files to those
produced and used by the application linker.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: help with error

2014-07-03 Thread Jakob Bohm

On 7/3/2014 5:50 PM, Steven Kinney wrote:

I enter the following command, as instructed by Cisco:

req -new -config c:\openssl\share\openssl.cnf -newkey rsa:1024 -nodes
-keyout mykey.pem -out myreq.pem

And I get the following error:

Please enter the following 'extra' attributes

to be sent with your certificate request

A challenge password []:tester

Error adding attribute

7684:error:0D0BA041:asn1 encoding routines:ASN1_STRING_set:malloc failure:./crypto/asn1/asn1_lib.c:381:

7684:error:0B08A041:x509 certificate routines:X509_ATTRIBUTE_set1_data:malloc failure:./crypto/x509/x509_att.c:317:

problems making Certificate Request

error in req

Any help would be appreciated.



I think the important part is malloc failure, in which case you
simply don't have enough free ram to run the command.

Are you by any chance running the command on a heavily loaded
router?


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Hostname checking and X509_check_host

2014-07-03 Thread Jakob Bohm

On 7/3/2014 1:22 PM, Viktor Dukhovni wrote:

On Thu, Jul 03, 2014 at 12:35:23AM -0400, Jeffrey Walton wrote:



I guess what I am asking: what is the default behavior? It's not clear
from the basic description.


For each flag bit, the opposite behaviour to that obtained by
setting the bit is the default when the bit is zero.


*
For X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS:

 If set, X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS suppresses support
 for ``*'' as wildcard pattern in labels that have a prefix or
suffix, such as: ``www*''
 or ``*www''; this only applies to X509_check_host.

Is that the leftmost rule? I.e., a wildcard must be at the leftmost label?



No, it is exactly what is described.  When the bit is clear such partial
wildcards are allowed.



So without the bit, a certificate can specify that it applies to
*www.example.com and match servers onewww.example.com,
twowww.example.com etc. but not onepay.example.com, while with
the bit, the only acceptable certificates are those issued to
exactly *.example.com or each of onewww.example.com and
twowww.example.com

Did I get that right?




What is the purpose of allowing a leading dot for a hostname? I.e.,
why is .example.com allowed?


It allows the verifier to specify a parent domain, rather than a
particular host.  Any host in that domain or sub-domain matches.


A leading dot does not appear to be a valid hostname nor a well formed
FQDN. I don't recall reading about it in the RFCs or the CA/B Forums
(RFCs 5280, 6125 or CA/B Baseline Requirements). I would expect a
certificate with it to be rejected as malformed.


This name is NOT in the certificate.  It is a fuzzy reference identity.



So you mean if .example.com is passed as the hostname to match
argument to X509_VERIFY_PARAM_set1_host() or X509_check_host(), then
it will accept any certificate issued to www.example.com,
mail.example.com, internal.mail.example.com, *.example.com or
*.mail.example.com, as just a few examples?

By the way does .example.com also match the bare example.com?


Is there an intersection with EV OIDs? Or is it out of scope for host
name matching?


EV is about whether the certificate is trusted by browsers.  It
has nothing to do with namechecks.


According to the CA/B EV Guide, wildcards are not
allowed in EV certificates. So I would expect a wilcarded cert to be
rejected as malformed if it's an EV certificate.


Applications that process EV certs can disable wildcards in name checks
via the appropriate flag.


The common case is that an application accepts both EV and non-EV certs
during the TLS (etc.) handshake, then informs the user (via a color or
icon) if the negotiated cert is EV or not.

Thus to be a meaningful audited replacement for each application coding
its own certificate verification logic, the API introduced in OpenSSL
needs to handle that scenario, since it is the most common case.




Would it be possible to receive the reason for a failure? For example,
I would consider a DNS name in the CN as a soft failure that I could
recover from (both the RFC and CA/B Forums have deprecated the
practice, but its still frequently encountered). But a wildcard match
with trickery, such as a DNS name of *.com, would be a hard failure
that I would not attempt to recover from.


The specified reference identity either matches or not.


As another example, Java will fail a cert for overlapping DNS names in
Subject Alt Names of a certificate, like having both *.com and
www.*.com or having both *.com and example.com


Don't trust CAs that generate malformed certs.  OpenSSL does not
support *.com, there must be at least two labels after the
wildcard.  OpenSSL does not support www.*.com.au, the wildcard
must be in the left-most label.


So maybe something like the following, where `reason` is an optional
bitmask that is valid *if* the function fails.

  int X509_check_host(X509 *, const unsigned char *name,
  size_t namelen, unsigned int flags, int* reason);


This is too complex an interface.  You specify how matching is to
be done, and then it either works or does not.  Keep in mind that
you should not explicitly use X509_check_host() if at all possible.
Instead use X509_VERIFY_PARAM_set1_host(), then name checks are
performed as needed.  The verify callback is called with an error status
if they fail.
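
For illustration, a minimal sketch of that recommended route, assuming
the 1.0.2 API (the helper name is mine):

    /* Sketch: request host-name checking via the verify parameters, so
     * it runs during chain verification instead of calling
     * X509_check_host() by hand. */
    #include <openssl/ssl.h>
    #include <openssl/x509v3.h>

    static int enable_hostcheck(SSL *ssl, const char *host)
    {
        X509_VERIFY_PARAM *param = SSL_get0_param(ssl);

        /* optional: refuse partial-label wildcards such as "www*" */
        X509_VERIFY_PARAM_set_hostflags(param,
                                        X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS);
        /* namelen 0 means host is NUL-terminated */
        return X509_VERIFY_PARAM_set1_host(param, host, 0);
    }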




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: SSL_CTX_set_tmp_ecdh_callback() semantics in 1.0.1?

2014-07-01 Thread Jakob Bohm

On 7/1/2014 2:42 AM, Jeffrey Walton wrote:

On Mon, Jun 30, 2014 at 4:32 PM, Jakob Bohm jb-open...@wisemo.com wrote:

Because there is no documentation for SSL_CTX_set_tmp_ecdh_callback()
in OpenSSL 1.0.1 and older, I am afraid I have to ask:

1. Is the EC_KEY* returned by the callback supposed to be allocated
   for each invocation or is it supposed to be a static shared by all
   invocations?

Static is fine.


   If the latter (a common object), are there any threading issues when
   multiple threads are running SSL connections simultaneously?

Well, there is a CRYPTO_LOCK_EC for the static lock.



Is this something that requires code outside openssl on my part, or is
it automatic on the major platforms?  The locking documentation was
always a bit ambivalent about its applicability to modern library and OS
versions (as opposed to early SSLeay versions on equally old platforms).


2. What does the keylength parameter to the ECDH callback represent:
   A) An RSA/DH keylength (e.g. 2048 for 128 bit security)
   B) An EC keylength (e.g. 130 for 128 bit security)
   C) A symmetric keylength (e.g. 128 for 128 bit security)

The keylength parameter is munged. You have to translate it from
DH/RSA bit lengths.

That is, a keylength of 1024 needs to be translated to a 160-bit curve
(both have a 80-bit security level), a keylength of 2048 needs to be
translated to a 224-bit curve (both have a 112-bit security level),
and a keylength of 3072 needs to be translated to a 256-bit curve
(both have a 128-bit security level), etc.


3. Are there particular cut-off-points for the keylength parameter
   which correlates with the largest of the predefined EC groups
   likely to be supported by the client (e.g. according to the
   cipher suite collection offered).


I use N + 4. For example:

 if (keylength <= 160 + 4)
 return ECSH160(); // Returns EC_KEY*
 else if (keylength <= 192 + 4)
 return ECSH192(); // Returns EC_KEY*
 else if (keylength <= 224 + 4)
 return ECSH224(); // Returns EC_KEY*
 ...


This example seems to contradict your reply to #2. Should I compare
the keylength parameter received by the callback to 160+4 etc, or to
1024+24 etc.


But P-256 seems to be most popular for interop.


I am actually trying to choose between P-256 and a larger one, using the
keylength as an indication if the larger one can be expected to interop.
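
To make that concrete, a minimal sketch of such a callback, assuming
(per the answers above) that keylength is an RSA/DH-equivalent bit
count and that a statically shared EC_KEY is acceptable (the 1.0.1
server code appears to duplicate the returned key with EC_KEY_dup):

    /* Sketch: map the (assumed RSA/DH-equivalent) keylength hint to one
     * of two pre-created curves; the thresholds are illustrative. */
    #include <openssl/ssl.h>
    #include <openssl/ec.h>
    #include <openssl/objects.h>

    static EC_KEY *ecdh_p224, *ecdh_p256;   /* created once at startup */

    static EC_KEY *tmp_ecdh_cb(SSL *ssl, int is_export, int keylength)
    {
        if (keylength <= 2048)
            return ecdh_p224;    /* ~112-bit security */
        return ecdh_p256;        /* P-256, best interop */
    }

    /* At startup (error handling omitted):
     *   ecdh_p224 = EC_KEY_new_by_curve_name(NID_secp224r1);
     *   ecdh_p256 = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
     *   SSL_CTX_set_tmp_ecdh_callback(ctx, tmp_ecdh_cb);
     */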



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


SSL_CTX_set_tmp_ecdh_callback() semantics in 1.0.1?

2014-06-30 Thread Jakob Bohm

Because there is no documentation for SSL_CTX_set_tmp_ecdh_callback()
in OpenSSL 1.0.1 and older, I am afraid I have to ask:

1. Is the EC_KEY* returned by the callback supposed to be allocated
  for each invocation or is it supposed to be a static shared by all
  invocations?

  If the latter (a common object), are there any threading issues when
  multiple threads are running SSL connections simultaneously?

2. What does the keylength parameter to the ECDH callback represent:
  A) An RSA/DH keylength (e.g. 2048 for 128 bit security)
  B) An EC keylength (e.g. 130 for 128 bit security)
  C) A symmetric keylength (e.g. 128 for 128 bit security)

3. Are there particular cut-off-points for the keylength parameter
  which correlates with the largest of the predefined EC groups
  likely to be supported by the client (e.g. according to the
  cipher suite collection offered).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Possibility to cache ca-bundle and reuse it between SSL sessions?

2014-06-25 Thread Jakob Bohm

On 6/25/2014 3:23 PM, Jens Maus wrote:

On 2014-06-25 at 15:06, Michel msa...@paybox.com wrote:


Excerpt from the book : Network Security with OpenSSL :

An SSL_CTX object will be a factory for producing SSL connection objects.
This context allows us to set connection configuration parameters before the 
connection is made, such as protocol version, certificate information, and 
verification requirements.
It is easiest to think of SSL_CTX objects as the containers for default values 
for the SSL connections to be made by a program.
…


Thanks for the reminder. But I read the OpenSSL manual pages already, of course 
- but as the documentation of OpenSSL is (to be honest) really bad, I wanted to 
make this absolutely clear.


In general, an application will create just one SSL_CTX object for all of the 
connections it makes.

And Yes, this is also true for multithreaded connections, as long as we are 
aware of :
https://www.openssl.org/docs/crypto/threads.html


OK, but then please allow the question: how should I deal with

SSL_CTX_set_cert_verify_callback(sslCtx, func, conn);

in that context? Because currently we use this function to define our own verify 
callback function and we supply ‘conn’ here as an application specific pointer 
argument (and extracting it via X509_STORE_CTX_get_app_data(x509_ctx) within 
the callback function) for filling in the individual results of the certificate 
verify process of a specific SSL connection. The problem that arises here is 
that this ‘conn’ pointer is connection specific in our case. That means I want 
to be able to use a connection specific ‘conn’ argument with 
SSL_CTX_set_cert_verify_callback(), but if I call this function once at the 
very beginning of my application I can only specify it once and calling 
SSL_CTX_set_cert_verify_callback() on the same sslCtx pointer for every 
parallel connection will of course overwrite the old setting.

So how can I specify an own app_data for every connection? IMHO there should be 
something like SSL_set_cert_app_data() so that I can specify different app_data 
for different SSL connections.



After calling ssl_ctx = SSL_new(master_ssl_ctx) to get the new context,
call

  X509_STORE *store_obj = SSL_CTX_get_cert_store(ssl_ctx);

then set your pointer in the CRYPTO_EX_DATA at
store_obj->ex_data.

When your callback receives an X509_STORE_CTX *store_ctx, you can access
that same X509_STORE as store_ctx->ctx and get your pointer from the
CRYPTO_EX_DATA at store_ctx->ctx->ex_data.

At least that is what it looks like to me.

(Figuring out how to use the generic CRYPTO_EX_DATA API is left as an
exercise for the reader).



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Possibility to cache ca-bundle and reuse it between SSL sessions?

2014-06-24 Thread Jakob Bohm

On 6/24/2014 7:58 PM, Jens Maus wrote:

Hello,

this is actually my first post to this list, so please excuse me if it might be 
too lengthy or too short, or if it addresses a question already raised in the past 
(which I didn’t find in the list archives so far).

I am an application developer of an email client using openssl to secure POP3 
and SMTP connections. Since a while I have also added functionality to check 
the server certificates against a certificate bundle file (ca-bundle.crt) which 
users can store in the resource bundle of the mail client and the certificate 
check mechanism (via OpenSSL’ callback mechanisms) is working fine so far.

The only thing I am currently wondering is if there is a possibility to load 
the ca-bundle.crt file in advance and then reuse it between individual SSL 
connections. The reason why I am asking this is, that on the systems I am 
developing this email client for the SSL_CTX_load_verify_locations() function 
easily takes 2 - 3 seconds and AFAIK there is no functionality in OpenSSL to 
provide a preloaded certificate bundle to the SSL context structure.

So what my client currently does is (pseudo code):

— cut here —
[…]
conn->sslCtx = SSL_CTX_new(SSLv23_client_method());
SSL_CTX_set_options(conn->sslCtx, SSL_OP_ALL | SSL_OP_NO_SSLv2);
SSL_CTX_load_verify_locations(conn->sslCtx, …);
SSL_CTX_set_default_verify_paths(…);
SSL_CTX_set_verify(conn->sslCtx, …);
SSL_CTX_set_cert_verify_callback(conn->sslCtx, …);
SSL_CTX_set_cipher_list(conn->sslCtx, …);
conn->ssl = SSL_new(conn->sslCtx);
SSL_set_fd(conn->ssl, (int)conn->socket);
SSL_connect(conn->ssl);
[…]
— cut here —

Looking at that execution sequence the SSL_CTX_load_verify_locations() call 
easily takes 2 - 3 seconds here either if the ca-bundle file is quite large or 
if the system is busy doing other stuff. This is especially critical since 
there are unfortunately some mail servers on the net (so-called ‚Nemesis‘ mail 
server from gmx.de, web.de and 1und1.de) which have a rather short SSL 
negotiation timeout (8 - 10 seconds only) right from the initiating STARTTLS 
call until the SSL negotiation have to finished. Otherwise they simply drop the 
connection - which IMHO is another problem and another story not to be 
discussed here.

So is there some possibility that I can load the ca-bundle.crt file in advance 
and simply supply the data to SSL_CTX instead of having to use 
SSL_CTX_load_verify_locations() which actually loads the ca-bundle.crt file 
from disk every time a new connection (and thus 
SSL_CTX_load_verify_locations()) is initiated?



Use SSL_CTX_get_cert_store() directly, this returns the X509_STORE
object, which you can then configure to lookup the CA certificates
from an in-memory structure of your own.

Unfortunately, the X509_STORE object is mostly undocumented, however
it seems you can simply call X509_STORE_add_cert() and
X509_STORE_add_crl() with X509 and X509_CRL objects for each of
the certificates and crls in your in-memory cache.

It seems undocumented if there is sufficient reference counting of
X509/X509_CRL objects to share them (read-only) amongst threads, or if
you will have to duplicate them before adding them to the X509_STORE.

If duplication is needed, the easiest would be to hold the ca-bundle
in memory as a single large (read only) byte array, then for each new
SSL session, loop over d2i_X509() until you reach the end of your array
or it fails.  Use a second array for the concatenated CRLs.  Note that
the arrays should be in DER format, not PEM format.
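
A sketch of that last suggestion (the names are mine; der_buf is
assumed to hold one or more concatenated DER certificates):

    /* Sketch: fill the context's X509_STORE from an in-memory DER
     * array, avoiding the per-connection disk read of ca-bundle.crt. */
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static int add_der_cas(SSL_CTX *ctx,
                           const unsigned char *der_buf, long der_len)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);
        const unsigned char *p = der_buf;
        const unsigned char *end = der_buf + der_len;
        int added = 0;

        while (p < end) {
            X509 *ca = d2i_X509(NULL, &p, end - p);   /* advances p */
            if (ca == NULL)
                break;            /* end of the array, or a parse error */
            if (X509_STORE_add_cert(store, ca))  /* store keeps a reference */
                added++;
            X509_free(ca);        /* drop our reference */
        }
        return added;
    }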


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Advisory on CVE 2014-0195 not listed on main vulnerabilities page

2014-06-23 Thread Jakob Bohm

Dear OpenSSL web page subteam,

CVE 2014-0195 is listed in

  https://www.openssl.org/news/secadv_20140605.txt

as fixed by the latest round of security fixes, however it is
missing from the primary cross reference at

  https://www.openssl.org/news/vulnerabilities.html

You may wish to update the page to reflect this part of the
advisory.

This was also mentioned by Mr. Nageswar in an unanswered message
14 days ago.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Decryption succeed in GCM mode when tag is truncated

2014-06-19 Thread Jakob Bohm

On 6/19/2014 11:19 AM, Jeffrey Walton wrote:

...
CCM is probably the oldest of the three, its more complicated, and its
offline (you have to have all data beforehand - you cannot stream data
into it).

Personally, I don't care about GCM's parallelizability because I
require all data to be authenticated before being operated upon.


Note that the parallelizability applies to the sender too.

So with parallel GCM, the sender can start sending before it knows and
encrypts the last part of the plaintext, while a secure receiver still
needs to wait for the end before accepting the data.  So the total
delay is
  max(encrypt_time, transmit_time) + decrypt_time
while a non-parallelizable mode would have
  encrypt_time + transmit_time + decrypt_time

Of course there are other drawbacks to the various modes that
need to be considered before choosing one.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: OpenSSL Security Advisory

2014-06-06 Thread Jakob Bohm

On 6/5/2014 11:31 PM, Green, Gatewood wrote:

Openssl-0.9.8za will not build in FIPS mode. The openssl-fips-1.2(.4) seems to 
be missing the symbol BN_consttime_swap.



By the way, the BN_consttime_swap implementation in 1.0.1g (still
downloading 1.0.1h) doesn't seem to completely match its
description:

 - If nwords is 0, the code will overflow the input buffers by
  pretending that nwords is 10.  Adding case 0 to the bottom
  of the switch should fix that.
 - If BN_ULONG is not exactly BN_BITS2 bits in size, the condition may
  also be mishandled; is this property guaranteed by the type
  definitions on all platforms?
 - Other than the assert checking the power-of-2 assumption, the code
  should work with any condition in the range
  0 to (1 << (BN_BITS2-1)) inclusive, but not for larger values.
 - The only thing that needs a and b to be different variables is the
  assert checking that condition.

At least this is how I read the code.
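
To illustrate the first point, here is a toy model of the fall-through
switch (not the OpenSSL source) with the suggested case 0 added:

    /* Toy model: each case swaps one word and falls through; "case 0"
     * gives nwords == 0 a clean exit instead of running the full-size
     * code path. */
    #include <assert.h>

    typedef unsigned long WORD;

    #define CONSTTIME_SWAP(ind) \
        do { WORD t = (a[(ind)] ^ b[(ind)]) & mask; \
             a[(ind)] ^= t; b[(ind)] ^= t; } while (0)

    /* mask must be all-ones (swap) or all-zeros (no-op) */
    static void consttime_swap_demo(WORD mask, WORD *a, WORD *b, int nwords)
    {
        assert(nwords <= 4 && a != b);
        switch (nwords) {
        case 4: CONSTTIME_SWAP(3);   /* fall through */
        case 3: CONSTTIME_SWAP(2);   /* fall through */
        case 2: CONSTTIME_SWAP(1);   /* fall through */
        case 1: CONSTTIME_SWAP(0);   /* fall through */
        case 0: break;               /* suggested fix: nothing to swap */
        }
    }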

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Platinum Sponsorship by Huawei

2014-05-31 Thread Jakob Bohm

On 5/30/2014 11:03 PM, Geoffrey Thorpe wrote:

Oh I see.

OK == arbitrary private institutions with no representative or
ideological constraints other than the limit of the law. (And even then...)


More importantly: With an obvious self-interest in protecting themselves
against wiretapping rather than performing it.



Not OK == institutions that are (in theory at least) representative of
nations/countries/states and that are (in theory at least) accountable
to their people.



Under the no-Gov/Intelligence agency funding rule, not under all rules.

Governments, whatever their form, have the daily responsibility of
protecting a territory and its population against specific threats
(such as criminals and enemy armies), which makes it necessary for
them to employ and work closely with professional investigators of
some kind, with the explicit capability of wiretapping the bad
people as a daily need of those investigators.  Therefore
governments, by their very nature, will have a fundamental desire
to limit the anti-wiretapping technology available to their enemies
(and, by unfortunate necessity, the worldwide general public).

Most (not all) private institutions, whatever their other failures,
do not have this property, since they tend to rely on governments
to hunt down criminals, focus more on direct protection (such as
encrypting their own communications), and may from time to time
have something they want to hide from governments, including their
own.

There are of course some wholly private entities that do resort to
wiretapping etc., or have some other fundamental anti-security
interests.

In other words, governments are the natural home of the necessary
evil of spying on bad people, hence they are unlikely to be aligned
with the interests of a worldwide public crypto project.


And of course;

  * It's straightforward to ensure that there is no blur between those
categories.


Well the bigger the contribution, the less the tolerance for shades of
Gov/Spies.  So at the Platinum level, a company with a significant
government ownership would be problematic.


  * This categorisation is essential to the mission of any open source
crypto project.


Well OpenSSL / SSLeay was specifically created to avoid the largest
government restriction of its day (US export limit of 40 bit strength).


  * Supranational corporations are the only way to be sure that the
motives are altruistic and impartial.
  o On the strict condition that such a private institution has no
dealings with the public sector, otherwise they're ipso facto
subversive.



No, that is taking it to the absurd level of self-contradiction.



Is that what you're trying to say, more or less?

If so, I must have gotten lost somewhere along the way. Or perhaps
you're channeling Sarah Palin? Those seem like the sort of things she
might say.



Not siding with (Ex?) Gov. Palin or any other specific political side
in this.  I am not against governments (and their spies) in general,
just noting that OpenSSL is not a place I want them to meddle.


If you think there is no reasonable potential for political or nefarious
behaviour in the corporate culture then nothing I can say is likely to
change your mind. But you might want to read up a bit on Goldman Sachs
(and many others) before drawing too many favourable comparisons between
them and, say, elected bodies. (Though who am I to judge? If Goldman
Sachs want to contribute to open source too, they will get no argument
from me.)



At least two of the company examples were specifically chosen for known
nefarious actions not linked to Gov/Spies, to emphasize the point that
there might/should be other rules against some classes of donors.  As a
matter of neutrality, I am not saying which company examples.


Thanks for making your opinion known, in any case.

Cheers,
Geoff



On Fri, May 30, 2014 at 4:22 PM, Jakob Bohm jb-open...@wisemo.com
mailto:jb-open...@wisemo.com wrote:

On 5/30/2014 12:24 AM, Geoffrey Thorpe wrote:

...


The only way to avoid any political overtones in such a
situation (if
that really is your intention, because doing the right thing
is not an
apolitical notion) is to blindly accept all comers or refuse all
comers.
(Subject to the obvious outliers, ie. nothing criminal/illegal, no
conflict of interest, etc.) By erecting criteria beyond no strings
attached (which *is* a very explicit necessary condition), you
are in
fact condemning yourself to the problem you are chastising us for.


I believe the additional criteria suggested would be "donor is not an
aspect of any government, military or intelligence organization,
anywhere".  So for example DARPA, the USPS, the city of Munich and (a
few years ago) Northern Rock Bank would all be out of the question,
while IBM, Google, Samsung and Goldman Sachs would be OK.

Any intermediary

Re: Build issue on Mac OS X 10.9 (64 bit) with JHBuild

2014-05-31 Thread Jakob Bohm

On 5/31/2014 2:26 PM, scl wrote:


Hi,

for days now I have tried to build and install OpenSSL 1.0.1g on OS X
Mavericks (64 bit), but to no avail.
The goal is to include OpenSSL into an application package for OS X
10.6+; I’m not aiming to install it locally on my computer.

My build is controlled by JHBuild. It runs
./Configure --prefix=/Users/username/product/10.9/inst \
-L'/Users/username/product/10.9/inst/lib' zlib no-krb5 \
shared darwin64-x86_64-cc

and then
make -j3 -j 3

Whatever I do I end up with:
making all in crypto/cmac...
…
if [ -n libcrypto.1.0.0.dylib libssl.1.0.0.dylib ]; then \
 (cd ..; make -j3 libcrypto.1.0.0.dylib); \
 fi
make[2]: warning: -jN forced in submake: disabling jobserver mode.
[ -z  ] || /Applications/Xcode.app/Contents/Developer/usr/bin/gcc
-fPIC -fno-common -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT
-DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall
-DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5
-DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM
-DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -Iinclude
\
 -DFINGERPRINT_PREMAIN_DSO_LOAD -o fips_premain_dso  \
 fips_premain.c fipscanister.o \
 libcrypto.a -L/Users/username/product/10.9/inst/lib  -lz



This is not an error.  It is a warning that something inside the OpenSSL
make files passes its own -j option to a sub-make, which conflicts with
the -j options you already set on the outer make.

The easiest workaround would be to omit the -j options from the outer
make invocation, or to simply ignore the warning.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Re?: How to make a secure tcp connection without using certificate

2014-05-30 Thread Jakob Bohm

On 5/30/2014 12:03 AM, Dave Thompson wrote:

From: owner-openssl-us...@openssl.org On Behalf Of Jakob Bohm
Sent: Wednesday, May 28, 2014 13:04



On 5/25/2014 2:22 PM, Hanno Böck wrote:



Some clients (e.g. all common browsers) do fallbacks that in fact
can invalidate all improvements of later tls versions.

These fallbacks also can happen by accident (e.g. bad connections) and
sometimes disable features like SNI.

That's why I recommend to everyone that we need at least to deprecate
SSLv3.




There is also the very real issue that a few platforms which no longer
receive feature updates (such as new TLS protocol versions) are stuck
at SSLv3.  Permanently.  So until those platforms become truly extinct,
a lot of servers need to cater to their continued existence by allowing
ancient TLS versions.

At that point the problem is how to do the best defense against
man-in-the-middle downgrade-to-SSLv3 attacks.  For instance, is there a
way to ensure that the server certificate validation done by an SSLv3
(compatible) client will fail if both server and client were capable of
TLS v1.0, but a man in the middle tampered with the version negotiation?


I don't think you want it on the cert.


Sorry for the sloppy wording, I obviously meant the certificate powered
validation of the handshake, not the certificate attributes.



The Finished exchange protects against *tampering* in a handshake,
and has since SSLv3 (inclusive). The problem is clients that fall back
at the application level if the (good) handshake is *blocked* (denied).
Remember we had a fair number of legit cases of this when TLSv1.2
in 1.0.1 added numerous suites by default plus one extension and
ClientHello growing beyond 256 broke some servers -- even though
they claimed to implement specs that implicitly required it. In those cases
it was actually reasonable for a client to fall back to 1.1.


Failing that, is this something that could be added to the TLS v1.3
standard (i.e. some signed portion of the SSLv3 exchange being
unnaturally different if the parties could and should have negotiated
something better)?


I see no reason to tie this to a TLSv1.3 document, when and if there is one.
This is a proposed change to SSL, which is not TLS (only technically similar).
The prohibition on SSLv2 is a standalone document: 6176, which updates
2246 4346 5246 to retroactively remove the SSLv2 compatibility.
(Of course an IETF prohibition has no legal force and doesn't actually
prevent or even deter people from continuing to use SSLv2, it just lets us
wag our fingers at them.) Since SSLv3 was belatedly retroactively published
as 6101, this could even be labelled as an update to that, FWIW.


Not remembering the SSLv3 spec details, one option could be to announce
support for a special "we also support TLS v1.0" cipher suite, which no
one can really implement (by definition), but whose presence in a
cipher suite list from the other end indicates that said other end
announced SSLv3.1 (TLS v1.0) support in an unsigned part of the
exchange.  This could even be specified in an UPDATE RFC for the
existing TLS v1.0..v1.2 versions, and a CVE number assigned to the
common bug of its non-implementation (after library implementations
become available).


In other words like the Signaling CipherSuite Value (SCSV) used for
5746 Secure Renegotiation (aka the Apache bug) in cases where the
extension didn't work (or might not work reliably). I'd say experience
confirmed that worked well enough to be considered an option.

But many users, especially web users, want to connect to the server
even if it isn't truly secure. When we make it harder for https to
work, they *will* use http instead, or else complain very loudly.



Indeed, that is why such TLSvX.Y SCSVs need to be carefully designed to
specify what each one claims other than simply every (obscure or not)
aspect of some very long spec, which might have common misimplementations.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Platinum Sponsorship by Huawei

2014-05-30 Thread Jakob Bohm

On 5/30/2014 12:24 AM, Geoffrey Thorpe wrote:

...

The only way to avoid any political overtones in such a situation (if
that really is your intention, because doing the right thing is not an
apolitical notion) is to blindly accept all comers or refuse all comers.
(Subject to the obvious outliers, ie. nothing criminal/illegal, no
conflict of interest, etc.) By erecting criteria beyond no strings
attached (which *is* a very explicit necessary condition), you are in
fact condemning yourself to the problem you are chastising us for.



I believe the additional criteria suggested would be "donor is not an
aspect of any government, military or intelligence organization,
anywhere".  So for example DARPA, the USPS, the city of Munich and (a
few years ago) Northern Rock Bank would all be out of the question,
while IBM, Google, Samsung and Goldman Sachs would be OK.

Any intermediary organization would need to do more than just launder
the money.  They would need to pool it with many other donations,
distribute to many other projects and give the donors no influence on
which projects benefit from their donations, thus obviously and
provably denying the donors even the appearance of a potential ability
to threaten to reward or punish a project via the purse strings.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Platinum Sponsorship by Huawei

2014-05-30 Thread Jakob Bohm
 portions of the
budget from any single entity which can be reasonably accused (even if
wrongly) of giving it as a bribe on behalf of itself or its masters.

If 200 unrelated entities donate a million each into the pool, it
doesn't matter if one or two might be suspected of wanting to bribe
OpenSSL because we can openly say that it would matter little if we
got that particular million or not.

But if just 2 unrelated entities each provide $10,000 and those are
the largest ever donations to the foundation, it matters a lot if
either of those is financially associated with an entity known to
openly want OpenSSL to be less secure (such as any Government capable
of deploying safer crypto for its own purposes while having at
least one wiretapping or spying department).



Another question anyway, from a basic contributor as I am (involved in
WCE port of openssl) :

How do I know that the foundation is really independant from States or
BIG companies (that are sometimes not really politically correct)...?
Maybe the list of contributors and amounts should be published annually
on some webpage in a neutral form,
WITHOUT any golden or platine award...

A good compromise I think...

Yours sincerely,
Pierre Delaage






Le 30/05/2014 22:22, Jakob Bohm a écrit :

On 5/30/2014 12:24 AM, Geoffrey Thorpe wrote:

...

The only way to avoid any political overtones in such a situation (if
that really is your intention, because doing the right thing is not an
apolitical notion) is to blindly accept all comers or refuse all comers.
(Subject to the obvious outliers, ie. nothing criminal/illegal, no
conflict of interest, etc.) By erecting criteria beyond no strings
attached (which *is* a very explicit necessary condition), you are in
fact condemning yourself to the problem you are chastising us for.



I believe the additional criteria suggested would be "donor is not an
aspect of any government, military or intelligence organization,
anywhere".  So for example DARPA, the USPS, the city of Munich and (a
few years ago) Northern Rock Bank would all be out of the question,
while IBM, Google, Samsung and Goldman Sachs would be OK.


The above paragraph is my non-political list of government versus
non-government examples.



Any intermediary organization would need to do more than just launder
the money.  They would need to pool it with many other donations,
distribute to many other projects and give the donors no influence on
which projects benefit from their donations, thus obviously and
provably denying the donors even the appearance of a potential ability
to threaten to reward or punish a project via the purse strings.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Open SSL Upgrade

2014-05-30 Thread Jakob Bohm

On 5/29/2014 5:18 AM, Shunmugavel Krishnan wrote:

Hi,

I am planning to upgrade OpenSSL in my operating system (RHEL). I have
applications running in the system, e.g. a Tomcat web application, a web
server, a message broker etc. Do I need to check for compatibility issues
before I go ahead with the upgrade? Thanks!




I don't know about RHEL, but I happen to know that the Debian and Ubuntu
distributions include detailed, instructions (and even automation) of
any necessary steps when they package OpenSSL upgrades.

In those distributions, they use so-names (major version numbers
inside the file names of the .so library files) to ensure that only
compatible openssl upgrades will be loaded into programs that were
compiled against openssl.  And they also provide scripts and prompts
to restart any pre-packaged processes that need to be restarted to load
the upgraded library files.

In general, if only the small letter at the end of an OpenSSL version
number increases, and the new library is compiled with the same
configuration options, compiler etc. as the old one, upgrading is
supposed to just work.  For example 1.0.1g should be a drop in upgrade
for 1.0.1d, but not for 0.9.8p.  There was however a minor change
somewhere between 1.0.1a and 1.0.1e which affects the default behavior
of SSL programs if compiled against the 1.0.1a header files but run with
the 1.0.1f or later DLLs.
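
One way to catch such mismatches at run time is to compare the run-time
library version against the headers the program was built with.  A
sketch using the 1.0.x-era API (the mask ignores the patch letter and
status nibbles of the 0xMNNFFPPS version number):

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    static int openssl_version_matches(void)
    {
        unsigned long built   = OPENSSL_VERSION_NUMBER & 0xfffff000UL;
        unsigned long running = SSLeay() & 0xfffff000UL;

        if (built != running) {
            fprintf(stderr, "OpenSSL mismatch: built with %s, running %s\n",
                    OPENSSL_VERSION_TEXT, SSLeay_version(SSLEAY_VERSION));
            return 0;
        }
        return 1;
    }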


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Re?: How to make a secure tcp connection without using certificate

2014-05-28 Thread Jakob Bohm

On 5/25/2014 2:22 PM, Hanno Böck wrote:

On Fri, 23 May 2014 16:32:15 +
Viktor Dukhovni openssl-us...@dukhovni.org wrote:


On Fri, May 23, 2014 at 06:11:05PM +0200, nicolas@free.fr wrote:


use at the very least TLSv1 (and preferably TLSv1_2) protocol if
you want to use SSLv23_server_method(), don't forget to disable
SSLv2 and 3 protocols (and maybe TLSv1) with the command

SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3);


Typically, leaving SSLv3 enabled is just fine if both ends support
something stronger they'll negotiate that.


That's not always true.

Some clients (e.g. all common browsers) do fallbacks that in fact
can invalidate all improvements of later tls versions.

These fallbacks also can happen by accident (e.g. bad connections) and
sometimes disable features like SNI.

That's why I recommend to everyone that we need at least to deprecate
SSLv3.




There is also the very real issue that a few platforms which no longer
receive feature updates (such as new TLS protocol versions) are stuck
at SSLv3.  Permanently.  So until those platforms become truly extinct,
a lot of servers need to cater to their continued existence by allowing
ancient TLS versions.

At that point the problem is how to do the best defense against
man-in-the-middle downgrade-to-SSLv3 attacks.  For instance, is there a
way to ensure that the server certificate validation done by an SSLv3
(compatible) client will fail if both server and client were capable of
TLS v1.0, but a man in the middle tampered with the version negotiation?

Failing that, is this something that could be added to the TLS v1.3
standard (i.e. some signed portion of the SSLv3 exchange being
unnaturally different if the parties could and should have negotiated
something better)?

Not remembering the SSLv3 spec details, one option could be to announce
support for a special "we also support TLS v1.0" cipher suite, which no
one can really implement (by definition), but whose presence in a
cipher suite list from the other end indicates that said other end
announced SSLv3.1 (TLS v1.0) support in an unsigned part of the
exchange.  This could even be specified in an UPDATE RFC for the
existing TLS v1.0..v1.2 versions, and a CVE number assigned to the
common bug of its non-implementation (after library implementations
become available).



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Improving structure and governance

2014-04-29 Thread Jakob Bohm

On 4/25/2014 9:33 PM, Awi wrote:


As a US based organization, Apache is unsuited and (given fairly recent
public news) untrusted to have any power of a project such as OpenSSL.

Additionally, the Apache foundation has accumulated so many important
projects over the last few years that it they are becoming a single
point of failure for too many things (or too big to fail as it is
called in some other sectors).

Thus I think a different organization would be needed if OpenSSL were
to give up its independence.




There is a similar thread on the openssl-dev mailing list and it was
mentioned there about this project:
http://www.theverge.com/2014/4/24/5646178/google-microsoft-and-facebook-launch-project-to-stop-the


So it's likely that in one way or another OpenSSL will be influenced by
US based organization(s).



The involvement of Microsoft makes this initiative highly suspect, and 
I wish the Linux Foundation had told them to get lost.  Ever since its

foundation, Microsoft has used every underhanded trick in the book to
sabotage open source projects (just remember Bill Gates open letter
on the subject decades ago).

As long as Microsoft, Oracle etc. (or any of their friends) have any
direct or indirect influence over this fund, it should be shunned like
poison, even by projects not concerned with specific issues of US
influence.

I guess someone at the Linux Foundation got caught up in the heartbleed
panic and fell for the "We must do something, this is something, so we
must do this" fallacy.

Note that I am not an FSF fanatic, I truly believe in the cooperation
of open and closed source projects, and make my living from closed
source.  But I am sufficiently experienced to see the damage certain
other closed source companies can and will do to open source projects
relied upon by other companies.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: slowness of _ssl.sslwrap() on first call

2014-04-29 Thread Jakob Bohm

On 4/25/2014 11:19 PM, summer wrote:

Further investigation shows the slowness is happening at _ssl.c line 306,

self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */

Does this line of code involve client/server communication yet?



I haven't checked, but maybe SSL_CTX_new() is initializing the OpenSSL
random number generator, which in turn initializes a shared random state
in the OS (/dev/random on *n*x, CryptoAPI RNG on Windows) or on disk 
(.rnd file).


This takes some time the first time in order to gather lots of random
events from around the system, while later calls (by any app) will cheat
and use what is already there.

Thus the slowness should be happening in whichever OpenSSL-based program
is run first. If the slow init is in the system random state, it should
happen in the first crypto program (OpenSSL-based or not) that is run.

Just a theory, I haven't checked the full call graph of SSL_CTX_new() 
and SSLv23_method().
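
If that theory is right, the cost can be moved to program startup with
something like this sketch (the helper name is mine):

    /* Sketch: force RNG seeding once at startup so a later
     * SSL_CTX_new() at connection time does not pay for it. */
    #include <openssl/rand.h>

    static int warm_up_rng(void)
    {
        unsigned char scratch[16];

        /* RAND_bytes() triggers seeding from the OS source if the pool
         * is not yet initialized */
        if (RAND_bytes(scratch, sizeof scratch) != 1)
            return 0;             /* seeding failed */
        return RAND_status();     /* 1 once the PRNG is seeded */
    }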


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Increment certificate serial numbers randomly

2014-04-29 Thread Jakob Bohm

On 4/28/2014 10:53 AM, Mat Arge wrote:

I agree with Walter, that it is not exactly good practise to have a CA key
lying around on multiple servers. But anyway, if you need to do it you have to
create the random serial number externally by some script and write it into
the serial file (as set in the openssl configuration file used) prior to
issuing the openssl ca command.

As a workaround, if you do not want to do this, you could set different serial
number ranges on the various servers. Server1 starts at serial 1, Server2 at
0x01 and so on. You'd still have incrementally growing serial numbers
(which is actually bad by itself) but from distinct ranges.
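
For concreteness, the externally generated serial could be produced by
a small helper like this sketch (the helper and the file path are my
assumptions; clearing the top bit keeps the DER integer positive):

    /* Sketch: generate a random serial and write it, in hex, to the
     * "serial" file that the "openssl ca" command reads. */
    #include <stdio.h>
    #include <openssl/rand.h>

    static int write_random_serial(const char *path)
    {
        unsigned char buf[8];
        FILE *f;
        size_t i;

        if (RAND_bytes(buf, sizeof buf) != 1)
            return 0;
        buf[0] &= 0x7f;           /* keep the serial number positive */
        if ((f = fopen(path, "w")) == NULL)
            return 0;
        for (i = 0; i < sizeof buf; i++)
            fprintf(f, "%02X", buf[i]);
        fputc('\n', f);
        fclose(f);
        return 1;
    }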



I seem to (vaguely) recall that there was once an option or standard for
using a certificate-contents-related hash as the serial number, but I 
can't seem to find it right now.


As for the use of a widely shared private key, I have seen this sensibly
used for test certificates, where the (insecure) test CA is trusted
amongst systems configured in test mode, as long as all those systems
were from the vendor who originally set up this test root and
distributed the private key with their systems.

Use of certificates issued by this test root would result in a very
specific warning message summarizing the nature of those certificates,
while still allowing technical testing of the entire security system,
without exposing real (trusted) end entity private keys to insecure
test and compile environments.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Improving structure and governance

2014-04-25 Thread Jakob Bohm

On 4/25/2014 3:36 PM, Salz, Rich wrote:

While we’re still waiting to hear from the core team about changes, I
might as well add to the noise and throw this out there.

Perhaps openssl should become an Apache project? Keep the foundation for
financial reasons, but use their infrastructure and such.  Or perhaps
consider adopting a large portion of their “rules.”



As a US-based organization, Apache is unsuited and (given fairly recent
public news) untrusted to have any power over a project such as OpenSSL.

Additionally, the Apache foundation has accumulated so many important
projects over the last few years that it is becoming a single point of
failure for too many things (or "too big to fail", as it is called in
some other sectors).

Thus I think a different organization would be needed if OpenSSL were
to give up its independence.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: OpenSSL 1.0.1g Upgrade Issue

2014-04-10 Thread Jakob Bohm

On 4/10/2014 6:23 AM, Dedhia, Pratik wrote:

Hi Team,

I’m trying to upgrade OpenSSL from version 1.0.1f to 1.0.1g to resolve
a security issue, but I am getting an error while restarting the Apache
server.

Below are the steps of the OpenSSL upgrade:

1.Extracted the tarball downloaded from OpenSSL site using command “tar
xzvf openssl-1.0.1g.tar.gz”

2.Changed directory to openssl-1.0.1g

3.Executed “./config --prefix=/usr/local/application/openssl/
enable-shared -fPIC” command to compile openssl

4.Executed make clean command after successful execution of step 3

5.Executed make command

6.Executed make install command

7.Changed directory to extracted httpd-2.4.7

8.Executed “./configure --prefix=/usr/local/application/apache
--enable-rewrite --enable-proxy --enable-so
--with-ssl=/usr/local/application/openssl --enable-ssl
--with-pcre=/usr/local/application/pcre” to compile apache with upgraded
OpenSSL.

9.Executed make clean command after successful execution of step 8

10.Executed make command

11.Executed make install command

12.After successful execution of above step tried to stop the apache
with “sudo /usr/local/application/apache/bin/apachectl stop” command

On execution of step 12 getting below error:

httpd: Syntax error on line 125 of
/usr/local/application/apache/conf/httpd.conf: Cannot load
modules/mod_ssl.so into server: libssl.so.1.0.0: cannot open shared
object file: No such file or directory



The OpenSSL make install (step 6) should have created that file
(libssl.so.1.0.0) under your prefix, i.e. in
/usr/local/application/openssl/lib; otherwise you don't have the fixed
code.

Please check that your user account has write access to that directory,
or that make install was run as root (either should do it).  Also check
that the runtime linker can actually find the library there.
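
A quick way to check the runtime side (paths taken from your configure
commands above):

ldd /usr/local/application/apache/modules/mod_ssl.so | grep libssl

If that reports "not found", either add
/usr/local/application/openssl/lib to /etc/ld.so.conf and run ldconfig,
or set LD_LIBRARY_PATH to that directory before starting Apache.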


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: OpenSSL version 1.0.1g release signed with unauthorized key???

2014-04-09 Thread Jakob Bohm
Attention: The .asc file I downloaded directly from openssl.org for the 
1.0.1g tarball was signed with a key NOT authorized by the 
fingerprints.txt file distributed in previous tarballs, nor by the 
(unverifiable) fingerprints.txt available from


   http://www.openssl.org/docs/misc/

Specifically, it was signed by a PGP key purporting to belong to Dr.
Henson, but with a different identifier and a different e-mail address
than the authorized key listed for him in fingerprints.txt.

I suspect this is just a mixup at your end, but one cannot feel too
sure without a valid file signature consistent with the securely 
distributed signature list.


For now, I will have to avoid installing this critical security update
and try the workaround instead.
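
For anyone wanting to repeat the check, the usual steps are (the key id
to compare is whatever gpg reports; deliberately not repeated here):

gpg --verify openssl-1.0.1g.tar.gz.asc openssl-1.0.1g.tar.gz

and then compare the fingerprint of the reporting key, e.g. via
gpg --fingerprint, against the entries in fingerprints.txt.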

On 4/7/2014 7:38 PM, OpenSSL wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256


OpenSSL version 1.0.1g released
===

OpenSSL - The Open Source toolkit for SSL/TLS
http://www.openssl.org/

The OpenSSL project team is pleased to announce the release of
version 1.0.1g of our open source toolkit for SSL/TLS. For details
of changes and known issues see the release notes at:

 http://www.openssl.org/news/openssl-1.0.1-notes.html

OpenSSL 1.0.1g is available for download via HTTP and FTP from the
following master locations (you can find the various FTP mirrors under
http://www.openssl.org/source/mirror.html):

  * http://www.openssl.org/source/
  * ftp://ftp.openssl.org/source/

The distribution file name is:

 o openssl-1.0.1g.tar.gz
   Size: 4509047
   MD5 checksum: de62b43dfcd858e66a74bee1c834e959
   SHA1 checksum: b28b3bcb1dc3ee7b55024c9f795be60eb3183e3c

The checksums were calculated using the following commands:

 openssl md5 openssl-1.0.1g.tar.gz
 openssl sha1 openssl-1.0.1g.tar.gz

Yours,

The OpenSSL Project Team.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)

iQIcBAEBCAAGBQJTQtiiAAoJENNXdQf6QOniC/EQALRkau9Gx+qzyp1nx1FDTJI1
ox93n7SKC3QIjX4veVuFjpaPymNQXVRM8IbgET5tE4GPT5w+PrscpyGSJJr8yvWN
TKy48JSKl13GVMODnEC6nEffsS/sci5o2PHXhDYa7aC+xRF6UUSMa8tqXnhGJP7e
uv7a1tYjtgE8Ix9tdoK32UkPOM0Z1qr11lPFDdG0GrIs+mbjPirdKSgvQm22w4IU
jyn5AmmReA6ZnIpffOHGQY5OgpGTg4yg+aaFKenisOfIL80raNZlVuWrzDkTUS9k
+gikqtBRg1pFMd1UGpl0S7sIXZNm01yv4K4aO3a9aykXqPQLOc8WmvfDgf99+8HR
zUrowh7Xf1CvHsgIs4s0XaggZdXhkXpMpSWdWpVh7ZVm/TPInoPWwyj8Zp/TL8XF
N/GrNHRLuWvSgCuyA7qhkee33FmtCblnYTHSLyGQrVpfq/cVEzvpznsZnObjFG+/
4Gss0qUVQZ0LJUUKZHx5cGvHliXYEeZQaBz/VLJ7J8fvy6Fsp0vKFjbrobG6srB6
pa6NYQKjHhobx+eEW380j3r60iBiz1GjdMSOdLvnSOA9dOcWmXFxl5GLcASnM+F0
kGtZBjLXsaImnp749V50sme+bNgQ/ErUvikTLXefk0rtUnfjCmJec44Kn5Gh7J1k
iI/CjhJrI2B83C48m2kE
=lxo1
-----END PGP SIGNATURE-----
__
OpenSSL Project http://www.openssl.org
Announcement Mailing List openssl-annou...@openssl.org
Automated List Manager   majord...@openssl.org




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Server Certificate Missing SAN

2014-01-24 Thread Jakob Bohm

On 1/24/2014 6:54 PM, Jeffrey Walton wrote:

I don't see a dumb mistake with this one

First, the CSR has multiple SANs:

$ openssl req -text -noout -verify -in servercert.csr
verify OK
Certificate Request:
 Data:
 Version: 0 (0x0)
 Subject: C=XX, ST=XX, L=XX, CN=Test 
Server/emailAddress=t...@example.com
 Subject Public Key Info:
 Public Key Algorithm: rsaEncryption
 Public-Key: (2048 bit)
 Modulus:
 00:ce:3d:58:7f:a0:59:92:aa:7c:a0:82:dc:c9:6d:
 ...
 f9:5e:0c:ba:84:eb:27:0d:d9:e7:22:5d:fe:e5:51:
 86:e1
 Exponent: 65537 (0x10001)
 Attributes:
 Requested Extensions:
 X509v3 Subject Key Identifier:
 1F:09:EF:79:9A:73:36:C1:80:52:60:2D:03:53:C7:B6:BD:63:3B:61
 X509v3 Basic Constraints:
 CA:FALSE
 X509v3 Key Usage:
 Digital Signature, Non Repudiation, Key Encipherment
 X509v3 Subject Alternative Name:
 DNS:example.com, DNS:www.example.com,
DNS:mail.example.com, DNS:ftp.example.com
 Netscape Comment:
 OpenSSL Generated Certificate
 Signature Algorithm: sha256WithRSAEncryption
  6d:e8:d3:85:b3:88:d4:1a:80:9e:67:0d:37:46:db:4d:9a:81:
  ...
  76:6a:22:0a:41:45:1f:e2:d6:e4:8f:a1:ca:de:e5:69:98:88:
  a9:63:d0:a7

Second, attempt to sign it. Notice the lack of SANs in the verification step.

$ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions
signing_req -out servercert.pem -infiles servercert.csr
Using configuration from openssl-ca.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName   :PRINTABLE:'XX'
stateOrProvinceName   :ASN.1 12:'XX'
localityName  :ASN.1 12:''
commonName:ASN.1 12:'Test Server'
emailAddress  :IA5STRING:'t...@example.com'
Certificate is to be certified until Oct 20 17:44:51 2016 GMT (1000 days)

Third, here's the relevant section from openssl-ca.cnf:


[ signing_policy ]
countryName= optional
stateOrProvinceName= optional
localityName= optional
organizationName= optional
organizationalUnitName= optional
commonName= supplied
emailAddress= optional
# subjectAltName= optional


[ signing_req ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# subjectAltName=copy
# subjectAltName=dns:copy

Attempting to use `subjectAltName=dns:copy` results in a parse error,
so I know the section is being read.

The disconnect here seems to be I cannot put `subjectAltName =
@alternate_names` (with appropriate section) in the CA's conf. In this
case, the CA has the SANs in the CSR, but it does not have access to
the other conf file with the `alternate_names` section.

Any ideas how to proceed?




This is a common problem with the openssl interface.  It is practically
a FAQ.

There are two methods, either should work:

- Temporarily edit/duplicate the CA openssl.conf, adding the specific
 alternate_names section needed, for the duration of a single signing
 (a minimal sketch of this follows below).

- Use the setting to copy *all* extensions from the CSR, and carefully
 examine each CSR before signing it.
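
For the first method, a sketch of what the temporary section could look
like in the CA config (section names follow the snippets quoted above;
the DNS entries are illustrative):

[ signing_req ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alternate_names

[ alternate_names ]
DNS.1 = example.com
DNS.2 = www.example.com

Sign as before with openssl ca -config openssl-ca.cnf -extensions
signing_req, then remove or revert the section afterwards.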



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Appending to encrypted data.

2014-01-23 Thread Jakob Bohm

On 1/23/2014 4:55 PM, Sean Langley wrote:

Hi All,

I have been using AES 256, CTR mode to encrypt the contents of a file on
disk.  The IV for the file is written to the first 16 bytes followed by
the encrypted file data.  Up to now, this encrypted data is created with
a single encryption session.  This is all on a mobile device, using FIPS
mode with relatively limited resources, compared with a desktop.

I'd like to be able to append to this encrypted file. In order to do
this, I need to decrypt the final block (in the event there is a partial
block that has been written to the encrypted stream), start the
plaintext portion with this last block, and continue the encryption of
additional data in the file, using a new encryption session.

I've gone through the AES code, and the only way I've found is to set
the state of the initial decryption/encryption based on the number of
blocks, and creating a working IV for the start of the decryption and
encryption process.  This has not been successful for me yet, for some
reason.

Is there a better way to do this with the current OpenSSL API's (EVP, or
lower level)?

Any help would be greatly appreciated.




CTR mode doesn't really use an IV like CBC, just a block counter and a
fixed value.  So for CTR mode you never decrypt the last block to set
up continuation, and there is little point in using the first block as
an IV.

So basically, you just need to set it up with the same fixed value as
the first time, but with a counter corresponding to the block offset
where you will start.  Next, if the previous contents ended in the
middle of a block, just put some unused bytes (0 to block size - 1)
in front of the new data and throw that many bytes of the result away.
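
A sketch of that continuation setup with the EVP interface (helper
names are my own; this assumes the whole 16-byte IV is treated as a
big-endian counter, which is OpenSSL's CTR convention):

#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>

/* Add "blocks" to the 16-byte big-endian counter block. */
static void ctr128_add(unsigned char ivec[16], uint64_t blocks)
{
    int i;
    for (i = 15; i >= 0 && blocks != 0; i--) {
        blocks += ivec[i];
        ivec[i] = (unsigned char)blocks;
        blocks >>= 8;
    }
}

/* Position ctx so the next EVP_EncryptUpdate() continues the
 * keystream at byte "offset" of the file's data area. */
static int ctr_seek(EVP_CIPHER_CTX *ctx, const unsigned char key[32],
                    const unsigned char iv0[16], uint64_t offset)
{
    unsigned char ivec[16], skip[16] = {0};
    int outl;

    memcpy(ivec, iv0, 16);
    ctr128_add(ivec, offset / 16);             /* whole blocks */
    if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, ivec))
        return 0;
    /* burn off the already-used bytes inside the current block */
    return EVP_EncryptUpdate(ctx, skip, &outl, skip, (int)(offset % 16));
}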

Another key trick for CTR mode is to run it in the background before
you even have the data (you may need to pass zeroes as input), and then
just XOR the resulting stream onto the data when you get it.  This can
be a real benefit on embedded devices that run AES slowly and get the
data to encrypt or decrypt from something other than its own
calculations.
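
A sketch of that trick (CTR keystream is independent of the data, so it
can be produced by encrypting zeroes and XORed on later; helper names
are mine):

#include <stddef.h>
#include <string.h>
#include <openssl/evp.h>

/* Fill ks with len keystream bytes; ctx must already be positioned
 * at the right counter value (e.g. by a seek helper as above). */
static int make_keystream(EVP_CIPHER_CTX *ctx, unsigned char *ks, int len)
{
    int outl;
    memset(ks, 0, (size_t)len);                        /* zeroes in...      */
    return EVP_EncryptUpdate(ctx, ks, &outl, ks, len); /* ...keystream out */
}

/* Later, when the real data arrives (works in either direction): */
static void xor_stream(unsigned char *data, const unsigned char *ks, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        data[i] ^= ks[i];
}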

With a little tweaking, these tricks also work for GCM mode, since it
is mostly CTR mode with a checksum computed in parallel and then
encrypted.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: A small note on Windows 8 GetVersion() depreciation

2014-01-09 Thread Jakob Bohm

On 1/9/2014 6:46 AM, Dongsheng Song wrote:

[1] GetVersionEx may be altered or unavailable for releases after
Windows 8.1. Instead, use the Version Helper APIs.

 [1] 
http://msdn.microsoft.com/en-us/library/windows/desktop/ms724451%28v=vs.85%29.aspx


Scandalous!  According to that page, Microsoft has essentially sabotaged
one of the first functions called by most language runtimes and also
introduced rules to actively prevent applications from sanely dealing
with OS differences.  For instance, it seems there is no longer any
interface to detect if the OS was made *after* the application code and
may thus differ subtly from whatever the application knows about the
OS-es in existence at the time.


I think using the 'Version Information Functions'[2] is the better choice.

[2] 
http://msdn.microsoft.com/en-us/library/windows/desktop/ff468915%28v=vs.85%29.aspx


Well, those functions (or something even more low-level, in case
Microsoft sabotages these too) could be a way to work around the
sabotage introduced in Windows NT 6.3 (marketed as 8.1).
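
A minimal sketch of that low-level route, reading the version resource
of kernel32.dll through the classic version.dll API (error handling
trimmed, so treat it as an illustration only):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "version.lib")

int main(void)
{
    DWORD handle, size = GetFileVersionInfoSizeA("kernel32.dll", &handle);
    void *data = size ? malloc(size) : NULL;
    VS_FIXEDFILEINFO *ffi;
    UINT len;

    if (data && GetFileVersionInfoA("kernel32.dll", 0, size, data) &&
        VerQueryValueA(data, "\\", (void **)&ffi, &len))
        printf("OS %u.%u build %u\n",
               (unsigned)HIWORD(ffi->dwProductVersionMS),
               (unsigned)LOWORD(ffi->dwProductVersionMS),
               (unsigned)HIWORD(ffi->dwProductVersionLS));
    free(data);
    return 0;
}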




On Thu, Jan 9, 2014 at 3:11 AM, Jakob Bohm jb-open...@wisemo.com wrote:

While I have not specifically checked the Windows 8 SDK, my extensive
experience with the version detection APIs in Windows tells me the
following:

1. GetVersion() is the only version-detection API available on older
   platform versions.  Later platform versions added variants of
   GetVersionEx(), with each newer variant being available on fewer
   platforms.

2. The order of the bit fields returned by GetVersion() has
   historically confused many developers, therefore Microsoft has long
   told people to avoid it if they don't know what they are doing.
At one point, even the editor of the GetVersion() documentation
   got confused!

3. Starting a few years ago, Microsoft began a trend of using the
   compiler __declspec(deprecate) mechanism to scare developers
   away from functions that are not really deprecated, just not
   recommended for some other reason.  Those deprecations can
   usually be ignored safely by those with good reason to use those
   more portable APIs.

So, if this is just another political compiler warning, there is
little reason to heed it.

Otherwise, the GetVersionEx() function can be used as a replacement,
but only by dropping support for Windows NT 3.10 and maybe Win32s
(NT 3.50 and all the Win9x and WinCE variants include the basic
form of GetVersionEx()).

P.S.

If there is still code in there to support 16 bit Windows 3.x, then
that API includes only GetVersion(), and with a different
specification than its 32/64 bit namesake.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: CRL checking failing in 1.0.1

2014-01-09 Thread Jakob Bohm

On 1/9/2014 8:14 PM, Dr. Stephen Henson wrote:

On Thu, Jan 09, 2014, Bin Lu wrote:


  Here is the problem, in cert_crl():

/* The rules changed for this... previously if a CRL contained
  * unhandled critical extensions it could still be used to indicate
  * a certificate was revoked. This has since been changed since
  * critical extension can change the meaning of CRL entries.
  */
        if (crl->flags & EXFLAG_CRITICAL)
                {
                if (ctx->param->flags & X509_V_FLAG_IGNORE_CRITICAL)
                        return 1;
                ctx->error = X509_V_ERR_UNHANDLED_CRITICAL_CRL_EXTENSION;
                ok = ctx->verify_cb(0, ctx);
                if(!ok)
                        return 0;
                }

Why are we making this change, skipping the critical CRL extensions? This is 
causing all the regressions. In this case, should we expect 
X509_V_ERR_UNHANDLED_CRITICAL_CRL_EXTENSION instead of the validation result 
based on the CRL content? Basically we fail the validation once we encounter a 
critical CRL extension, if flag IGNORE_CRITICAL is not set, or succeed if the 
flag is set, regardless whatsoever in the CRL ???



This is now a requirement of RFC5280 5.2:

If a CRL contains a critical extension
that the application cannot process, then the application MUST NOT
use that CRL to determine the status of certificates.



That seems a strange reading of the RFC.  If a flag to IGNORE this rule
is passed to OpenSSL, that should certainly ignore the rule, not the CRL.

A flag to ignore a MUST rule in an RFC, while obviously violating said
rule, also brings an implementation outside the scope of that rule, if
not the entire RFC (but only when that flag is specified).
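
For reference, a sketch of how an application opts in to that flag
(these are the standard verify-parameter functions):

#include <openssl/x509_vfy.h>

static void ignore_critical(X509_STORE *store)
{
    X509_VERIFY_PARAM *param = X509_VERIFY_PARAM_new();

    if (param != NULL) {
        X509_VERIFY_PARAM_set_flags(param, X509_V_FLAG_IGNORE_CRITICAL);
        X509_STORE_set1_param(store, param);  /* copy into the store */
        X509_VERIFY_PARAM_free(param);
    }
}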


What extension in your CRLs is critical?

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: No fips and --with-fipsdir arguments in OpenSSL 1.0.0l configure script.

2014-01-08 Thread Jakob Bohm

On 1/8/2014 10:42 AM, Abdul Anshad wrote:

Hello All,

I noticed in trying to build OpenSSL 1.0.0l that, Configure doesn't
accept the fips and --with-fipsdir= arguments. But, the OpenSSl 1.0.1f
and OpenSSL 0.9.8y accepts the same.

Does that mean that the OpenSSL 1.0.0l wont support fips mode ? is the
branch OpenSSL 1.0.0 still under fips validation ?



OpenSSL 1.0.0 never had a variant with a FIPS validated submodule.

OpenSSL 0.9.8 can be used with the (old) OpenSSL FIPS module 1.0, by (as 
one of many steps) compiling OpenSSL 0.9.8 --with-fipsdir=


OpenSSL 1.0.1 can be used with the (current) OpenSSL FIPS module 2.0, by
(as one of many steps) compiling OpenSSL 1.0.1 --with-fipsdir=
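
For example (hedged: paths and version numbers per the FIPS User Guide
defaults):

# build and install the validated module first, unmodified
cd openssl-fips-2.0.x && ./config && make && make install
# then build OpenSSL 1.0.1 against it
cd openssl-1.0.1x && ./config fips --with-fipsdir=/usr/local/ssl/fips-2.0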

In either case, it is technically only the FIPS module and not the
OpenSSL library which is subject to FIPS validation.

(Note: There was no OpenSSL 0.9.9)

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: OpenSSL version 1.0.1f released

2014-01-08 Thread Jakob Bohm

Given that Mr. Walton's initial description was wrong, and the
official Changelog is silent on the matter, what is *actually*
new in 1.0.1f and 1.0.0l compared to 1.0.1e and 1.0.0k?

On 1/6/2014 3:49 PM, OpenSSL wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


OpenSSL version 1.0.1f released
===


Snipped rest of announcement boilerplate


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


A small note on Windows 8 GetVersion() depreciation

2014-01-08 Thread Jakob Bohm

While I have not specifically checked the Windows 8 SDK, my extensive
experience with the version detection APIs in Windows tells me the
following:

1. GetVersion() is the only version-detection API available on older
  platform versions.  Later platform versions added variants of
  GetVersionEx(), with each newer variant being available on less
  platforms.

2. The order of the bit fields returned by GetVersion() has
  historically confused many developers, therefore Microsoft has long
  told people to avoid it if they don't know what they are doing.
   At one point, even the editor of the GetVersion() documentation
  got confused!

3. Starting a few years ago, Microsoft began a trend of using the
  compiler __declspec(deprecate) mechanism to scare developers
  away from functions that are not really deprecated, just not
  recommended for some other reason.  Those deprecations can
  usually be ignored safely by those with good reason to use those
  more portable APIs.

So, if this is just another political compiler warning, there is
little reason to heed it.

Otherwise, the GetVersionEx() function can be used as a replacement,
but only by dropping support for Windows NT 3.10 and maybe Win32s
(NT 3.50 and all the Win9x and WinCE variants include the basic
form of GetVersionEx()).
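
For reference, the basic form looks like this (a sketch; as noted
above, it works from NT 3.50 onward):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    OSVERSIONINFOA vi = { sizeof(vi) }; /* dwOSVersionInfoSize must be set */

    if (GetVersionExA(&vi))
        printf("Windows %lu.%lu build %lu\n",
               vi.dwMajorVersion, vi.dwMinorVersion, vi.dwBuildNumber);
    return 0;
}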

P.S.

If there is still code in there to support 16 bit Windows 3.x, then
that API includes only GetVersion(), and with a different
specification than its 32/64 bit namesake.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: Merkle signature scheme

2014-01-07 Thread Jakob Bohm

On 1/6/2014 9:05 PM, Andrey Utkin wrote:

Hi all.
It seems subj is not present in OpenSSL as implementation or any helper
functionality.


Hmm, I believe you are right, as I am not aware of any support for
limiting the number of invocations of a private key, nor am I sure
the OpenSSL code is structured in a way suitable to the unusually
large key sizes needed.


At the moment I have no deep understanding of both MSS and OpenSSL
design, but I'd like to know qualified opinions: is there a possibility
for adding an MSS implementation to OpenSSL? If yes, I could work on
the implementation if I get some mentorship, or I could donate for it.




For a 256 bit security level (512 bit hashes), each one-time Lamport
private or public key will be 64 KB, and each Lamport signature 32 KB.
To allow 2**n signatures per Merkle public key, the private key needs
to be 2**n * 64 KB (e.g. 64 MB for 1024 signatures) and each Merkle
signature will be 32 KB + (n-1) * 64 bytes (e.g. 32.6 KB for 1024
signatures).  The Merkle public key, however, is just a single 64-byte
hash.
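
(For checking the arithmetic: a one-time Lamport key holds two 64-byte
values per hash bit, i.e. 2 * 512 * 64 bytes = 64 KB; a signature
reveals one value per bit, 512 * 64 bytes = 32 KB; and the Merkle path
adds roughly one 64-byte sibling hash per tree level, the
(n-1) * 64 bytes above.)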

While it saves space, using a PRNG to replace the private key with a
seed that generates it is probably not a good idea if you worry about
the level of attacker skill implied by the need for Merkle signatures.

In practice, the infrastructure needed to handle Merkle signatures
with a practical usage count is going to be very similar to that
which is needed to implement one-time pads.  Which means some kind
of tape-like storage with gradual self-destruct as each private key
is read and used.  A modern 1TB backup tape could handle a private
key with 8 million uses, if supplemented by 1 GB of disk storage
for the public key hashes, however backup tape systems don't have
the gradual destruction mechanism, since they are designed for
backup survival, not destruction.

8 million uses corresponds to 8 million hits on an HTTPS server using
Merkle signatures with EDH or ECDH key exchange.  And we still need
a usable quantum-resistant encryption and package mac scheme to make
this worthwhile.  (Plus a quantum resistant key exchange that doesn't
require a private dark fiber between the parties).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: OpenSSL CA and signing certs with SANs

2014-01-07 Thread Jakob Bohm

On 1/7/2014 12:17 AM, Biondo, Brandon A. wrote:

I am using ‘ca’ not ‘x509’. It too ignores/discards extensions. Turning
on copy_extensions solved the issue though, thanks. I have some
follow-up questions:

1.If including SANs in CSRs is non-standard, what is the accepted way of
passing all the metadata you want to an authority to construct your
certificate?



Many commercial CAs take all the certificate information out-of-band
on a web form, the only thing those CAs use from the CSR is that it is
signed with the requested public/private key pair and has the right
subject.


2.Why does the config file say to be careful using copy_extensions? Why
wouldn’t you want all your extensions to be part of your certificate?
Isn’t the whole point of a CSR to package up all the data you want in
your certificate?



Because copy_extensions copies all the extensions in the CSR, so if you
use it, you will need to carefully check every extension in every CSR
you receive from users.  Note that security-wise, you should not
blindly trust a CSR from a less secure computer than the CA computer,
even if you happen to be the person who generated that CSR (when you
take off your user hat and put on your CA administrator hat, you
need to check if the user's computer generated a different CSR than
what you agreed to sign).

When I generate certificates with SANs (which I usually do), I typically
use one of two approaches:

A) For the common case: The CA's openssl.cnf adds the usual SANs as 
extensions, taking the actual name parts from environment variables 
which my scripts set from my input before signing each cert.


B) For the handful of more complex cases, I construct a custom section
in openssl.cnf which adds those specific SANs, as well as any other
unusual extensions.



*From:* owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] *On Behalf Of* Dave Thompson
*Sent:* Monday, January 06, 2014 5:38 PM
*To:* openssl-users@openssl.org
*Subject:* RE: OpenSSL CA and signing certs with SANs

It is debatable whether putting SAN in the request is really ‘proper’;
I don’t know of any ‘real’ (public) CA that accepts it that way.

But for openssl:

If you are using ‘ca’, set copy_extensions in the config file. See the
man page.

If you are using ‘x509 -req’, that ignores/discards extensions from the CSR.

It can **add** extensions from a config file, but since you usually want
SAN to be different for every subject cert that isn’t very convenient.

Do you really mean ‘x509 -signkey’ to selfsign, or ‘req -x509’?

The latter is IME much more common.

*From:* owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] *On Behalf Of* Biondo, Brandon A.
*Sent:* Monday, January 06, 2014 16:16
*To:* openssl-users@openssl.org
*Subject:* OpenSSL CA and signing certs with SANs

Hello,

Forgive me if I breach etiquette. This is my first post to this list in
quite a while.

I am having trouble tracking down information regarding how you
reconfigure an OpenSSL CA to handle SANs in requests. There is a wealth
of information on how to configure OpenSSL to form a proper request, but
in my searching I can only ever find people who use the x509 function to
self-sign their certs. When you use an OpenSSL CA to sign this type of
request, the certificate is made without issue but the SANs are stripped
out of the final product. What am I missing here?

Regards,

Brandon Biondo




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: potential bug in ssl/s3_cbc.c

2013-09-11 Thread Jakob Bohm

On 8/20/2013 8:49 PM, Arthur Mesh wrote:

I am not 100% sure this is a real bug, hence mailing openssl-users
instead of rt@.


641                 if (is_sslv3)
642                         {
<snip>
647                         unsigned overhang = header_length - md_block_size;
648                         md_transform(md_state.c, header);
649                         memcpy(first_block, header + md_block_size, overhang);


My suspicion lies in line 649, where we're copying overhang number of
bytes from (header + md_block_size). I believe that copying from
(header + md_block_size) is an out-of-bounds access (overrun).

header is an array of 13 unsigned chars, and md_block_size == 64 (or
128 in some cases). Hence (header + md_block_size) points outside of
header[13]. Assuming overhang > 0, by doing a memcpy(), we have a
problem, no?



I think you got this partially wrong.

If sizeof(header) == header_length &&
   header_length >= md_block_size &&
   sizeof(first_block) >= header_length - md_block_size
then the above code will not overflow.

But:

If header_length < md_block_size
then
  This code will massively overflow and crash, as it tries to copy
  almost MAX_UNSIGNED_INT bytes (the unsigned subtraction wraps around)


If sizeof(first_block) < header_length - md_block_size
then
  This code will overflow first_block.

I sure hope there is code in there which checks the validity of the two 
inequalities, either directly or by only using hardcoded known good 
values for those parameters.
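
In C terms, using the names from the snippet above, the two
inequalities amount to:

OPENSSL_assert(header_length >= md_block_size);
OPENSSL_assert(header_length - md_block_size <= sizeof(first_block));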



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


Re: 32-bit Windows rebasing of OpenSSL FIPS library

2013-09-11 Thread Jakob Bohm

On 9/6/2013 6:26 PM, Perrow, Graeme wrote:

I am having trouble loading the OpenSSL FIPS DLLs (2.0.5, using OpenSSL
1.0.1e) in my 32-bit Windows application. Most of the time I get a
“fingerprint does not match” error from FIPS_mode_set but now and again,
with no code changes, it succeeds. I have a feeling it has to do with
“rebasing” and where the DLL is loaded into memory.

Running dumpbin on libeay32.dll shows an image base of “FB00000” which I
believe is correct. My application executable loads a DLL which loads a
second DLL, and that second DLL is the one that loads the FIPS DLLs. My
DLLs are built with /FIXED:NO and /BASE: with a specific address. I have
made no changes to the OpenSSL build procedure.

My problem is that I don’t understand rebasing well enough to know if I
need to build my DLLs differently, or if I need to tweak the OpenSSL
build procedure.

I was seeing a similar problem on 64-bit Windows for a while but that
error just “magically” went away and I haven’t seen it since. I’m
worried that the 64-bit problem is still there and I’m just getting lucky.



Rebasing is like relinking with a new value of /BASE:, but with the
linker magically making all the same decisions and without needing
access to the original .obj and .lib files.

Rebasing is also like loading a DLL into a process where its preferred
address is already in use by another allocation or DLL, causing the
loader to load it at a different address and applying the relocation
table in the DLL file to make the code work at the new address.  This
is in fact the secret to how the rebasing tools work without source
code access.

The big problem is that the FIPS fingerprinting method used by OpenSSL
seems to assume that the code will never be relocated between the time
the fingerprint was generated by an app developer and the time when the
code checks the fingerprint, which is a false assumption on any
platform that does address space layout randomization and a dubious
assumption on any dynamic platform.

So I would suggest that any Windows file containing the FIPS module (be
it a DLL or an EXE), needs to be linked with /FIXED to make it
loadable only at the address specified with /BASE and neither
relocatable nor rebasable.  The effect of this is that Windows will then
try as hard as possible to load it at the specified base address or
report an explicit error otherwise.  Official Microsoft guidelines
suggest that the fixed base for a DLL should be in the range 0x50000000
to 0x6FFFFFFF on 32 bit platforms and much higher on 64 bit platforms,
however you need to scan other (system and application) DLLs to make
sure you don't choose an address range used by any other DLL likely to
be loaded into your process.  EXE files should use the default /BASE 
value in any post-1997 linker (a few very old linkers used a different

value not compatible with later platforms but the new value is
backwards compatible).
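
As an illustration (the base address is just an example from the
suggested range; check your process's DLL layout before choosing one):

link /DLL /FIXED /BASE:0x60000000 /OUT:mycrypto.dll ...objects and libs...

or equivalently, add /FIXED and /BASE to the DLL's existing link step.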


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-users@openssl.org
Automated List Manager   majord...@openssl.org


<    2   3   4   5   6   7   8   9   10   11   >