Re: [openssl-dev] Speck Cipher Integration with OpenSSL

2018-01-08 Thread Paul Dale
I'm wondering if one of the more specialised embedded cryptographic toolkits 
mightn't be a better option for your lightweight IoT TLS stack.  There is a 
wide choice available: CycloneSSL, ECT, Fusion, MatrixSSL, mbedTLS, NanoSSL, 
SharkSSL, WolfSSL, uC/SSL and many others.  All of them claim to be the 
smallest, fastest and most feature-laden :)  To sell to the US government, 
your first selection criterion should be "does the toolkit have a current FIPS 
validation?"  From memory this means: ECT, NanoSSL or WolfSSL.

The more comprehensive toolkits (OpenSSL, NSS, GnuTLS) are less suitable for 
embedded applications, especially tightly resource-constrained ones.  It is 
possible to cut OpenSSL down in size, but it will never compete with the 
designed-for-embedded toolkits.  Plus, the FIPS module is fixed and cannot be 
shrunk.

The current OpenSSL FIPS validation applies only to 1.0.2 builds.  FIPS 
support is on the project plan for 1.1 but it isn't available at the moment.  
The US government is forbidden from purchasing any product that contains 
cryptographic operations unless the product has a FIPS validation.  No FIPS, 
no sale.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-Original Message-
From: William Bathurst [mailto:wbath...@gmail.com] 
Sent: Tuesday, 9 January 2018 7:10 AM
To: openssl-dev@openssl.org
Cc: llamour...@gmail.com
Subject: Re: [openssl-dev] Speck Cipher Integration with OpenSSL

Hi Hanno/all,

I can understand your view that "more is not always good" in crypto. The 
reasoning behind the offering can be found in the following whitepaper:

https://csrc.nist.gov/csrc/media/events/lightweight-cryptography-workshop-2015/documents/papers/session1-shors-paper.pdf

I will summarize in a different way though. We wish to offer an optimized 
lightweight TLS for IoT. A majority of devices found in IoT are resource 
constrained; for example, a device CPU may have only 32K of RAM, so security 
becomes an afterthought for developers. For some devices only AES-128 is 
available even though 256-bit encryption is wanted; Speck-256 would then be an 
option because it has better performance and provides sufficient security.

Based on the above scenario you can likely see why we are interested in 
OpenSSL. First, OpenSSL can be used for terminating lightweight TLS connections 
near the edge, and then forwarding using commonly used ciphers.

[IoT Device] --TLS/Speck--> [IoT Gateway] --TLS--> [Services]

Also, we are interested in using OpenSSL libraries at the edge for client 
creation. One thing we would like to do is provide instructions for a highly 
optimized build of OpenSSL that can be used for constrained devices.

I think demand will eventually grow because there is an initiative by the US 
government to improve IoT security, and Speck is being developed and proposed 
as a standard within the government. Therefore, I expect users who wish to play 
in this space would be interested in a version of OpenSSL where Speck could be 
used.

It is my hope to accomplish the following:

[1] Make Speck available via open source, either as an option or as a patch in 
OpenSSL.
[2] If we make it available as a patch, is there a place where we would 
announce/make it known that it is available?

We are also looking at open-sourcing the client-side code. This would be used 
to create lightweight clients that use Speck; currently we also build basic 
OAuth capability on top of it.

Thanks for your input!

Bill

On 1/5/2018 11:40 AM, Hanno Böck wrote:
> On Fri, 5 Jan 2018 10:52:01 -0800
> William Bathurst  wrote:
>
>> 1) Community interest in such a lightweight cipher.
> I think there's a shifting view that "more is not always good" in 
> crypto. OpenSSL has added features in the past "just because" and it 
> was often a bad decision.
>
> Therefore I'd generally oppose adding ciphers without a clear usecase, 
> as increased code complexity has a cost.
> So I think questions that should be answered:
> What's the usecase for speck in OpenSSL? Are there plans to use it in 
> TLS? If yes why? By whom? What advantages does it have over existing 
> ciphers? (Yeah, it's "lightweight", but that's a pretty vague thing.)
>
>
> Also just for completeness, as some may not be aware: There are some 
> concerns about Speck due to its origin (aka the NSA). I don't think 
> that is a reason to dismiss a cipher right away, what I'd find more 
> concerning is that from what I observed there hasn't been a lot of 
> research about speck.
>

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] FIPS module for 1.1.x ?

2017-11-20 Thread Paul Dale
With the recent changes in the OMC, how has the OpenSSL 1.1.x FIPS plan 
changed?  Can the OMC provide some insight?  It was suggested that I wait for 
things to settle before asking, and it has been a few weeks since the last of 
the announcements.

 

Previously, there was a plan to expand the engine interface so it could provide 
FIPS capability but I understand that other possibilities are again under 
consideration.

Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Plea for a new public OpenSSL RNG API

2017-08-30 Thread Paul Dale
To access a PKCS#11 randomness source, it would be necessary to have an engine 
that implemented whatever new RNG API is defined, which in turn talks to the 
P11 device.  Possibly not ideal, but workable.

As for the entropy argument to RAND_add et al., the callee will use it in a 
manner suitable to itself.  For a DRBG, the buffer will likely be added as 
additional data and the argument ignored.  Other RNGs could use it differently. 
NIST SP 800-90B 3.1.5.1.2 specifies the minimum amount of entropy that must be 
input in order to achieve a desired output.  In this case the RNG would 
accumulate the entropy arguments until it achieves the requisite level.
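
A minimal sketch of that accumulation idea (the structure and names here are 
illustrative only, not OpenSSL API):

#include <stddef.h>

typedef struct {
    double entropy_bits;     /* assessed entropy gathered so far */
    double required_bits;    /* e.g. 256 for a 256-bit security strength */
    int seeded;
} RNG_STATE;

static void rng_add(RNG_STATE *rng, const void *buf, size_t len,
                    double entropy)
{
    (void)buf; (void)len;    /* mixing of buf into the state elided */
    rng->entropy_bits += entropy;    /* trust the caller's assessment */
    if (rng->entropy_bits >= rng->required_bits)
        rng->seeded = 1;             /* requisite level achieved */
}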

The trust issue is both harder and easier.  I agree completely that working out 
whether you trust the assessed entropy or not is an incredibly difficult 
(impossible) task.  However, we're a library and the user is telling us what 
_they_ trust.  It isn't OpenSSL's place to try to second-guess this; at best, 
issue a warning.  We don't stop users choosing poor passwords, using zero as an 
IV or generating a sixteen-bit RSA key.


As for the new RNG engine API, I've been considering the benefits of having two 
calls: one to get random bytes, the other to request entropy.  The first can be 
whitened or produced by a DRBG etc.; the second also returns an estimate of the 
quality.  Essentially the difference between RDRAND and RDSEED.
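
A rough sketch of such a two-call interface (all names are invented for 
illustration):

#include <stddef.h>

typedef struct rng_engine_method_st {
    /* whitened output, analogous to RDRAND */
    int (*get_random)(void *ctx, unsigned char *out, size_t len);
    /* raw entropy, analogous to RDSEED; *bits_per_byte receives the
     * source's own estimate of its quality */
    int (*get_entropy)(void *ctx, unsigned char *out, size_t len,
                       double *bits_per_byte);
} RNG_ENGINE_METHOD;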


Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-Original Message-
From: Blumenthal, Uri - 0553 - MITLL [mailto:u...@ll.mit.edu] 
Sent: Thursday, 31 August 2017 12:27 AM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] Plea for a new public OpenSSL RNG API

On 8/30/17, 00:59, "openssl-dev on behalf of Paul Dale" 
 wrote:

>My thoughts are that the new RNG API should be made public once it has
>been properly designed.  We've a chance to get this right, let's take the time
>and make an effort to do so.  There is no rush.

Not quite. If there is an RNG involved in generating long-term keys now – users 
better be able to control/affect it now.

>   I also feel that having an engine interface to the new RNG API is worthwhile.

+1

>   It allows hardware sources to be used via the same API.

I rather doubt this. For example, my smartcard (accessible via PKCS#11) is a 
hardware source, which I occasionally use. How do you see it used with the same 
API?

>I would like to see an entropy argument to the equivalent to RAND_add.
>   Anyone is welcome to always pass zero in but there should be the option to not.
>   Consider an hardware source with _provable_ output quality, why shouldn't it be
>   allowed to meaningfully contribute?

What’s the purpose of passing the entropy argument? How is the callee (the RNG) 
going to use it? Why should the OpenSSL code in general trust the received 
value (how can OpenSSL tell that the received randomness is indeed from a 
hardware source with provable output quality)? Finally, what does it matter? 

And they all “meaningfully contribute”. The only question is whether this 
contribution should prevent the RNG from acquiring more entropy from other 
sources. My opinion is a resounding no.

>   I like the idea of two independent global RNGs.

+1  ;-)

>  This does increase seeding requirements however.

If you can seed one, you can seed two.



-Original Message-
From: Dr. Matthias St. Pierre [mailto:matthias.st.pie...@ncp-e.com] 
Sent: Tuesday, 29 August 2017 7:45 PM
To: openssl-dev@openssl.org
Subject: [openssl-dev] Plea for a new public OpenSSL RNG API

Hi everybody,

on the [openssl-dev] mailing list, there has been a long ongoing discussion 
about the new RAND_DRBG API and comparing it with the old RAND_METHOD API (see 
"[openssl-dev] Work on a new RNG for OpenSSL"). Two of the most controversial 
questions were:

 - Do we really need a new RNG API? Should the RAND_DRBG API be made public 
or kept private? (Currently, it's exported from libcrypto but only used 
internally by libssl.)
 - How much control should the user (programmer) be given over the 
reseeding process and/or should he be allowed to add his own additional 
randomness?

Many developers seem to be realizing the interesting possibilities of the 
DRBG API and are asking for public access to this new and promising API. One of 
the driving forces behind it is the question about how to do seeding and 
reseeding right. Among others, Uri Blumenthal asked for making the DRBG API 
public.

Currently, the OpenSSL core members seem to be reluctant to make the API 
public, at least at this early stage. I understand Rich Salz's viewpoint that 
this requires a thorough discussion, because a public interface can't be easily 
changed and wrong decisions in the early phase can become a heavy burden.

Re: [openssl-dev] Plea for a new public OpenSSL RNG API

2017-08-29 Thread Paul Dale
My thoughts are that the new RNG API should be made public once it has been 
properly designed.  We've a chance to get this right, let's take the time and 
make an effort to do so.  There is no rush.

I also feel that having an engine interface to the new RNG API is worthwhile.  
It allows hardware sources to be used via the same API.  Again, this will 
require design work to get right.  Likewise, supporting multiple DRBGs seems 
reasonable from a defence in depth perspective.

I would like to see an entropy argument to the equivalent of RAND_add.  Anyone 
is welcome to always pass zero in, but there should be the option not to.  
Consider a hardware source with _provable_ output quality: why shouldn't it be 
allowed to meaningfully contribute?  Each RNG should decide if it uses the 
argument or not.  Thus, a DRBG that uses RAND_add to provide additional data 
needn't count it.

I like the idea of two independent global RNGs.  Keeping the generation of 
long-lived key material segregated from other uses of randomness seems sensible 
-- there is no possibility of cross-compromise.  This does increase seeding 
requirements, however.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia


-Original Message-
From: Dr. Matthias St. Pierre [mailto:matthias.st.pie...@ncp-e.com] 
Sent: Tuesday, 29 August 2017 7:45 PM
To: openssl-dev@openssl.org
Subject: [openssl-dev] Plea for a new public OpenSSL RNG API

Hi everybody,

on the [openssl-dev] mailing list, there has been a long ongoing discussion 
about the new RAND_DRBG API and comparing it with the old RAND_METHOD API (see 
"[openssl-dev] Work on a new RNG for OpenSSL"). Two of the most controversial 
questions were:

 - Do we really need a new RNG API? Should the RAND_DRBG API be made public or 
kept private? (Currently, it's exported from libcrypto but only used internally 
by libssl.)
 - How much control should the user (programmer) be given over the reseeding 
process and/or should he be allowed to add his own additional randomness?

Many developers seem to be realizing the interesting possibilities of the DRBG 
API and are asking for public access to this new and promising API. One of the 
driving forces behind it is the question about how to do seeding and reseeding 
right. Among others, Uri Blumenthal asked for making the DRBG API public.

Currently, the OpenSSL core members seem to be reluctant to make the API 
public, at least at this early stage. I understand Rich Salz's viewpoint that 
this requires a thorough discussion, because a public interface can't be easily 
changed and wrong decisions in the early phase can become a heavy burden.

Nevertheless, I agree with Uri Blumenthal that the DRBG API should be made 
public. So here comes my

======================================
Plea for a new public OpenSSL RNG API:
======================================

The new RAND_DRBG is the superior API. It shouldn't be kept private and hidden 
behind the ancient RAND_METHOD API. The philosophy of the two APIs is not very 
well compatible, in particular when it comes to reseeding and adding additional 
unpredictable input. Hiding the RAND_DRBG behind the RAND_METHOD API only 
causes problems. Also, it will force people to patch their OpenSSL copy if they 
want to use the superior API.

The RAND_DRBG API should become the new public OpenSSL RNG API and the old 
RAND_METHOD API should be deprecated in the long run. This transition does not 
need to be rushed, but it would be good if there were early consent on the road 
map. I am thinking of a smooth transition with a phase of coexistence and a 
compatibility layer mapping the default RAND_METHOD to the default public 
RAND_DRBG instance. (This compatibility layer already exists; it's the 
'RAND_OpenSSL()' method.)



Historical Background
=====================

As Rich already mentioned in his blog post, the RAND_DRBG isn't new. It's been 
a part of OpenSSL for a long time, hidden in the FIPS 2.0 Object Module.

I have been working with the FIPS DRBG for quite a while now, using a 
FIPS-capable OpenSSL 1.0.2x crypto library. The reason why our company switched 
to the FIPS DRBG is that one of our products runs on a small hardware device 
which does not have a reliable entropy source, but the product has to meet high 
security standards, in particular w.r.t. its RNG. So we decided to use the 
SmartCard RNG as primary entropy source for a deterministic AES-CTR based RNG 
and use /dev/urandom as additional input. Reseeding should occur on every 
generate request. Using the FIPS DRBG, these requirements were easily met, 
because the API gives such a fine grained control over reseeding and adding 
additional entropy.

The DRBG was well documented, its design in NIST SP800-90A (now: NIST 
SP800-90Ar1) and its API in the OpenSSL FIPS 2.0 User Guide.

Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-08-21 Thread Paul Dale
Uri wrote:
>>It might also use things like RDRAND / RDSEED which we don't trust.
> ...
>  From cryptography point of view, it cannot hurt, but may help a lot

There is a scenario where it does hurt: 
https://www.lvh.io/posts/2013/10/thoughts-on-rdrand-in-linux.html

This attack wouldn't be difficult to implement given all the out-of-order 
execution and look-ahead that CPUs do.  It requires a compromised RDRAND 
instruction changing the behaviour of a subsequent XOR into a copy.  Not only 
would it not be producing random bits, it would remove any randomness from the 
bits you already have.
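
A tiny self-contained illustration of the scenario (rdrand64_backdoored() is 
hypothetical, standing in for a compromised instruction that can observe the 
pool it is about to be mixed into):

#include <stdint.h>

static uint64_t pool;                  /* existing gathered randomness */

static uint64_t rdrand64_backdoored(void)
{
    return pool;                       /* malicious: echo the pool back */
}

static void mix(void)
{
    pool ^= rdrand64_backdoored();     /* the XOR becomes a wipe: pool == 0 */
}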


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-28 Thread Paul Dale
Cory asked:
> When you say “the linked article”, do you mean the PCWorld one?

My apologies, I meant the one Ted referred to soon after.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia


-Original Message-
From: Cory Benfield [mailto:c...@lukasa.co.uk] 
Sent: Wednesday, 28 June 2017 5:15 PM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] Work on a new RNG for OpenSSL


> On 28 Jun 2017, at 04:00, Paul Dale  wrote:
> 
> 
> Peter Waltenberg wrote:
>> The next question you should be asking is: does our proposed design mitigate 
>> known issues ?. 
>> For example this:
>> http://www.pcworld.com/article/2886432/tens-of-thousands-of-home-routers-at-risk-with-duplicate-ssh-keys.html
> 
> Using the OS RNG won't fix the lack of boot time randomness unless there is a 
> HRNG present.
> 
> For VMs, John's suggestion that /dev/hwrng should be installed is reasonable.
> 
> For embedded devices, a HRNG is often not possible.  Here getrandom() (or 
> /dev/random since old kernels are common) should be used.  Often /dev/urandom 
> is used instead and the linked article is the result.  There are possible 
> mitigations that some manufacturers include (usually with downsides).

When you say “the linked article”, do you mean the PCWorld one? Because that 
article doesn’t provide any suggestion that /dev/urandom has anything to do 
with it. It is at least as likely that the SSH key is hard-coded into the 
machine image. The flaw here is not “using /dev/urandom”, it’s “exposing your 
router’s SSH access on the external side of the router”, plus the standard 
level of poor configuration done by shovelware router manufacturers.

Cory

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Paul Dale

Peter Waltenberg wrote:
> The next question you should be asking is: does our proposed design mitigate 
> known issues ?. 
> For example this:
> http://www.pcworld.com/article/2886432/tens-of-thousands-of-home-routers-at-risk-with-duplicate-ssh-keys.html

Using the OS RNG won't fix the lack of boot time randomness unless there is a 
HRNG present.

For VMs, John's suggestion that /dev/hwrng should be installed is reasonable.

For embedded devices, a HRNG is often not possible.  Here getrandom() (or 
/dev/random since old kernels are common) should be used.  Often /dev/urandom 
is used instead and the linked article is the result.  There are possible 
mitigations that some manufacturers include (usually with downsides).

The question is: should a security toolkit try to do a good job regardless?

I've seen more cases than I care to count where long-term cryptographic 
material is generated on first boot out of the factory.  I've even seen some 
cases where this was done during the factory test.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Paul Dale
Ben wrote:
> On 06/27/2017 07:24 PM, Paul Dale wrote:
>> The hierarchy of RNGs will overcome some of the
>> performance concerns.  Only the root needs to call getrandom().
>> I do agree that having a DRBG at the root level is a good idea though.
 
> Just to check my understanding, the claim is that adding more layers of 
> hashing and/or encryption will still be faster than a larger number of 
> syscalls?

I'm not sure if it will be faster or not, although it seems likely.  The kernel 
will have to do the same cryptographic operations so using it adds a syscall 
overhead.  If the kernel is doing different cryptographic operations, then it 
could be faster.
 
However, I'm more interested in separation of the random sources.  I'd prefer 
to not be sharing my RNG with others if possible.  A compromise is unlikely but 
if one happens it would be nice to limit the damage.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

From: Benjamin Kaduk [mailto:bka...@akamai.com] 
Sent: Wednesday, 28 June 2017 11:22 AM
To: openssl-dev@openssl.org; Paul Dale 
Subject: Re: [openssl-dev] Work on a new RNG for OpenSSL


-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Paul Dale
The hierarchy of RNGs will overcome some of the performance concerns.  Only the 
root needs to call getrandom().

I do agree that having a DRBG at the root level is a good idea though.

Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

From: Salz, Rich via openssl-dev [mailto:openssl-dev@openssl.org] 
Sent: Wednesday, 28 June 2017 4:56 AM
To: Kaduk, Ben ; openssl-dev@openssl.org; Matt Caswell 

Subject: Re: [openssl-dev] Work on a new RNG for OpenSSL

 

For windows RAND_bytes should just call CryptGenRandom (or its equiv).  For 
modern Linux, probably call getrandom(2).  For OpenBSD call arc4random().

 

Getrandom() is a syscall, and I have concerns about the syscall performance.  I 
would rather feed getrandom (or /dev/random if that’s not available) into a 
FIPS DRBG generator.
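
For concreteness, the OS calls mentioned here look roughly like this sketch 
(Linux and OpenBSD paths only; the Windows CryptGenRandom path is omitted and 
error handling is trimmed):

#include <stddef.h>
#include <sys/types.h>

#if defined(__linux__)
# include <sys/random.h>
static int os_random(unsigned char *buf, size_t len)
{
    return getrandom(buf, len, 0) == (ssize_t)len;
}
#elif defined(__OpenBSD__)
# include <stdlib.h>
static int os_random(unsigned char *buf, size_t len)
{
    arc4random_buf(buf, len);    /* cannot fail */
    return 1;
}
#endif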

 
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-26 Thread Paul Dale

Perhaps make it optional?  Embedded systems have trouble with random state at 
boot and a ~/.rnd file or equivalent is beneficial here.  I've implemented this 
to seed /dev/random a couple of times now.  It isn't ideal but it is better 
than nothing.
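
As a sketch, crediting a saved seed file to /dev/random on Linux looks like 
this (requires root; the entropy figure is purely the caller's claim):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

static int seed_dev_random(const unsigned char *buf, int len, int bits)
{
    struct {
        struct rand_pool_info info;
        unsigned char pad[512];        /* storage for info.buf[] */
    } p;
    int fd, rc;

    if (len > (int)sizeof(p.pad))
        return -1;
    p.info.entropy_count = bits;       /* claimed entropy, in bits */
    p.info.buf_size = len;
    memcpy(p.info.buf, buf, len);
    if ((fd = open("/dev/random", O_WRONLY)) < 0)
        return -1;
    rc = ioctl(fd, RNDADDENTROPY, &p.info);
    close(fd);
    return rc;
}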


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] Code Health Tuesday - summary

2017-04-12 Thread Paul Dale
Code Health Tuesday is over once again.

 

In total, 27 PRs were raised for the event, with three of these as yet 
unmerged.  About thirty tests were updated, which represents roughly half of 
the outstanding test cases.

All in all, a solid outcome for testing uniformity.

Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

From: Paul Dale 
Sent: Thursday, 6 April 2017 3:40 PM
To: openssl-dev@openssl.org
Subject: [openssl-dev] Code Health Tuesday - test modernisation

Next week on the 11th of April it is Code Health Tuesday again.

This fortnight it will be about updating the C unit tests to use the test 
framework. Everyone is invited to participate to help bring consistency and 
order to the unit tests.

Many of the existing C tests are ad-hoc.  The desired form of C test 
executables is described at the end of test/README.  A brief description of the 
condition and output framework is in the list archives: 
https://www.mail-archive.com/openssl-dev@openssl.org/msg46648.html.  Some tests 
have already been updated to use both to serve as examples.

Regards,

Pauli
(at the suggestion of the dev team)

FAQ:

Q: How do I participate?
A: Once you've updated your tests, create a GitHub pull request and put "code 
health" in the title. Such commits will be monitored for quick turnaround.

Q: Which tests should I convert?
A: There is a conversion spreadsheet: 
https://docs.google.com/spreadsheets/d/1VJTmEVT1EyYxZ90GnhAPd4vtFg74Ij3Y-pthjXdmH80/edit#gid=0
This lists all the C tests; select one you want to work on and tag it to avoid 
duplication.

Q: Which branch should I target?
A: Master is the one.  It is the only branch with the new infrastructure.

Q: Where do I go if the infrastructure isn't working?
A: Post the problem here.

Q: Can I suggest improvements to the infrastructure?
A: Sure thing, post them here too.

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Code Health Tuesday - test modernisation

2017-04-09 Thread Paul Dale
A quick reminder that tomorrow is _test update_ Code Health Tuesday.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

From: Paul Dale 
Sent: Thursday, 6 April 2017 3:40 PM
To: openssl-dev@openssl.org
Subject: [openssl-dev] Code Health Tuesday - test modernisation

Next week on the 11th of April it is Code Health Tuesday again.

This fortnight it will be about updating the C unit tests to use the test 
framework. Everyone is invited to participate to help bring consistency and 
order to the unit tests.

    
Many of the existing C tests are ad-hoc.  The desired form of C test 
executables is described at the end of test/README.  A brief description of the 
condition and output framework is in the list archives: 
https://www.mail-archive.com/openssl-dev@openssl.org/msg46648.html.  Some tests 
have already been updated to use both to serve as examples.


Regards,

Pauli
(at the suggestion of the dev team)


FAQ:

Q: How do I participate?
A: Once you've updated your tests, create a GitHub pull request and put "code 
health" in the title. Such commits will be monitored for quick turnaround.

Q: Which tests should I convert?
A: There is a conversion spreadsheet: 
https://docs.google.com/spreadsheets/d/1VJTmEVT1EyYxZ90GnhAPd4vtFg74Ij3Y-pthjXdmH80/edit#gid=0
This lists all the C tests; select one you want to work on and tag it to avoid 
duplication.

Q: Which branch should I target?
A: Master is the one.  It is the only branch with the new infrastructure.

Q: Where do I go if the infrastructure isn't working?
A: Post the problem here.

Q: Can I suggest improvements to the infrastructure?
A: Sure thing, post them here too.


-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] Code Health Tuesday - test modernisation

2017-04-05 Thread Paul Dale
Next week on the 11th of April it is Code Health Tuesday again.

This fortnight it will be about updating the C unit tests to use the test 
framework. Everyone is invited to participate to help bring consistency and 
order to the unit tests.

Many of the existing C tests are ad-hoc.  The desired form of C test 
executables is described at the end of test/README.  A brief description of the 
condition and output framework is in the list archives: 
https://www.mail-archive.com/openssl-dev@openssl.org/msg46648.html.  Some tests 
have already been updated to use both to serve as examples.

Regards,

Pauli
(at the suggestion of the dev team)

FAQ:

Q: How do I participate?
A: Once you've updated your tests, create a GitHub pull request and put "code 
health" in the title. Such commits will be monitored for quick turnaround.

Q: Which tests should I convert?
A: There is a conversion spreadsheet: 
https://docs.google.com/spreadsheets/d/1VJTmEVT1EyYxZ90GnhAPd4vtFg74Ij3Y-pthjXdmH80/edit#gid=0
This lists all the C tests; select one you want to work on and tag it to avoid 
duplication.

Q: Which branch should I target?
A: Master is the one.  It is the only branch with the new infrastructure.

Q: Where do I go if the infrastructure isn't working?
A: Post the problem here.

Q: Can I suggest improvements to the infrastructure?
A: Sure thing, post them here too.

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] Test framework improvements

2017-03-28 Thread Paul Dale
A number of improvements to the output of the C portion of the test framework 
have been made.
Specifically, a number of functions have been added to provide uniform 
reporting of test case failures.

You access this functionality by including "testutil.h".

There are two unconditional functions: TEST_info and TEST_error which print 
informative and error messages respectively.  They have no return value and 
accept a printf format string plus arguments.

All of the remaining functions are conditional tests.  They return 1 if the 
condition is true and 0 if false.  They output a uniform diagnostic message in 
the latter case and nothing in the former.  The majority of these are of the 
form TEST_type_cond, where _type_ is the type being tested and _cond_ is the 
relation being tested.  The currently available types are:

type    C type
int     int
uint    unsigned int
char    char
uchar   unsigned char
long    long
ulong   unsigned long
size_t  size_t
ptr     void *
str     char *
mem     void *, size_t


For the integral types, cond can be:

cond    C comparison
eq      ==
ne      !=
gt      >
ge      >=
lt      <
le      <=


For the pointer types, cond can only be _eq_ or _ne_.  In the case of _str_ and 
_mem_, the memory pointed to is compared.  For _ptr_ just the pointers 
themselves are compared.  The _mem_ comparisons take a pair of pointers + sizes 
as arguments (i.e. ptr1, size1, ptr2, size2).


There are two additional shorthand calls for ptr:

TEST_ptr        ptr != NULL
TEST_ptr_null   ptr == NULL


Finally, there are two calls to check Boolean values:

TEST_true   checks for != 0
TEST_false  checks for == 0


In all cases, it is up to the test executable to process the return codes and 
to indicate success or failure to the Perl test framework.  This would usually 
be done using:

if (!TEST_cond(args))
return 0;
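
For example, a converted test function might look like this (the values tested 
are arbitrary; registration with the test runner is omitted):

#include "testutil.h"

static int test_example(void)
{
    static const unsigned char a[] = { 1, 2, 3 };
    static const unsigned char b[] = { 1, 2, 3 };
    const char *s = "abc";

    if (!TEST_int_eq(2 + 2, 4))
        return 0;
    if (!TEST_str_eq(s, "abc"))
        return 0;
    if (!TEST_mem_eq(a, sizeof(a), b, sizeof(b)))
        return 0;
    if (!TEST_true(s[0] == 'a'))
        return 0;
    return 1;    /* success is reported back to the perl framework */
}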


See the brief notes at the end of test/README and look at some of the test 
cases that have been converted already:

test/asn1_internal_test.c
test/cipherlist_test.c
test/crltest.c
test/lhash_test.c
test/mdc2_internal_test.c
test/pkey_meth_test.c
test/poly1305_internal_test.c
test/ssl_test.c
test/ssl_test_ctx_test.c
test/stack_test.c
test/tls13encryptiontest.c
test/tls13secretstest.c
test/x509_internal_test.c
test/x509_time_test.c


To see some example output of failing tests, run the test_test case:

make test TESTS=test_test VERBOSE=1

To see examples of all of the new test functions, have a look in 
test/test_test.c.  This features both passing and failing calls.  However, the 
actual error handling is not normal for a test executable because it treats 
desired failures as passes.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] About Chinese crypto-algorithms

2016-09-27 Thread Paul Dale
There are a couple of draft standards available:

SM2 DSA: https://tools.ietf.org/html/draft-shen-sm2-ecdsa-02
SM3 Hash: https://tools.ietf.org/html/draft-shen-sm3-hash-01

Neither of these two looks like it would be difficult to implement.

I've not located English versions of the other algorithms but I haven't looked 
too deeply.


Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia


-Original Message-
From: Salz, Rich [mailto:rs...@akamai.com] 
Sent: Wednesday, 28 September 2016 2:26 AM
To: openssl-dev@openssl.org; robin 
Subject: Re: [openssl-dev] About Chinese crypto-algorithms

> Is there currently any documentation at all on these Chinese algorithms?
> I'm certainly curious, and I'm sure others in the OpenSSL community will be.

Also, please know that we are already looking at several large projects (TLS 
1.3, FIPS, etc).  In my personal opinion, I would be surprised if anyone on the 
team had a lot of time to spend on this.  We have already turned down 
Camellia-GCM, for example.

An English specification, test vectors, and a complete implementation as a Pull 
Request are the most likely ways for it to happen.  Even better would be to 
implement it as a separate ENGINE, like Gost is.  Then we only need to reserve 
a few #define's for you.
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-07-28 Thread Paul Dale
I probably should have mentioned this in my earlier message, but the 
exponential example is valid for the NIST SP800-90B non-IID tests too: 5.74889 
bits per byte of assessed entropy.  Again, about as good a result as the tests 
will ever produce, given the ceiling of six on the output.  There is still zero 
actual entropy in the data.  The tests have massively overestimated.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia


-Original Message-
From: Kurt Roeckx [mailto:k...@roeckx.be] 
Sent: Friday, 29 July 2016 8:31 AM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] DRBG entropy

On Wed, Jul 27, 2016 at 05:32:49PM -0700, Paul Dale wrote:
> John's spot on the mark here.  Testing gives a maximum entropy not a minimum. 
>  While a maximum is certainly useful, it isn't what you really need to 
> guarantee your seeding.

From what I've read, some of the non-IID tests actually underestimate the 
actual entropy.  Which is of course better than overestimating it, but it's 
also annoying.

It will also never give a value higher than 6, since one of the tests only uses 
6 bits of the input.

> IID is a statistical term meaning independent and identically 
> distributed which in turn means that each sample doesn't depend on any 
> of the other samples (which is clearly incorrect)

You shouldn't run the IID tests when you clearly know the data is not IID.  In 
fact, if you're not sure it is IID you should use the non-IID tests.


Kurt

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] DRBG entropy

2016-07-27 Thread Paul Dale
John's spot on the mark here.  Testing gives a maximum entropy not a minimum.  
While a maximum is certainly useful, it isn't what you really need to guarantee 
your seeding.

A simple example which passes the NIST SP800-90B first draft tests with flying 
colours:

seed = π - 3
for i = 1 to n do
    seed = frac(exp(1 + 2*seed))
    entropy[i] = 256 * frac(2^20 * seed)

where frac is the fractional-part function and exp is the exponential function.

I.e. start with the fractional part of the transcendental π and iterate with a 
simple exponential function.  Take bits 21-28 of each iterate as a byte of 
"entropy".  Clearly there is really zero entropy present: the formula is simple 
and deterministic; the floating point arithmetic operations will all be 
correctly rounded; the exponential is evaluated in a well behaved area of its 
curve where there will be minimal rounding concerns; the bits being extracted 
are nowhere near where any rounding would occur and any rounding errors will 
likely be deterministic anyway.
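
For anyone who wants to reproduce the experiment, here is a C rendering of the 
generator (it assumes IEEE-754 doubles; the exact byte stream may vary 
slightly with the platform's libm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double seed = 3.14159265358979323846 - 3.0;    /* pi - 3 */

    for (int i = 0; i < 1000000; i++) {
        double x = exp(1.0 + 2.0 * seed);
        double t;

        seed = x - floor(x);                       /* frac() */
        t = ldexp(seed, 20);                       /* 2^20 * seed */
        putchar((int)(256.0 * (t - floor(t))));    /* bits 21-28 as "entropy" */
    }
    return 0;
}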

Yet this passes the SP800-90B (first draft) tests as IID with 7.89 bits of 
entropy per byte!

IID is a statistical term meaning independent and identically distributed which 
in turn means that each sample doesn't depend on any of the other samples 
(which is clearly incorrect) and that all samples are collected from the same 
distribution.  The 7.89 bits of entropy per byte is pretty much as high as the 
NIST tests will ever say.  According to the test suite, we've got an "almost 
perfect" entropy source.


There are other test suites if you've got sufficient data.  The Dieharder suite 
is okay, however the TestU01 suite is the most discerning I'm currently aware 
of.  Still, neither will provide an entropy estimate for you.  For either of 
these you will need a lot of data -- since you've got a hardware RNG, this 
shouldn't be a major issue.  Avoid the "ent" program, it seems to overestimate 
the maximum entropy present.


John's suggestion of collecting additional "entropy" and running it through a 
cryptographic hash function is probably the best you'll be able to achieve 
without a deep investigation.  As for how much data to collect, be 
conservative.  If the estimate of the maximum entropy is 2.35 bits per byte, 
round this down to 2 bits per byte, 1 bit per byte or even ½ bit per byte.  The 
lower you go, the more likely you are to be getting the entropy you want.  The 
trade-off is the time for the hardware to generate the data and for the 
processor to hash it together.
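
Concretely, at an assumed ½ bit per byte, a 256-bit seed needs 512 raw bytes 
conditioned down to 32 (SHA-256 is used as the conditioning hash in this 
sketch):

#include <openssl/sha.h>

void condition_seed(const unsigned char raw[512],
                    unsigned char seed[SHA256_DIGEST_LENGTH])
{
    SHA256(raw, 512, seed);    /* 512 bytes * 0.5 bits/byte = 256 bits */
}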


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-Original Message-
From: John Denker [mailto:s...@av8n.com] 
Sent: Wednesday, 27 July 2016 11:40 PM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] DRBG entropy

On 07/27/2016 05:13 AM, Leon Brits wrote:
> 
> I have a chip (FDK RPG100) that generates randomness, but the 
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy. According to FIPS test lab the lowest value 
> from all the tests are used as the entropy and 2 is too low. I must 
> however make use of this chip.

That's a problem on several levels.

For starters, keep in mind the following maxim:
 Testing can certainly show the absence of entropy.
 Testing can never show the presence of entropy.

That is to say, you have ascertained that 2.35 bits/byte is an /upper bound/ on 
the entropy density coming from the chip.  If you care about security, you need 
a lower bound.  Despite what FIPS might lead you to believe, you cannot obtain 
this from testing.
The only way to obtain it is by understanding how the chip works.
This might require a tremendous amount of effort and expertise.



Secondly, entropy is probably not even the correct concept.  For any given 
probability distribution P, i.e. for any given ensemble, there are many 
measurable properties (i.e. functionals) you might look at.
Entropy is just one of them.  It measures a certain /average/ property.
For cryptologic security, depending on your threat model, it is quite possible 
that you ought to be looking at something else.  It may help to look at this in 
terms of the Rényi functionals:
  H_0[P] = multiplicity  = Hartley functional
  H_1[P] = plain old entropy = Boltzmann functional
  H_∞[P] = adamance

The entropy H_1 may be appropriate if the attacker needs to break all messages, 
or a "typical" subset of messages.  The adamance H_∞ may be more appropriate if 
there are many messages and the attacker can win by breaking any one of them.

To say the same thing in other words:
 -- A small multiplicity (H_0) guarantees the problem is easy for the attacker.
 -- A large adamance (H_∞) guarantees the problem is hard for the attacker.

===

Re: [openssl-dev] [openssl.org #4386] [PATCH] Add sanity checks for BN_new() in OpenSSL-1.0.2g

2016-03-07 Thread Paul Dale
If one of the allocation calls succeeds and the other fails, the patched code 
will leak memory.
It needs something along the lines of:

if (order != NULL) BN_clear_free(order);
if (d != NULL) BN_clear_free(d);

in the failure case code.
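
Putting the two together, the failure path might look like this sketch (based 
on the patch below; only the prologue of the function is shown):

int gost2001_keygen(EC_KEY *ec)
{
    BIGNUM *order = BN_new(), *d = BN_new();

    if (order == NULL || d == NULL) {
        GOSTerr(GOST_F_GOST2001_KEYGEN, ERR_R_MALLOC_FAILURE);
        if (order != NULL)
            BN_clear_free(order);
        if (d != NULL)
            BN_clear_free(d);
        return 0;
    }
    /* ... remainder of the function unchanged ... */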


Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

On Mon, 7 Mar 2016 05:55:23 PM Bill Parker via RT wrote:
> Hello All,
> 
> In reviewing code in directory 'engines/ccgost', file 'gost2001.c',
> there are two calls to BN_new() which are not checked for a return
> value of NULL, indicating failure.
> 
> The patch file below should address/correct this issue:
> 
> --- gost2001.c.orig 2016-03-06 11:32:49.676178425 -0800
> +++ gost2001.c  2016-03-06 11:38:04.604204158 -0800
> @@ -434,6 +434,10 @@
>  int gost2001_keygen(EC_KEY *ec)
>  {
>  BIGNUM *order = BN_new(), *d = BN_new();
> +if (!order || !d) {
> +   GOSTerr(GOST_F_GOST2001_KEYGEN, ERR_R_MALLOC_FAILURE);
> +   return 0;
> +}
>  const EC_GROUP *group = EC_KEY_get0_group(ec);
> 
>  if(!group || !EC_GROUP_get_order(group, order, NULL)) {
> 
> 

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4227] openssl rand 10000000000 does not produce 10000000000 random bytes

2016-01-12 Thread Paul Dale
On Wed, 13 Jan 2016 12:32:39 AM Viktor Dukhovni wrote:
> In most cases, just overwriting a disk with zeros is as good as
> with any other pattern.

Peter Gutmann published a paper showing that it is possible to read zeroed bits 
with the right equipment: 
https://www.usenix.org/legacy/publications/library/proceedings/sec96/full_papers/gutmann/index.html

I remember a report not long after the original paper was published where the 
writer zeroed a drive and went to several data recovery companies who couldn't 
retrieve anything (sorry, can't find the reference).

Also note that this technique doesn't work on newer drives: 
http://seclists.org/bugtraq/2005/Jul/464


If you are protecting against governments or extremely well equipped 
organisations, a zeroed disc might be recoverable with a large investment of 
time and effort.  If you are in this case and what you are protecting is worth 
that much, use one of the approved secure disc erasure methods -- several 
times.


- Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl.org #4227] openssl rand 10000000000 does not produce 10000000000 random bytes

2016-01-11 Thread Paul Dale
On Tue, 12 Jan 2016 03:36:59 AM Kaduk, Ben via RT wrote:
> There's also the part where asking 'openssl rand' for gigabytes of data
> is not necessarily a good idea -- I believe in the default configuration
> on unix, it ends up reading 32 bytes from /dev/random and using that to
> seed EAY's md_rand.c scheme, which is not exactly a state-of-the-art
> CSPRNG these days...

This matches my understanding, although I thought these bytes would be read 
from /dev/urandom first.

The unwritten but implied part is that, in the default configuration, the 
deterministic generator is never reseeded -- those 32 bytes are all the entropy 
it will ever get.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-12-09 Thread Paul Dale
Nico,

Thanks for the clarification.  I was making an assumption that following the 
existing locking model, which did seem overcomplicated, was desirable.  Now 
that that is shot down, things can be much simpler.

It would make more sense to have a structure containing the reference counter 
and (optionally?) a lock to use for that counter.  With atomics, the lock isn't 
there, or at least isn't used.  Without them, it is.  This is because I 
somewhat suspect that a fallback global lock for all atomic operations would be 
worse than the current situation, where at least a few different locks are 
used.

There is also the possibility of only using the per-reference-counter lock and 
not using atomic operations at all -- this would reduce the contention a lot 
and might not hurt performance much.  It would be easy to benchmark an 
uncontested lock/add/unlock versus an atomic add on the target platforms to see 
the difference.
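
The benchmark itself is only a few lines (GCC builtins assumed; link with 
-lpthread):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define N 10000000

static double ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    int c1 = 0, c2 = 0;
    struct timespec t0, t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&m);
        c1++;
        pthread_mutex_unlock(&m);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    for (int i = 0; i < N; i++)
        __sync_add_and_fetch(&c2, 1);
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("lock/add/unlock: %.1f ns/op\n", ns(t0, t1) / N);
    printf("atomic add:      %.1f ns/op\n", ns(t1, t2) / N);
    return c1 != c2;    /* keep the loops from being optimised away */
}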


Thanks again for the insights,

Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
On Wed, 9 Dec 2015 03:27:51 AM Nico Williams wrote:
> On Wed, Dec 09, 2015 at 02:33:46AM -0600, Nico Williams wrote:
> > No more installing callbacks to get locking and atomics.
> 
> I should explain why.
> 
> First, lock callbacks are a serious detriment to usability.
> 
> Second, they are an admission that OpenSSL is incomplete.
> 
> Third, if we have lock callbacks to install, then we have the risk of
> racing (by multiple libraries using OpenSSL) to install them.  Unless
> there's a single function to install *all* such callbacks, then there's
> no way to install callbacks atomically.  But every once in a while we'll
> need to add an Nth callback, thus breaking the ABI or atomicity.
> 
> So, no, no lock callbacks.  OpenSSL should work thread-safely out of the
> box like other libraries.  That means that the default configuration
> should be to use pthreads on *nix, for example.  We'll need an atomics
> library (e.g., OpenPA, or something new) with safe and sane -if not very
> performant- defaults that use global locks for platform/compiler
> combinations where there's no built-in atomics intrinsics or system
> library.  It should be possible to have a no-threads configuration where
> the locks and atomics are non-concurrent-safe implementations.
> 
> Nico
> -- 



___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-12-08 Thread Paul Dale

The "have-atomics" is intended to test if the callback was installed by the 
user.  If we're using an atomic library or compiler support, then it isn't 
required since we know we've got them.

Likewise, the lock argument isn't required if atomics are used everywhere.  
However, some code will need fixing since there are places that adjust 
reference counters directly using arithmetic operators (while holding the 
appropriate lock).  These will have to be changed to atomics and the locked 
sections of code checked to see that that doesn't introduce other problems.

All possible of course.


Pauli


On Tue, 8 Dec 2015 10:01:20 PM Nico Williams wrote:
> On Wed, Dec 09, 2015 at 09:27:16AM +1000, Paul Dale wrote:
> > It will be possible to support atomics in such a way that there is no
> > performance penalty for machines without them or for single threaded
> > operation.  My sketcy design is along the lines of adding a new API
> > CRYPTO_add_atomic that takes the same arguments as CRYPTO_add (i.e.
> > reference to counter, value to add and lock to use):
> > 
> > CRYPTO_add_atomic(int *addr, int amount, int lock)
> >     if have-atomics then
> >         atomic_add(addr, amount)
> >     else if (lock == have-lock-already)
> >         *addr += amount
> >     else
> >         CRYPTO_add(addr, amount, lock)
> 
> "have-atomics" must be known at compile time.
> 
> "lock" should not be needed because we should always have atomics, even
> when we don't have true atomics: just use a global lock in a stub
> implementation of atomic_add() and such.  KISS.  Besides, this will add
> pressure to add true atomics wherever they are truly needed.
> 
> Nico
> -- 

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-12-08 Thread Paul Dale
It will be possible to support atomics in such a way that there is no 
performance penalty for machines without them or for single threaded operation. 
My sketchy design is along the lines of adding a new API CRYPTO_add_atomic that 
takes the same arguments as CRYPTO_add (i.e. reference to counter, value to add 
and lock to use):

CRYPTO_add_atomic(int *addr, int amount, int lock)
    if have-atomics then
        atomic_add(addr, amount)
    else if (lock == have-lock-already)
        *addr += amount
    else
        CRYPTO_add(addr, amount, lock)

The have-lock-already will need to be a new code indicating that the caller 
already holds the relevant lock, so there is no need to lock before the add.  
Some conditional compilation, like CRYPTO_add and CRYPTO_add_lock have, can be 
done to get the overhead down to zero in the single threaded case and in the 
case where it is known beforehand that there are no atomic operations.  It is 
also possible for the atomic_add function to be passed in as a user callback, 
as per the other locking callbacks, which means OSSL doesn't actually need to 
know how any of this works underneath.
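
In C, the sketch might reduce to something like this (GCC/ICC builtins shown; 
the have-lock-already fast path is omitted for brevity, and this mirrors the 
proof-of-concept patch posted later in the thread):

static int CRYPTO_add_atomic(int *addr, int amount, int lock)
{
#if defined(__GNUC__) || defined(__INTEL_COMPILER)
    (void)lock;    /* not needed: true atomic add */
    return __sync_add_and_fetch(addr, amount);
#else
    return CRYPTO_add(addr, amount, lock);    /* locked fallback */
#endif
}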

Once this is done, most instances of CRYPTO_add can be changed to 
CRYPTO_add_atomic.  Unfortunately, not all can be changed, so this would 
involve manual inspection of each lock for which CRYPTO_add is used to see if 
atomics are suitable.  I've done a partial list of which could be changed over 
(attached) but it is pretty rough and needs rechecking.

It would be prudent to have a CRYPTO_add_atomic_lock call underneath 
CRYPTO_add_atomic, like CRYPTO_add has CRYPTO_add_lock, to get the extra debug 
output.


Finally, can someone explain what the callback passed to 
CRYPTO_set_add_lock_callback is supposed to do?  Superficially, it seems like a 
way to use atomic operations instead of full locking -- but that breaks things 
due to the way the locking is done elsewhere.  So this callback needs to lock, 
add and unlock like the alternate code path in the CRYPTO_add_lock function.  
There is no obvious benefit to providing it.


Pauli

On Tue, 8 Dec 2015 11:22:01 AM Nico Williams wrote:
> On Tue, Dec 08, 2015 at 11:19:32AM +0100, Florian Weimer wrote:
> > > Maybe http://trac.mpich.org/projects/openpa/ would fit the bill?
> > 
> > It seems to have trouble to keep up with new architectures.
> 
> New architectures are not really a problem because between a) decent
> compilers with C11 and/or non-C11 atomic intrinsics, b) asm-coded
> atomics, and c) mutex-based dumb atomics, we can get full coverage.
> Anyone who's still not satisfied can then contribute missing asm-coded
> atomics to OpenPA.  I suspect that OpenSSL using OpenPA is likely to
> lead to contributions to OpenPA that will make it better anyways.
> 
> What's the alternative anyways?
> 
> We're talking about API and performance enhancements to OpenSSL to go
> faster on platforms for which there are atomics, and maybe slower
> otherwise -- or maybe not; maybe we can implement context up-/down-ref
> functions that use fine-grained (or even global) locking as a fallback
> that yields performance comparable to today's.
> 
> If OpenPA's (or some other such library's) license works for OpenSSL,
> someone might start using it.  That someone might be me.  So that seems
> like a good question to ask: is OpenPA's license compatible with
> OpenSSL's?  For inclusion into OpenSSL's tree, or for use by OpenSSL?
> 
> Nico
> 

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia
Lock                       #    Note
CRYPTO_LOCK_ERR            28   one problem with decrement using CRYPTO_add -- leave as is
CRYPTO_LOCK_EX_DATA        7    safe, no CRYPTO_add
CRYPTO_LOCK_X509           10   most likely safe, needs deeper recheck down call chains
CRYPTO_LOCK_X509_INFO      2    safe, only uses CRYPTO_add
CRYPTO_LOCK_X509_PKEY      2    safe, only uses CRYPTO_add
CRYPTO_LOCK_X509_CRL       5    unsure
CRYPTO_LOCK_X509_REQ       2    only ASN1_SEQUENCE_re
CRYPTO_LOCK_DSA            5    safe with atomic add (double check)?
CRYPTO_LOCK_RSA            19   most likely safe, needs deeper recheck down call chains
CRYPTO_LOCK_EVP_PKEY       27   safe with atomic add
CRYPTO_LOCK_X509_STORE     35   assume unsafe, one CRYPTO_add only
CRYPTO_LOCK_SSL_CTX        26   one increment unsafe, rest seems okay -> make increment atomic
CRYPTO_LOCK_SSL_CERT       3    safe, only uses CRYPTO_add
CRYPTO_LOCK_SSL_SESSION    9    safe if atomic add is used inside locked block in ssl/ssl_sess.c
CRYPTO_LOCK_SSL_SESS_CERT  1    unused
CRYPTO_LOCK_SSL            11   safe without compression, probably safe with but would ne
Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-12-01 Thread Paul Dale
The figures were for connection re-establishment; RSA computations etc. simply 
don't feature.  For initial connection establishment, on the other hand, they 
are the single largest factor.  The crypto is definitely not the bottleneck for 
this case.


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-30 Thread Paul Dale
> are you sure that the negotiated cipher suite is the same and that the 
> NSS is not configured to reuse the server key share if you're using DHE 
> or ECDHE?

The cipher suite was the same.  I'd have to check to see exactly which was 
used.  It is certainly possible that NSS was configured as you suggest and, if 
so, this would improve its performance.


However, the obstacle preventing 100% CPU utilisation for both stacks is lock 
contention.  The NSS folks apparently spent a lot of effort addressing this and 
they have a far more scalable locking model than OpenSSL: one lock per context 
for all the different kinds of context versus a small number of global locks.

There is definitely scope for improvement here.  My atomic operation suggestion 
is one approach that was quick and easy to validate; more locks might be 
better, since that doesn't introduce a new paradigm and is more widely 
supported (C11 notwithstanding).


Regards,

Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-29 Thread Paul Dale

I'm not sure I can share code and associated infrastructure at this point; 
we'd like to, but we need approvals through the various internal channels.  It 
might be possible for us to run these tests against your patches and mail you 
the results (less than ideal but probably workable); I'd have to ask the 
engineer who did this to see if they can justify the time involved.


The bottom line is OpenSSL wants for finer-grained locking, which the atomic 
operations provided.  Having locks in the various contexts would achieve the 
same result and be less platform/compiler specific.


For reference, my proof of concept atomic patch was:
diff --git a/include/openssl/crypto.h b/include/openssl/crypto.h
index 56afc51..803d7b7 100644
--- a/include/openssl/crypto.h
+++ b/include/openssl/crypto.h
@@ -220,8 +220,13 @@ extern "C" {
 CRYPTO_lock(CRYPTO_LOCK|CRYPTO_READ,type,__FILE__,__LINE__)
 #   define CRYPTO_r_unlock(type)   \
 CRYPTO_lock(CRYPTO_UNLOCK|CRYPTO_READ,type,__FILE__,__LINE__)
-#   define CRYPTO_add(addr,amount,type)\
-CRYPTO_add_lock(addr,amount,type,__FILE__,__LINE__)
+#   if defined(__GNUC__) || defined(__INTEL_COMPILER)
+#define CRYPTO_add(addr,amount,type)\
+ __sync_add_and_fetch(addr, amount)
+#   else
+#define CRYPTO_add(addr,amount,type)\
+ CRYPTO_add_lock(addr,amount,type,__FILE__,__LINE__)
+#   endif
 #  endif
 # else
 #  define CRYPTO_w_lock(a)
This should never be applied; it breaks things and is quick and ugly.


Regards,

Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-23 Thread Paul Dale
Thanks for the quick reply.  That patch looks much improved on this front.

We'll wait for the changes and then retest performance.


Thanks again,

Pauli

On Mon, 23 Nov 2015 10:18:27 PM Matt Caswell wrote:
> 
> On 23/11/15 21:56, Paul Dale wrote:
> > Somewhat tangentially related to this is the how thread locking in
> > OpenSSL is slowing things up.
> 
> Alessandro has submitted an interesting patch to provide a much better
> threading API. See:
> 
> https://github.com/openssl/openssl/pull/451
> 
> I'm not sure what the current status of this is though.
> 
> Matt
> ___
> openssl-dev mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-team] Discussion: design issue: async and -lpthread

2015-11-23 Thread Paul Dale
Somewhat tangentially related to this is how thread locking in OpenSSL is 
slowing things down.

We've been doing some connection establishment performance analysis recently 
and have discovered a lot of waiting on locks is occurring.  By far the worst 
culprit is CRYPTO_LOCK_EVP_PKEY in CRYPTO_add_lock calls.  Changing these to 
gcc's atomic add operations (__sync_add_and_fetch) improved things 
significantly:

base OpenSSL        11935 connections/s    85% CPU utilisation
with atomic change  16465 connections/s    22% CPU utilisation

So nearly forty percent more connections for a quarter of the CPU.  At this 
point a number of other locks are causing the slowdown.

Now, I'm not sure if such a change would be interesting to the community or 
not, but there definitely is room for significant gains in the multi-threaded 
locking.  Ignoring the atomic operations, moving to a separate lock per 
reference count would likely save an amount of blocking -- is this a suitable 
use for dynamic locks?


I also submitted a bug report and fix recently [openssl.org #4135] to do with 
threading, which will hopefully get included eventually.


Regards,

Pauli

-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev