Re: [openssl-dev] Mailman version used by OpenSSL is misconfigured and/or broken in relation to DKIM

2015-08-05 Thread mancha
On Wed, Aug 05, 2015 at 04:54:25PM +0200, Kurt Roeckx wrote:
 On Wed, Aug 05, 2015 at 06:54:33AM -0700, Quanah Gibson-Mount wrote:
  Yesterday, I was alerted by a member of the list that my emails to
  openssl-dev are ending up in their SPAM folder.  After examining my
  emails as sent out by OpenSSL's mailman, I saw that it is mucking
  with the headers, causing DKIM failures.  This could be because of
  one of two reasons:
 
 You seem to be running with p=reject.  In my opinion p=reject is
 only useful for domains that don't have any users.

Yahoo adopted a reject DMARC policy back in 2014 and that caused all
kinds of mailing list havoc.

  a) The version of mailman used by the OpenSSL project (2.1.18) has a
  known bug around DKIM that was fixed in 2.1.19
 
 That seems to be about wrapped messages in case of moderation?

Possibly referencing that 2.1.19 fixed an issue with not honoring
REMOVE_DKIM_HEADERS=2.

  b) The mailman configuration is incorrect.
 
 You mean things like: - We change the subject to include the list
 name?

I interpret the comment to mean that, because OpenSSL lists modify
messages (see below), they should strip DKIM headers (see above) before
distribution to prevent false negatives in recipient implementations.

zimbra.com includes the subject header when computing its header digest
so yes, adding [list-name] invalidates its DKIM signature.

 - We add a footer about the list?

That also invalidates zimbra.com's DKIM sig because they don't use body
hash length limits.
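
(To make the body-hash point concrete: a toy Python sketch, of my own, of why an
appended footer breaks the bh= value when the signer used no l= length limit.
The body text is made up and canonicalization details are glossed over.)

  import base64
  import hashlib

  # DKIM's bh= tag is a base64 SHA-256 over the canonicalized body; appending a
  # list footer changes that hash unless the signer set an l= length limit.
  body = "Hello, list.\r\n"
  footer = "___\r\nopenssl-dev mailing list\r\n"

  def bh(data):
      return base64.b64encode(hashlib.sha256(data.encode()).digest()).decode()

  print(bh(body))            # what the signer committed to
  print(bh(body + footer))   # what the verifier recomputes -> mismatch, dkim=fail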

 - We don't rewrite the From address?

  Error is: Authentication-Results: edge01.zimbra.com (amavisd-new);
  dkim=fail (1024-bit key) reason=fail (message has been altered)
  header.d=zimbra.com
 
 You really should consider moving to at least a 2048 bit key.

Good suggestion though orthogonal to the issue.

--mancha (https://twitter.com/mancha140)




Re: [openssl-dev] Mailman version used by OpenSSL is misconfigured and/or broken in relation to DKIM

2015-08-05 Thread mancha
On Wed, Aug 05, 2015 at 09:33:02PM +0200, Kurt Roeckx wrote:
 On Wed, Aug 05, 2015 at 04:54:57PM +, mancha wrote:
  
  I interpret the comment to mean that, because OpenSSL lists modify
  messages (see below), they should strip DKIM headers (see above)
  before distribution to prevent false negatives in recipient
  implementations.
 
 Won't that always give DKIM failures instead, without also rewriting
 the From?

I'm no expert on this, but I believe the answer is not always. I think it
depends on a) whether the domain has an ADSP and, if it does, b) what its
signing practice is. I just did a quick check and it seems zimbra.com
doesn't have an ADSP. Yahoo.com has an ADSP but doesn't specify all
messages will be signed (has an unknown tag value).

OpenSSL is certainly not alone in its practice of mangling headers and
adding body footers so I'd be curious to hear how other lists handle
domains such as yahoo.com.

--mancha (https://twitter.com/mancha140)




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-08-03 Thread mancha
On Sun, Aug 02, 2015 at 12:59:49AM +, p...@securecottage.com wrote:
 
 I'd like to thank several people for looking into my assertion that it
 is possible for common factors in p-1 and q-1 to leak from the
 factorisation of n-1.

Hi Paul.

I came across a paper by Mckee and Pinch [1] you might be interested in.
They describe an algo to factor n=pq when a common factor b of (p-1) and
(q-1) is known. Fortunately (for RSA), their algo is O(N^(1/4)/b) so to
just break even with GNFS the factor needs to be quite large:

modulus (n)   common factor (b)
  (bits)          (bits)
     512              64
    1024             169
    2048             395
    3072             629
    4096             868
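
(The break-even column can be reproduced, give or take a bit, by equating
N^(1/4)/b with the usual GNFS estimate L_N[1/3, (64/9)^(1/3)]; a back-of-envelope
Python sketch of my own, not something taken from the paper:)

  # Solve N^(1/4)/b ~ exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)), c = (64/9)^(1/3),
  # for bits(b). Heuristic GNFS cost; constants and o(1) terms ignored.
  from math import log

  def gnfs_log2_work(nbits):
      ln_n = nbits * log(2)
      c = (64.0 / 9.0) ** (1.0 / 3.0)
      return c * ln_n ** (1.0 / 3.0) * log(ln_n) ** (2.0 / 3.0) / log(2)

  for nbits in (512, 1024, 2048, 3072, 4096):
      print("%5d  %4.0f" % (nbits, nbits / 4.0 - gnfs_log2_work(nbits)))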

A literature review might uncover other interesting (and more recent)
ways to leverage common factors.

 (factorisation of p*q-1 is shown):
 
 n-1 = 2 * 3^3 * 7 * 13 * 67 * 2399 * 28559 *
 5485062554686449262177590194597345407327047899375366044215091312099734701911004226037445837630559113651708968440813791318544450398897628
 67234233761906471233193768567784328338581360170038166729050302672416075037390699071355182394190448204086007354388034161296410061846686501
 4941425056336718955019

Here's a slight improvement to my initial decomposition: 2 * 3^3 * 7 *
13 * 67 * 2399 * 28559 * 475108039391 * 2304683693240273 *
2708193637206233815756979 *
184975060461462389844824492821434552152567410456158418229544246514131718793664172374877783767032501925392596713947281388784630550689770489020593012165694501623162932662905453122620777823162028887054350359842476275647758696138793577667109927

Note: factoring the last term in my product is left as an exercise to
the reader.
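
(For anyone inclined to chip away at it: factors of the sizes seen above fall
out of Pollard's rho in seconds; a minimal Python sketch, nothing clever:)

  # Minimal Pollard rho with Floyd cycle detection; fine for ~40-60 bit factors,
  # hopeless for the large cofactor left at the end of the product above.
  from math import gcd
  from random import randrange

  def pollard_rho(n):
      if n % 2 == 0:
          return 2
      while True:
          c = randrange(1, n)
          x = y = randrange(2, n)
          d = 1
          while d == 1:
              x = (x * x + c) % n
              y = (y * y + c) % n
              y = (y * y + c) % n
              d = gcd(abs(x - y), n)
          if d != n:
              return d

Repeated calls (dividing out each factor found and retesting the cofactor for
primality) are how partial decompositions like the one above get peeled apart.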

 Any rsa key generation in SAFE mode will always have a gcd(p-1,q-1)=2,
 so SAFE mode always avoids common factors.
 
 My conclusion is that openssl code can have common factors (must be
 above 17863) in its rsa keys every 20,000 key generations or so when
 not generated in SAFE mode, and that at this time approximately 30
 bits of the totient will be revealed out of the 1024 bits of the full
 totient. There is, of course, no way of knowing which of the 20,000
 key generations will have the common factors.
 
 Most people felt that the check (for gcd(p-1,q-1) > 16) was possible
 but they were not sure it was worth doing.
 
 I'd like to point out from a cyber-attack possibility the check is
 worth doing.

Safe primes (p=2q+1) play an important role in Diffie-Hellman to avoid
subgroup confinement attacks. I just confirmed OpenSSL does indeed use
them for DH; from dh_builtin_genparams: 

  if (!BN_generate_prime_ex(ret->p, prime_len, 1, t1, t2, cb))
          goto err;

So, OpenSSL users are already quite used to whatever performance hit
is associated with them.
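
(The third argument there is the safe flag. Functionally, and ignoring
OpenSSL's sieving and callbacks, it amounts to something like this Python
sketch of mine; the helper names are illustrative only:)

  # Rough functional equivalent of BN_generate_prime_ex(..., safe=1, ...):
  # draw random q until both q and p = 2q + 1 pass Miller-Rabin.
  from random import getrandbits, randrange

  def probably_prime(n, rounds=40):
      if n < 2:
          return False
      for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
          if n % sp == 0:
              return n == sp
      d, r = n - 1, 0
      while d % 2 == 0:
          d, r = d // 2, r + 1
      for _ in range(rounds):
          x = pow(randrange(2, n - 1), d, n)
          if x in (1, n - 1):
              continue
          for _ in range(r - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False
      return True

  def safe_prime(bits):
      # quick at toy sizes (e.g. 256 bits); expect a long wait at 2048+.
      while True:
          q = getrandbits(bits - 1) | (1 << (bits - 2)) | 1   # odd, full length
          if probably_prime(q) and probably_prime(2 * q + 1):
              return 2 * q + 1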

 As such it is appropriate that some checks be made at
 RSA_generate_key_ex() to make sure that the other software hasn't
 returned a bad key.  Openssl software is a significant part of the
 Internet security infastructure and so it would obviously be the
 target of hacking and cyber infiltration.  Some redundant checks are
 appropriate because of this.
 
 A gcd(p-1,q-1) > 16 check will disallow less than 1 percent of the
 currently acceptable keys, won't take much time to run, and would
 defeat cyber attempts to create a key with a significant common factor
 within it.
 
 Thanks
 
 Paul Cheffers

Given the low costs, one would be hard-pressed to find an intelligent
objection to your suggestion. Thanks for bringing this up.

--mancha (https://twitter.com/mancha140)

---
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.1333




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-08-03 Thread mancha
On Sun, Aug 02, 2015 at 08:08:52PM -0600, Hilarie Orman wrote:
 For primes p and q for which p-1 and q-1 have no common factor <= n,
 the probability of gcd(p, q) > 1 is very roughly 1/n.

Hi

There's a typo or two here. Assuming p!=q, we always have gcd(p,q)=1.
 
 Therefore, 1.  Use strong primes as in Rivest/Silverman.  Simply
 described, choose large primes r and s.  Choose small factors i and j,
 gcd(i, j) = 1.  Find p such that p = 1+2*i*r is prime and q such that
 q = 1+2*j*s is prime.

This appears to be a generalization of safe primes (i=j=1), which aren't overly
costly to generate. In fact, OpenSSL surely uses them already for
Diffie-Hellman MODPs. If they don't I'd be surprised (and alarmed). An
added small benefit is they mitigate Pollard's 'p-1' (only a concern
should p-1 or q-1 happen to be extremely smooth).

Strong primes have a bit more structure and afford protection against
repeated encryption attacks. If, as sometimes required of strong primes
p, p+1 must have a large prime factor, they also mitigate Williams'
'p+1' algo.
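
(A quick sketch of the construction Hilarie describes, leaning on sympy for
primality just to keep it short; toy parameter sizes and made-up helper names,
not a vetted key-generation routine:)

  # p - 1 = 2*i*r and q - 1 = 2*j*s with r, s large distinct primes and
  # gcd(i, j) = 1, which pins gcd(p-1, q-1) to 2.
  from math import gcd
  from sympy import isprime, randprime

  def prime_from(r, coprime_to=1):
      i = 1
      while True:
          if gcd(i, coprime_to) == 1 and isprime(1 + 2 * i * r):
              return 1 + 2 * i * r, i
          i += 1

  r = randprime(2**255, 2**256)
  s = randprime(2**255, 2**256)
  assert r != s
  p, i = prime_from(r)
  q, j = prime_from(s, coprime_to=i)
  print(gcd(p - 1, q - 1))   # 2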

 
 Or
 
 2.  Find large primes p and q such that gcd(p^2-1, q^2-1) < 10^6.

That's an interesting formulation. What's the importance of 10^6?

Thanks.

--mancha (https://twitter.com/mancha140)

 
 Hilarie




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-08-01 Thread mancha
On Fri, Jul 31, 2015 at 06:46:22PM +, Viktor Dukhovni wrote:
 On Fri, Jul 31, 2015 at 11:19:39AM -0700, Bill Cox wrote:
 
  Cool observation.  From running a bit of Python code, it looks like
  the probability that GCD(p-1, p-q) == 4 is a bit higher than 15%, at
  least for random numbers between 2048 and 4096 bits long.  It looks
  like putting in a GCD(p-1, q-1) check will slow down finding
  suitable p and q by around a factor of 6.5.
 
 A smaller slow-down would be incurred if one were to restrict both of p,q
 to 3 mod 4. In that case 2 would be the largest common even factor of
 (p-1) and (q-1), and any appreciably large common odd factor
 (necessarily above 17863 due to how each of p/q is chosen) would be
 very rare.
 
 Is there a good argument for adding the gcd test?  How big does the
 common factor have to be for any information it might provide to be
 substantially useful in finding 1/e mod phi(m)?
 
 The larger the common factor is, the smaller the probability of p-1
 and q-1 sharing it (for a given sufficiently large prime factor r of
 (p-1), the probability of (q-1) also having that factor is 1/(r-1)).
 If say r needs be 80 bits long to be useful in attacking RSA 1024,
 then only ~1 in 2^80 (p-1,q-1) pairs will have such a common factor,
 which is sufficiently rare not to warrant any attention.
 
 Also one still needs to be able to fully factor (n-1).  After tens of
 thousands of trials, I managed to generate a (p,q,n) triple with a
 1024-bit modulus n in which (p-1,q-1) have a common odd factor.
 
 n =
 
 123727085863382195696899362818055010267368591819174730632443285012648773223152448218495408371737254282531468855140111723936275062312943433684139231097953508685462994307654703316031424869371422426773001891452680576333954733056995016189880381373567072504551999849596021790801362257131899242011337424119163152403
 
 e = F_4 = 65537
 
 gcd(p-1,q-1) = 2 * 28559
 
 What can the OP tell us about d, p or q?  Can anyone produce a full
 factorization of n - 1?

n-1 = 2 * 3^3 * 7 * 13 * 67 * 2399 * 28559 *
5485062554686449262177590194597345407327047899375366044215091312099734701911004226037445837630559113651708968440813791318544450398897628672342337619064712331937685677843283385813601700381667290503026724160750373906990713551823941904482040860073543880341612964100618466865014941425056336718955019

--
https://twitter.com/mancha140




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-08-01 Thread mancha
On Sat, Aug 01, 2015 at 01:50:00PM +, Ben Laurie wrote:
 On Sat, 1 Aug 2015 at 14:22 mancha manc...@zoho.com wrote:
 
  On Fri, Jul 31, 2015 at 06:46:22PM +, Viktor Dukhovni wrote:
   On Fri, Jul 31, 2015 at 11:19:39AM -0700, Bill Cox wrote:
  
Cool observation.  From running a bit of Python code, it looks
like the probability that GCD(p-1, p-q) == 4 is a bit higher
than 15%, at least for random numbers between 2048 and 4096 bits
long.  It looks like putting in a GCD(p-1, q-1) check will slow
down finding suitable p and q by around a factor of 6.5.
  
   A smaller slow-down would be incurred if one were to restrict both of
   p,q to 3 mod 4. In that case 2 would be the largest common even
   factor of (p-1) and (q-1), and any appreciably large common odd
   factor (necessarily above 17863 due to how each of p/q is chosen)
   would be very rare.
  
   Is there a good argument for adding the gcd test?  How big does
   the common factor have to be for any information it might provide
   to be substantially useful in finding 1/e mod phi(m)?
  
   The larger the common factor is, the smaller the probability of
   p-1 and q-1 sharing it (for a given sufficiently large prime
   factor r of (p-1), the probability of (q-1) also having that
   factor is 1/(r-1)).  If say r needs be 80 bits long to be useful
   in attacking RSA 1024, then only ~1 in 2^80 (p-1,q-1) pairs will
   have such a common factor, which is sufficiently rare not to warrant
   any attention.
  
   Also one still needs to be able to fully factor (n-1).  After tens
   of thousands of trials, I managed to generate a (p,q,n) triple
   with a 1024-bit modulus n in which (p-1,q-1) have a common odd
   factor.
  
   n =
  
   
  123727085863382195696899362818055010267368591819174730632443285012648773223152448218495408371737254282531468855140111723936275062312943433684139231097953508685462994307654703316031424869371422426773001891452680576333954733056995016189880381373567072504551999849596021790801362257131899242011337424119163152403
  
   e = F_4 = 65537
  
   gcd(p-1,q-1) = 2 * 28559
  
   What can the OP tell us about d, p or q?  Can anyone produce a
   full factorization of n - 1?
 
  n-1 = 2 * 3^3 * 7 * 13 * 67 * 2399 * 28559 *
 
  5485062554686449262177590194597345407327047899375366044215091312099734701911004226037445837630559113651708968440813791318544450398897628672342337619064712331937685677843283385813601700381667290503026724160750373906990713551823941904482040860073543880341612964100618466865014941425056336718955019
 
 
 That is not a prime factorisation.

Just helping get things started. Feel free to take over and claim the
last number in my product.




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-08-01 Thread mancha
On Fri, Jul 31, 2015 at 11:31:08PM +, p...@securecottage.com wrote:
 Hi Mancha,
 
 Since p*q-1 == (p-1)*(q-1) + (p-1) + (q-1), any prime that divides (p-1) and
 (q-1) will divide all 4 of the terms in the definition of p*q-1.  Thus
 it will be a common factor in the totient.

Hi Paul, many thanks for your reply.

Yes, it's clear any f that divides both (p-1) and (q-1) also divides
(pq-1), (p-1)(q-1), and p-q.

In another email in this thread, I mention that one concern with having
p-1 and q-1 share a large factor is that searching for a private
exponent d becomes easier as lcm((p-1),(q-1)) decreases.  However,
approximations for the distribution of g=gcd((p-1),(q-1)) for randomly
chosen 1024-bit primes p,q estimate P(g<=20)=91% and P(g<=100)=98%, and
that largely allays the concern because, with high probability,
lcm((p-1),(q-1)) and (p-1)(q-1) will be close in size.
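
(Estimates of that sort are easy to spot-check empirically; a toy Monte Carlo
of my own, using 256-bit primes instead of 1024-bit purely for speed and
sympy for the prime generation:)

  # Empirical distribution of g = gcd(p-1, q-1) over random primes; compare
  # against the ~91% / ~98% figures quoted above (expect some sampling noise,
  # and note the prime size differs from the 1024-bit case discussed).
  from math import gcd
  from sympy import randprime

  trials, le20, le100 = 200, 0, 0
  for _ in range(trials):
      p = randprime(2**255, 2**256)
      q = randprime(2**255, 2**256)
      g = gcd(p - 1, q - 1)
      le20 += (g <= 20)
      le100 += (g <= 100)
  print("P(g <= 20)  ~ %.2f" % (le20 / trials))
  print("P(g <= 100) ~ %.2f" % (le100 / trials))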

That said, I am certainly supportive of your suggestion to use safe
primes, thereby ensuring gcd((p-1),(q-1))=2, as long as it's not overly
costly.

Your concerns appear different, however. Please correct me if I
misunderstood, but it seems you require Mallory (who only has access to
{n,e}) to discover a factor f common to (p-1) and (q-1). My question was
on the mechanics of how this discovery occurs assuming Mallory is able
to fully factor pq-1 (i.e. n-1).

Thanks.

--mancha

 I have checked through the key generation code of the openssl ssl
 code. I hacked it to report the greatest common divisor of p-1 and
 q-1. I then ran 100 key generations. It only had greatest common
 divisors of 2, 4 , 8, and 16. There were no other primes reported
 besides small powers of 2.
 
 So there doesn't seem to be a practical problem with common divisors
 in the openssl code.
 
 Still, I think this is a theoretical problem.  There should be a
 gcd(p-1,q-1) > 16 check for the two primes in key generation.
 
 Paul
 
 
 Quoting mancha manc...@zoho.com:
 
 On Fri, Jul 31, 2015 at 02:36:03AM +, p...@securecottage.com
 wrote:
 
 Hi there,
 
 I have looked at the RSA protocol a bit and have concluded that
 
 1) common factors in (p-1) and (q-1) are also in the factorisation
 of (p*q-1).  2) by factoring (p*q-1) you can come up with candidates
 for squares in the totient.  3) you can also come up with d mod
 commonfactor^2 if there is a common factor.
 
 the math is shown in my wikipedia users page math blog at:
 
 https://en.wikipedia.org/wiki/User:Endo999#The_Bad_Stuff_That_Happens_When_There_Are_Common_Factors_Between_.28P-1.29_and_.28Q-1.29
 
 [SNIP]
 
 Hi. How are you finding a common factor f such that f|(p-1) and
 f|(q-1)?
 
 Thanks.
 
 --mancha
 
 -- https://twitter.com/mancha140




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-07-31 Thread mancha
On Fri, Jul 31, 2015 at 02:36:03AM +, p...@securecottage.com wrote:
 
 Hi there,
 
 I have looked at the RSA protocol a bit and have concluded that
 
 1) common factors in (p-1) and (q-1) are also in the factorisation of
 (p*q-1).  2) by factoring (p*q-1) you can come up with candidates for
 squares in the totient.  3) you can also come up with d mod
 commonfactor^2 if there is a common factor.
 
 the math is shown in my wikipedia users page math blog at:
 
 https://en.wikipedia.org/wiki/User:Endo999#The_Bad_Stuff_That_Happens_When_There_Are_Common_Factors_Between_.28P-1.29_and_.28Q-1.29

[SNIP]

Hi. How are you finding a common factor f such that f|(p-1) and f|(q-1)?

Thanks.

--mancha

-- https://twitter.com/mancha140




Re: [openssl-dev] common factors in (p-1) and (q-1)

2015-07-31 Thread mancha
On Fri, Jul 31, 2015 at 11:19:39AM -0700, Bill Cox wrote:
 Cool observation.  From running a bit of Python code, it looks like
 the probability that GCD(p-1, p-q) == 4 is a bit higher than 15%, at
 least for random numbers between 2048 and 4096 bits long.  It looks
 like putting in a GCD(p-1, q-1) check will slow down finding suitable
 p and q by around a factor of 6.5.
 
 I am not saying OpenSSL should or should not do this check, but
 hopefully making that decision is easier knowing the runtime penalty.

To clarify, the worry is that lcm((p-1),(q-1)) << (p-1)(q-1), thus making
the computation of d=1/e (mod lcm((p-1),(q-1))) comparatively easier?

If so, here's my quick & dirty back-of-envelope calculation (mod bound)
for the probability the gcd of two randomly chosen integers x,y is at
most k:

k   P(gcd(x,y) <= k)
-   ----------------
1   60.79%
2   75.99%
3   82.75%
4   86.55%
5   88.98%
6   90.67%
7   91.91%
8   92.86%
9   93.61%
10  94.21%

As can be seen, the probability is quite high the gcd will be small, so
(p-1)(q-1) ~ lcm((p-1),(q-1)), removing the above benefit.
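
(The same numbers drop out of the exact density: for independent random
integers, P(gcd(x,y) = j) = (6/pi^2)/j^2, so the table above is just a partial
sum; a few lines of Python for anyone who wants to reproduce it:)

  # P(gcd(x,y) = j) = (6/pi^2)/j^2 for independent random integers x, y;
  # summing j = 1..k reproduces the cumulative table above.
  from math import pi

  cum = 0.0
  for k in range(1, 11):
      cum += (6 / pi**2) / k**2
      print("%2d  %6.2f%%" % (k, 100 * cum))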

But it's the end of the week and the neurons need respite so please let
me know if I'm missing something.

--mancha

--
https://twitter.com/mancha140




[openssl-dev] CVE-2015-1793 tester (alt.chain.fail)

2015-07-09 Thread mancha
Hi.

Vulnerability tester for CVE-2015-1793 (alternative chains certificate
forgery) based on Matt Caswell's test now available:

https://twitter.com/mancha140/status/619316033241923585

--mancha




Re: [openssl-dev] sizeof (HMAC_CTX) changes with update, breaks binary compatibility

2015-06-12 Thread mancha
On Thu, Jun 11, 2015 at 09:07:18PM -0400, Dan McDonald wrote:
 I noticed that a new field was added to HMAC_CTX in the 1.0.2a-b or
 1.0.1m-n update:
 
 typedef struct hmac_ctx_st {
         const EVP_MD *md;
         EVP_MD_CTX md_ctx;
         EVP_MD_CTX i_ctx;
         EVP_MD_CTX o_ctx;
         unsigned int key_length;
         unsigned char key[HMAC_MAX_MD_CBLOCK];
 +       int key_init;
 } HMAC_CTX;
 
 This breaks binary compatibility.  I found this out the hard way
 during an attempt to update OmniOS's OpenSSL to 1.0.2b ('014, bloody)
 or 1.0.1n (006, 012).  Observe our use of HMAC_CTX in illumos (which
 OmniOS is a distribution of):
 
 struct Mac {
         char            *name;
         int             enabled;
         u_int           mac_len;
         u_char          *key;
         u_int           key_len;
         int             type;
         const EVP_MD    *evp_md;
         HMAC_CTX        evp_ctx;
 };
 struct Comp {
         int             type;
         int             enabled;
         char            *name;
 };
 struct Newkeys {
         Enc             enc;
         Mac             mac;
         Comp            comp;   /* XXX KEBE SAYS THIS GETS CLOBBERED!!! */
 };
 
 You can see the code here:
 
   
 http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/ssh/include/kex.h#100
 
 What is supposed to happen in this situation?  I was under the
 impression that letter releases don't break binary compatibility.  The
 SSH in illumos breaks because of this, but it appears OpenSSH has
 worked around such a situation.
 
 Clues are welcome.
 
 Thanks, Dan McDonald -- OmniOS Engineering

Hi Dan. Many thanks for your report. I've checked and the issue you've
identified potentially affects OpenSSH 4.7 through 6.5, inclusive.

OpenSSH 6.6 replaces the OpenSSL HMAC with its own implementation making
the ABI change a NOP for OpenSSH 6.6 onwards.  

Cheers.

--mancha




Re: [openssl-dev] What key length is used for DHE by default ?

2015-05-23 Thread mancha
On Fri, May 22, 2015 at 06:43:28PM +0200, Rainer Jung wrote:
 Am 22.05.2015 um 18:32 schrieb Nayna Jain:
 Ok, I think this is what I didn't know. I was using openssl 1.0.1g
 client. I still didn't have openssl 1.0.2 .
 
 If it were trivial I think showing the temp key size would be a
 welcome backport to 1.0.1 before the next release. It is very useful
 in light of logjam but many people are not yet at 1.0.2. Of course
 they wont get the latest 1.0.1 immediately, but distros have a chance
 to backport.
 
 Regards,
 
 Rainer

Hi Rainer (and devs).

I had already done this for personal consumption. When I saw your email
I decided to make a pull request (devs, for your consideration):

  https://github.com/openssl/openssl/pull/291

If you'd like to patch OpenSSL 1.0.1m immediately, grab my patch
(https://github.com/mancha1/openssl/commit/a59f22520bb5.patch), remove
the first hunk (to CHANGES), and apply it.

--mancha




Re: [openssl-dev] Weak DH and the Logjam

2015-05-23 Thread mancha
On Thu, May 21, 2015 at 03:29:23AM +, mancha wrote:
 On Wed, May 20, 2015 at 11:31:00PM +0200, Kurt Roeckx wrote:
  On Wed, May 20, 2015 at 08:58:54PM +, mancha wrote:
   On Wed, May 20, 2015 at 07:17:43PM +0200, Kurt Roeckx wrote:
On Wed, May 20, 2015 at 07:11:42AM +, mancha wrote:
 Hello.
 
 Given Adrien et al. recent paper [1] together with their
 proof-of-concept attacks against 512-bit DH groups [2], it
 might be a good time to resurrect a discussion Daniel Kahn
 Gillmor has started here in the past.

Please see
http://www.openssl.org/blog/blog/2015/05/20/logjam-freak-upcoming-changes/


Kurt
   
   Hi Kurt. Thanks for the link and congrats to EK for a well-written
   blog.
   
   A few questions...
   
   1. On ECC:
   
   Did I correctly understand that starting with 1.0.2b, OpenSSL
   clients will only include secp256r1, secp384r1, and secp521r1 on
   the prime side and sect283k1, sect283r1, sect409k1, sect409r1,
   sect571k1, sect571r1 on the binary side in supported elliptic
   curves extensions?
  
  It also has the 3 brainpool curves and secp256k1.
 
 Yep, forgot about the addition of brainpool curves in 1.0.2.
 
   Will OpenSSL consider making this change in 1.0.1 as well?
  
  1.0.1 doesn't support the auto ecdh, so we at least can't do exactly
  the same there.  But maybe we should also update the default used by
  the client?
 
 The following pull request for 1.0.1-stable removes elliptic curves
 that provide less than the equivalent of 128 bits of symmetric key
 security from the list clients announce via supported elliptic curves
 extensions.  
 
   https://github.com/openssl/openssl/pull/288
 

Commit now mentions changes in CHANGES. New pull request is #290.

https://github.com/openssl/openssl/issues/290

--mancha




[openssl-dev] Weak DH and the Logjam

2015-05-20 Thread mancha
Given Adrian et al.'s recent paper [1] together with their
proof-of-concept attacks against 512-bit DH groups [2], it might be a
good time to resurrect a discussion Daniel Kahn Gillmor has brought up
in the past.

Namely, whether it makes sense for OpenSSL to reject DH groups smaller
than some minimum. Say, 1024 bits or more. Currently, a client
implementation built on OpenSSL will happily accept small DH groups from
a peer (e.g. 16-bit DH group [3]).  

[1] https://weakdh.org/imperfect-forward-secrecy.pdf
[2] https://weakdh.org/logjam.html
[3] openssl s_client -connect demo.cmrg.net:443 < /dev/null

--mancha

PS My understanding is Google Chrome will soon be rejecting all DH
groups smaller than 1024 bits.




[openssl-dev] Weak DH and the Logjam

2015-05-20 Thread mancha
Hello.

Given Adrian et al.'s recent paper [1] together with their
proof-of-concept attacks against 512-bit DH groups [2], it might be a
good time to resurrect a discussion Daniel Kahn Gillmor has started
here in the past.

Namely, whether it makes sense for OpenSSL to reject DH groups smaller
than some minimum. Say, 1024 bits (or more). Currently, a client
implementation built on OpenSSL will happily accept small DH groups from
a peer (e.g. 16-bit DH group [3]).  

[1] https://weakdh.org/imperfect-forward-secrecy.pdf
[2] https://weakdh.org/logjam.html
[3] openssl s_client -connect demo.cmrg.net:443 < /dev/null

--mancha

PS My understanding is Google Chrome will soon be rejecting all DH
groups smaller than 1024 bits.




[openssl-dev] Weak DH and the Logjam

2015-05-20 Thread mancha
Given Adrian et al.'s recent paper [1] together with their proof-of-concept
attacks against 512-bit DH groups [2], it might be a good time to
resurrect a discussion Daniel Kahn Gillmor has brought up in the past.

Namely, whether it makes sense for OpenSSL to reject DH groups smaller
than some minimum (1024 bits or more). Currently, client implementations
built on OpenSSL will happily accept small DH groups from a peer (e.g.
16-bit DH group [3]).

[1] https://weakdh.org/imperfect-forward-secrecy.pdf
[2] https://weakdh.org/logjam.html
[3] openssl s_client -connect demo.cmrg.net:443 < /dev/null

--mancha

PS My understanding is Google Chrome will soon be rejecting all DH
groups smaller than 1024 bits.



Re: [openssl-dev] Weak DH and the Logjam

2015-05-20 Thread mancha
On Wed, May 20, 2015 at 07:17:43PM +0200, Kurt Roeckx wrote:
 On Wed, May 20, 2015 at 07:11:42AM +, mancha wrote:
  Hello.
  
  Given Adrien et al. recent paper [1] together with their
  proof-of-concept attacks against 512-bit DH groups [2], it might be
  a good time to resurrect a discussion Daniel Kahn Gillmor has
  started here in the past.
 
 Please see
 http://www.openssl.org/blog/blog/2015/05/20/logjam-freak-upcoming-changes/
 
 
 Kurt

Hi Kurt. Thanks for the link and congrats to EK for a well-written blog.

A few questions...

1. On ECC:

Did I correctly understand that starting with 1.0.2b, OpenSSL clients
will only include secp256r1, secp384r1, and secp521r1 on the prime side
and sect283k1, sect283r1, sect409k1, sect409r1, sect571k1, sect571r1 on
the binary side in supported elliptic curves extensions?

Will OpenSSL consider making this change in 1.0.1 as well?

2. On FF DH:

Is it possible for OpenSSL to provide a tentative timeline for its
planned transition (no minimum -> 768-bit min -> 1024-bit min)? Right
now the move to 1024-bit is slated for soon but tentative dates are
likely more effective prods for sites (and others) using Jurassic
modp's.

Cheers.

--mancha




Re: [openssl-dev] Weak DH and the Logjam

2015-05-20 Thread mancha
On Wed, May 20, 2015 at 11:31:00PM +0200, Kurt Roeckx wrote:
 On Wed, May 20, 2015 at 08:58:54PM +, mancha wrote:
  On Wed, May 20, 2015 at 07:17:43PM +0200, Kurt Roeckx wrote:
   On Wed, May 20, 2015 at 07:11:42AM +, mancha wrote:
Hello.

Given Adrien et al. recent paper [1] together with their
proof-of-concept attacks against 512-bit DH groups [2], it might
be a good time to resurrect a discussion Daniel Kahn Gillmor has
started here in the past.
   
   Please see
   http://www.openssl.org/blog/blog/2015/05/20/logjam-freak-upcoming-changes/
   
   
   Kurt
  
  Hi Kurt. Thanks for the link and congrats to EK for a well-written
  blog.
  
  A few questions...
  
  1. On ECC:
  
  Did I correctly understand that starting with 1.0.2b, OpenSSL
  clients will only include secp256r1, secp384r1, and secp521r1 on the
  prime side and sect283k1, sect283r1, sect409k1, sect409r1,
  sect571k1, sect571r1 on the binary side in supported elliptic curves
  extensions?
 
 It also has the 3 brainpool curves and secp256k1.

Yep, forgot about the addition of brainpool curves in 1.0.2.

  Will OpenSSL consider making this change in 1.0.1 as well?
 
 1.0.1 doesn't support the auto ecdh, so we at least can't do exactly
 the same there.  But maybe we should also update the default used by
 the client?

The following pull request for 1.0.1-stable removes elliptic curves that
provide less than the equivalent of 128 bits of symmetric key security
from the list clients announce via supported elliptic curves extensions.  

  https://github.com/openssl/openssl/pull/288

--mancha




Re: [openssl-dev] Circumstances cause CBC often to be preferred over GCM modes

2014-12-16 Thread mancha
On Tue, Dec 16, 2014 at 06:28:03PM +0100, Hanno Böck wrote:
 On Tue, 16 Dec 2014 17:17:01 +
 Viktor Dukhovni openssl-us...@dukhovni.org wrote:
 
  However, where do we fit ChaCha20/Poly-1305?  Again, not
  hand-placement, but some extensible algorithm.
 
 How about this simpler criterion:
 AEAD always beats non-AEAD. GCM and poly1305 are both AEAD. Done with
 it.

Has there been significant cryptanalysis done on ChaCha20-Poly1305? My
quick scan reveals a dearth of peer-reviewed literature.

--mancha




Re: [openssl-dev] [openssl.org #3627] Enhancement request: add more Protocol options for SSL_CONF_CTX

2014-12-12 Thread mancha
On Thu, Dec 11, 2014 at 07:37:39PM +0100, Steffen Nurpmeso wrote:
 Salz, Rich via RT r...@openssl.org wrote:
   So you want a separate openssl-conf package.  Fine, then provide
   it and give an easy mechanism for applications to hook into it.
   And for users to be able to overwrite system defaults.  But this
   has not that much to do with #3627.
  
  Yes it does.  :)  A newer simpler API that does what you want seems
  exactly the way forward.  Go for it.
 
 You sound pretty good and done here..  Gratulations.  [Laughter]

Emails sent to an RT issue are automatically forwarded by the system to
the openssl-dev ML. There's no need to explicitly cc: openssl-dev as
you're doing - all that does is clutter the ML with duplicates. 

--mancha





Re: [openssl.org #3576] [PATCH] Speed up AES-256 key expansion by 1.9x

2014-10-21 Thread mancha
On Tue, Oct 21, 2014 at 02:09:03AM -0400, Salz, Rich wrote:
 
   AES 128 is worth supporting.
  
  Not for me; doing this strictly for fun.
 
 Sure, I understand that.
 
 We're unlikely to incorporate the patch without finishing it and
 doing AES 128.  Nobody said it had to be you :)
 
 It will take awhile anyway, and it won't show up in 1.0.2
 

Rich:

You might want to toggle off base64 encoding on your emails. Some mail
clients choke on it as do list aggregators (e.g.
http://marc.info/?l=openssl-dev&m=141387182603109&w=2).

I long ago trained myself to read base64 while building up an immunity
to iocaine powder but others might have trouble.

--mancha




Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-17 Thread mancha
On Thu, Oct 16, 2014 at 02:50:58PM +0200, Bodo Moeller wrote:
 This is not quite the same discussion as in the TLS Working Group, but
 I certainly think that the claim that new SCSV does not help with
 [the SSL 3.0 protocol issue related to CBC padding] at all is wrong,
 and that my statement that TLS_FALLBACK_SCSV can be used to counter
 CVE-2014-3566 is right.

The point is more nuanced and boils down to there being a difference
between CVE-2014-3566 (SSLv3's vulnerability to padding oracle attacks
on CBC-mode ciphers) and POODLE (an attack that exploits CVE-2014-3566
by leveraging protocol fallback implementations to force peers into
SSLv3 communication).

TLS_FALLBACK_SCSV does not fix or mitigate CVE-2014-3566. With or
without 0x5600, SSLv3 CBC-mode cipher usage is broken.

Chrome, Firefox, etc. intentionally implement protocol fallback (which I
presume is why there are no MITRE CVE designations for the behavior per
se). However, one can make a strong case protocol fallback
implementations that are MITM-triggerable deserve CVE designations.  

TLS_FALLBACK_SCSV could then be accurately described as partially
mitigating those CVEs.

--mancha




Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-14 Thread mancha
On Wed, Oct 15, 2014 at 01:46:40AM +0200, Bodo Moeller wrote:
 Here's a patch for the OpenSSL 1.0.1 branch that adds support for
 TLS_FALLBACK_SCSV, which can be used to counter the POODLE attack
 (CVE-2014-3566; https://www.openssl.org/~bodo/ssl-poodle.pdf).

Hi Bodo. Many thanks for the OOB patch that I just saw commited to git.
Any reason for the s_client -fallback_scsv option check to be within an
#ifndef OPENSSL_NO_DTLS1 block?  

Thanks.

--mancha




Re: [openssl.org #3375] Patch: Off-by-one errors in ssl_cipher_get_evp()

2014-06-21 Thread mancha
On Sat, Jun 21, 2014 at 08:51:35PM +0200, Otto Moerbeek wrote:
 
 You care confusing the matter. Kurt already expained he got the fix
 from OpenBSD. After that explanation, the OpenSSL repo was fixed to
 contain the attribution. 

Hi. I can't seem to find the attribution fix you allude to. Can you
provide a link?

--mancha




Re: Prime generation

2014-05-27 Thread mancha
On Tue, May 27, 2014 at 08:23:29AM +0200, Otto Moerbeek wrote:
 On Tue, May 27, 2014 at 05:23:45AM +, mancha wrote:
 
  On Mon, May 26, 2014 at 09:01:53PM +, mancha wrote:
   On Mon, May 26, 2014 at 08:49:03PM +, Viktor Dukhovni wrote:
On Mon, May 26, 2014 at 08:20:43PM +, mancha wrote:

 For our purposes, the operative question is whether the
 distribution bias created can be leveraged in any way to
 attack factoring (RSA) or dlog (DH).

The maximum gap between primes of size $n$ is conjectured to be
around $log(n)^2$.  If $n$ is $2^k$, the gap is at most $k^2$,
with an average value of $k$.  Thus the most probable primes are
most $k$ times more probable than is typical, and we lose at
most $log(k)$ bits of entropy.  This is not a problem.
   
   One consequence of the k-tuple conjecture (generally believed to
   be true) is that the size of gaps between primes is distributed
   poisson.
   
   You're right when you say the entropy loss between a uniform
   distribution to OpenSSL's biased one is small. In that sense there
   is not much to be gained entropy-wise from using a process that
   gives uniformly distributed primes over what OpenSSL does.
   
   However, if a way exists to exploit the OpenSSL distribution bias,
   it can be modified to be used against uniformly distributed primes
   with only minimal algorithmic complexity increases. In other
   words, the gold standard here isn't a uniform distribution.
   
   --mancha
  
  This is probably more wonkish than Ben intended with his question
  but for those interested, the Poisson result I alluded to is due to
  Gallagher [1].
  
  [1] Gallagher, On the distribution of primes in short intervals,
  Mathematika, 1976
 
 Would this work: if you are worried the algorithm will never pick the
 highest of a prime pair, just make it search backward half of the
 time?
 
 But I understand it has no real security implications.
 
   -Otto

The issue is not limited to twin primes though that extreme drives the
point home. In the twin case {p,p+2}, OpenSSL only finds p+2 if p+1 or
p+2 happens to be the randomly selected start point. So, the proportion
of primes OpenSSL finds that are twins will be significantly lower than
theory predicts. With OpenSSL's incremental search, the probability a
particular probable prime p is selected is proportional to the length of
the gap of composites which immediately precedes it.
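
(A toy experiment over small primes makes the effect visible; my own sketch,
with sympy doing the primality tests and sizes nowhere near cryptographic:)

  # Incremental search (random odd start, step by 2 to the next prime) versus
  # the true proportion of primes in the same range that are upper twins
  # (p - 2 also prime). The incremental search finds upper twins far less often.
  from random import randrange
  from sympy import isprime, primerange

  LO, HI = 10**6, 10**6 + 2 * 10**5

  def incremental_prime():
      n = randrange(LO, HI) | 1
      while not isprime(n):
          n += 2
      return n

  trials = 10000
  found = sum(isprime(incremental_prime() - 2) for _ in range(trials))

  primes = list(primerange(LO, HI))
  upper = sum(isprime(p - 2) for p in primes)

  print("upper twins among all primes here: %.1f%%" % (100 * upper / len(primes)))
  print("upper twins found incrementally:   %.1f%%" % (100 * found / trials))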

Brandt and Damgard [1], from what I can tell the fathers of the
incremental search OpenSSL uses, share Viktor Dukhovni's view and use an
entropy argument to conclude little or no additional risk exists with
incremental searches relative to uniformly distributed prime generation.

Mihailescu (of Catalan Conjecture fame) establishes a complexity
equivalence class to argue improved attacks against an incremental
search can be converted to attacks against uniformly distributed prime
generation with comparable runtimes [2]. For Mihailescu, incremental
search bias is tolerable, especially in light of the lower entropy
costs and efficiency gains relative to naive discard & repeat. The
improvements he models are, in essence, improvements in the
state-of-the-art not the result of leveraging bias.

Fouque and Tibouchi [3] offer the differing view that it's preferable to
minimize bias and generate primes that are almost uniform even if it is
not immediately clear how such biases can help an adversary. They
suggest a few algorithms that improve on naive discard & repeat by
discarding only the top N bits of a candidate at each iteration, among
other innovations.

---
[1] Brandt and Damgard, On Generation of Probable Primes by Incremental
Search, 1988
[2] Mihailescu, Measuring the Cryptographic Relevance of Biased Public
Key Distributions, 1998
[3] Fouque and Tibouchi, Close to Uniform Prime Number Generation With
Fewer Random Bits, 2011





Re: Prime generation

2014-05-26 Thread mancha
On Mon, May 26, 2014 at 08:23:07PM +0100, Ben Laurie wrote:
 On 26 May 2014 19:52, Viktor Dukhovni openssl-us...@dukhovni.org
 wrote:
  On Mon, May 26, 2014 at 07:24:54PM +0100, Ben Laurie wrote:
 
  Finally, all of them have a bias: they're much more likely to pick
  a prime with a long run of non-primes before it than one that
  hasn't (in the case of the DH ones, the condition is slightly more
  subtle, depending on parameters, but its there nevertheless). Is
  this wise?
 
  Where do you see the bias?
 
 They all work by picking a random number and then stepping upwards
 from that number until a probable prime is found. Clearly, that is
 more likely to find primes with a long run of non-primes before than
 primes with a short run.

To put this another way, take OpenSSL's probable_prime(). It finds
probable primes by searching arithmetic progressions (common
difference of 2) with random start points.

The starting point is more likely to be chosen from within long runs
of composites thus biasing prime selection to ones at tail ends of
longer sequences of composites.

For our purposes, the operative question is whether the distribution
bias created can be leveraged in any way to attack factoring (RSA) or
dlog (DH).

--mancha





Re: Prime generation

2014-05-26 Thread mancha
On Mon, May 26, 2014 at 08:49:03PM +, Viktor Dukhovni wrote:
 On Mon, May 26, 2014 at 08:20:43PM +, mancha wrote:
 
  For our purposes, the operative question is whether the distribution
  bias created can be leveraged in any way to attack factoring (RSA)
  or dlog (DH).
 
 The maximum gap between primes of size $n$ is conjectured to be around
 $log(n)^2$.  If $n$ is $2^k$, the gap is at most $k^2$, with an
 average value of $k$.  Thus the most probable primes are at most $k$
 times more probable than is typical, and we lose at most $log(k)$ bits
 of entropy.  This is not a problem.

One consequence of the k-tuple conjecture (generally believed to be
true) is that the size of gaps between primes is distributed poisson.

You're right when you say the entropy loss between a uniform
distribution to OpenSSL's biased one is small. In that sense there is
not much to be gained entropy-wise from using a process that gives
uniformly distributed primes over what OpenSSL does.

However, if a way exists to exploit the OpenSSL distribution bias, it
can be modified to be used against uniformly distributed primes with
only minimal algorithmic complexity increases. In other words, the gold
standard here isn't a uniform distribution.

--mancha




Re: Prime generation

2014-05-26 Thread mancha
On Mon, May 26, 2014 at 09:01:53PM +, mancha wrote:
 On Mon, May 26, 2014 at 08:49:03PM +, Viktor Dukhovni wrote:
  On Mon, May 26, 2014 at 08:20:43PM +, mancha wrote:
  
   For our purposes, the operative question is whether the
   distribution bias created can be leveraged in any way to attack
   factoring (RSA) or dlog (DH).
  
  The maximum gap between primes of size $n$ is conjectured to be
  around $log(n)^2$.  If $n$ is $2^k$, the gap is at most $k^2$, with
  an average value of $k$.  Thus the most probable primes are most $k$
  times more probable than is typical, and we lose at most $log(k)$
  bits of entropy.  This is not a problem.
 
 One consequence of the k-tuple conjecture (generally believed to be
 true) is that the size of gaps between primes is distributed poisson.
 
 You're right when you say the entropy loss between a uniform
 distribution to OpenSSL's biased one is small. In that sense there is
 not much to be gained entropy-wise from using a process that gives
 uniformly distributed primes over what OpenSSL does.
 
 However, if a way exists to exploit the OpenSSL distribution bias, it
 can be modified to be used against uniformly distributed primes with
 only minimal algorithmic complexity increases. In other words, the
 gold standard here isn't a uniform distribution.
 
 --mancha

This is probably more wonkish than Ben intended with his question but
for those interested, the Poisson result I alluded to is due to
Gallagher [1].

[1] Gallagher, On the distribution of primes in short intervals,
Mathematika, 1976




Re: [openssl.org #3321] NULL pointer dereference with SSL_MODE_RELEASE_BUFFERS flag

2014-05-02 Thread mancha
Kurt Roeckx via RT rt at openssl.org writes:
 
 There is a potential patch for this in libressl, you can see it
 at:
 http://anoncvs.estpak.ee/cgi-bin/cgit/openbsd-src/commit/lib/libssl?id=e76e308f1fab2253ab5b4ef52a1865c5ffecdf21
 
 Kurt

Hello.

This issue has been assigned CVE-2014-0198. Any news on an 
OpenSSL fix?

Thanks.

--mancha



Re: OpenSSL has exploit mitigation countermeasures to make sure its exploitable

2014-04-14 Thread mancha
On Sat, Apr 12, 2014 at 09:02:50PM -0400, Salz, Rich wrote:
  Would you please elaborate on how it differs from what you've been
  using in production?
 
 Local platform issues, mainly.  Conceptually, nothing different about
 the security.
 

Hello Rich et al.

I believe Akamai's secure malloc, in current form, to be ineffective
against heartbleed.

In order to achieve ~4-fold improvements in RSA signing speeds, many
implementations (including OpenSSL) bundle pre-computed Chinese
remainder theorem parameters in private keys (so-called quintuple
representation). [1]

Akamai's secure malloc appears to only protect the private exponent (d)
and two primes (p and q) leaving both CRT exponents (e1  e2) and the
first CRT coefficient (coeff) unprotected. 

However, the public exponent (e), modulus (n), and either CRT exponent
(e1 or e2) are sufficient to recover a prime and therefore the full
private key.

Rather than plaster math equations here, I've attached a small perl
script that demonstrates this by way of an example.
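
(The attached script isn't reproduced here, but the recovery itself is short
enough to sketch in Python with toy key sizes; this assumes the leaked value
is dP = d mod (p-1), uses sympy only for prime generation, and needs Python
3.8+ for the modular inverse:)

  # If dP = d mod (p-1) leaks, then e*dP = 1 + k*(p-1) for some k, hence
  # a^(e*dP) == a (mod p) for any a, and gcd(a^(e*dP) - a, n) reveals p.
  from math import gcd
  from sympy import randprime

  e = 65537
  while True:
      p = randprime(2**255, 2**256)
      q = randprime(2**255, 2**256)
      phi = (p - 1) * (q - 1)
      if p != q and gcd(e, phi) == 1:
          break
  n = p * q
  d = pow(e, -1, phi)
  dP = d % (p - 1)                  # the CRT exponent left unprotected

  a = 2
  p_recovered = gcd(pow(a, e * dP, n) - a, n)
  print(p_recovered == p)           # True (with overwhelming probability)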

Recommendation: protect the rest of the private key material.

My analysis has focused on problems related to *what* should be
protected not on the effectiveness of *how* it is protected. The *how*
also merits close review.  One immediate observation I have on that
front is that secure_malloc_init() is never called.

--mancha

[1] https://tools.ietf.org/html/rfc3447


akamai.pl
Description: Perl program




Re: seems openssl version 1.0.1g also infected

2014-04-14 Thread mancha
On Mon, Apr 14, 2014 at 10:57:37PM +0530, LOKESH JANGIR wrote:
 Hi team,
 
 I am using amazon ami release  Amazon Linux AMI release 2014.03. When i
 restart httpd service then i can see in logs that old version of openssl is
 loading with this. Can you please guide me what to do in this case ?
 
 Regards,
 Lokesh

Hello.

You are more likely to receive help through Amazon support or 
openssl-users. The development list is concerned primarily with
development.

Thank you.

--mancha




Re: OpenSSL has exploit mitigation countermeasures to make sure its exploitable

2014-04-12 Thread mancha
[From: openssl-users]

On Fri, Apr 11, 2014, Salz, Rich wrote:

 This patch is a variant of what we've been using to help protect
 customer keys for a decade.

Would you please elaborate on how it differs from what you've been using
in production?

 OpenSSL is important to us, and this is the first of what we hope will
 be several significant contributions in the near future.

I applaud Akamai's initiative and hope it serves as an example to other
organizations.

Message to all: If you use and benefit from OpenSSL and have developed
significant in-house improvements, consider reciprocating by making your
enhancements available to the community. No need to wait until the next
disaster.

--mancha

PS I cleaned the patch up a bit (i.e. swapped in the correct cmm_init)
and include it here. It applies cleanly to 1.0.1g which then builds fine
on Linux with -pthread. Regression tests pass. Now on to real testing...

Also, -dev is probably a better place for this than -users, isn't it?

From 8320f4697305785971a6a3cc5a5bed7b30cc46cd Mon Sep 17 00:00:00 2001
From: mancha mancha1 AT zoho DOT com
Date: Sat, 12 Apr 2014
Subject: Akamai secure memory allocator

Akamai Technologies patch that adds a secure arena used to store
RSA private keys. The arena is mmap'd with guard pages, before and
after, so pointer over- and under-runs won't wander in. It is
also locked into memory so it doesn't appear on disk and, when
possible, kept out of core files.

This is a variant of what Akamai has been using to help protect
customer keys for a decade.

Ref: http://marc.info/?t=13972371245&r=1&w=2

---

 crypto/Makefile  |8 +
 crypto/asn1/tasn_dec.c   |   32 +++
 crypto/buddy_allocator.c |  411 +++
 crypto/crypto.h  |   24 +-
 crypto/secure_malloc.c   |  223 +
 crypto/secure_malloc.h   |   45 
 6 files changed, 726 insertions(+), 17 deletions(-)

--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -35,14 +35,16 @@ GENERAL=Makefile README crypto-lib.com i
 LIB= $(TOP)/libcrypto.a
 SHARED_LIB= libcrypto$(SHLIB_EXT)
 LIBSRC=cryptlib.c mem.c mem_clr.c mem_dbg.c cversion.c ex_data.c 
cpt_err.c \
-   ebcdic.c uid.c o_time.c o_str.c o_dir.c o_fips.c o_init.c fips_ers.c
+   ebcdic.c uid.c o_time.c o_str.c o_dir.c o_fips.c o_init.c fips_ers.c \
+   secure_malloc.c buddy_allocator.c
 LIBOBJ= cryptlib.o mem.o mem_dbg.o cversion.o ex_data.o cpt_err.o ebcdic.o \
-   uid.o o_time.o o_str.o o_dir.o o_fips.o o_init.o fips_ers.o $(CPUID_OBJ)
+   uid.o o_time.o o_str.o o_dir.o o_fips.o o_init.o fips_ers.o 
$(CPUID_OBJ) \
+   secure_malloc.o buddy_allocator.o
 
 SRC= $(LIBSRC)
 
 EXHEADER= crypto.h opensslv.h opensslconf.h ebcdic.h symhacks.h \
-   ossl_typ.h
+   ossl_typ.h secure_malloc.h
 HEADER=cryptlib.h buildinf.h md32_common.h o_time.h o_str.h o_dir.h 
$(EXHEADER)
 
 ALL=$(GENERAL) $(SRC) $(HEADER)
--- a/crypto/crypto.h
+++ b/crypto/crypto.h
@@ -365,20 +365,16 @@ int CRYPTO_is_mem_check_on(void);
 #define MemCheck_off() CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_DISABLE)
 #define is_MemCheck_on() CRYPTO_is_mem_check_on()
 
-#define OPENSSL_malloc(num)CRYPTO_malloc((int)num,__FILE__,__LINE__)
-#define OPENSSL_strdup(str)CRYPTO_strdup((str),__FILE__,__LINE__)
-#define OPENSSL_realloc(addr,num) \
-   CRYPTO_realloc((char *)addr,(int)num,__FILE__,__LINE__)
-#define OPENSSL_realloc_clean(addr,old_num,num) \
-   CRYPTO_realloc_clean(addr,old_num,num,__FILE__,__LINE__)
-#define OPENSSL_remalloc(addr,num) \
-   CRYPTO_remalloc((char **)addr,(int)num,__FILE__,__LINE__)
-#define OPENSSL_freeFunc   CRYPTO_free
-#define OPENSSL_free(addr) CRYPTO_free(addr)
-
-#define OPENSSL_malloc_locked(num) \
-   CRYPTO_malloc_locked((int)num,__FILE__,__LINE__)
-#define OPENSSL_free_locked(addr) CRYPTO_free_locked(addr)
+#include openssl/secure_malloc.h
+#define OPENSSL_malloc(s)   secure_malloc(s)
+#define OPENSSL_strdup(str) secure_strdup(str)
+#define OPENSSL_free(a) secure_free(a)
+#define OPENSSL_realloc(a,s)secure_realloc(a,s)
+#define OPENSSL_realloc_clean(a,o,s) secure_realloc_clean(a,o,s)
+#define OPENSSL_remalloc(a,s) (OPENSSL_free(a), OPENSSL_malloc(s))
+#define OPENSSL_freeFunc  secure_free
+#define OPENSSL_malloc_locked(s) OPENSSL_malloc(s)
+#define OPENSSL_free_locked(a) OPENSSL_free(a)
 
 
 const char *SSLeay_version(int type);
--- a/crypto/asn1/tasn_dec.c
+++ b/crypto/asn1/tasn_dec.c
@@ -169,6 +169,11 @@ int ASN1_item_ex_d2i(ASN1_VALUE **pval,
int otag;
int ret = 0;
ASN1_VALUE **pchptr, *ptmpval;
+
+int ak_is_rsa_key  = 0; /* Are we parsing an RSA key? */
+int ak_is_secure_field = 0; /* should this field be allocated from the 
secure arena? */
+int ak_is_arena_active = 0; /* was the secure arena already activated? 
*/
+
if (!pval)
return 0;
if (aux  aux-asn1_cb)
@@ -407,6 +412,11 @@ int

Re: 1.0.1g doesn't compile due to pod2man documentation errors (diff file with fixes attached)

2014-04-08 Thread mancha
On Tuesday, April 08, 2014 at 10:20 AM, Christoph Martens wrote:

 Hey guys,


 I found several documentation bugs in an up2date pod2man environment.
 Compiled on bleeding-edge Ubuntu 14.04 dev.

 The fixes diff file for the documentation is applied.  Version: 1.0.1g

 Github Issue (for tracking):
 https://github.com/openssl/openssl/issues/57

I prepared these a while back to deal with the stricter pod2man syntax
in perl 5.18+ (note: the 1.0.1f patch applies fine to 1.0.1g).

http://sf.net/projects/mancha/files/misc/openssl-0.9.8y-perl-5.18.diff
http://sf.net/projects/mancha/files/misc/openssl-1.0.1f-perl-5.18.diff

--mancha

-
PGP: 0x25168EB24F0B22AC
[56B7 100E F4D5 811C 8FEF  ADD1 2516 8EB2 4F0B 22AC]





Re: [openssl.org #3288] openssl 1.1 - X509_check_host is wrong and insufficient

2014-04-01 Thread mancha
Viktor Dukhovni openssl-users at dukhovni.org writes:


 On Tue, Apr 01, 2014 at 12:36:18PM -0400, Daniel Kahn Gillmor wrote:

  I think the current best approach to this is the public suffix
  list, http://publicsuffix.org/ it's a horrible kludge (a
  fully-enumerated list of all zones that are known to allow
  registration of sub-zones to the public), but it's better than just
  counting labels.
 
  there are a few C libraries that could be used to make this
  abstraction available to OpenSSL (if building against external
  libraries is OK) without requiring much extra work in OpenSSL
  itself.

 I, for one, would not want OpenSSL to employ such a complex and
 fragile mechanism.

I, too, favor a KISS approach. A simple and self-contained algorithm to
ensure RFC 6125 compliance can be the near-term goal.
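
(For a sense of scale, the core of such a check is only a few lines; a rough
Python sketch of the left-most-label wildcard rule, with made-up helper and
test names, deliberately stricter than RFC 6125 in spots, and in no way a
drop-in for X509_check_host:)

  # Wildcard honored only as the entire left-most label, never across label
  # boundaries, and never for a bare registered domain; everything else is an
  # exact (case-insensitive) match.
  def hostname_matches(pattern, hostname):
      p_labels = pattern.lower().split(".")
      h_labels = hostname.lower().rstrip(".").split(".")
      if p_labels[0] != "*" or len(p_labels) < 3 or len(p_labels) != len(h_labels):
          return p_labels == h_labels
      return p_labels[1:] == h_labels[1:]   # '*' matches exactly one label

  assert hostname_matches("www.example.com", "WWW.example.com")
  assert hostname_matches("*.example.com", "www.example.com")
  assert not hostname_matches("*.example.com", "a.b.example.com")
  assert not hostname_matches("*.com", "example.com")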

However, RFC 6125 makes a point of saying it only recommends client-side
wildcard cert handling to accommodate existing infrastructure (i.e.
backwards compat).

So, a medium-term higher-order discussion can be on the sanity of
continuing to support wildcards given the security implications,
ambiguities in specifications, variability in client implementations,
etc. Maybe that's a topic best suited for the uta wg.

--mancha



Re: CVE-2014-0076 and OpenSSL 0.9.8

2014-03-26 Thread mancha
On Wed, 26 Mar 2014 06:55:41 + geoff_l...@mcafee.com wrote:
 It looks as though CVE-2014-0076 affects OpenSSL 0.9.8-based
 distributions as well, correct?

Yes, 0.9.8y also uses the same Lopez/Dahab algo when computing
elliptic scalar mult on curves defined over binary fields
(i.e. GF(2^m)).

 It doesn't appear that the fix has been applied to the
 OpenSSL_0_9_8-stable branch yet though.  I suppose it might need a
 few tweaks to apply there cleanly...

The tweaks are minimal and I've placed a backport here:

http://sf.net/projects/mancha/files/sec/openssl-0.9.8y_CVE-2014-0076.diff
(.sig in same dir)

Note: all 0.9.8y ecdsa regression tests passed post-patch.

--mancha



Re: CVE-2014-0076 and OpenSSL 0.9.8

2014-03-26 Thread mancha
Dr. Stephen Henson steve at openssl.org writes:
  On Wed, Mar 26, 2014, Viktor Dukhovni wrote:
  Perhaps given the number of post-0.9.8y commits pending on the
  OpenSSL_0_9_8-stable branch, one final z release could be issued,
  no more commits made after that, and plans to not make any further
  releases announced?
  
 
 That sounds reasonable to me. Though it would be version 0.9.8za.
 
 Steve.

One more 0.9.8 release would be good news, and it would be even better
if it adopted my fix for its broken alert handling
(see http://marc.info/?t=13676007772&r=1&w=2 for details).

--mancha


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: open ssl SHA256 issue

2014-03-16 Thread mancha
On Sun, 16 Mar 2014 15:56:34 + Aya Montasser wrote:
Please, I want to get a self-signed certificate whose signature
algorithm and signature hash algorithm use SHA-256, but I can't
find the appropriate commands.

Whatever I do, it is always SHA-1.
Also, I use the command below:

x509 -req -in server.csr -signkey server.key -out server.crt

Hello.

Sign with -sha256:

openssl x509 -req -sha256 -in server.csr -signkey server.key -out server.crt
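
To confirm the result, inspect the signature algorithm of the generated
certificate; with an RSA key it should now report sha256WithRSAEncryption
rather than sha1WithRSAEncryption:

openssl x509 -in server.crt -noout -text | grep 'Signature Algorithm'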

--mancha

PS openssl-users is probably better suited for this type of
question. This list is primarily concerned with development.

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Patch for Correct fix for CVE-2013-0169 for openssl-.0.9.8y

2013-09-29 Thread mancha
Costas Stasimos coststasimos at gmail.com writes:
 Is there already prepared patch for 0.9.8y for this issue? If yes
 where I could download it?

Hi. There's a fix already committed in the git tree, which means
it'll be included in the next 0.9.8 release.

You can grab it here:

https://github.com/openssl/openssl/commit/59b1129e0a50fdf7e4e58d7c355783a7bfc1f44c
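
If you would rather not wait for the next release, GitHub serves individual
commits as patches when you append .patch, so something along these lines
should work (untested sketch against the 0.9.8y tarball):

wget https://github.com/openssl/openssl/commit/59b1129e0a50fdf7e4e58d7c355783a7bfc1f44c.patch
cd openssl-0.9.8y
patch -p1 < ../59b1129e0a50fdf7e4e58d7c355783a7bfc1f44c.patch   # git patches use a/ b/ prefixes, hence -p1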

--mancha


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #3038] [PATCH]: Fix warning-level alert handling in 0.9.8

2013-08-19 Thread mancha
mancha mancha1 at hush.com writes:
 Yet another bug report I came upon by accident (not an Ubuntu user):
 https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1144408
 
 From the report I gather this issue affects all users of Ubuntu's
 Lucid version. 
 

A few more folks discussing problems stemming from this issue. From
[1] it seems GitHub went as far as patching their server to work with
buggy 0.9.8x clients.

[1] https://github.com/composer/composer/issues/2042
[2] https://github.com/madmimi/madmimi-php/issues/5
[3] https://jamfnation.jamfsoftware.com/discussion.html?id=7599
[4] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698219





__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #3038] [PATCH]: Fix warning-level alert handling in 0.9.8

2013-08-16 Thread mancha
mancha mancha1 at hush.com writes:
 
 Hello.
 
 I never received a reply to this patch submission but wanted
 to follow up because I am receiving update requests from affected
 users (e.g. http://sourceforge.net/p/curl/bugs/1037/?page=3).
 
 I imagine 0.9.8 is in feature-freeze however I believe this
 qualifies as a bug-fix more than a feature-enhancement.
 
 Would someone let me know if this code might eventually make its
 way into 0.9.8 so I know how to respond to people requesting
 status updates from me?
 
 Thanks.
 
 --mancha

Yet another bug report I came upon by accident (not an Ubuntu user):
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1144408

From the report I gather this issue affects all users of Ubuntu's
Lucid version. 

--mancha




__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #3038] [PATCH]: Fix warning-level alert handling in 0.9.8

2013-08-07 Thread mancha
mancha1 at hush.com via RT rt at openssl.org writes:

 
 Hello.
 
 OpenSSL 0.9.8y does not properly handle warning-level
 alerts in the SSLv23 client method, unlike OpenSSL 1.0.0+.
 
 For example, when OpenSSL 0.9.8 initiates a connection
 using TLS-SNI extensions in SSLv23 mode and the server
 replies to client hello with an unrecognized_name warning
 alert, the handshake terminates client-side.
 
 This issue has been reported by many clients linked against
 OpenSSL 0.9.8 (see footer links).
 
 When connecting to a server that sends warning-level alerts
 on hostname mismatch in TLS-SNI, e.g.:
 
   $ openssl s_client -CApath /etc/ssl -connect \
 $CorrectHostname:443 -servername $InvalidHostname \
 -state </dev/null 2>&1 | grep -E 'alert|error'
 
 Current 0.9.8y behavior (output):
   SSL3 alert read:warning:unknown
   SSL_connect:error in SSLv2/v3 read server hello A
   7632:error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112):s23_clnt.c:602:
 
 Desired behavior (output) [consistent with OpenSSL 1.0.1e]:
   SSL3 alert read:warning:unrecognized name
   SSL3 alert write:warning:close notify
 
 Patch applies cleanly to OpenSSL_0_9_8-stable (HEAD at a44c9b9c)
 and makes behavior consistent with OpenSSL 1.0.1e. Also, it
 adds support for new alerts (RFC 6066 and RFC 4279).
 
 Please consider its inclusion after appropriate code review.
 
 --mancha
 
 Note: A higher-level discussion is whether non-fatal
 unrecognized_name alerts should be sent at all. Per RFC 6066,
 "If a server name is provided but not recognized, the server
 should either continue the handshake without an error or send
 a fatal error. Sending a warning-level message is not
 recommended because client behavior will be unpredictable."
 
 =
 
 [1] http://marc.info/?l=openssl-usersm=131736995412529w=2
 [2] http://sourceforge.net/p/curl/bugs/1037/
 [3] https://bugs.php.net/bug.php?id=61276
 [4] https://github.com/joyent/node/issues/3033
 
 Attachment (0001-Fix-handling-of-warning-level-alerts-in-SSL23-client.patch): application/octet-stream, 11 KiB


Hello.

I never received a reply to this patch submission but wanted
to follow up because I am receiving update requests from affected
users (e.g. http://sourceforge.net/p/curl/bugs/1037/?page=3).

I imagine 0.9.8 is in feature-freeze however I believe this
qualifies as a bug-fix more than a feature-enhancement.

Would someone let me know if this code might eventually make its
way into 0.9.8 so I know how to respond to people requesting
status updates from me?

Thanks.

--mancha



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [PATCH] s_client, proxy support

2013-06-14 Thread mancha
On Wed, 07 Dec 2011 m.tr...@gmx.de wrote:
Hi,

I have added support for the 'HTTP CONNECT' command to s_client.
Maybe it's useful for someone else.

Regards
Michael

Hello Michael.

I was doing some SSL diagnostics through a series of
proxy tunnels and was about to hack HTTP CONNECT support
for s_client when I came across your patch. Thank you for
sharing that!

I introduced a few slight changes: a) made the CONNECT string
RFC 2817 compliant, b) shut down on non-success of CONNECT,
and c) added a check to prevent a NULL dereference of connect_str.
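
For reference, an RFC 2817-style CONNECT preamble looks roughly like the
following (proxy name/port and target host are placeholders); it can also be
exercised by hand to sanity-check a proxy before pointing s_client at it:

  printf 'CONNECT www.example.org:443 HTTP/1.1\r\nHost: www.example.org:443\r\n\r\n' | \
      nc proxy.example.net 3128
  # a cooperative proxy answers with a "200 Connection established" status line
  # before the tunnel goes opaque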

Likewise, I'm sharing it here for anyone who might find it useful.

Cheers.

--mancha

openssl-0.9.8y-s_client-proxy.patch
Description: Binary data


openssl-1.0.1e-s_client-proxy.patch
Description: Binary data


[openssl.org #2854] [PATCH] Add display of old-style issuer hash to crl

2012-07-27 Thread mancha via RT
Inspired by the good work of Willy Weisz to add old-style compatibility
hashes to x509, I set out to do the same for crl.

This enhancement request shares the motivation of Weisz's code (mainlined
via PR-2136), namely to permit users to generate and display issuer name
hashes compatible with 0.9.x.

The attached patch modifies crl.c and crl.pod and applies cleanly to 1.0.1c.

Please consider its inclusion.

Best,
mancha
Add a flag to crl to generate legacy hashes as used by OpenSSL pre-1.0.
  -mancha

===

--- a/apps/crl.c			2012-07-16
+++ b/apps/crl.c			2012-07-16
@@ -81,6 +81,9 @@ static const char *crl_usage[]={
 " -in arg         - input file - default stdin\n",
 " -out arg        - output file - default stdout\n",
 " -hash           - print hash value\n",
+#ifndef OPENSSL_NO_MD5
+" -hash_old       - print old-style (MD5) hash value\n",
+#endif
 " -fingerprint    - print the crl fingerprint\n",
 " -issuer         - print issuer DN\n",
 " -lastupdate     - lastUpdate field\n",
@@ -108,6 +111,9 @@ int MAIN(int argc, char **argv)
 	int informat,outformat;
 	char *infile=NULL,*outfile=NULL;
 	int hash=0,issuer=0,lastupdate=0,nextupdate=0,noout=0,text=0;
+#ifndef OPENSSL_NO_MD5
+	int hash_old=0;
+#endif
 	int fingerprint = 0, crlnumber = 0;
 	const char **pp;
 	X509_STORE *store = NULL;
@@ -192,6 +198,10 @@ int MAIN(int argc, char **argv)
 			text = 1;
 		else if (strcmp(*argv,"-hash") == 0)
 			hash= ++num;
+#ifndef OPENSSL_NO_MD5
+		else if (strcmp(*argv,"-hash_old") == 0)
+			hash_old= ++num;
+#endif
 		else if (strcmp(*argv,"-nameopt") == 0)
 			{
 			if (--argc < 1) goto bad;
@@ -304,6 +314,13 @@ bad:
 				BIO_printf(bio_out,"%08lx\n",
 					X509_NAME_hash(X509_CRL_get_issuer(x)));
 				}
+#ifndef OPENSSL_NO_MD5
+			if (hash_old == i)
+				{
+				BIO_printf(bio_out,"%08lx\n",
+					X509_NAME_hash_old(X509_CRL_get_issuer(x)));
+				}
+#endif
 			if (lastupdate == i)
 				{
 				BIO_printf(bio_out,"lastUpdate=");
--- a/doc/apps/crl.pod			2012-07-16
+++ b/doc/apps/crl.pod			2012-07-16
@@ -14,6 +14,7 @@
 [B<-out filename>]
 [B<-noout>]
 [B<-hash>]
+[B<-hash_old>]
 [B<-issuer>]
 [B<-lastupdate>]
 [B<-nextupdate>]
@@ -62,6 +63,11 @@
 output a hash of the issuer name. This can be use to lookup CRLs in
 a directory by issuer name.
 
+=item B<-hash_old>
+
+output a hash of the issuer name using the old algorithm as used by
+OpenSSL prior to version 1.0.0.
+
 =item B<-issuer>
 
 output the issuer name.
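
For completeness, with the patch applied the old-style hash would be printed
with something along these lines (crl.pem is just a placeholder):

  openssl crl -in crl.pem -noout -hash_old
  openssl crl -in crl.pem -noout -hash        # current-style value, for comparison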