Re: Using a CRL in an OpenSSL program

2010-01-11 Thread Shane Steidley
Ron,

I believe you just need to make the following calls:

unsigned long vflags = X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL;

SSL_CTX_load_verify_locations(ctx, CRLfile, CRLpath);

store = SSL_CTX_get_cert_store(ctx);
X509_STORE_set_flags(store, vflags);

Generally, functions that work for CAs also work for CRLs.  Note that
SSL_CTX_load_verify_locations() only works on PEM-encoded CRLs and
CAs.
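
For completeness, a minimal sketch of the whole server-side setup (the helper
name, file names and error handling are illustrative, not taken from Ron's
code):

#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

static int enable_crl_checking(SSL_CTX *ctx, const char *ca_file,
                               const char *crl_file)
{
    X509_STORE *store;

    /* CA certificates and PEM CRLs can both be loaded as verify locations. */
    if (SSL_CTX_load_verify_locations(ctx, ca_file, NULL) != 1)
        return 0;
    if (SSL_CTX_load_verify_locations(ctx, crl_file, NULL) != 1)
        return 0;

    /* Check the CRL for every certificate in the chain. */
    store = SSL_CTX_get_cert_store(ctx);
    X509_STORE_set_flags(store,
                         X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);

    /* Require and verify a client certificate so revoked certificates are
       rejected during the handshake. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                       NULL);
    return 1;
}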

On Mon, Jan 11, 2010 at 9:13 AM, ronald braswell  wrote:
>
> Hi,
>
> I have published a CRL using openssl.   How do I use it in a custom OpenSSL 
> server to reject certificates in the CRL.   Is it as simple as registering a 
> file in the CTX or do I have to read in the CRL and examine it every time a 
> client certificate is presented?
>
> Thanks,
>
> Ron Braswell


Re: memory growing when using SSL connections

2010-01-11 Thread Dr. Stephen Henson
On Mon, Jan 11, 2010, David wrote:

> Kevin,
>
> Valgrind apparently is a Linux based tool, but I'm having this problem on 
> AIX. I actually have a simple memory leak finder that showed that SSL did 
> not have an obvious memory leak. It looks like most, if not all memory 
> management functions for SSL goes thru CRYPTO_malloc(), CRYPTO_realloc() 
> and CRYPTO_free() and their variants. I found a couple of calloc()s 
> elsewhere, but it looks like everything else got covered in these 
> functions.
>
> So pretty much, when memory is allocated (or reallocated) I put it on a 
> linked list. When it is freed, I remove it from the list. After all my 
> sessions have stopped, I see that my linked list hasn't  grown from the 
> previous run.
>
> I'm a bit suspicious about the realloc()s, since this is can be a source of 
> memory fragmentation. However, what is curious is that I don't see this 
> problem on Solaris.
> IBM has a product called PURIFYPLUS for detecting memory leaks. Perhaps 
> I'll see if I can run it with the application and see if it shows up 
> anything.
>

OpenSSL has some built-in leak detection. If you call something like:

MemCheck_start();

before a leaking section, then:

CRYPTO_mem_leaks_fp(stderr);

after you think you've freed everything up, you should get some useful results
if the leak is internal to OpenSSL. The output takes a bit of getting used to
(I usually run it twice, setting breakpoints) but can be very useful.
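
For illustration, a minimal sketch of how those calls fit together with the
0.9.8-era memory-debug API (run_suspect_section() is a placeholder for the
code you think is leaking):

#include <stdio.h>
#include <openssl/crypto.h>

static void check_for_leaks(void)
{
    CRYPTO_malloc_debug_init();
    CRYPTO_set_mem_debug_options(V_CRYPTO_MDEBUG_ALL);
    CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_ON);   /* what MemCheck_start() expands to */

    run_suspect_section();

    CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_OFF);  /* MemCheck_stop() */
    CRYPTO_mem_leaks_fp(stderr);            /* report anything still allocated */
}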

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org


Re: memory growing when using SSL connections

2010-01-11 Thread David

Kevin,

Valgrind apparently is a Linux-based tool, but I'm having this problem on 
AIX. I actually have a simple memory leak finder that showed that SSL did 
not have an obvious memory leak. It looks like most, if not all, memory 
management functions for SSL go through CRYPTO_malloc(), CRYPTO_realloc() and 
CRYPTO_free() and their variants. I found a couple of calloc()s elsewhere, 
but it looks like everything else is covered by these functions.

So pretty much, when memory is allocated (or reallocated) I put it on a 
linked list. When it is freed, I remove it from the list. After all my 
sessions have stopped, I see that my linked list hasn't grown from the 
previous run.

I'm a bit suspicious about the realloc()s, since this can be a source of 
memory fragmentation. However, what is curious is that I don't see this 
problem on Solaris.
IBM has a product called PurifyPlus for detecting memory leaks. Perhaps I'll 
see if I can run it with the application and see if it shows up anything.


Thanks,
David

--
From: "Kevin Regan" 
Sent: Monday, January 11, 2010 3:19 PM
To: 
Subject: RE: memory growing when using SSL connections

I had a similar issue that was fixed by moving the 
CRYPTO_cleanup_all_ex_data command to the end of the application (as has 
been suggested).  Have you tried running Valgrind with the application? 
This may tell you where the memory leak is occurring.


--Kevin

p.s.  You will need to compile OpenSSL with the -DPURIFY option (and you 
may need to grab a recent patch that I mailed to openssl-dev to 
get -DPURIFY working properly).


--Kevin

-Original Message-
From: owner-openssl-us...@openssl.org 
[mailto:owner-openssl-us...@openssl.org] On Behalf Of David

Sent: Monday, January 11, 2010 11:48 AM
To: openssl-users@openssl.org
Subject: Re: memory growing when using SSL connections

Hi Jeremy,

   I did try removing the CRYPTO_cleanup_all_ex_data() call, based on Dr.
Henson's response, but I still have the same problem.

   I  bypassed the SSL calls and used pure telnet to my server and there
were no signs of the application growing.

   Incidentally, I don't see this problem when running on a Solaris box.
Perhaps, it has something to do with the AIX environment.

Regards,
David

--
From: "Jeremy Hunt" 
Sent: Sunday, January 10, 2010 6:56 PM
To: 
Subject: Re: memory growing when using SSL connections


Hi David et al,

On reading the responses so far two new thoughts occur to me:

1. In view of Dr Henson's response, I wonder if removing the
CRYPTO_cleanup_all_ex_data() call in your loop will fix the problem.
Perhaps reusing the context structure after calling it may have the
reverse effect.

2. It may not be an SSL problem at all. Can you remove the SSL calls from
your application and see if you still get the memory leak? Your underlying
telnet application may be the cause.

Good Luck,

Jeremy

Dr. Stephen Henson wrote:



On Thu, Jan 07, 2010, David wrote:



Hi,

I'm using tn3270 sessions running over SSL. I may have up to 124
sessions activated concurrently, although I plan to get up to 250
sessions at some point.
Whenever the sessions are stopped and restarted, I notice intermittently
that memory grows in multiples of 4K bytes.
I'm running on AIX 5.1, 5.2 and 5.3 and using openssl-0.9.8l.  There
doesn't appear to be an obvious memory leak in either my application or
the OpenSSL stuff (all memory allocated when the sessions are started
are freed when the sessions are stopped).
Here's a summary of the code structure:

SSL_library_init();
meth = TLSv1_client_method();
RAND_seed();
ctx = SSL_CTX_new(meth);

while ([some telnet connection wants to do SSL])
{
ssl = SSL_new(ctx);
SSL_set_fd()
SSL_set_cipher_list();   SSL_set_connect_state();
SSL_connect();
do SSL_read(), SSL_write()
SSL_shutdown();
close FD;
SSL_free();
CRYPTO_cleanup_all_ex_data();
  }
 Any ideas would be appreciated. Thanks,
David



Some cleanups occur on each connection and others only when the application
shuts down.

You should *not* call CRYPTO_cleanup_all_ex_data() on every SSL connection
because later SSL connections may use it and end up not freeing data
correctly.

This is especially an issue if connections use compression (OpenSSL compiled
against zlib) as it is by default in some linux distributions.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org





--

"The most likely way for the world to be destroyed, most experts agree

RE: Unable to load CRL

2010-01-11 Thread Dave Thompson
> From: owner-openssl-us...@openssl.org On Behalf Of 
> Radha krishna Meduri -X (radmedur - HCL at Cisco)
> Sent: Friday, 08 January, 2010 01:13

> #include "openssl/ssl.h"
> #include "stdio.h"
> 
Aside: it's conventional and sometimes better to 
use < > format for system/std headers like stdio.h. 

> FILE* m_pfCRLFile=0;
> const char* m_pszURL;
> 
> const char* m_pszCRLFile = "test_pem.crl";
> 
> printf("systhesized file name= %s\n", m_pszCRLFile);
> 
Aside: I assume you mean 'synthesized', but I don't see how. Maybe 
this is left over from other, more complicated code.

> m_pfCRLFile = fopen( m_pszCRLFile , "wb");
> 
> if( !m_pfCRLFile )
> {
>  printf("Unable to open file %s for writing", m_pszCRLFile);
>  exit(0);
> }
> 
You open for writing, which empties the file, but then ...

> X509_CRL *pCRL=0, *pTempCRL = 0;
> 
> pCRL = d2i_X509_CRL_fp( m_pfCRLFile, &pTempCRL );
> 
... try to read. That can't work.

Also: you don't need to use both the &pTempCRL argument 
and the return value pCRL. Either one is sufficient.

> if( !pCRL )
> {
> printf("Unable to read using d2i_X509_CRL_fp\n");
> pCRL = PEM_read_X509_CRL(m_pfCRLFile, &pTempCRL, NULL, 0);

Ditto, and ditto.

> }
> 
> if( !pCRL )
> {
> printf("Unable to read CRL file\n" );
> exit(0);
> }
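
For reference, a corrected minimal sketch (illustrative only): open the file
for reading, try DER first, then rewind and try PEM:

#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/pem.h>

FILE *fp = fopen("test_pem.crl", "rb");       /* read, not write */
X509_CRL *crl = NULL;

if (fp != NULL) {
    crl = d2i_X509_CRL_fp(fp, NULL);          /* try DER first */
    if (crl == NULL) {
        rewind(fp);                           /* d2i consumed some input */
        crl = PEM_read_X509_CRL(fp, NULL, NULL, NULL);
    }
    fclose(fp);
}
if (crl == NULL)
    printf("Unable to read CRL file\n");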




RE: trying to understand ECDHE operations

2010-01-11 Thread Dave Thompson
> From: owner-openssl-us...@openssl.org On Behalf Of Michael D
> Sent: Friday, 08 January, 2010 08:53

> Based on the old message snippet below, two questions:
> 1) Are the session keys then used by the symmetric cipher 
> going forward?
> Or is there another step used to get those keys.  

Session keys are for symmetric data cipher AND data HMAC.
(TLS actually calls the HMAC secret parameter "secret" 
although RFC 2104 and IME most other usage calls it "key".)

> For example, if I am using 192 bit ECC, and using AES-128, what do
> I use for the 128 bit key?   
> 
Two specified 128-bit chunks of TLSPRF(master, otherstuff), 
where master = TLSPRF(premaster, otherstuff).

See RFC 4346 8.1 and 6.3 (and 5) as modified by 4492 5.10.
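
Paraphrasing those sections (a sketch, not the normative text): with ECDHE
the premaster secret is the x-coordinate of the agreed EC point, and then

    master_secret = PRF(premaster, "master secret",
                        ClientHello.random + ServerHello.random)[0..47]

    key_block     = PRF(master_secret, "key expansion",
                        ServerHello.random + ClientHello.random)

key_block is carved up, in order, into client_write_MAC_secret,
server_write_MAC_secret, client_write_key, server_write_key (plus IVs where
the cipher suite needs them); the two write keys are the 128-bit chunks
referred to above.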

> If I used AES 256, would I need a larger number of bits in 
> the ECC curve?
> 
You don't NEED it. TLS key derivation generates enough 
key material, regardless of the size of premaster.
However, premaster must contain enough entropy to support 
the desired security; per Kerckhoffs's principle, everything else 
is knowable by the attacker. 

128bit symmetric is plenty for many years; like Schneier's 
"stake" it's stronger than the rest of your system. 
So AES256 versus 128 should only be needed for interop and/or 
buzzword compliance -- which can be useful, and (here) doesn't 
hurt. If you really want a 128-bit or higher security level, 
you do need to use an EC curve for key agreement that is big enough 
to provide that security level. According to NIST SP 800-57, 
EC of 2N bits is roughly equivalent to symmetric of N bits, 
so even for "full" AES128 you should be using EC>=256.
But you may find other experts with different judgements.

And you must also have enough "good" random-data generation 
involved (in ECDHE transients, or as noted static-ECDH nonces).
I believe in practice this has more often been the weakness 
(and the target of successful attacks) than the actual cryptography. But it's 
harder to analyze and basically impossible to prove.

> 
> 2) The last part below mentions how SSL makes sessions unique 
> with a nonce. How is that done, and/or where can I read 
> about it?
> 
Much of RFC 4346.


> > Static aka fixed ECDH (or DH) does use the certified key as
> > the
> > server part of keyagreement. Client similarly if client
> > auth 
> > i.e. cert is used, which it usually isn't; but even though
> > that 
> > gives a fixed (EC)DH result, SSL still makes the
> > sessionkeys 
> > unique by adding per-session/handshake nonces.
> > 




Re: Re-negotiation handshake failed: Not accepted by client!?

2010-01-11 Thread Kyle Hamilton
The most succinct answer is this: the server and client cryptographically
verify the session they renegotiate within only at the time of the initial
negotiation, and at renegotiation time they never verify that they're the
same two parties the session was originally established between.

This allows a MITM to initiate a session with a server, inject data, and
then splice in a client that has connected to the MITM.  For the attack to
work, additional attacks (such as IP network untrustworthiness, IP
rerouting, or DNS spoofing) are necessary.  Unfortunately, network
untrustworthiness is something that's more common than before.  Many places
offer free wifi, and the installation of a proxy to perform this attack is
only a little more complex than child's play.

The worst thing about this attack is that it provides no means for
either the client or server to detect it.  The client will receive the
server's correct certificate, the same way it expects to.  The server
will receive either the client's correct certificate or no certificate
(as the client decides), the same way it expects to.  There is no way
to identify this attack at the TLS protocol level.  Applications can
mitigate the effect of the attack in several ways.  The most important
example is webservers, which could (for example) identify any Content
(defined as the portion of data which is separated from the headers by
the sequence "\r\n\r\n", and which starts immediately after the last \n
of the separator) sent from the client to the server that looks like
the start of an HTTP request after the header has already been
transmitted, and deny it.  There is no mechanism defined in HTML by
which a form POST begins its data with (^POST ) or (^GET ), so any POST
data from a client which contains those strings -- along with other
HTTP method strings -- is not valid and should be given a 400 Bad
Request, preferably with a Location: header so that the client is
redirected, within the cover of the 'true' session between the 'true'
client and server, to a page which describes how the error was
detected, what it means, and what the client should do (change to
another network).  In the preferable implementation, it would redirect
to a location which would accept the POST data, send it to /dev/null,
and then print out the information for the client.

But I'm not an HTTP/HTML guru, and I have not evaluated the security
of this.  (Seriously, I didn't think of this until I started writing
this email.  But the reason for accepting POST data, then voiding it,
is to provide a mechanism for the semantics of the Location: redirect
to still function.  It states that when posting to a location, if a
client receives a Location header, it should post the data to the
Location as well.)



On Mon, Jan 11, 2010 at 5:59 AM, Steffen DETTMER
 wrote:
> Hi all!
>
> I miss something around the Re-negotiation flaw and fail to
> understand why it is a flaw in TLS. I hope I miss just a small
> piece. Could anyone please enlight me?
>
> * Kyle Hamilton wrote on Thu, Jan 07, 2010 at 16:22 -0800:
>> It is also, though, undeniably a flaw in the TLS specification
>> that's amplified by the clients of the libraries that implement
>> it -- because the clients don't understand the concept of
>> "security veil", the TLS implementations tend to provide a raw
>> stream of bytes (akin to a read()/write() pair) without the
>> application necessarily being aware of the change.
>
> Could it be considered that a miss-assumption about SSL/TLS
> capabilities caused this situation?

Nobody thought of this attack until late 2009, so it was mis-assumed
that the protocol was as secure as it was thought to be (since
1995/1998/2001+).

> I think since TLS should be considered a layer, its payload
> should not make any assumptions to it (or vice versa). But in the
> moment some application `looks to the TLS state' and tries to
> associate this information to some data in some buffer, I think
> it makes a mistake.

No, it doesn't.  The reason why is inherent with authentication,
authorization, and accountability: data which is accepted from an
unauthenticated source MUST BE considered potentially hazardous as a
matter of course.  It's rather telling that Microsoft changed the
meaning of unauthenticated connections to its RPC server in Windows NT
4.0 Service Pack 3.  Prior to NT4SP3, unauthenticated data was
automatically mapped into the only realm that existed that could hold
it: the Everyone group.  NT4SP3 created "Unauthenticated Users", and
provided a means for "Unauthenticated Users" to be excluded from
"Everyone" (which essentially turned "Everyone" into "Authenticated
Users" without having to change the Everyone SID on all the objects in
the system).

Any system that uses TLS is automatically attempting to impose some
form of security on the communication (be it 'security from the
sysadmin who runs the network, without any regard for whoever is at
the other end' or 'a bank imposing its policies on the

RE: memory growing when using SSL connections

2010-01-11 Thread Kevin Regan
I had a similar issue that was fixed by moving the CRYPTO_cleanup_all_ex_data 
command to the end of the application (as has been suggested).  Have you tried 
running Valgrind with the application?  This may tell you where the memory leak 
is occurring.

--Kevin

p.s.  You will need to compile OpenSSL with the -DPURIFY option (and you may 
need to grab a recent patch that I mailed to openssl-dev to get -DPURIFY 
working properly).
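
For what it's worth, the build and run steps are roughly the following
(illustrative; 'your_app' is a placeholder, and the valgrind step only
applies on platforms valgrind supports):

./config -DPURIFY
make

valgrind --leak-check=full ./your_app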

--Kevin

-Original Message-
From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of David
Sent: Monday, January 11, 2010 11:48 AM
To: openssl-users@openssl.org
Subject: Re: memory growing when using SSL connections

Hi Jeremy,

I did try removing the CRYPTO_cleanup_all_ex_data() call, based on Dr. 
Henson's response, but I still have the same problem.

I  bypassed the SSL calls and used pure telnet to my server and there 
were no signs of the application growing.

Incidentally, I don't see this problem when running on a Solaris box. 
Perhaps, it has something to do with the AIX environment.

Regards,
David

--
From: "Jeremy Hunt" 
Sent: Sunday, January 10, 2010 6:56 PM
To: 
Subject: Re: memory growing when using SSL connections

> Hi David et al,
>
> On reading the responses so far two new thoughts occur to me:
>
> 1. In view of Dr Henson's response, I wonder if removing the 
> CRYPTO_cleanup_all_ex_data() call in your loop will fix the problem. 
> Perhaps reusing the context structure after calling it may have the 
> reverse effect.
>
> 2. It may not be an SSL problem at all. Can you remove the SSL calls from 
> your application and see if you still get the memory leak? Your underlying 
> telnet application may be the cause.
>
> Good Luck,
>
> Jeremy
>
> Dr. Stephen Henson wrote:
>>
>>
>> On Thu, Jan 07, 2010, David wrote:
>>
>>
>>> Hi,
>>>
>>> I'm using tn3270 sessions running over SSL. I may have up to 124 
>>> sessions activated concurrently, although I plan to get up to 250 
>>> sessions at some point.
>>> Whenever the sessions are stopped and restarted, I notice intermittently 
>>> that memory grows in multiples of 4K bytes.
>>> I'm running on AIX 5.1, 5.2 and 5.3 and using openssl-0.9.8l.  There 
>>> doesn't appear to be an obvious memory leak in either my application or 
>>> the OpenSSL stuff (all memory allocated when the sessions are started 
>>> are freed when the sessions are stopped).
>>> Here's a summary of the code structure:
>>>
>>> SSL_library_init();
>>> meth = TLSv1_client_method();
>>> RAND_seed();
>>> ctx = SSL_CTX_new(meth);
>>>
>>> while ([some telnet connection wants to do SSL])
>>> {
>>> ssl = SSL_new(ctx);
>>> SSL_set_fd()
>>> SSL_set_cipher_list();   SSL_set_connect_state();
>>> SSL_connect();
>>> do SSL_read(), SSL_write()
>>> SSL_shutdown();
>>> close FD;
>>> SSL_free();
>>> CRYPTO_cleanup_all_ex_data();
>>>   }
>>>  Any ideas would be appreciated. Thanks,
>>> David
>>>
>>
>> Some cleanups occur on each connection and others only when the 
>> application
>> shuts down.
>>
>> You should *not* call CRYPTO_cleanup_all_ex_data() on every SSL 
>> connection
>> because later SSL connections may use it and end up not freeing data
>> correctly.
>>
>> This is especially an issue if connections use compression (OpenSSL 
>> compiled
>> against zlib) as it is by default in some linux distributions.
>>
>> Steve.
>> --
>> Dr Stephen N. Henson. OpenSSL project core developer.
>> Commercial tech support now available see: http://www.openssl.org
>>
>>
>
>
> -- 
>
> "The most likely way for the world to be destroyed, most experts agree, is 
> by accident. That's where we come in; we're computer professionals. We 
> cause accidents." -- Nathaniel Borenstein, co-creator of MIME


Re: memory growing when using SSL connections

2010-01-11 Thread David

Hi Jeremy,

   I did try removing the CRYPTO_cleanup_all_ex_data() call, based on Dr. 
Henson's response, but I still have the same problem.


   I  bypassed the SSL calls and used pure telnet to my server and there 
were no signs of the application growing.


   Incidentally, I don't see this problem when running on a Solaris box. 
Perhaps, it has something to do with the AIX environment.


Regards,
David

--
From: "Jeremy Hunt" 
Sent: Sunday, January 10, 2010 6:56 PM
To: 
Subject: Re: memory growing when using SSL connections


Hi David et al,

On reading the responses so far two new thoughts occur to me:

1. In view of Dr Henson's response, I wonder if removing the 
CRYPTO_cleanup_all_ex_data() call in your loop will fix the problem. 
Perhaps reusing the context structure after calling it may have the 
reverse effect.


2. It may not be an SSL problem at all. Can you remove the SSL calls from 
your application and see if you still get the memory leak? Your underlying 
telnet application may be the cause.


Good Luck,

Jeremy

Dr. Stephen Henson wrote:



On Thu, Jan 07, 2010, David wrote:



Hi,

I'm using tn3270 sessions running over SSL. I may have up to 124 
sessions activated concurrently, although I plan to get up to 250 
sessions at some point.
Whenever the sessions are stopped and restarted, I notice intermittently 
that memory grows in multiples of 4K bytes.
I'm running on AIX 5.1, 5.2 and 5.3 and using openssl-0.9.8l.  There 
doesn't appear to be an obvious memory leak in either my application or 
the OpenSSL stuff (all memory allocated when the sessions are started 
are freed when the sessions are stopped).

Here's a summary of the code structure:

SSL_library_init();
meth = TLSv1_client_method();
RAND_seed();
ctx = SSL_CTX_new(meth);

while ([some telnet connection wants to do SSL])
{
ssl = SSL_new(ctx);
SSL_set_fd()
SSL_set_cipher_list();   SSL_set_connect_state();
SSL_connect();
do SSL_read(), SSL_write()
SSL_shutdown();
close FD;
SSL_free();
CRYPTO_cleanup_all_ex_data();
  }
 Any ideas would be appreciated. Thanks,
David



Some cleanups occur on each connection and others only when the application
shuts down.

You should *not* call CRYPTO_cleanup_all_ex_data() on every SSL connection
because later SSL connections may use it and end up not freeing data
correctly.

This is especially an issue if connections use compression (OpenSSL compiled
against zlib) as it is by default in some linux distributions.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org





--

"The most likely way for the world to be destroyed, most experts agree, is 
by accident. That's where we come in; we're computer professionals. We 
cause accidents." -- Nathaniel Borenstein, co-creator of MIME





RE: Re-negotiation handshake failed: Not accepted by client!?

2010-01-11 Thread David Schwartz

Steffen Dettmer wrote:

> Could it be considered that a miss-assumption about SSL/TLS
> capabilities caused this situation?

Only with hindsight.

 
> I think since TLS should be considered a layer, its payload
> should not make any assumptions to it (or vice versa). But in the
> moment some application `looks to the TLS state' and tries to
> associate this information to some data in some buffer, I think
> it makes a mistake.

Well, then TLS is basically useless. A secure connection whose properties I 
cannot trust is not particularly useful. If I receive "foo" over the connection 
and cannot ever know where the middle "o" came from, what can I do with the 
"foo"? Answer -- nothing.


> When using HTTP over IPSec, I think no one ever had the idea to
> open or block URLs based on the currently used IPSec
> certificate...

I'm not sure I get the point of your analogy.

> Am I wrong when I think that those level-mixing causes the
> trouble? If a user (by browsers default configuration) first
> accepts some sillyCA or www.malicious.com but then later does not
> accept it any longer and expects the trust that initially was
> given to be taken back in retroperspective and finds this failing
> and unsafe (impossible), is this really a TLS weakness?

No, that's not, because in that case the client's behavior is objectively 
unreasonable. But looking to the state of the current connection to decide what 
privileges to give it is part of TLS's intended use.

 
> It seems it is, so what do I miss / where is my mistake in
> thinking?

The mistake is in thinking that any security protocol is useful as a security 
measure on end A if the security parameters can be changed by end B at any time 
with no notification to higher levels on end A.
 
> Now I ask myself what happens if I connect via HTTPS and read the
> crypto information as displayed by my browser and decide to
> accept it - but after a renegiotation different algorithms are
> used. As far as I understand, I would get absolutely no notice
> about that. I could find myself suddenly using a 40 bit export
> grade or even a NULL chipher to a different peer (key) without
> any notice! If I understand correctly, even if I re-verify the
> contents of the browsers security information pane right before
> pressing a SUBMIT button, even then the data could be transferred
> with different parameters if a re-negotiation happens at the
> `right' time!

That could be argued to be a bug. Ideally, a renegotiation should not be 
permitted to reduce the security parameters unless, at absolute minimum, the 
intention to renegotiate is confirmed at both ends using at least the security 
level already negotiated.
 
> If this would be true, this means the information firefox shows
> up when clicking the lock icon does not tell anything about the
> data I will sent; at most it can tell about the past, how the
> page was loaded, but not reliable, because maybe it changed for
> the last part of the page.
> 
> Where is my mistaking in thinking?

Correct, and to the extent TLS permits a renegotiation to reduce the security 
parameters without confirming the intention to reduce those parameters at the 
current level, TLS is broken. If the two endpoints negotiate a particular level 
of security, no attacker should be able to reduce that level of security within 
the connection without having to break the current level of security.

That is, if the two ends negotiate 1,024-bit RSA and 256-bit AES, then an 
attacker should not be able to renegotiate a lower (or different) security 
within that connection without having to break either 1,024-bit RSA, 256-bit 
AES, or one of the hard algorithms inside TLS itself (such as SHA1). TLS 
permitted an attacker to do this, and so was deemed broken.

DS





Using a CRL in an OpenSSL program

2010-01-11 Thread ronald braswell
Hi,

I have published a CRL using openssl.   How do I use it in a custom OpenSSL
server to reject certificates in the CRL?   Is it as simple as registering a
file in the CTX, or do I have to read in the CRL and examine it every time a
client certificate is presented?

Thanks,

Ron Braswell


RE: Unable to load CRL

2010-01-11 Thread Radha krishna Meduri -X (radmedur - HCL at Cisco)

Hi Shane

Thank you. This is working perfectly, but why was my code failing?

I used the d2i_X509_CRL_fp API instead of d2i_X509_CRL_bio. Any idea what
the difference is? Am I doing anything wrong in my program?

Thanks
Radhakrishna.

-Original Message-
From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Shane Steidley
Sent: Saturday, January 09, 2010 2:32 AM
To: openssl-users@openssl.org
Subject: RE: Unable to load CRL

This is straight out of the openssl verify program, and seems to be
exactly what you need:

static X509_CRL *load_crl(char *infile, int format)
{
    X509_CRL *x = NULL;
    BIO *in = NULL;

    in = BIO_new(BIO_s_file());
    if (in == NULL)
    {
        ERR_print_errors(bio_err);
        goto end;
    }

    if (infile == NULL)
        BIO_set_fp(in, stdin, BIO_NOCLOSE);
    else
    {
        if (BIO_read_filename(in, infile) <= 0)
        {
            perror(infile);
            goto end;
        }
    }
    if (format == FORMAT_ASN1)
        x = d2i_X509_CRL_bio(in, NULL);
    else if (format == FORMAT_PEM)
        x = PEM_read_bio_X509_CRL(in, NULL, NULL, NULL);
    else
    {
        BIO_printf(bio_err, "bad input format specified for input crl\n");
        goto end;
    }
    if (x == NULL)
    {
        BIO_printf(bio_err, "unable to load CRL\n");
        ERR_print_errors(bio_err);
        goto end;
    }

end:
    BIO_free(in);
    return (x);
}
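
A hypothetical caller, for context (FORMAT_PEM/FORMAT_ASN1 and bio_err come
from the openssl apps framework, and ctx is assumed to be your SSL_CTX):

X509_CRL *crl = load_crl("test_pem.crl", FORMAT_PEM);
if (crl != NULL)
{
    X509_STORE *store = SSL_CTX_get_cert_store(ctx);
    X509_STORE_add_crl(store, crl);   /* the store takes its own reference */
    X509_CRL_free(crl);               /* so we can drop ours */
}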


Re: Re-negotiation handshake failed: Not accepted by client!?

2010-01-11 Thread Steffen DETTMER
Hi all!

I'm missing something about the re-negotiation flaw and fail to
understand why it is a flaw in TLS. I hope I'm missing just a small
piece. Could anyone please enlighten me?

* Kyle Hamilton wrote on Thu, Jan 07, 2010 at 16:22 -0800:
> It is also, though, undeniably a flaw in the TLS specification
> that's amplified by the clients of the libraries that implement
> it -- because the clients don't understand the concept of
> "security veil", the TLS implementations tend to provide a raw
> stream of bytes (akin to a read()/write() pair) without the
> application necessarily being aware of the change.

Could it be considered that a mistaken assumption about SSL/TLS
capabilities caused this situation?

I think since TLS should be considered a layer, its payload
should not make any assumptions about it (or vice versa). But the
moment some application `looks at the TLS state' and tries to
associate this information with some data in some buffer, I think
it makes a mistake.

When using HTTP over IPSec, I think no one ever had the idea to
open or block URLs based on the currently used IPSec
certificate...

Am I wrong to think that this level-mixing causes the
trouble? If a user (by the browser's default configuration) first
accepts some sillyCA or www.malicious.com but then later does not
accept it any longer, and expects the trust that initially was
given to be taken back retroactively, and finds this failing
and unsafe (impossible), is this really a TLS weakness?

It seems it is, so what do I miss / where is my mistake in
thinking?

I also wondered a lot about the Extended Validation attack from
last year; I had assumed that in `EV mode' a browser's tab is
completely isolated from all others and, second, that no
connectivity is possible other than with the locked EV parameters,
but as it turned out this is not the case. Everything can change
but the green indicator remains. Strange...

Now I ask myself what happens if I connect via HTTPS and read the
crypto information as displayed by my browser and decide to
accept it - but after a renegotiation different algorithms are
used. As far as I understand, I would get absolutely no notice
about that. I could find myself suddenly using a 40-bit export
grade or even a NULL cipher to a different peer (key) without
any notice! If I understand correctly, even if I re-verify the
contents of the browsers security information pane right before
pressing a SUBMIT button, even then the data could be transferred
with different parameters if a re-negotiation happens at the
`right' time!

If this were true, it would mean the information Firefox shows
when clicking the lock icon does not say anything about the
data I will send; at most it can tell about the past, how the
page was loaded, and even that not reliably, because maybe it
changed for the last part of the page.

Where is my mistake in thinking?

oki,

Steffen



Re: Remove RSA from EVP_PKEY structure

2010-01-11 Thread rale77



rale77 wrote:
> 
> Hello, 
> 
> How can I remove RSA structure form EVP_PKEY previosly added to EPP_PKEY
> with EVP_PKEY_assign_RSA function? I have one RSA object named rsa  and
> EVP_PKEY object named evp and their relation is : 
> rsa = evp->pkey.rsa 
> How to remove their bound and then delete EVP_PKEY (with
> EVP_PKEY_free(evp))  without deleting rsa. 
> 
>  
> 


if someone knows, please give me the code :)


Re: How to encrypt .doc file using rsautl command using openssl on windows ?

2010-01-11 Thread Shane Steidley
Symmetric encryption would be a better idea for a .doc file.  You can always
generate a random key for the symmetric encryption and encrypt that key
with RSA if you need to store it.  I generally use gpg for
symmetric encryption.
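
For example, a rough sketch of that hybrid approach using only the openssl
command line (file names are illustrative):

# one-off random key for the symmetric step
openssl rand -base64 -out sym.key 48

# encrypt the document with AES-256-CBC using that key
openssl enc -aes-256-cbc -salt -in Mydoc.doc -out Mydoc.doc.enc -pass file:sym.key

# protect the small key file with RSA
openssl rsautl -encrypt -pubin -inkey public.pem -in sym.key -out sym.key.enc

The recipient reverses it with rsautl -decrypt and enc -d.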

On Mon, Jan 11, 2010 at 4:45 AM, Cristian Thiago Moecke <
cont...@cristiantm.com.br> wrote:
>
> The error is because of the size of the data you are trying to encrypt.
Probably the txt file is smaller than your key, so you can encrypt it. But
the doc probably isn´t.
> The data you are encrypting must be smaller than the key. If you want to
encrypt larger things, you should split the file befor encyprting. I don´t
think openssl can do that for you 9not sure, but I don´t think it will).
Anyway, you should not use RSA for large files. RSA is too slow for that.
>
> 2010/1/11 
>>
>> Hi,
>>   I am trying to encrypt doc file using rsautl but it is giving me error
as follows
>> OpenSSL> rsautl -encrypt -inkey public.pem -pubin -in Mydoc.doc -out
Myfile.ssl
>> Loading 'screen' into random state - done
>> RSA operation error
>> 1184:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too
large for
>>  key size:.\crypto\rsa\rsa_pk1.c:151:
>> error in rsautl
>>
>>  Why it is giving me error though same command works me well for .txt
file ..
>>  Waiting for your reply ..
>>  Thanks in ad
>>  Bhagyashri Katole
>>  C-DAC, Pune
>


Re: Remove RSA from EVP_PKEY structure

2010-01-11 Thread Dr. Stephen Henson
On Mon, Jan 11, 2010, rale77 wrote:

> 
> Hello, 
> 
> How can I remove RSA structure form EVP_PKEY previosly added to EPP_PKEY
> with EVP_PKEY_assign_RSA function? I have one RSA object named rsa  and
> EVP_PKEY object named evp and their relation is : 
> rsa = evp->pkey.rsa 
> How to remove their bound and then delete EVP_PKEY (with EVP_PKEY_free(evp)) 
> without deleting rsa. 
> 

If you use EVP_PKEY_set1_RSA() instead of EVP_PKEY_assign_RSA(), the reference
count of the added RSA structure is incremented and you can free up the
EVP_PKEY structure later without freeing up the referenced RSA structure.
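
A minimal sketch of that pattern (assuming rsa was created elsewhere; error
handling trimmed):

#include <openssl/evp.h>
#include <openssl/rsa.h>

static EVP_PKEY *wrap_rsa(RSA *rsa)
{
    EVP_PKEY *evp = EVP_PKEY_new();

    if (evp == NULL || EVP_PKEY_set1_RSA(evp, rsa) != 1) {
        EVP_PKEY_free(evp);
        return NULL;
    }
    return evp;   /* holds its own reference to rsa */
}

/* later:
 *   EVP_PKEY_free(evp);   -- drops only the EVP_PKEY's reference
 *   RSA_free(rsa);        -- rsa stays valid until you free it yourself
 */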

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org


Re: How to encrypt .doc file using rsautl command using openssl on windows ?

2010-01-11 Thread Cristian Thiago Moecke
The error is because of the size of the data you are trying to encrypt.
Probably the txt file is smaller than your key, so you can encrypt it. But
the doc probably isn't.

The data you are encrypting must be smaller than the key. If you want to
encrypt larger things, you would have to split the file before encrypting.
I don't think openssl can do that for you (not sure, but I don't think it
will). Anyway, you should not use RSA for large files. RSA is too slow for
that.
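
To put rough numbers on the limit (with the PKCS#1 v1.5 padding that rsautl
uses by default): a 2048-bit key works on 256-byte blocks and the padding
costs at least 11 bytes, so the largest input that fits in one operation is
256 - 11 = 245 bytes; a 1024-bit key tops out at 117 bytes. A small .txt can
fit, a .doc almost never will.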

2010/1/11 

> Hi,
>   I am trying to encrypt doc file using rsautl but it is giving me error as
> follows
> OpenSSL> rsautl -encrypt -inkey public.pem -pubin -in Mydoc.doc -out
> Myfile.ssl
> Loading 'screen' into random state - done
> RSA operation error
> 1184:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too
> large for
>  key size:.\crypto\rsa\rsa_pk1.c:151:
> error in rsautl
>
>  Why it is giving me error though same command works me well for .txt file
> ..
>  Waiting for your reply ..
>  Thanks in ad
>  Bhagyashri Katole
>  C-DAC, Pune
>


Remove RSA from EVP_PKEY structure

2010-01-11 Thread rale77

Hello, 

How can I remove the RSA structure from an EVP_PKEY it was previously added to
with the EVP_PKEY_assign_RSA function? I have one RSA object named rsa and an
EVP_PKEY object named evp, and their relation is:
rsa = evp->pkey.rsa
How do I remove their bond and then delete the EVP_PKEY (with EVP_PKEY_free(evp))
without deleting rsa?

 


How to encrypt .doc file using rsautl command using openssl on windows ?

2010-01-11 Thread bhagyashri . bijwe
Hi,
   I am trying to encrypt a .doc file using rsautl, but it is giving me an
error as follows:
OpenSSL> rsautl -encrypt -inkey public.pem -pubin -in Mydoc.doc -out Myfile.ssl
Loading 'screen' into random state - done
RSA operation error
1184:error:0406D06E:rsa routines:RSA_padding_add_PKCS1_type_2:data too large for
 key size:.\crypto\rsa\rsa_pk1.c:151:
error in rsautl

  Why is it giving me this error when the same command works well for a .txt file?
 Waiting for your reply.
  Thanks in advance,
  Bhagyashri Katole
  C-DAC, Pune  


Re: OpenSSL and distributed embedded systems

2010-01-11 Thread Jeremy Hunt

Thomas Taranowski wrote:


I think this question may be more general than OpenSSL, and will 
expose my SSL n00biness, but I'm not sure where to turn. 

I'm working on integrating the use of openssl on an embedded target 
which I have widely distributed in the field.  The issue I have is 
that each target is nestled within someone else's private network, and 
I have no control over the IP address, nor domain name assignment, yet 
I still want to be able to setup secure web communications between the 
target and client.  What I want is to use a single certificate request 
file, and have a single private key for each of my deployed servers, 
each of which will have a different domain name and IP address. 

From what I understand, using the same certificate and server private 
key is not possible, so I have to generate and get signed a 
certificate for each and every one of the thousand units I have 
deployed.  To compound the difficulty, since these are small embedded 
targets, the certificate and key needs to be compiled into the target 
code at build time, so I have to make 1000 different builds, one for 
each target.  This just seems wrong. 


Can someone help me get my learn on?


Thomas Taranowski
Certified netburner consultant
baringforge.com 


Hi Thomas,

You do not have to compile anything unique if you have some reliable 
persistent storage and an id for each device that is unique to 
the device and not derived from its location. If these requirements are 
satisfied, then a solution to this problem, as stated, is to have a 
certificate authority ready to sign certificate requests. Each device 
could have a canned library to pick up the unique identifier, add any 
other relevant information which may be derived from its location, and 
use this to create a certificate request to submit to the certificate 
authority to sign. The certificate authority can be one specified 
location that is independent from the location of the devices. The 
signed certificate can then be stored locally and used for later SSL 
communications for the lifetime of the certificate.
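
As a rough illustration of that flow with the openssl command line (file
names, the device id and the CA configuration are placeholders; a canned
library would do the equivalent through the API):

# on the device: fresh key plus a request carrying the device's unique id
openssl req -new -newkey rsa:2048 -nodes \
    -keyout device.key -out device.csr -subj "/CN=device-1234"

# at the central CA: sign the request and return the certificate
openssl ca -config ca.cnf -in device.csr -out device.crt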


Some issues to be aware of:
1. Secure storage, I assume that you want authentication of the device 
to prove it is that device.
2. Certificate revocation lists. Ask yourself whether you want to be able to 
mark some certificates as invalid in the future.

3. Uptime of and access to the certificate authority.
4. Are you concerned about some devices impersonating others? How 
important is authentication in your scheme? From your comments about 
reusing a certificate and private key, it seems like this is not 
important to you.


Lastly, is it SSL you want, or something like Kerberos or IPsec? What 
are your requirements? SSL or the others I mentioned may be too heavy 
duty or too high level for your application.


Regards,

Jeremy


Re: memory growing when using SSL connections

2010-01-11 Thread Jeremy Hunt

Hi David et al,

On reading the responses so far two new thoughts occur to me:

1. In view of Dr Henson's response, I wonder if removing the 
CRYPTO_cleanup_all_ex_data() call in your loop will fix the problem. 
Perhaps reusing the context structure after calling it may have the 
reverse effect.


2. It may not be an SSL problem at all. Can you remove the SSL calls 
from your application and see if you still get the memory leak? Your 
underlying telnet application may be the cause.


Good Luck,

Jeremy

Dr. Stephen Henson wrote:



On Thu, Jan 07, 2010, David wrote:

  

Hi,

I'm using tn3270 sessions running over SSL. I may have up to 124 sessions activated concurrently, although I plan to get up to 250 sessions at some point. 

Whenever the sessions are stopped and restarted, I notice intermittently that memory grows in multiples of 4K bytes. 

I'm running on AIX 5.1, 5.2 and 5.3 and using openssl-0.9.8l.  
There doesn't appear to be an obvious memory leak in either my application or the OpenSSL stuff (all memory allocated when the sessions are started are freed when the sessions are stopped).

Here's a summary of the code structure:

SSL_library_init();
meth = TLSv1_client_method();
RAND_seed();
ctx = SSL_CTX_new(meth);

while ([some telnet connection wants to do SSL])
{
ssl = SSL_new(ctx);
SSL_set_fd()
SSL_set_cipher_list();   
SSL_set_connect_state();

SSL_connect();
do SSL_read(), SSL_write()
SSL_shutdown();
close FD;
SSL_free();
CRYPTO_cleanup_all_ex_data();
  }
 
Any ideas would be appreciated. 
Thanks,

David



Some cleanups occur on each connection and others only when the application
shuts down.

You should *not* call CRYPTO_cleanup_all_ex_data() on every SSL connection
because later SSL connections may use it and end up not freeing data
correctly.

This is especially an issue if connections use compression (OpenSSL compiled
against zlib) as it is by default in some linux distributions.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org

  



--

"The most likely way for the world to be destroyed, most experts agree, 
is by accident. That's where we come in; we're computer professionals. 
We cause accidents." -- Nathaniel Borenstein, co-creator of MIME
