How to remove prior FIPS build option

2010-01-14 Thread Charles Belov

I attempted to build openssl using the FreeBSD port of openssl.

Options are set using make config as follows:

Options for openssl 0.9.8l_2
[ ] I386  Use optimized assembler for 80386
[X] SSE2  Use runtime SSE2 detection
[X] ZLIB  Build with zlib compression

and the Makefile shows

PORTVERSION=0.9.8l
PORTREVISION=   2

When I tried to make this a few days ago, I believe there were two 
additional options: FIPS and SCTP.  I tried selecting SCTP, which didn't 
work; then I tried selecting FIPS, and got the error:


(after making all in crypto/pqueue...)

making all in fips...
make: don't know how to make /usr/local/ssl/fips-1.0/lib/fipscanister.o. 
Stop

*** Error code 2

Stop in /var/build/ports/security/openssl/work/openssl-0.9.8l/fips.
*** Error code 1

Stop in /var/build/ports/security/openssl/work/openssl-0.9.8l.
*** Error code 1

Stop in /ports/security/openssl.
*** Error code 1

thus killing the make.  I set it aside at that time, then came back to 
it today.  Even after doing the make config I continue to get the 
fips-related errors.


I see from the FreeBSD ports Web site that there was in fact a Makefile 
revision 1.161 yesterday to remove FIPS and SCTP support.  So I'm 
guessing that this is why I no longer see FIPS and SCTP as options.  But 
it also seems that make is holding on to my prior setting of the FIPS 
option.


So, my question is, how do I obliterate this obsolete option, so that I 
can make openssl without the FIPS error?


Thank you,
Charles Belov

__
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                       openssl-users@openssl.org
Automated List Manager                          majord...@openssl.org


RANDFILE in openssl.conf

2010-01-14 Thread Eisenacher, Patrick
Hi,

specifying RANDFILE in openssl.conf within a section as per 
http://www.openssl.org/docs/apps/ca.html does not work. Instead, the randfile 
is created in the directory pointed to by the environment variable HOME.

Only when I move the RANDFILE directive in front of the first section in 
openssl.conf, i.e., as the very first statement, is the directive respected.

If this is expected behaviour, the documentation should be fixed. Otherwise, 
this looks like a bug.

My environment is openssl v0.9.8k on cygwin.
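For comparison, here is a minimal openssl.conf sketch of the two placements discussed; the section names follow the stock config, and the RANDFILE values are purely illustrative:

```
# unnamed default section -- the only placement reported to be honoured
RANDFILE = $ENV::HOME/.rnd

[ ca ]
default_ca = CA_default

[ CA_default ]
dir      = ./demoCA
# RANDFILE = $dir/private/.rand   # documented in ca(1), but as reported
                                  # above this placement appears ignored
```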

Thanks,
Patrick Eisenacher


Re: How to remove prior FIPS build option

2010-01-14 Thread Kyle Hamilton
You must download the openssl-fips.1.2.0.tar.gz package, and follow
the instructions in the companion Security Policy *precisely*.  That
is the only package that can build a fipscanister.o.

Once the fipscanister.o exists and is installed properly, then you can
build with the fips option.  Not before.

And to fix the fips problem in your source tree: 'make clean'

-Kyle H
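On the FreeBSD ports side specifically, the OPTIONS saved by the earlier `make config` run can be discarded along with the stale work tree. A sketch using the standard ports targets (paths assume the default ports tree location):

```shell
# Discard the OPTIONS chosen in the earlier `make config` run
# (removes the saved options, e.g. under /var/db/ports/openssl)
cd /usr/ports/security/openssl
make rmconfig

# Remove the stale work directory left over from the failed FIPS build
make clean

# Re-select options from scratch, then rebuild
make config
make install clean
```

Since the updated port Makefile no longer offers FIPS at all, the FIPS dialog entry simply won't reappear after `make rmconfig`.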



Re: impact of client certificates to re-negotiation attack

2010-01-14 Thread Kyle Hamilton
On Wed, Jan 13, 2010 at 6:34 AM, Steffen DETTMER
steffen.dett...@ingenico.com wrote:
 * aerow...@gmail.com wrote on Tue, Jan 12, 2010 at 12:29 -0800:
 On Tue, Jan 12, 2010 at 3:12 AM, Steffen DETTMER
 The problem is this:

 The attacker makes a connection to a TLS-enabled server,
 sending no certificate.  It sends a command that, for whatever
 reason, needs additional privilege (in Apache's case, Directory
 and Location clauses can require additional security above what
 the default VirtualHost configuration negotiates).  Then, it
 just starts proxying packets between the client and the server.

 Yeah, strange idea, that. I read that it is common to do so, but I
 wonder why such a feature was implemented in Apache... Did this
 renegotiation-to-increase-security-retroactively idea ever pass
 any security approval?
 A pity that such an attack exists, but maybe this just proves
 `never design a crypto system' - sometimes security issues are
 really surprising :)

It was (wrongly) assumed that SSL certificate authentication could be
plugged in at precisely the same level as the other authentication
systems already in place, Basic and Digest.  See below.
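To make the attack described above concrete, here is a reconstruction of the prefix-injection trace, as described in public write-ups of the renegotiation attack; the exact paths and header names are illustrative:

```
--- attacker -> server, over the attacker's own TLS session (no client cert) ---
GET /protected/resource HTTP/1.1
Host: victim.example
X-Ignore-Next-Line:                <- no blank line yet: request left open

--- server demands renegotiation; attacker now proxies the real client's
    handshake and application bytes verbatim ---
GET /index.html HTTP/1.1           <- appended to the attacker's open request
Host: victim.example
Cookie: session=...                <- client's credentials now authorize the
                                      attacker's prefixed request
```

The server concatenates both plaintext streams across the renegotiation boundary as if they came from one peer, which is exactly the binding the Secure Renegotiation Indication extension restores.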

 One way to deal with this would be to enforce that all access
 to resources on the same webserver require the same security
 cloak -- that is, have the server demand a certificate during
 the initial negotiation, so that *all* data that comes in is
 authenticated as being from that identity.

 Yes, isn't this how TLS is intended to be used, and the `add a
 certificate based on a directory' just some hack because the
 user interfaces are as they are (and those are passwords and
 BasicAuth when it comes to HTTP/HTTPS)?

TLS is authentication-agnostic.  The server does not need to identify
itself, it's only through common usage that there's any requirement
for servers to have certificates at all.  So the first part isn't
precisely true -- it's not how TLS is intended to be used, it's the
best practice for using TLS.

Requiring a certificate based on the URI requested (restated from your
'add a certificate based on a directory', because there are many
servers that don't actually operate on directories, instead
virtualizing their URI spaces) is, indeed, a hack, for the reason you
express.  However, this hack was based on an assumed property of SSL
(and later TLS) that was never investigated until last year:  there is
no way for a man in the middle to attack in the presence of mutual
authentication.

 I thought this data injection attack fails when client
 certificates would be used correctly.

 It does, in the event that the server configuration does not allow for
 non-client-certificated connections in any manner, in any way, for any
 purpose.  THAT is when client certificates are used correctly.
 [...]
 If you get rid of the 'allow non-authenticated SSL clients'
 part of the policy on that server, you do away with the
 problem.
 [...]
 In fact, if this [client-] certificate is presented during the initial
 negotiation, it is impossible to perform this MITM attack.

 ok. Thank you for clarifying that.
 So as you wrote in the other mail, it is the `common practice'
 that led to the problem.
 The fix the IETF is preparing is maybe less a bugfix than a new
 feature - protecting a bit more, even if authentication is used
 wrongly.

It's definitely a bugfix, as it is fixing a property of the protocol
which was originally intended -- that there can be no change in either
of the endpoints without collusion (in the case of servers, a shared
session cache across a server farm is 'collusion'; in the case of
clients, it's typically a lot more difficult to collude, but it's
still possible).

 In Twitter's case, they don't require certificate
 authentication.  This means that they will never be able to
 prevent this attack without disabling renegotiation until the
 Secure Renegotiation Indicator draft standard is implemented on
 both the servers and the clients.

 And yes: the problem is caused by TLS misuse, but the reason
 for the misuse isn't on the server side.

 Yes, the MITM was never able to decrypt the password, but fooled
 the server into doing so and publishing the result. Well... strictly
 speaking I think the bug is that the result (the password) was
 published, which I think is a server bug but not a TLS bug (which
 just happened because no one had the idea for this exploit in the
 past).

The MITM couldn't decrypt the session, but relied on a property of the
server to cause it to publish the Basic-encoded username/password
combination.  This is a side effect of the request, but this side
effect can be completely devastating.

 In summary, there are many aspects that work
 hand-in-hand: TLS does not differentiate the initial negotiation from
 subsequent re-negotiations, HTTP `assumes' it is always talking to the
 same peer, and passwords are replayable or clear-text as with
 BasicAuth; probably the best is to improve TLS AND stop using
 BasicAuth in favor of Digest Authentication AND make a 

Re: can TLS be used securely or it is flawed by design not allowing to use it securely

2010-01-14 Thread Kyle Hamilton
On Wed, Jan 13, 2010 at 5:58 AM, Steffen DETTMER
steffen.dett...@ingenico.com wrote:
 Hi,

 thank you very much for all your explanation and to give me one
 more free training :)

Hey, like I said, I believe this information needs to be free to all. :)

 * Kyle Hamilton wrote on Tue, Jan 12, 2010 at 13:33 -0800:
  Isn't it a bug in the application when it does not allow me (its
  user) to configure it? As far as I know there is no way to tell
  Firefox i.e. not to accept 40 bit.

 about:config, search for 'ssl' and 'tls'.  By default, Firefox
 3.0+ disable 40- and 56-bit ciphers, and I know that either
 Firefox 3.0 or 3.5 disabled SSLv2 by default.  SSLv3 and TLSv1
 do not use those ciphers.

 Ohh great, thanks for this information. I checked that my
 Firefox 3 and Firefox 2 also have 40- and 56-bit ciphers disabled and
 have `security.enable_ssl2 = false'.

It is arguably a bug when an application doesn't allow its TLS
configuration to be modified.  It's DEFINITELY a bug when an
application doesn't allow you to include the certificate chain
necessary to validate the certificate you present to the peer.

 There is currently no way for even an ideal TLS implementation to
 detect this issue.  This is why the IETF is working on the Secure
 Renegotiation Indication TLS extension, which is close to finally
 being released.

  Like having some OpenSSL callback be called reliably on (right
  after?) each renegiotation - where a webserver could force to
  shutdown the connection if the new parameters are not acceptable?

 Yes.  Please see SSL_CTX_set_info_callback(3ssl).

 hum, now I'm confused, I think your last two answers contradict
 each other...
 If an application can use e.g. SSL_CTX_set_info_callback to
 reliably avoid this, I have to read more on what the IETF is working
 on. If there are webservers `trusting' peers without certificates
 (allowing pre-injection), what should stop people from ignoring
 whatever extension as well...

What SSL_CTX_set_info_callback() does is tell you *when* a
renegotiation occurs.  It doesn't tell you what happened before.

In 0.9.8l, Mr Laurie pushed a version that disabled renegotiation
entirely by default.

Regarding what should stop people from ignoring whatever SRI
extension, it would require the violation of at least 3 MUSTs and
MUST NOTs.  At that point, whatever's being used between the peers
isn't TLS, and it will be very easy to detect that it's malicious.
(Remember: Secure Renegotiation Indicator is a TLS extension, which
means it must be sent by the Client before it can be acknowledged by
the Server.)

 (well, of course in case of the renegotiation attack the main
 point probably is just that no one had this nice idea before :-))

I think that this is just the point when someone decided to exploit a
(in hindsight) glaring flaw, and we've got an entire ecosystem of
developers feverishly trying to solve the problems caused by their
reliance on the function which has that flaw.

  Someone could expect to know whether a browser window or `tab' is
  operating in some extended validation mode.

As I think I mentioned, nobody ever actually mapped out the precise
semantics of how the green bar is supposed to work.  That is EV's
biggest Achilles' heel... nobody knows what it means, the same way
nobody knew what the lock meant.

 I could imagine that the hyped success of SSL/TLS led to
 weaknesses, because today one can often hear `we are based on
 SSL/TLS and thus are secure'. Also interesting is when
 specifications require minimum RSA key lengths but don't tell
 anything about certification policies (requirements to CSPs) or
 require AES256 but no certificates (DH)... Which, BTW, in case of
 an MITM has the funny effect that it is cryptographically ensured
 that only the attacker can decrypt the traffic lol

You cannot call an application that uses SSL/TLS secure any more
than you can call a network that has a firewall secure.

It's possible to negotiate an ephemeral key to encrypt the data, then
go back and renegotiate to keep the contents of the certificate
private from anyone except the entity to which it is offered.  This
is, I think, enough to keep the EU privacy directives happy, but I'm
not certain.  (Such a case would require the set_info_callback and
then a control on the channel to initiate a renegotiation, this second
one to require a certificate and proof of ownership of the
certificate.)

And yes, the point is that only the attacker can decrypt the traffic
-- until the client sends its negotiation, and the attacker proxies
it, and the ChangeCipherSpec gets sent.  At that point, the attacker
doesn't know the new keys.

 I think this is a server (configuration) bug but not a TLS bug.
 How can someone assume it would be safe from pre-injection when
 accepting anonymous connections?

...because they didn't realize that the prior session isn't
cryptographically bound to the new session, it's a complete wiping of
the slate.  It is certainly an application-design issue 

Re: PKI with openssl online

2010-01-14 Thread Edgar Ricardo Gonzalez Lazaro
Maybe you should try to write an applet which generates a key pair and a
PKCS#10 request file which is sent to a server delegated to certify the
public key; without direct access to a command interpreter, this brings a
level of isolation for the private key.  Maybe an ActiveX control with
access to a CSP, but this constrains the app to Windows (ugly thing).
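Whichever client-side technique is used, the server ends up handling a standard PKCS#10 request. As a hedged sketch, the equivalent steps with the openssl command line (file names and subject fields are made up for illustration):

```shell
# Generate an RSA key pair and a PKCS#10 certificate request (CSR).
# An applet would do the same thing client-side, so the private key never
# leaves the user's machine; only req.pem is sent to the CA.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout user_key.pem -out req.pem \
    -subj "/CN=Example User/O=Example Org"

# On the CA side, inspect the request before signing it.
openssl req -in req.pem -noout -subject
```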


2010/1/14 Abbass Marouni abbas...@hotmail.com


 Bonsoir,

 I am a new user of OpenSSL.
 I have a project in which I am asked to implement an online Certificate
 Authority. We will be using a website hosted on a free server (Geocities, ...).

 So my questions are the following:

 - Is it possible to implement the CA online using a Geocities website?
 - I need a simple website where users can send commands, and a certificate
   will be generated for them.

 Thanks.

 Abbass Marouni





-- 
One has to give meaning to life for the very fact that life itself lacks
meaning.