Re: ssl_read() hangs after wakeup from sleep in OSX 10.5.8

2009-10-29 Thread Parimal Das
Hi

The c-client library/API does its own socket I/O for non-SSL sessions, but
in SSL the socket I/O is delegated to OpenSSL.

When c-client does its own socket I/O, it sets a timeout (normally 15
seconds) on a select() call prior to doing any read() or write() calls.
Thus, c-client never does a read() or write() that would block.

If the select() timeout hits, there is a callback to the application to
enquire whether the application wants to continue waiting.  If the
application chooses to continue waiting, the select() is restarted with an
updated timeout.  Otherwise, the socket is closed and the application
receives a corresponding I/O error.

The net effect is that a non-SSL I/O can wait forever as long as the
application consents.  c-client does not unilaterally disconnect.

My problem is that this doesn't happen with SSL sessions, because the socket
I/O has been delegated to OpenSSL.  There is no obvious way to instruct
OpenSSL to time out its socket I/O, much less implement the mechanism
described above.

So, the questions are:
 (1) Is there a way to set a timeout for OpenSSL's socket I/O (given that it
has been delegated to OpenSSL)?  If so, how?
 (2) If the answer is "yes", is there a way to do the "query" type timeout
described above?  If so, how?
 (3) If the answer to either (1) or (2) is "no", then how would we go about
altering the OpenSSL consumer (which, in this case, is c-client) so that
OpenSSL uses the consumer's socket I/O code instead of OpenSSL's socket I/O
code?  I'm hoping that you will tell me that there's some callback function
pointer that can be passed.

-Parimal





-- 
--
Warm Regards,

Parimal Das


Re: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Darryl Miles

Ger Hobbelt wrote:

It is presumed that every SSL_write() requires a flush (at TCP level this
mechanism is called a "Push").  This basically means the data needs to be
flushed to the reading API at the far end on exactly the byte boundary (or
more) of the data you sent.  This means you have a guarantee not to starve
the receiving side of data that the sending API has sent/committed.  This is
true at both the TCP and SSL levels.

If you think about it the SSL level could not make the guarantee easily if
the lower level did not also provide that guarantee.


The guarantee at the lower level is NONAGLE, which is /not/ the
default in TCP stacks, as it can result in suboptimal network usage by
transmitting overly small packets on the wire.


Huh... Nagle is to do with how a TCP stack decides when to send a first 
transmission of data; it comes into play only when the difference 
between the congestion window and the amount of un-acknowledged data 
is less than 1*MSS.


http://en.wikipedia.org/wiki/Nagle's_algorithm

The congestion window is the active window in use by the sending side of 
a connection.  The CWnd (as it is often termed) sits between 1*MSS and 
the negotiated maximum window size (established at the start of the 
connection).


http://en.wikipedia.org/wiki/Congestion_window

The CWnd starts off small and, due to the "Slow Start" algorithm, opens up 
towards the maximum window size with every successfully transmitted segment 
of data (one that didn't require retransmission).


http://en.wikipedia.org/wiki/Slow-start

This is a simplistic view (of Slow Start), since many factors found in all 
modern stacks, such as VJ fast recovery and SACK, impact CWnd.



In short, NAGLE is to do with conserving bandwidth (at the cost of 
latency).  It has nothing to do with ensuring a flush of 
application data so that it appears via SSL_read() at the far end.





So, with all that said about what Nagle is, I can tell you Nagle doesn't 
have anything to do with the TCP Push flag or its meaning.


Here is a possibly useful reference; look up the section on "Data 
Delivery" on this page:


http://en.wikipedia.org/wiki/Transmission_Control_Protocol


In short, the TCP Push function is to do with flushing the data at the 
receiving side to the application immediately, so that it may be read().





Anyway, using NONAGLE (telnet is **NO**nagle, default socket using
applications use the default(!) NAGLE) on the TX side should, assuming


I am asserting that setting the TCP_NODELAY socket option is completely 
unnecessary, and potentially bad advice, as a cure for getting application 
data sent with SSL_write() flushed to the receiver via the socket-descriptor 
wakeup mechanism and SSL_read().





(For when you pay attention to detail: note that the TCP-level NONAGLE
behaviour still is timeout based, which often is okay as the timeout
is relatively small, but if you have an extreme case where messages
must be flushed /immediately/ onto the wire while you're using a TCP
stream (so no timeout whatsoever), then you enter the non-portable
zone of IP stack cajoling.)


Erm... NONAGLE does not have a timeout of its own, so I think it is 
a little misleading to say it is timeout based.  It is based on 
either receiving an ACK packet for a sufficient amount of 
un-acknowledged data, or on the standard retransmission timer 
that TCP uses, i.e. no ACK was received before the retransmission timer 
expired, so the TCP stack goes into retransmission mode.  Neither of 
these things requires NAGLE, and the timeout in use is a required part of 
any TCP protocol stack, whereas NAGLE is optional.


The NAGLE logic only comes into play for freshly written/enqueued data 
(e.g. the application calls write() on the socket): the TCP stack has to 
decide whether it should "send it now" or "queue it up".  That is all 
NAGLE is.


In short, NAGLE is to do with conserving bandwidth (at the cost of latency).

"Send it now" means we could be sending only 1 byte of new TCP data, but 
with TCP/IP overhead we might have 40 bytes of header to go with it. 
Not a very efficient use of bandwidth, so this is the bandwidth cost, but 
we get lower latency because we send that 1 byte right away.


"Queue it up" means we don't send it now, but stick it in the internal 
kernel write buffer.  This data will be considered for transmission when 
either an ACK comes back for previously un-acknowledged data or the 
standard retransmission timer expires (no ACK came back within the time 
limit).  The trade-off here is that by waiting for one of those two events 
we delay the conveyance of the application data until later, which 
increases the observable latency for that data.



You can see why the main use of turning Nagle off is when the 
application is a human user using the network socket interactively.  The 
bandwidth cost is a price worth paying for increased productivity; 
humans hate latency.  But if a robot was using telnet it would be 
efficient and be able to prepare the 

RE: "Client Hello" from HP Insight Manager crashes application

2009-10-29 Thread Dave Thompson
> From: owner-openssl-us...@openssl.org On Behalf Of Josue Andrade Gomes
> Sent: Thursday, 29 October, 2009 14:23

> Shortly: HP Insight Manager (a management tool) crashes my server SSL
> application.
> Operating system: Windows 2003 Server
> OpenSSL version: 0.9.8k
> Post-mortem debugger points to the crash occurring in a call to
> CRYPTO_malloc() inside SSLv3_client_method()
> (which is weird since I never call this function).
> 
Even if you did, it doesn't allocate anything; it just returns 
some static data that is later used to dispatch other calls.

I'd bet the traceback is wrong, which has two common causes:
- the stack and/or regs is clobbered so the trace values are bogus
- you're not using the same executable (and symbols if separate) 
as was running, so the debugger decodes it wrongly (note that 
a rebuild, even from exact same source, might not be good enough)

Check the latter first. If you (can) run the program (here server) 
under an interactive (not postmortem) debugger (and get the fault), 
you can usually be sure it's using the right executable.

Another possibility is that it faulted in code for which symbols 
are missing, and these happened to be the closest available ones 
(or if the debugger is really confused, just some available ones).
Does your debugger show symbol+offset and is the offset(s) large?
Can you get absolute numbers instead and compare them to your 
link map or equivalent to see if they are really in the routines?

A final possibility is that you (or whoever) compiled with gcc 
with -fomit-frame-pointer (the default config on at least many 
platforms) or something analogous on another compiler, and the 
resulting not-fully-materialized stack(s) confused the debugger.
But in my experience this usually gives incomplete tracebacks 
with missing entries, not wholly absurd ones.

> The packet that causes the crash:

> Secure Socket Layer
> SSL Record Layer: Handshake Protocol: Client Hello

> Session ID Length: 32
> Session ID: 
> D2D7B8577F842D69A074F3F90ABD2AE9BF979E78DB2E6631...

I see it is specifying sessID. If the server is crashing 
and being restarted presumably any internal cache is lost;
do you have/configure an external cache? If so is it valid, 
and your callback to fetch from it correct?

I see you later say other clients do work. Do they 
(does any of them) resume sessIDs? If not you can use 
openssl s_client to create a test situation.
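For example (host, port and file name here are placeholders), a resumption attempt can be scripted with s_client's session save/restore options:

```shell
# First connection: full handshake; save the negotiated session.
openssl s_client -connect example.com:443 -sess_out sess.pem </dev/null

# Second connection: offer the saved session ID, exercising the
# server's resumption path (the code path suspected of crashing).
openssl s_client -connect example.com:443 -sess_in sess.pem </dev/null
```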




__
OpenSSL Project                                  http://www.openssl.org
User Support Mailing List                        openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org


Re: your mail

2009-10-29 Thread Dr. Stephen Henson
On Mon, Oct 26, 2009, Adam Rosenstein wrote:

> You are correct, I made a paste error in the mail.  The certs were correct
> at the time I tested however (my test script just regenerates things each
> time and I pasted an old ee with a new root ca).
> 
> I just tried openssl-SNAP-20091026.tar.gz and still get Different CRL Scope.
> Here is the EE, ROOT CA, Indirect CRL signer, and Indirect CRL in a P7.
> 

Hmm... I now get the message "certificate revoked" when I verify that chain.
That is using a (not yet committed) change to the verify utility to input CRLs
to the verification context. Due to a limitation in the current CRL lookup
code indirect CRLs don't work when placed in a store.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org


Re: "Client Hello" from HP Insight Manager crashes application

2009-10-29 Thread Josue Andrade Gomes
Thanks for the tip. No, I don't call CRYPTO_malloc_init, but I don't
think it is necessary; I'm pretty sure that I'm not mixing compiler options.
Also, if that were the case it would be crashing all the time, right?
SSL connections work fine with any client except this HP Insight Manager thing.

Of course I can give it a try and see what happens.

Thanks a lot.

On Thu, Oct 29, 2009 at 5:40 PM,   wrote:
> CRYPTO_malloc is an internally-used function, to allocate memory.  In any
> event, though, do you do an earlier CRYPTO_malloc_init?
>
> http://openssl.org/support/faq.html#PROG2
>
> -Kyle H
>
> On Thu, Oct 29, 2009 at 11:23 AM, Josue Andrade Gomes
>  wrote:
>>
>> Hi,
>>
>> Shortly: HP Insight Manager (a management tool) crashes my server SSL
>> application.
>> Operating system: Windows 2003 Server
>> OpenSSL version: 0.9.8k
>> Post-mortem debugger points to the crash occurring in a call to
>> CRYPTO_malloc() inside SSLv3_client_method()
>> (which is weird since I never call this function).
>>
>> Any idea? Has anyone seen this already?
>>
>> regards,
>> josue
>>
>> The packet that causes the crash:
>>
>> No.     Time        Source                Destination           Protocol
>> Info
>>  34446 117.798401              SSL      Client Hello
>>


Re: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Ger Hobbelt
>> There is one added complication in that the protocol is a datagram
>> protocol at a
>> higher level (although it uses TCP).  I am concerned that the whole
>> protocol could
>> block if there is not enough data to encrypt a whole outgoing message
>> but the peer cannot
>> continue until it gets the message.

If you mean that the upper layer protocol is message-oriented rather
than stream-oriented ('datagram' is a Rorschach blot for me that says:
UDP, sorry) and the protocol is constructed such that outgoing
message REQ(A) must have produced [a complete] answer message ANS(A)
before the next outgoing message REQ(B) is sent over the wire, then
you're in fancy land anyway, as that is not a class 101 scenario for
TCP, which is by design stream-oriented.

At the TCP level and given such a 'message interdependency'
requirement, which is the extreme form of what I'm assuming you mean
with the mention of 'datagram protocol', you'll need to ensure both
sender, receiver (and any intermediaries) have their TX (transmit) and
RX (receive) buffers flushed entirely before the next 'message'
exchange (REQ(B)->ANS(B)) can take place.

To get a glimmer of what must be done then (and which might be needed
in your case, when your protocol is less extreme and consequently can
- and will - not wait for response messages to previously sent
messages before the next message goes out), think about old-fashioned
telnet: a keypress is way smaller than a TCP packet can be, so telnet
needed a way to push that ENTER keypress out the door pronto, so that
you, the user, would get some 'interactivity' on your console. The
TCP_NONAGLE socket flag was introduced to service this need way
back when: given a short timeout, tiny buffer fills are flushed into a
TX packet anyway. The receiver will be able to fetch any byte length of
data it actually receives, so when we entice the sender into
transmitting even the small chunks, we're good to go there.

> It is presumed that every SSL_write() requires a flush (at TCP level this
> mechanism is called a "Push").  This basically means the data needs to flush
> to the reading API at the far end on exactly the byte boundary (or more)
> data than you sent.  This mean you have a guarantee to not starve the
> receiving side of data that the sending API has sent/committed.  This is
> true at both the TCP and SSL levels.
>
> If you think about it the SSL level could not make the guarantee easily if
> the lower level did not also provide that guarantee.

The guarantee at the lower level is NONAGLE, which is /not/ the
default in TCP stacks, as it can result in suboptimal network usage by
transmitting overly small packets on the wire.


I haven't done this sort of thing with SSL on top for a few years now,
but from what I hear in this thread SSL_write(len := 1) will pad such
data while encrypting on a per-write-invocation basis (note those last
few words!) and thus write a full SSL packet into the TX side of the
socket for each write into the TX pipeline.  (I may be Dutch, but I live
by the German rule "Vertrauen ist gut, Kontrolle ist besser", and you
should too: trust is good, but making dang sure is so much better ;-) )
Also, there's the purely emotional and very unchecked goblin at the
back of my brain who mumbles: "oh yeah? no buffering of incoming
plaintext on the TX side, so the SSL layer doesn't get to do a lot of
otherwise probably superfluous work when the write chain is abused by
the application layer writing tiny chunks all the time?"  Don't take
my goblin at his word, he's a definite a-hole sometimes ;-) , but it
won't hurt to make sure the 'non-buffering' flush/push aspect of your
write-side BIO chain is guaranteed.  Does the OpenSSL documentation
explicitly mention this behaviour?  That should be the authoritative
answer there.


>From my work with BIOs, I seem to recall the SSL BIO encapsulates
SSL_write et al (or was it vice-versa? Heck, that's what I get when
doing this off the top of my head while not having used SSL for the
last half year), so the findings for one expand to the other.
Injecting other BIOs in your chain (base64, etc.) will impact this 'is
all data flushed throughout == non-buffering TX behaviour' aspect.


Anyway, using NONAGLE (telnet is **NO**nagle; default socket-using
applications use the default(!) NAGLE) on the TX side should, assuming
the SSL/BIO chain flushes as well, ensure your outgoing REQ(n) gets
out the door and onto the wire.  Which leaves the receiver side: as the
transmitter can only 'flush' like that with SSL in the chain when the
flush is on a whole SSL message boundary (indeed resulting in some
SSL 'packets' ending up containing only a single (encrypted) content
byte as a matter of principle), the receiver should be okay in depleting
its RX buffers, as the SSL layer there can, theoretically, process
every byte received, thus spitting out the content (plaintext) bytes
to the very last one which was pushed onto the wire.
Hence your application layer can then ge

Re: TLS trust of a chain of certificates up to a root CA. Certificate Sign extension not set

2009-10-29 Thread aerowolf

If a certificate does not have the standard keyUsage digitalSignature bit, 
then that certificate cannot sign any message at all: no email, no client 
signature of TLS authentication parameters, nothing.

-Kyle H

On Wed, Oct 28, 2009 at 9:10 AM, Mourad Cherfaoui (mcherfao) 
 wrote:

Thanks Steve,

Yes, the keyUsage is present but the sign bit is not set. As background on 
this, the user does not want his CA to set the sign bit on non-root 
certificates.

I am not sure I understand why the client is broken. Did you mean that the 
sign bit can be omitted if the client sends the entire chain of certificates 
(except maybe the root) AND the server has the certificate chain as well? 
Thanks.

Mourad.

Here is a snippet of the extensions:

           X509v3 Key Usage: critical
           Digital Signature, Key Encipherment
           X509v3 CRL Distribution Points:

-Original Message-
From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Dr. Stephen Henson
Sent: Wednesday, October 28, 2009 5:00 AM
To: openssl-users@openssl.org
Subject: Re: TLS trust of a chain of certificates up to a root CA. Certificate 
Sign extension not set

On Tue, Oct 27, 2009, Mourad Cherfaoui wrote:



Hi, I have a chain of certificates C->B->A->RootCA. The TLS client
only presents C during the TLS handshake. RootCA has the Certificate
Sign extension set, but not B and A. The TLS server fails the TLS
handshake because of the absence of the Certificate Sign extension in
B and A.

My first question: if the TLS server has the entire chain of certificates
B->A->RootCA in its truststore, is it correct to assume that the Certificate
Sign extension is not required in B and A?

My second question: by default the TLS server will fail the TLS handshake
because of the absence of the Certificate Sign extension. Is there a
recommended way to disable the check for this extension in the TLS handshake?

Thanks,
Mourad.





The client is broken, then: the standard requires that the entire chain be 
presented, with the possible exception of the root.

What do you mean by "Certificate Sign extension"? Do you mean the keyUsage 
extension is present but doesn't set the certificate sign bit? If so the certificate is 
broken.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org 







Re: "Client Hello" from HP Insight Manager crashes application

2009-10-29 Thread aerowolf

CRYPTO_malloc is an internally-used function, to allocate memory.  In any 
event, though, do you do an earlier CRYPTO_malloc_init?

http://openssl.org/support/faq.html#PROG2

-Kyle H

On Thu, Oct 29, 2009 at 11:23 AM, Josue Andrade Gomes 
 wrote:

Hi,

Shortly: HP Insight Manager (a management tool) crashes my server SSL
application.
Operating system: Windows 2003 Server
OpenSSL version: 0.9.8k
Post-mortem debugger points to the crash occurring in a call to
CRYPTO_malloc() inside SSLv3_client_method()
(which is weird since I never call this function).

Any idea? Has anyone seen this already?

regards,
josue

The packet that causes the crash:

No.     Time        Source                Destination           Protocol Info
 34446 117.798401              SSL      Client Hello


"Client Hello" from HP Insight Manager crashes application

2009-10-29 Thread Josue Andrade Gomes
Hi,

Shortly: HP Insight Manager (a management tool) crashes my server SSL
application.
Operating system: Windows 2003 Server
OpenSSL version: 0.9.8k
Post-mortem debugger points to the crash occurring in a call to
CRYPTO_malloc() inside SSLv3_client_method()
(which is weird since I never call this function).

Any idea? Has anyone seen this already?

regards,
josue

The packet that causes the crash:

No. TimeSourceDestination   Protocol Info
  34446 117.798401  SSL  Client Hello

Frame 34446 (164 bytes on wire, 164 bytes captured)
Arrival Time: Oct 29, 2009 08:03:37.301438000
[Time delta from previous captured frame: 0.000602000 seconds]
[Time delta from previous displayed frame: 0.000602000 seconds]
[Time since reference or first frame: 117.798401000 seconds]
Frame Number: 34446
Frame Length: 164 bytes
Capture Length: 164 bytes
[Frame is marked: False]
[Protocols in frame: eth:ip:tcp:ssl]
[Coloring Rule Name: TCP]
[Coloring Rule String: tcp]
Ethernet II, Src: HewlettP_68:24:67 (00:0b:cd:68:24:67), Dst:
Vmware_9a:54:7d (00:50:56:9a:54:7d)
Destination: Vmware_9a:54:7d (00:50:56:9a:54:7d)
Address: Vmware_9a:54:7d (00:50:56:9a:54:7d)
 ...0     = IG bit: Individual address (unicast)
 ..0.     = LG bit: Globally unique
address (factory default)
Source: HewlettP_68:24:67 (00:0b:cd:68:24:67)
Address: HewlettP_68:24:67 (00:0b:cd:68:24:67)
 ...0     = IG bit: Individual address (unicast)
 ..0.     = LG bit: Globally unique
address (factory default)
Type: IP (0x0800)
Internet Protocol, Src: , Dst: 
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
 00.. = Differentiated Services Codepoint: Default (0x00)
 ..0. = ECN-Capable Transport (ECT): 0
 ...0 = ECN-CE: 0
Total Length: 150
Identification: 0x65e7 (26087)
Flags: 0x04 (Don't Fragment)
0... = Reserved bit: Not set
.1.. = Don't fragment: Set
..0. = More fragments: Not set
Fragment offset: 0
Time to live: 128
Protocol: TCP (0x06)
Header checksum: 0x3a0b [correct]
[Good: True]
[Bad : False]
Source: 
Destination:
Transmission Control Protocol, Src Port: cas-mapi (3682), Dst Port:
https (443), Seq: 1, Ack: 1, Len: 110
Source port: cas-mapi (3682)
Destination port: https (443)
[Stream index: 64]
Sequence number: 1(relative sequence number)
[Next sequence number: 111(relative sequence number)]
Acknowledgement number: 1(relative ack number)
Header length: 20 bytes
Flags: 0x18 (PSH, ACK)
0...  = Congestion Window Reduced (CWR): Not set
.0..  = ECN-Echo: Not set
..0.  = Urgent: Not set
...1  = Acknowledgement: Set
 1... = Push: Set
 .0.. = Reset: Not set
 ..0. = Syn: Not set
 ...0 = Fin: Not set
Window size: 65535
Checksum: 0xc791 [validation disabled]
[Good Checksum: False]
[Bad Checksum: False]
[SEQ/ACK analysis]
[Number of bytes in flight: 110]
Secure Socket Layer
SSL Record Layer: Handshake Protocol: Client Hello
Content Type: Handshake (22)
Version: TLS 1.0 (0x0301)
Length: 105
Handshake Protocol: Client Hello
Handshake Type: Client Hello (1)
Length: 101
Version: TLS 1.0 (0x0301)
Random
gmt_unix_time: Oct 29, 2009 08:03:37.0
random_bytes:
E2103A68F025B061AF0FFB9DDF45E62EA52724CDE47FEE79...
Session ID Length: 32
Session ID: D2D7B8577F842D69A074F3F90ABD2AE9BF979E78DB2E6631...
Cipher Suites Length: 30
Cipher Suites (15 suites)
Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)
Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
Cipher Suite: TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016)
Cipher Suite: TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x0013)
Cipher Suite: TLS_RSA_WITH_DES_CBC_SHA (0x0009)
Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)
Cipher Suite: TLS_DHE_DSS_WITH_DES_CBC_SHA (0x0012)
Cipher Suite: TLS_RSA_EXPORT_WITH_RC4_40_MD5 (0x0003)
Cipher Suite: TLS_RSA_EXPORT_WITH_DES40_CBC_SHA (0x0008)
Cipher Suite: TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA (0x0014)
Cipher Suite: TLS_DHE_DSS_EXPORT

Re: TLS Alert read:fatal:unknown CA

2009-10-29 Thread Kyle Hamilton
Radius needs to be set up to trust that CA.  That isn't an openssl
question, that's a radius question.

-Kyle H

On Wed, Oct 28, 2009 at 6:25 AM, ashokgda  wrote:
>
> Hi,
>
> I am using a RADIUS server for authenticating my ThinClient laptop for
> WirelessAP in TLS security mode.
> But my RADIUS server is saying unknown CA.
>
> my radius tls config looks like:
>  tls {
> rsa_key_exchange = no
> dh_key_exchange = yes
> rsa_key_length = 512
> dh_key_length = 512
> verify_depth = 0
> pem_file_type = yes
> private_key_file = "/etc/pki/tls/misc/server_key.pem"
> certificate_file = "/etc/pki/tls/misc/server_cert.pem"
> CA_file = "/etc/pki/CA/cacert.pem"
> private_key_password = "hello"
> dh_file = "/etc/raddb/certs/dh"
> random_file = "/etc/raddb/certs/random"
> fragment_size = 1024
> include_length = yes
> check_crl = no
> cipher_list = "DEFAULT"
> make_cert_command = "/etc/raddb/certs/bootstrap"
>
> On my client laptop, when I enter the certificate passphrase, EAP
> fails.
> I am entering the same "hello" passphrase that I gave when I
> exported the PKCS#12 certificate.
>
> ./CA.pl -newca
> openssl req -new -keyout server_key.pem -out server_req.pem -days 730
> openssl ca -policy policy_anything -out server_cert.pem -infiles
> server_req.pem
> openssl req -new -keyout client_key.pem -out client_req.pem -days 730
> openssl ca -policy policy_anything -out client_cert.pem -infiles
> client_req.pem
> openssl pkcs12 -export -in client_cert.pem -inkey client_key.pem -out
> client-cert.p12 -clcerts
> [For all passphrase i used "hello" only]
>
> I verified the cacert.pem, client_cert.pem and server_cert.pem; all are ok.
> ==
> [r...@gda misc]# openssl x509 -text -in /etc/pki/CA/cacert.pem
> ==
> Certificate:
>   Data:
>       Version: 3 (0x2)
>       Serial Number:
>           c6:44:66:76:3a:ed:a0:19
>       Signature Algorithm: sha1WithRSAEncryption
>       Issuer: C=IN, ST=TamilNadu, O=GDATECH, OU=Software, CN=Thin
> Client/emailaddress=r.as...@gdatech.co.in
>       Validity
>           Not Before: Oct 23 09:00:53 2009 GMT
>           Not After : Oct 22 09:00:53 2012 GMT
>       Subject: C=IN, ST=TamilNadu, O=GDATECH, OU=Software, CN=Thin
> Client/emailaddress=r.as...@gdatech.co.in
>       Subject Public Key Info:
>           Public Key Algorithm: rsaEncryption
>           RSA Public Key: (1024 bit)
>               Modulus (1024 bit):
>                   00:b9:05:83:e8:96:f7:10:c8:51:23:48:2f:a2:e7:
>                   ac:f5:bd:89:bb:63:97:7c:d4:29:df:25:df:04:0e:
>                   c3:f8:08:8a:41:cf:3b:db:e8:ab:d1:b1:5b:c8:2b:
>                   2a:b7:1c:1b:59:60:ff:be:28:84:45:9f:05:dc:77:
>                   4d:fc:da:82:08:81:2f:a7:6f:07:fb:67:da:37:fb:
>                   f8:e6:db:ee:2a:a0:86:53:f7:19:a1:35:64:3e:5d:
>                   13:0f:a7:dd:40:b9:80:aa:67:67:b6:3b:58:77:23:
>                   6c:e7:52:b4:80:d2:db:e5:13:1a:ac:e2:b1:f4:6d:
>                   41:c9:73:22:bd:eb:44:cb:83
>               Exponent: 65537 (0x10001)
>       X509v3 extensions:
>           X509v3 Subject Key Identifier:
>               A0:BE:BF:A8:AB:6B:63:27:A7:78:FF:C6:67:71:A8:84:BA:E3:C7:A4
>           X509v3 Authority Key Identifier:
>
> keyid:A0:BE:BF:A8:AB:6B:63:27:A7:78:FF:C6:67:71:A8:84:BA:E3:C7:A4
>               DirName:/C=IN/ST=TamilNadu/O=GDATECH/OU=Software/CN=Thin
> Client/emailaddress=r.as...@gdatech.co.in
>               serial:C6:44:66:76:3A:ED:A0:19
>
>           X509v3 Basic Constraints:
>               CA:TRUE
>   Signature Algorithm: sha1WithRSAEncryption
>       01:6e:02:e8:63:3d:27:bc:3e:df:51:6a:ce:cf:1f:08:c4:ef:
>       8d:f0:2a:1a:0b:a0:4b:54:a2:ef:b3:e6:6c:4d:73:72:a3:2b:
>       46:ff:9d:5f:2e:2a:c6:9b:3f:c7:53:27:24:39:bb:d3:d5:ed:
>       12:15:08:c4:52:72:ba:a2:5a:60:f9:f6:b7:76:b1:87:f8:07:
>       38:62:cc:d6:b1:32:86:c2:81:33:7b:f3:63:1b:51:58:9f:85:
>       e2:c9:6d:0a:c6:69:f6:1d:42:05:7f:e8:86:2f:00:3c:0c:19:
>       a3:97:39:9f:5f:2a:8b:65:63:9a:fd:37:a9:09:52:7e:20:da:
>       4c:ae
> -BEGIN CERTIFICATE-
> MIIDbzCCAtigAwIBAgIJAMZEZnY67aAZMA0GCSqGSIb3DQEBBQUAMIGCMQswCQYD
> VQQGEwJJTjESMBAGA1UECBMJVGFtaWxOYWR1MRAwDgYDVQQKEwdHREFURUNIMREw
> DwYDVQQLEwhTb2Z0d2FyZTEUMBIGA1UEAxMLVGhpbiBDbGllbnQxJDAiBgkqhkiG
> 9w0BCQEWFXIuYXNob2tAZ2RhdGVjaC5jby5pbjAeFw0wOTEwMjMwOTAwNTNaFw0x
> MjEwMjIwOTAwNTNaMIGCMQswCQYDVQQGEwJJTjESMBAGA1UECBMJVGFtaWxOYWR1
> MRAwDgYDVQQKEwdHREFURUNIMREwDwYDVQQLEwhTb2Z0d2FyZTEUMBIGA1UEAxML
> VGhpbiBDbGllbnQxJDAiBgkqhkiG9w0BCQEWFXIuYXNob2tAZ2RhdGVjaC5jby5p
> bjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAuQWD6Jb3EMhRI0gvoues9b2J
> u2OXfNQp3yXfBA7D+AiKQc872+ir0bFbyCsqtxwbWWD/viiERZ8F3HdN/NqCCIEv
> p28H+2faN/v45tvuKqCGU/cZoTVkPl0TD6fdQLmAqmdntjtYdyNs51K0gNLb5RMa
> rOKx9G1ByXMivetEy4MCAwEAAaOB6jCB5zAdBgNVHQ4EFgQUoL6/qKtrYyeneP/G
> Z3GohLrjx6QwgbcGA1UdIwSBrzCBrIAUoL6/qKtrYyeneP/GZ3GohLrjx6ShgYik
> gYUwgYIxCzAJBgNVBAYTAklOMRIwEAYDVQQIEwlUYW1pbE5hZHUxEDAOBgNVBAoT
> B0

Re: "djgppbin/perl.exe" not found, etc. error

2009-10-29 Thread Ersin Akinci
Jeff,

Thanks for the suggestion.  Unfortunately, I can't use a prebuilt Win32 binary 
because I'm literally building on an MS-DOS host for an MS-DOS target using 
DJGPP.  No Visual Studio here!

No worries though, I managed to get OpenSSL to compile with a bit of fiddling.  
I'm providing my notes here for the future reference of anyone who might be 
searching the mailing list archives for help (this is for compiling OpenSSL 
0.9.8k on MS-DOS 7.1 using DJGPP).

If you want to compile using the default DJGPP instructions (i.e., if you use 
"./Configure no-threads DJGPP") then you'll need to edit the lines in the top 
level Makefile that read "perl = " and "ranlib = " and put quotes around the 
path name or something like that.  I still got errors for ranlib, but the 
compile kept going where it used to choke on not being able to find perl, so I 
must've done something right.  Come to think of it, removing the path name 
altogether would probably be the best since perl and ranlib should be in your 
DJGPP path anyway.

I, however, ended up reconfiguring OpenSSL with the following line: 
"./Configure no-hw-xxx no-hw no-threads no-zlib 386 no-sse", and that made the 
whole thing run smoothly.  In fact, I noticed that the ranlib line was removed 
from the Makefile and the perl line didn't have a pathname given, just "perl" 
(i.e., "perl = perl", relying on perl already being in the system's path).  
Please note that you will need a whole lot of DJGPP's packages to compile 
OpenSSL, not just the few ZIP's that the Zip Picker provides you with.  I 
recommend going to a DJGPP mirror and downloading all the ZIP's available in 
the v2gnu directory in addition to the ones mandated by the Zip Picker.  Many 
of the standard Linux utilities that would be covered by "coreutils" have been 
reassigned to ZIP packages with other names (e.g., fileutils, shellutils).  
Google is your friend =).

While I'm at it, here are a few more hints:
-You must use a version of DOS that is Long File Name (LFN) compatible.  For 
MS-DOS, that means 7.1 (the version that came with Windows 9x), but there are 
also other alternatives that I have not tested, such as PC-DOS and FreeDOS.  
You'll need the TSR utility DOSLFN while in DOS mode to activate LFN.  Without 
it, none of the scripts will work because they all point to non-existent file 
names.
-There's a set of Makefiles out there for compiling OpenSSL with DJGPP but 
without using the bash shell; I believe it's distributed by the maker of 
Wattcp-32.  Don't bother using it: the bash shell works fine, and I believe 
those Makefiles are for very old versions of OpenSSL anyway (the readme says 
0.9.4+).  Get the bash shell from DJGPP's ftp site and simply run the 
executable in DOS.  Before running it, however, be sure to set your environment 
variables PATH and WATT_ROOT, which should respectively point to your DJGPP bin 
and Wattcp-32 source + compiled library root directory (NB: I made the mistake 
of not compiling Wattcp-32 first, be sure to do this before starting on 
OpenSSL, you need both the Wattcp-32 source and library) in *nix-style format.  
So if DJGPP is in C:\DJGPP, use the line "set PATH=/djgpp/bin" (and set 
WATT_ROOT similarly).  Don't use the standard DOS-style format, because once 
you're in bash the shell expects forward slashes.  You may or may not need to 
set the DJGPP variable, but I did for safety's sake using the same *nix-style 
format.
-I found bash to be very flaky, especially with globbing (tab completion) and 
dealing with non-existent file names.  Be sure to use "ls" frequently and only 
enter full path names.
-In the above mentioned set of Makefiles, it mentioned that zlib was necessary 
and that it could be found on DJGPP's site.  FWIW, the Configure script for 
OpenSSL 0.9.8k (and other versions, I assume) lets you disable zlib support 
with the "no-zlib" option, but I'm not sure exactly what capabilities, if any, 
were lost by this.  Regardless, I couldn't find a precompiled version of zlib 
for DOS/DJGPP anywhere on DJGPP's site or anywhere else.  Maybe it's in one of 
DJGPP's strangely named ZIP archives?
-Ignore any instructions that tell you to enter "/dev/env/DJGPP", or something 
like that.  I'm not sure how it works, but I believe that DJGPP somehow sets 
/dev/env through the DJGPP.ENV file and that /dev/env/DJGPP is magically 
supposed to point to DJGPP's root dir.  I got a lot of errors fiddling around 
with this and I found that it was simply easier to use absolute path names.  
Case in point, OpenSSL's instructions for DJGPP tell you to use Configure with 
"--prefix=/dev/env/DJGPP".  This was confusing and I thought that maybe it was 
a special custom usage, but it's not, the prefix option does exactly what it 
does for other configure scripts.  Simply point it to wherever you want OpenSSL 
to install.
-Don't forget to run "make depend" before running "make" if you use a lot of 
arguments with the Configure script like I did!  The compile

Error Running Command

2009-10-29 Thread Jamesy281

Hi There,

I am completely new to OpenSSL and have hit a snag that I could use some
help with.
I am trying to generate a CA and a self-signed certificate for a netgear FXV538
VPN.
Using the following 3 commands listed in the instructions for the firewall,
adding the appropriate path names:

1.openssl genrsa –des3 –out ca.key 1024
2.openssl req –new –x509 –days 365 –key ca.key –out ca.crt
3.openssl x509 –req –days 182 –in host.csr –CA ca.crt –CAkey ca.key
–Cacreateserial –out host.crt

I receive this error on processing the final command:
D:\OpenSSL\bin>openssl x509 -req -days 365 -in d:\cafiles\ca\.csr -CA
d:\cafiles\ca\ca.crt -CAkey d:\cafiles\ca\ca.key -out d:\cafiles\ca\VPN.crt
unknown option ûreq

I would appreciate a pointer as to what I have done wrong.



-- 
View this message in context: 
http://www.nabble.com/Error-Running-Command-tp26110195p26110195.html
Sent from the OpenSSL - User mailing list archive at Nabble.com.


Re: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Darryl Miles

Mark wrote:

There is one added complication in that the protocol is a datagram
protocol at a
higher level (although it uses TCP).  I am concerned that the whole
protocol could
block if there is not enough data to encrypt a whole outgoing message
but the peer cannot
continue until it gets the message.


SSL_write() can be for any length the API datatype allows (I think it is 
currently a C data type of 'int').  If you use SSL_write() with 1 byte 
lengths you will get an encoded SSL protocol packet sent over the wire 
with a single byte of application-data payload.  This would not be very 
efficient use of SSL (since you'd have many bytes of SSL overhead per 
byte of application-data).



The sending side is allowed to merge more application-data together 
under such circumstances that forward flow-control is not allowing the 
fresh new data we are currently holding to be sent in the "first attempt 
at transmission" to happen immediately AND the user makes an API call to 
write more data.   What is not allowed is for the stack to hold onto the 
data (possibly forever) in the hope that the user will make an API call 
to write more data.


I've tried to choose my words carefully in the above paragraph, so that 
the words apply equally to TCP and SSL.  In the case of SSL, since it is 
done over a reliable streaming transport, there is no such thing as a 
"first attempt at transmission"; there is only a single 
action to commit data into the TCP socket.  But it is possible for the 
TCP socket to not be accepting data just yet (due to flow-control).  It 
is that conceptual boundary that this relates to.


Also, one difference between TCP and SSL is that TCP has octet-boundary 
sequences/acknowledgments, whereas in SSL all data is wrapped up into 
packetized chunks.  This means TCP can make other optimizations with 
regard to retransmissions that make it more efficient.  Those things don't 
apply to SSL.





If you use larger writes (to SSL_write()) then this is chunked up into 
the largest possible packets the protocol allows and those are sent over 
the wire.


It is presumed that every SSL_write() requires a flush (at TCP level 
this mechanism is called a "Push").  This basically means the data needs 
to flush through to the reading API at the far end on exactly the byte 
boundary (or more) of the data you sent.  This means you have a guarantee 
not to starve the receiving side of data that the sending API has 
sent/committed.  This is true at both the TCP and SSL levels.


If you think about it the SSL level could not make the guarantee easily 
if the lower level did not also provide that guarantee.




Providing you use non-blocking APIs there is no way things can block 
(meaning no way for your application to lose control at a point where it 
needs to make a decision); this means socket<>SSL uses the non-blocking 
paradigm, and so does SSL<>your_datagram_protocol.


The only issue you then need to look at is starvation (imagine if the 
receiving side was in a loop to keep reading until there was no more 
data, but due to the CPU time need to do the data processing in that 
loop it was possible for the sending side to keep the receiving side 
stocked full of data).  If you just looped until you had no more data 
from SSL_read() (before servicing the SSL_write() side) then the 
SSL_write would be starved.


So you might want to only loop like this a limited number of times, or 
automatically break out of trying to decode/process more data in order 
to service the other half a little bit.




Now there is another issue which isn't really a blocking one, it is more 
a "deadlock".  This is where due to your IO pump design and the 
interaction between the upper levels of your application and the 
datagram/SSL levels you ended up designing your application such that 
the same thread is used to both service the IO pump and the upper levels 
of the application (the data processing).  This is possible but requires 
careful design.  For whatever reason the upper levels stalled/blocked 
waiting for IO, and this means your thread of execution lost control and 
starved the IO pump part from doing its work (because it's the same thread).


Everything that happens on the IO pump part needs to be non-blocking; if 
you use the same thread to service the upper levels of your application 
then you must know for sure they are non-blocking.  Otherwise you are 
best separating the threads here: the IO pump and the upper levels.


Often this is best because it removes the restriction on what you 
can do in an upper level; it no longer matters what you do there: 
call/use whatever library you want without regard for blocking behavior. 
 You can also use a single IO pump thread to manage multiple 
connections if you want (and performance allows); then you need to think 
about per-'SSL *' IO starvation, i.e. make sure you service everyone a 
little bit as you go round-robin.




Darryl

Re: ssl_read() hangs after wakeup from sleep in OSX 10.5.8

2009-10-29 Thread Graham Swallow
google: TCP OPTION KEEPALIVE
http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/

You would be best with an application level timeout,
which would send an application enquiry (heartbeat)
from the laptop to the server.

Waking up from sleeping, the application would send the next heartbeat.
(any data has the same effect, the problem was that the laptop was silent).

That would get an immediate response, from the remote OS, that the TCP
connection was dismantled ages ago, whilst the laptop was comatose.

That would appear to the SSL BIO as select_says_fd_is_readable
and when it reads from the socket-fd, gets an EOF error.
Then SSL knows, and lets you know too.

If you don't want to change the application protocol,
you can have the OS do exactly that for that socket.
No data will appear on the application side of the socket-fd,
just hidden TCP control messages on the network side.
That will also keep any NAT router happy.

In an ideal world, the wake-from-sleep should do that for each TCP
connection. Maybe it leaves it for each app to figure it out on its own.

Graham

2009/10/29 Parimal Das 

>
> When it has downloaded some 2MB data, I closed my laptop lid (OSX induced
> sleep)
> After 5 minutes when i open my laptop, the process hangs at the same place
> as before.
>
>


Re: TLS trust of a chain of certificates up to a root CA.Certificate Sign extenstion not set

2009-10-29 Thread Dr. Stephen Henson
On Thu, Oct 29, 2009, Joe Orton wrote:

> On Wed, Oct 28, 2009 at 06:51:02PM +0100, Dr. Stephen Henson wrote:
> > On Wed, Oct 28, 2009, Mourad Cherfaoui (mcherfao) wrote:
> > > I am not sure I understand why the client is broken? Did you mean that the
> > > sign bit can be omitted if the client sends the entire chain of 
> > > certificates
> > > (except maybe the root) AND the server has the certificates chain as well?
> > > Thanks.
> > 
> > My comment about it being broken (or more likely misconfigured) was nothing 
> > to
> > do with the keyUsage extension. The SSL/TLS standards do not allow a client 
> > to
> > just present the EE certificate: the whole chain has to be presented with
> > the possible exception of the root.
> 
> Well, per the BUGS section in SSL_CTX_set_client_cert_cb it is nigh-on 
> impossible for a client author to DTRT with OpenSSL because of the 
> limitations of the API.
> 

Hmm... seems to be a little out of date. It is possible to add certs to the
store and set them to an appropriate trust value to avoid them being
acceptable as server roots. Though we should really have a callback which can
return the whole chain too.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org
__
OpenSSL Project http://www.openssl.org
User Support Mailing List    openssl-users@openssl.org
Automated List Manager       majord...@openssl.org


RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Mark
Hi David, 

> > There is one added complication in that the protocol is a datagram
> > protocol at a
> > higher level (although it uses TCP).  I am concerned that the whole
> > protocol could
> > block if there is not enough data to encrypt a whole 
> outgoing message
> > but the peer cannot
> > continue until it gets the message.
> 
> What do you mean by "not enough data to encrypt a whole 
> outgoing message"?
> The only way it can block is if each side is waiting for the 
> other, and if
> that happens, the application protocol is broken anyway. 
> There is no way
> this logic can cause one side to internally block.

I may be making a wrong assumption, but if the cipher used is a block
cipher does it not wait until a full block of data is ready before it
can encrypt and send the data?  If a message does not consist of enough
data to fill a block, could there be unencrypted data left in a buffer
somewhere?  The peer would see that a whole message has not been
received
and wait for the rest of it ... which never comes.
Mark.





Re: Running SSL server without temporary DH parameters.

2009-10-29 Thread Victor Duchovni
On Thu, Oct 29, 2009 at 11:33:13AM +0300, Victor B. Wagner wrote:

> > Yes, of course, in a strictly technical sense. From a user perspective,
> > however, both are the same sort of thing, something one needs to configure
> > to enable kEDH or kEECDH ciphers. When neither set of parameters is
> > provided, one gets just
> 
> We are talking about server. User, who configures server, is a technician.

Yes, they usually understand something about Apache, or IMAP, ... but
understanding OpenSSL cipher-suite specs is almost invariably outside
their expertise. This is direct personal experience with decently
skilled staff.

> > - kRSA, kECDHr, kECDHe (no forward secrecy)
> > - kPSK (no significant adoption)
> 
> There is also GOST. And it is what I'm concerned with.
> 
> > > Question is - should we make user immediately aware of this restriction
> > > during parsing the configuration?
> > 
> > Not sure what you mean by "during parsing".
> 
> It means "before server starts and begins to listen on the socket".
> 
> > > If user specifies DSA key only it is fatal.
> > > If user specifies RSA key only half of otherwise available suites
> > > are left.
> > 
> > It is OK if the user cipher selection string designates more ciphers
> > (e.g. DEFAULT) than are actually compatible with the available
> > certificates and/or parameters, so long as a non-empty
> > set of usable cipher-suites remains.
> 
> But above you are talking about lost forward secrecy.

Yes, but this is standard operating procedure for SSL. If the product
supports DH parameters, and the users want forward secrecy (sometimes,
they don't) they'll in some cases turn it on, and in some cases they'll
forget, but it would be worse to insist that the cipherlist not match
any ciphers that are not supported by the other parameters.

> It depends on application, I think. For some applications it is good
> reason, for some - it is bad.

Yes, of course, but on the whole, the design rationale is sensible.

> > If you want a "lint" tool for configurations, by all means, that would
> > be a good idea. But, making cipherlists incompatible with libraries
> > that don't support every cipher on the list, is bad, one can't use
> > a single list whose elements include features from "future" releases
> 
> Why one would use such a setup for production system?

Because the configuration is rarely changed, and cipherlists are copied
cargo-cult style from release to release. I am not there at the deployment
of every SSL server, I just recommend a decently future-proof cipherlist,
and it tends to get enshrined as the "golden" cipherlist for a few years,
until someone asks me again. :-(

-- 
Viktor.


RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread David Schwartz

Mark Williams wrote:

> There is one added complication in that the protocol is a datagram
> protocol at a
> higher level (although it uses TCP).  I am concerned that the whole
> protocol could
> block if there is not enough data to encrypt a whole outgoing message
> but the peer cannot
> continue until it gets the message.

What do you mean by "not enough data to encrypt a whole outgoing message"?
The only way it can block is if each side is waiting for the other, and if
that happens, the application protocol is broken anyway. There is no way
this logic can cause one side to internally block.

The 'cork' logic only stops us from reading if we have already read data the
application has not processed yet. If the application does not process read
data, then it is broken, but we are not. The write queue logic only stops us
from accepting data from the application to send if we have unsent data. If
the other side does not read this data, then it is broken but we are not.

In fact, any application layered on top of TCP is broken if it cannot handle
a TCP implementation that permits only a single byte to be in flight at a
time. If it *ever* allows each side to insist on writing before reading at
the same time, it is broken.

On the off chance you do have to deal with a broken TCP-using application
(and you do all too often), just make sure your queues, in both directions
on both sides, are larger than the largest protocol data unit. (More
precisely, the amount of data both sides might try to write before reading
any data.)

DS





RE: ssl_read() hangs after wakeup from sleep in OSX 10.5.8

2009-10-29 Thread David Schwartz

Parimal Das wrote:

> Please suggest. 
> 1. What i should include in this code to correct this hang?

It depends on what your code should do in this case. Do you want to wait a
limited amount of time for the other side to reply? Or do you want to wait
possibly forever? Your current code specifically elects to wait possibly
forever, but then you complain when it waits possibly forever. If the user
should interrupt it, then this is sensible. If not, then it's not.

> 2. How to set read/write timeouts? 

Well, 'alarm' is the easiest way in toy code like this. You can reset the
'alarm' every time you receive a certain amount of data. It depends how you
want to handle cases like where the other side 'dribbles' data at you.
Decide how long is reasonable to wait, and code that.

But this is another reason that blocking socket operations are difficult to
use. In what realistic situation do you want to wait forever? So you make a
blocking operation and then, surprise, have to work around the annoying fact
that it blocks.

DS





RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Mark
> Mark Williams wrote:
> 
> > > 2) Let the two threads read and write to your own two
> > > independent queues and
> > > service the application side of the SSL connection with your
> > > own code to and from the read and write queues.
> 
> > Won't I still need to combine the reading and writing to 
> the SSL object
> > into a
> > single thread for this?  This is the bit I am having difficulty
> > visualising.
> 
> The data pump thread is more or less like this:
> 
> While (connection_is_alive)
> {
>  If (connection_is_not_corked_in)
>  {
>   SSL_read into temporary buffer.
>   If we got data:
>   {
>If a read thread is blocked, unblock it.
>If the receive queue is too full, set the 'corked_in' flag.
>   }
>   If we got a fatal error, mark the connection dead.
>  }
>  If(send queue not empty)
>  {
>   Try to send some data using SSL_write
>   Put back what we didn't send
>  }
>  If we made no forward progress, block (see notes)
> }
> Tear down the connection
> 
> The read thread acquires the queue mutex, blocks on the 
> condvar for data if
> desired, pulls data off the queue, and clears the corked_in 
> flag if it was
> set (assuming the queue is still not full), and signals the 
> data pump thread if it uncorked.
> 
> The write thread acquires the mutex, checks if the send queue is full,
> blocks on the condvar if it is, and signals the data pump 
> thread if the queue was empty.
> 
> The only trick left is the blocking logic in the data pump 
> thread. This is the hard part:
> 
> 1) If you have no outbound data pending, and the connection 
> is corked, block
> only on an internal signal. (Since you don't want to do I/O either way
> anyway.)
> 
> 2) If you have outbound data pending and the connection is 
> corked, block as
> directed by SSL_write. If it said WANT_READ, block on read. If it said
> WANT_WRITE, block on write.
> 
> 3) If you have no outbound data pending (and hence, did not 
> call SSL_write),
> and the connection is uncorked, block as directed in SSL_read.
> 
> 4) If you have outbound data pending, and the connection is 
> uncorked, block
> on the logical OR of the SSL_read result and the SSL_write 
> result (block for
> read on the socket if either one returned WANT_READ, block 
> for write if either returned WANT_WRITE).
> 
> Note that your data pump thread needs to block on a 'select' 
> or 'poll' type
> function but be unblocked when signaled. If necessary, add 
> one end of a pipe
> to the select/poll set and have your read/write threads write 
> a byte to that pipe to unblock the data pump thread.
> 
> This is from memory, but it should be basically correct.
> 
> By the way, I think only the logic in 4 is not obviously 
> correct. Here's the proof it's safe:
> 1) If nothing changed, and we block on the OR of both 
> operations, we will
> only unblock if one of those operations can make forward 
> progress. (We only
> unblock on X if some operation could make forward progress on 
> X, and nothing has changed since then.)
> 2) If something changed, then we already made some forward progress.
> So either way, we make forward progress in each pass of the 
> loop, which is the best you can hope for.

Thanks.  This will take me some time to digest.

There is one added complication in that the protocol is a datagram
protocol at a
higher level (although it uses TCP).  I am concerned that the whole
protocol could
block if there is not enough data to encrypt a whole outgoing message
but the peer cannot
continue until it gets the message.

Mark.



Re: TLS trust of a chain of certificates up to a root CA.Certificate Sign extenstion not set

2009-10-29 Thread Joe Orton
On Wed, Oct 28, 2009 at 06:51:02PM +0100, Dr. Stephen Henson wrote:
> On Wed, Oct 28, 2009, Mourad Cherfaoui (mcherfao) wrote:
> > I am not sure I understand why the client is broken? Did you mean that the
> > sign bit can be omitted if the client sends the entire chain of certificates
> > (except maybe the root) AND the server has the certificates chain as well?
> > Thanks.
> 
> My comment about it being broken (or more likely misconfigured) was nothing to
> do with the keyUsage extension. The SSL/TLS standards do not allow a client to
> just present the EE certificate: the whole chain has to be presented with
> the possible exception of the root.

Well, per the BUGS section in SSL_CTX_set_client_cert_cb it is nigh-on 
impossible for a client author to DTRT with OpenSSL because of the 
limitations of the API.

Regards, Joe


Re: ssl_read() hangs after wakeup from sleep in OSX 10.5.8

2009-10-29 Thread Parimal Das
Hello,

Here is my test code. I am downloading a file with https connection.
This is compiled as  $g++ -lssl -lcrypto sslShow.cpp.  on OS X 10.5.8
Using default OS X libs (libcrypto 0.9.7  and libssl 0.9.7)

When it has downloaded some 2MB data, I closed my laptop lid (OSX induced
sleep)
After 5 minutes when i open my laptop, the process hangs at the same place
as before.

I have reproduced the same with latest 0.9.8k version also.

Please suggest.
1. What i should include in this code to correct this hang?
2. How to set read/write timeouts?

Thanks a lot guys.
(the Test Code & Call Trace is as follows )

CALL TRACE===
Call graph:
811 Thread_2507
  811 start
811 main
  811 BIO_read
811 ssl_read
  811 ssl3_read_internal
811 ssl3_read_bytes
  811 ssl3_read_n
811 BIO_read
  811 read$UNIX2003
811 read$UNIX2003

MY TEST CODE =
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <iostream>
#include <cstring>
#include <cstdio>
#define MAX_PACKET_SIZE 1

int main() {

BIO * bio;
SSL * ssl;
SSL_CTX * ctx;

/* Initializing OpenSSL */
SSL_load_error_strings();

ERR_load_BIO_strings();
OpenSSL_add_all_algorithms();
SSL_library_init(); //mandatory and missing from some examples

ctx = SSL_CTX_new(SSLv23_client_method());

if (ctx == NULL) {
std::cout << "Ctx is null" << std::endl;
ERR_print_errors_fp(stderr);
}

//using a store from examples
if(! SSL_CTX_load_verify_locations(ctx,
"/Users/pd/workspace/openssl/TrustStore.pem", NULL))
{/* Handle failed load here */
std::cout << "Faild load verify locations" << std::endl;
}

bio = BIO_new_ssl_connect(ctx);
BIO_get_ssl(bio, & ssl);
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);

//replace with your own test server
BIO_set_conn_hostname(bio, "www.myDomain.com:https");

if(BIO_do_connect(bio) <= 0) {
std::cout<<"Failed connection" << std::endl;

} else {
std::cout<<"Connected" << std::endl;
}

if(SSL_get_verify_result(ssl) != X509_V_OK)
{
/* Handle the failed verification */
std::cout << "Failed get verify result " << std::endl;

fprintf(stderr, "Certificate verification error: %i\n",
SSL_get_verify_result(ssl));
//do not exit here (but some more verification would not hurt)
//because if you are using a self-signed certificate you will receive
//18 X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT, which is not an error
}

char *write_buf = "GET /downloads/goodApp.exe / HTTP/1.0\n\n";

if(BIO_write(bio, write_buf, strlen(write_buf)) <=
0){
if(! BIO_should_retry(bio)){

/* Handle failed write here */

}
/* Do something to handle the retry */

std::cout << "Failed write" << std::endl;
}

char buf[MAX_PACKET_SIZE];
int p;
char r[1024];

FILE *fp;
fp = fopen("something.abc", "a+");

for(;;){
p = BIO_read(bio, r, 1023);
if(p <= 0) break;
fwrite(r, 1, p, fp); /* binary-safe; fprintf("%s") would stop at NUL bytes */
}

fclose(fp);

std::cout << "Done reading" << std::endl;

/* To free it from memory, use this line */
ERR_print_errors_fp(stderr);
BIO_free_all(bio);

return 0;
}


On Thu, Oct 29, 2009 at 4:57 PM, David Schwartz wrote:

>
> Parimal Das wrote:
>
> > Its the second case Darry,
> > Here the 'sleep' is Operating System Sleep mode induced by closing the
> lid
> of laptop.
> > After opening the laptop, when the system wakes up,
> > My application is always hanging at the same place.
>
> Bug is in your code. It is doing what you asked it do -- waiting up to
> forever for data from the other side. The other side will never send
> anything because it has long forgotten about the connection. Your
> application will never send anything because it is blocked in a read
> function. TCP and UDP will do the same thing if you call 'read' or 'recv'
> and block for data that will never arrive.
>
> DS
>
>
>
>



-- 
--
Warm Regards,
Parimal Das


RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread David Schwartz

Mark Williams wrote:

> > 2) Let the two threads read and write to your own two
> > independent queues and
> > service the application side of the SSL connection with your
> > own code to and from the read and write queues.

> Won't I still need to combine the reading and writing to the SSL object
> into a
> single thread for this?  This is the bit I am having difficulty
> visualising.

The data pump thread is more or less like this:

While (connection_is_alive)
{
 If (connection_is_not_corked_in)
 {
  SSL_read into temporary buffer.
  If we got data:
  {
   If a read thread is blocked, unblock it.
   If the receive queue is too full, set the 'corked_in' flag.
  }
  If we got a fatal error, mark the connection dead.
 }
 If(send queue not empty)
 {
  Try to send some data using SSL_write
  Put back what we didn't send
 }
 If we made no forward progress, block (see notes)
}
Tear down the connection

The read thread acquires the queue mutex, blocks on the condvar for data if
desired, pulls data off the queue, and clears the corked_in flag if it was
set (assuming the queue is still not full), and signals the data pump thread
if it uncorked.

The write thread acquires the mutex, checks if the send queue is full,
blocks on the condvar if it is, and signals the data pump thread if the
queue was empty.

The only trick left is the blocking logic in the data pump thread. This is
the hard part:

1) If you have no outbound data pending, and the connection is corked, block
only on an internal signal. (Since you don't want to do I/O either way
anyway.)

2) If you have outbound data pending and the connection is corked, block as
directed by SSL_write. If it said WANT_READ, block on read. If it said
WANT_WRITE, block on write.

3) If you have no outbound data pending (and hence, did not call SSL_write),
and the connection is uncorked, block as directed in SSL_read.

4) If you have outbound data pending, and the connection is uncorked, block
on the logical OR of the SSL_read result and the SSL_write result (block for
read on the socket if either one returned WANT_READ, block for write if
either returned WANT_WRITE).

Note that your data pump thread needs to block on a 'select' or 'poll' type
function but be unblocked when signaled. If necessary, add one end of a pipe
to the select/poll set and have your read/write threads write a byte to that
pipe to unblock the data pump thread.

This is from memory, but it should be basically correct.

By the way, I think only the logic in 4 is not obviously correct. Here's the
proof it's safe:
1) If nothing changed, and we block on the OR of both operations, we will
only unblock if one of those operations can make forward progress. (We only
unblock on X if some operation could make forward progress on X, and nothing
has changed since then.)
2) If something changed, then we already made some forward progress.
So either way, we make forward progress in each pass of the loop, which is
the best you can hope for.

DS





Re: Generating sect163k1 key pairs

2009-10-29 Thread Jeffrey Walton
Hi Doug,

> After extracting the private key from the testkey.pem file and putting it into
> the vendor's tool file format, the vendor tool generated digest ends up 
> looking
> like:
> E39C9EEB4A60BFAF93235B376E9E54883C127BC40300
> F4760E34AC2ECB484B2DFF06E87113C9F1F9F99F0200
Ah! Now I see where the question of padding originated. I can't
explain it other than to speculate: perhaps the vendor's hardware can
be used for 163 and others such as 193 and 233. And maybe the
programmer dutifully dumps the latched value, even though the tail is
not used for a 163 curve.

> I realize that these will be different as they are seeded by different random
> numbers
ECDSA uses a random, per-message value (usually 'k' in the
literature). So two signatures on the same message using the same key
will always be different. If the signatures are not different,
something is most likely broken.

> However, digests produced by the vendor's tool consistently have data
> that appears to be a X-Y coordinate...
> E39C9EEB4A60BFAF93235B376E9E54883C127BC40300
> F4760E34AC2ECB484B2DFF06E87113C9F1F9F99F0200
If the values are the output of the signature function, I believe that
would make them R and S, which are residues of Q. In earlier versions
of DSA, Q is the 160 bit value. (The new and improved DSA, specified
in FIPS 186-3, increases the size of Q (et al)).

Here's another guess: The values almost look like byte reversed ASN.1
encodings. But it appears the length octets are wrong (I did not run
them through a decoder). Or maybe some bastard BER-ish style: Write
the ASN.1 tag (the 0x03 for the first, 0x02 for the second), discard
the length octets, and then lay out the content octets.

Personally, I prefer IEEE formatting - it is always 40 bytes.

Jeff

On Wed, Oct 28, 2009 at 5:32 PM, Doug Bailey  wrote:
> Thanks much for the explanations on how this data is laid out.
>
> My first attempts at using the key I generated on my hardware platform were
> unsuccessful.
>
> Stepping back, I thought I would use openssl to create a sect163k1 encrypted
> SHA1 digest of my test file and then verify it.  I have been able to do this
> successfully executing the following commands:
>
> sudo openssl ecparam -genkey -name sect163k1 -out testkey.pem
> openssl ec -in testkey.pem -pubout -out testkeypub.pem
> openssl dgst -ecdsa-with-SHA1 -sign testkey.pem -out testdigest lockex.bin
> openssl dgst -ecdsa-with-SHA1  -verify testkeypub.pem -signature testdigest 
> lockex.bin
>
> At this point I tried to use the openssl generated key to generate an 
> encrypted
> digest of my test file using a tool provided by my hardware vendor.  (A
> derivative of the Miracl ecsign program.)
>
> After extracting the private key from the testkey.pem file and putting it into
> the vendor's tool file format, the vendor tool generated digest ends up 
> looking
> like:
> E39C9EEB4A60BFAF93235B376E9E54883C127BC40300
> F4760E34AC2ECB484B2DFF06E87113C9F1F9F99F0200
>
> The digest generated by openssl looks like:
> $ hexdump testdigest
> 000 2e30 1502 8101 6c91 034a 1613 8b89 a2b9
> 010 d691 d3d0 dd7d 2c7b 023e 0315 24c9 9a3c
> 020 8042 342c cf41 cec6 057b a830 f1fc 0349
>
> I realize that these will be different as they are seeded by different random
> numbers.  However, digests produced by the vendor's tool consistently have 
> data
> that appears to be a X-Y coordinate (i.e. 0's at the same place in the digest:
> halfway through and at the end) while the digest produced by openssl is truly
> random.
>
> Am I misreading this or is this significant?  Is the digest generated by 
> openssl
> encoded in some sort of format or is it truly random as I expect?
>
> Thanks
> Doug Bailey
>


RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread Mark
> Mark Williams wrote:
> 
> > I can think of one.  In the near future I will need to add 
> SSL support
> > to a
> > legacy application which uses two threads to read/write from/to a
> > socket.
> > If SSL supported this it would make my life much easier.  As the
> > situation
> > stands I am not sure how to tackle this project.
> 
> There are two obvious, simple ways:
> 
> 1) Have another application that does the SSL work, you can even use
> existing ssl proxies. Then you don't have to change the IO in 
> your pump.

The client wants the whole thing contained in one library so I don't
think this
one is an option.
 
> 2) Let the two threads read and write to your own two 
> independent queues and
> service the application side of the SSL connection with your 
> own code to and from the read and write queues.

Won't I still need to combine the reading and writing to the SSL object
into a
single thread for this?  This is the bit I am having difficulty
visualising.

Are there any samples around that do this?

Mark.



RE: Subject Issuer Mismatch Bug!!

2009-10-29 Thread David Schwartz

Daniel Marschall:

> Hello.
> 
> I am not searching bugs in my code. I have a certificate and a CRL.
> And the functionality -issuer_checks is buggy. My cert and CRL have
> exactky the same DN as issuer.

What is the bug then? All you've reported so far is:

1) When you compare using exact string compares, you get nonsensical
results.

2) When you enable informational messages, you get accurate informational
messages.

DS





RE: Is full-duplex socket use possible with OpenSSL?

2009-10-29 Thread David Schwartz

Mark Williams wrote:

> I can think of one.  In the near future I will need to add SSL support
> to a
> legacy application which uses two threads to read/write from/to a
> socket.
> If SSL supported this it would make my life much easier.  As the
> situation
> stands I am not sure how to tackle this project.

There are two obvious, simple ways:

1) Have another application that does the SSL work, you can even use
existing ssl proxies. Then you don't have to change the IO in your pump.

2) Let the two threads read and write to your own two independent queues and
service the application side of the SSL connection with your own code to and
from the read and write queues.

DS





RE: ssl_read() hangs after wakeup from sleep in OSX 10.5.8

2009-10-29 Thread David Schwartz

Parimal Das wrote:

> Its the second case Darry, 
> Here the 'sleep' is Operating System Sleep mode induced by closing the lid
of laptop.
> After opening the laptop, when the system wakes up,
> My application is always hanging at the same place.

Bug is in your code. It is doing what you asked it to do -- waiting up to
forever for data from the other side. The other side will never send
anything because it has long forgotten about the connection. Your
application will never send anything because it is blocked in a read
function. TCP and UDP will do the same thing if you call 'read' or 'recv'
and block for data that will never arrive.

DS





RE: TLS trust of a chain of certificates up to a root CA. Certificate Sign extenstion not set

2009-10-29 Thread Eisenacher, Patrick
Hi Mourad,

-Original Message-
> From: On Behalf Of Mourad Cherfaoui
> Sent: Wednesday, October 28, 2009 6:23 AM
> To: openssl-users@openssl.org
> Subject: TLS trust of a chain of certificates up to a root CA. Certificate
> Sign extenstion not set

> I have a chain of certificates C->B->A->RootCA. The TLS client only presents C
> during the TLS handshake. RootCA has the Certificate Sign extension set but 
> not
> B and A.

SSL requires the client to send all certificates necessary to verify its 
identity. Only when the client knows that the server has access to part of its 
certificate chain is it allowed to send fewer.

If you have control over the client, you should configure it to send its whole 
certificate chain. If you don't have control over the client, you need to add 
all certificates of the client's certificate chain that the client doesn't send 
to the server's truststore.

This is your first requirement. Fix this first, then move on to the extension 
topic.

> The TLS server fails the TLS handshake because of the absence of the
> Certificate Sign extension in B and A.

If the server can't build the client's certificate chain to a cert in its 
truststore, verification of the client's identity fails, and as such the 
handshake.

> My first question: if the TLS server has the entire chain of certificates
> B->A->RootCA in its truststore, is it correct to assume that the Certificate
> Sign extension is not required in B and A?

No, your assumption is wrong.

> My second question: by default the
> TLS server will fail the TLS handshake because of the absence of the
> Certificate Sign extension. Is there a recommended way to disables the check
> for this extension in the TLS handshake?

Once the server has constructed the client's certificate chain to one of its 
own trusted certificates, it starts verifying the certificate chain according 
to some verification profile. OpenSSL uses the PKIX profile as specified in 
RFC 5280. PKIX requires CA certificates to have the keyUsage extension, and it 
should be marked critical. As such, the extension needs to have at least the 
value keyCertSign. Check the RFC for more info about the keyUsage extension.

4 options come to mind how to solve your problem:

- fix your configuration & your certificates. This is the recommended way.

- use X.509v1 certificates - they don't contain extensions

- don't use SSL with client authentication, but then your client will probably 
fail to verify the server's identity as well, if the server is certified by a 
CA whose certificate is missing the keyUsage extension.

- configure the server to move on with the handshake even in the case of a 
failed verification of the client's identity, i.e. request client 
authentication but grant access to anonymous clients as well


HTH,
Patrick Eisenacher


Re: Running SSL server without temporary DH parameters.

2009-10-29 Thread Victor B. Wagner
On 2009.10.28 at 14:56:54 -0400, Victor Duchovni wrote:

> On Wed, Oct 28, 2009 at 09:09:59PM +0300, Victor B. Wagner wrote:
> 
> > > > But for some setups, especially in OpenSSL 1.0, which supports EC
> > > > ciphersuites, dh parameters are not neccessary.
> > > 
> > > This is not entirely accurate, one still needs to designate an ECDH
> > > curve for ECDHE ciphers. Postfix code for this:
> > 
> > curve is not DH parameters. It is quite different dataset.
> > (and often expressed as just OID, not actual curve data).
> 
> Yes, of course, in a strictly technical sense. From a user perspective,
> however, both are the same sort of thing, something one needs to configure
> to enable kEDH or kEECDH ciphers. When neither set of parameters is provided,
> one gets just

We are talking about a server. The user who configures a server is a technician.

>   - kRSA, kECDHr, kECDHe (no forward secrecy)
>   - kPSK (no significant adoption)

There is also GOST. And it is what I'm concerned with.

> > Question is - should we make user immediately aware of this restriction
> > during parsing the configuration?
> 
> Not sure what you mean by "during parsing".

It means "before server starts and begins to listen on the socket".

> > If user specifies DSA key only it is fatal.
> > If user specifies RSA key only half of otherwise available suites
> > are left.
> 
> It is OK if the user cipher selection string designates more ciphers
> (e.g. DEFAULT) than are actually compatible with the available
> certificates and/or parameters, so long as a non-empty
> set of usable cipher-suites remains.

But above you are talking about lost forward secrecy.

> I strongly disagree. Most users (I've talked to users trying to select
> appropriate cipher lists) have no idea what the various ciphers are,
> and are much better off with either DEFAULT or DEFAULT:!SSLv2:!EXPORT:!LOW
> than anything they are likely to come up with on their own.

If users use DEFAULT, it means that they haven't EXPLICITLY specified
kEDH ciphersuites. They've specified "anything that is appropriate".
Unfortunately, it is quite hard to distinguish between these two
situations with the current libssl API.


> In both of the "sensible" cipherlists above, there are a lot more ciphers
> than typically compatible with the rest of the configuration. This is fine.
> 
> > And obvoisly a fatal configuration error if some of these ciphersuites
> > were explicitely specified by user in the cipher list.
> 
> This would be bad. It is specifically documented that unknown cipherlist
> elements are ignored, and for good reason.

It depends on application, I think. For some applications it is good
reason, for some - it is bad.

> If you want a "lint" tool for configurations, by all means, that would
> be a good idea. But, making cipherlists incompatible with libraries
> that don't support every cipher on the list, is bad, one can't use
> a single list whose elements include features from "future" releases

Why would one use such a setup for a production system?



"djgppbin/perl.exe" not found, etc. error

2009-10-29 Thread Ersin Akinci
Hi all,

I'm trying to compile OpenSSL 0.9.8k in MS-DOS 7.1 with DJGPP and I
keep getting errors stating that various utilities cannot be found
under the "djgppbin" directory.  All of my environment variables are
correctly set and ./Configure runs fine, but this strange error keeps
coming up.  Perhaps the scripts are parsing slashes incorrectly (i.e.,
should be "djgpp/bin", not "djgppbin")?

Virtually the same error happened to someone else, but that person
also had something else wrong with their set-up, so the "djgppbin"
error was never resolved:
http://marc.info/?l=openssl-users&m=120945340207568&w=2

I can't just copy and paste my exact error because it's on a
non-networked DOS computer, but here's the relevant portion from the
other guy's e-mail:

> I have also the fallowing error messages :
>  c:\djgpp\tmp/dj40: DJGPPbin/ranlib.exe: command not found
> + gcc -o openssl.exe -DMONOLITH -I.. -I../include -DOPENSSL_SYSNAME_MSDOS
> -DOPENSSL_NO_KRB5 -IWATT_ROOT/inc -DTERMIOS -DL_ENDIAN -fomit-frame-pointer
> -O2 -Wall openssl.o verify.o asn1pars.o req.o dgst.o dh.o dhparam.o enc.o
> passwd.o gendh.o errstr.o ca.o pkcs7.o crl2p7.o crl.o rsa.o rsautl.o dsa.o
> dsaparam.o x509.o genrsa.o gendsa.o s_server.o s_client.o speed.o s_time.o
> apps.o s_cb.o s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o
> pkcs12.o pkcs8.o spkac.o smime.o rand.o engine.o ocsp.o prime.o ../libssl.a
> ../libcrypto.a -LWATT_ROOT/lib -lwatt
>  c:\djgpp\tmp/dj50: DJGPPbin/perl.exe: command not found
> make.exe[1]: [openssl.exe] Error 127 (ignored)
> + gcc -o bntest.exe -I.. -I../include -DOPENSSL_SYSNAME_MSDOS
> -DOPENSSL_NO_KRB5 -IWATT_ROOT/inc -DTERMIOS -DL_ENDIAN -fomit-frame-pointer
> -O2 -Wall bntest.o ../libssl.a ../libcrypto.a -LWATT_ROOT/lib -lwatt
>  + gcc -o ectest.exe -I.. -I../include -DOPENSSL_SYSNAME_MSDOS
> -DOPENSSL_NO_KRB5 -IWATT_ROOT/inc -DTERMIOS -DL_ENDIAN -fomit-frame-pointer
> -O2 -Wall ectest.o ../libssl.a ../libcrypto.a -LWATT_ROOT/lib -lwatt

Etc.

Could someone help me out?

Thanks,
Ersin