I have written a program that needs to run on both Solaris 2.6 and HP-UX
11.0.  Here is what I do:

        1.      I create the sockets in both the client and server (using socket()).
        2.      I do the accept and connect, and then the SSL_connect and SSL_accept
                between the client and the server.
        3.      I set the socket (outside of the SSL structure) to be non-blocking
                (using fcntl( F_NDELAY )), because I don't know the size of the
                messages I am writing through the socket connection and so need to
                loop and read until 0 bytes are read.
        4.      I loop and call SSL_read() on the server when the client writes to
                the socket; when there is no more to read, SSL_read() should return
                -1 and errno should be set to EWOULDBLOCK, which tells me I have the
                whole message. (A rough sketch of this is below.)
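
Roughly, the relevant server-side code looks like this. This is a simplified
sketch, not my real code: fd and ssl stand for the accepted socket and the SSL
object already attached to it with SSL_set_fd()/SSL_accept(), and I've written
O_NONBLOCK where my code sets the F_NDELAY/O_NDELAY flag:

        #include <errno.h>
        #include <fcntl.h>
        #include <openssl/ssl.h>

        /* fd: the accepted TCP socket; ssl: the SSL object attached to fd */
        static int drain_message(int fd, SSL *ssl, char *buf, int buflen)
        {
            int total = 0, n;

            /* step 3: make the underlying descriptor non-blocking */
            fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

            /* step 4: keep reading until the read would block */
            for (;;) {
                n = SSL_read(ssl, buf + total, buflen - total);
                if (n > 0) {
                    total += n;    /* another chunk of the message */
                    continue;
                }
                if (n < 0 && errno == EWOULDBLOCK)
                    break;         /* no more data: message complete (Solaris) */
                break;             /* n == 0: what I see on HP-UX when data runs out */
            }
            return total;
        }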

Well, number 4 works fine on Solaris 2.6, but when I do this on HP-UX 11.0
(both client and server programs are built with gcc 2.95.2), SSL_read()
returns 0 instead of -1 when there is no more to read from the socket. In the
non-SSL world a return of 0 means that the other side disconnected, which of
course screws me all up.

Has anyone seen or heard of this kind of difference between HP-UX 11.0 and
Solaris 2.6? If so, can anyone help me figure out whether I'm misusing SSL
(by setting F_NDELAY on the socket outside the SSL structure) or whether
SSL_read() and SSL_write() behave differently from normal read() and write()?
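
In case it clarifies the question, below is the kind of portable check I am
hoping exists (ssl and buf as in the sketch above). I believe SSL_get_error()
is the documented way to classify SSL_read()'s return value rather than
looking at errno directly, but whether that is what I should be doing here is
exactly what I am unsure about:

        int n = SSL_read(ssl, buf, sizeof(buf));
        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_NONE:          /* n > 0: got application data          */
            break;
        case SSL_ERROR_WANT_READ:     /* would block: no more data for now    */
            break;
        case SSL_ERROR_ZERO_RETURN:   /* the peer performed a clean shutdown  */
            break;
        case SSL_ERROR_SYSCALL:       /* I/O error: check errno (EWOULDBLOCK?) */
            break;
        default:                      /* some other SSL error                 */
            break;
        }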

Thanks,
Neil Kessler
