large upload issue

2011-12-09 Thread MK
Hi!  I'm new to ssl and am having some problems.  I'm working on an
http server; the interface is in perl and the internals are in perl and
C; the SSL module is in C.

Everything works fine except for large file uploads (using
multipart/form-data), where I lose a *variable* fraction of a percent
of the bytes (eg, 1-10 bytes for 20 MB) *in the middle* of the transfer.
Ie, the bytes read do not match the content-length, but while the
multipart boundaries at beginning and end are intact, the file written
out is too short.

The only errors I receive from OpenSSL are WANT_READ or WANT_WRITE,
which I handle like EAGAIN (the socket is non-blocking).  The code which
handles the upload is identical for both SSL and non-SSL connections,
except for the read function below, but there is no such problem with
non-SSL transfers.

The read function uses some of the perl API and is intended to provide
the same functionality as perl's sysread (this is why the rest of the
code is identical to the non-SSL upload):

SV *sysread (SV *objref, SV *buf, int len) {
    // retrieve SSL object from perl
    HV *self = (HV*)SvRV(objref);
    SV **field = hv_fetch(self, "ssl", 3, 0);

    if (!field) return newSV(0);

    SSL *ssl = (SSL*)SvIV(*field);

    // set up buffer and read
    unsigned char data[len];
    ERR_clear_error();
    int bytes = SSL_read(ssl, data, len);

    // error handling
    if (bytes < 0) {
        int err = SSL_get_error(ssl, bytes);
        if (err == SSL_ERROR_WANT_READ
            || err == SSL_ERROR_WANT_WRITE) err = EAGAIN;
        else err *= -1;
        // the error is made negative to prevent collision with EAGAIN
        hv_store(self, "readerr", 7, newSViv(err), 0);
        return newSV(0); // perl undef
    }

    // return buffer contents to perl
    sv_setpvn(buf, data, bytes);
    return newSViv(bytes);
}

As stated, the only error which actually occurs is the WANT_READ or
WANT_WRITE.

I can also post the ctx setup*, etc, tho again, everything works fine
except for large uploads.  Large downloads are fine.  My test
client is Firefox 7 over a slow wireless connection; the loss is smaller
on local loopback but still occurs.  What have I missed about this?

Thanks -- MK

* I use SSL_set_fd and not a BIO.

-- 
Enthusiasm is not the enemy of the intellect. (said of Irving Howe)
The angel of history[...]is turned toward the past. (Walter Benjamin)

__
OpenSSL Project                 http://www.openssl.org
User Support Mailing List       openssl-users@openssl.org
Automated List Manager          majord...@openssl.org


Re: large upload issue

2011-12-09 Thread Michael S. Zick
On Fri December 9 2011, MK wrote:
 Hi!  I'm new to ssl and am having some problems.  I'm working on an
 http server; the interface is in perl and the internals are in perl and
 C; the SSL module is in C.
 
 Everything works fine except for large file uploads (using
 multipart/form-data), where I lose a *variable* fraction of a percent
 of the bytes (eg, 1-10 bytes for 20 MB) *in the middle* of the transfer.
 Ie, the bytes read do not match the content-length, but while the
 multipart boundaries at beginning and end are intact, the file written
 out is too short.
 
 The only errors I receive from openssl are WANT_READ or WANT_WRITE,
 which I handle like EAGAIN (the socket is non-block).   The code which
 handles the upload is identical for both SSL and non-SSL connections,
 except for the read function below, but there is no such problem with
 non-SSL transfers.
 
 The read function uses some of the perl API and is intended to provide
 the same functionality as perl's sysread (this is why the rest of the
 code is identical to the non-SSL upload):
 
 SV *sysread (SV *objref, SV *buf, int len) {
 // retrieve SSL object from perl
 HV *self = (HV*)SvRV(objref);
 SV **field = hv_fetch(self, "ssl", 3, 0);
 
 if (!field) return newSV(0);
 
 SSL *ssl = (SSL*)SvIV(*field);
 
 // set up buffer and read
 unsigned char data[len];
 ERR_clear_error();
 int bytes = SSL_read(ssl, data, len);
 
 // error handling
 if (bytes < 0) {
 int err = SSL_get_error(ssl, bytes);
 if (err == SSL_ERROR_WANT_READ 
 || err == SSL_ERROR_WANT_WRITE) err = EAGAIN;
 else err *= -1; 
 // the error is made negative to prevent collision with EAGAIN
 hv_store(self, "readerr", 7, newSViv (err), 0);
return newSV (0);// perl undef
 }
 
 // return buffer contents to perl
 sv_setpvn(buf, data, bytes);
 return newSViv(bytes);
 }
 
 As stated, the only error which actually occurs is the WANT_READ or
 WANT_WRITE.
 
 I can also post the ctx setup*, etc, tho again, everything works fine
 except for large uploads.   Large downloads are fine.  My test
 client is firefox 7 over a slow wireless connection; the loss is less
 on local loopback but still occurs. What have I missed about this?


Evidently your connection is doing a renegotiation during the transfer.
You missed:
http://stackoverflow.com/questions/3952104/how-to-handle-openssl-ssl-error-want-read-want-write-on-non-blocking-sockets

Among a few other zillion posts that google can find on the subject.

Mike 
 Thanks -- MK
 
 * I use SSL_set_fd and not a BIO.
 




Re: large upload issue

2011-12-09 Thread Jakob Bohm
Hi, nice code, I spot a few questionable details, but only Warn#5 might 
cause missing bytes.


On 12/9/2011 1:28 PM, MK wrote:

Hi!  I'm new to ssl and am having some problems.  I'm working on an
http server; the interface is in perl and the internals are in perl and
C; the SSL module is in C.

Everything works fine except for large file uploads (using
multipart/form-data), where I lose a *variable* fraction of a percent
of the bytes (eg, 1-10 bytes for 20 MB) *in the middle* of the transfer.
Ie, the bytes read do not match the content-length, but while the
multipart boundaries at beginning and end are intact, the file written
out is too short.

The only errors I receive from openssl are WANT_READ or WANT_WRITE,
which I handle like EAGAIN (the socket is non-block).   The code which
handles the upload is identical for both SSL and non-SSL connections,
except for the read function below, but there is no such problem with
non-SSL transfers.

The read function uses some of the perl API and is intended to provide
the same functionality as perl's sysread (this is why the rest of the
code is identical to the non-SSL upload):

SV *sysread (SV *objref, SV *buf, int len) {
// retrieve SSL object from perl
 HV *self = (HV*)SvRV(objref);
  SV **field = hv_fetch(self, "ssl", 3, 0);

 if (!field) return newSV(0);
Warn#1: It is probably more efficient to return PL_sv_undef, avoiding an
allocation in a potential memory-full situation


 SSL *ssl = (SSL*)SvIV(*field);

// set up buffer and read
 unsigned char data[len];

Bug #2: must be allocated as [len+1] because of Bug#7 below.

Warn#3: It is probably more efficient to do
   SvGrow(buf, len + 1);
   unsigned char *data = SvPV_nolen(buf);


 ERR_clear_error();
 int bytes = SSL_read(ssl, data, len);

// error handling
  if (bytes < 0) {
 int err = SSL_get_error(ssl, bytes);
 if (err == SSL_ERROR_WANT_READ
 || err == SSL_ERROR_WANT_WRITE) err = EAGAIN;
 else err *= -1;
Warn#4: The calling perl code may need to distinguish between 
SSL_ERROR_WANT_READ
   and SSL_ERROR_WANT_WRITE, because the needed select() call 
will be different
Warn#5: Remember to ensure the perl code passes the exact same 
parameters on retry!

 // the error is made negative to prevent collision with EAGAIN
  hv_store(self, "readerr", 7, newSViv (err), 0);
return newSV (0);// perl undef
Warn#6: It is probably more efficient to return PL_sv_undef, avoiding an
allocation in a potential memory-full situation

 }
Bug#7: Perl requires a 0 after the end of a string, even if it holds 
binary data, so add this line


data[len] = 0;

// return buffer contents to perl
 sv_setpvn(buf, data, bytes);

Bug#8: Note that if bytes==0 (a valid situation), then sv_setpvn() will
act like sv_setpvn(buf, data, strlen(data))
So in addition to Bug#7 above, bytes==0 could turn into
a variable number of random bytes getting put in buf.

Warn#9:If you did the change in Warn#3 above, change sv_setpvn() to

   SvCUR_set(buf, bytes);

 return newSViv(bytes);
}

As stated, the only error which actually occurs is the WANT_READ or
WANT_WRITE.

I can also post the ctx setup*, etc, tho again, everything works fine
except for large uploads.   Large downloads are fine.  My test
client is firefox 7 over a slow wireless connection; the loss is less
on local loopback but still occurs. What have I missed about this?

Thanks -- MK

* I use SSL_set_fd and not a BIO.





Re: large upload issue

2011-12-09 Thread MK
On Fri, 9 Dec 2011 07:55:07 -0600
Michael S. Zick open...@morethan.org wrote:

 Evidently your connection is doing a renegotiation during the
 transfer. You missed:
 http://stackoverflow.com/questions/3952104/how-to-handle-openssl-ssl-error-want-read-want-write-on-non-blocking-sockets
 
 Among a few other zillion posts that google can find on the subject.

What makes you believe I am not handling this correctly?  If the
call returns WANT_WRITE or WANT_READ, it gets called again with
exactly the same parameters, which is exactly what that and all those
other zillion posts recommend.  This is why I set the err to EAGAIN,
because the same thing must be done with a regular non-blocking socket.

I've even tried using a global buffer in place of the stack one, just
to be sure the repeated call really uses exactly the same args, which
is in the man page -- this did not make any difference.

-- 
Enthusiasm is not the enemy of the intellect. (said of Irving Howe)
The angel of history[...]is turned toward the past. (Walter Benjamin)



Re: large upload issue

2011-12-09 Thread Michael S. Zick
On Fri December 9 2011, MK wrote:
 On Fri, 9 Dec 2011 07:55:07 -0600
 Michael S. Zick open...@morethan.org wrote:
 
  Evidently your connection is doing a renegotiation during the
  transfer. You missed:
  http://stackoverflow.com/questions/3952104/how-to-handle-openssl-ssl-error-want-read-want-write-on-non-blocking-sockets
  
  Among a few other zillion posts that google can find on the subject.
 
 What makes you believe I am not handling this correctly?  If the
 call returns WANT_WRITE or WANT_READ,  it gets called again with
 exactly the same parameters, which is exactly what that and all those
 other zillion posts recommend.  This is why I set the err to EAGAIN,
 because the same thing must be done with a regular non-blocking socket.


Because the write action might return __either__ want_read or want_write
and the read action might return __either__ want_read or want_write. 

Just because the most current action was a write does not mean you
can presume the return was want_write - it might be want_read.

The same is true if the most current action was a read.

The OpenSSL layer is a state machine; you can't turn a four-state
machine into a two-state machine by folding the distinctive returns
into a single return, at least not if you expect it to work.  ;-)

Mike
 I've even tried using a global buffer in place of the stack one, just
 to be sure the repeated call really uses exactly the same args, which
 is in the man page -- this did not make any difference.
 




MD5 slower since 1.0.0?

2011-12-09 Thread Marius Peschke
Hi Community,
I am a developer for a network-device manufacturer in Germany.
To enhance the en-/decryption speed on our embedded devices (for this
explanation I will use a PowerPC MPC8314), I upgraded OpenSSL to 1.0.0e from
0.9.8i.
I was very happy that the AES speed improved by roughly 10-15% without even
using the new PowerPC assembler optimizations for AES (I couldn't get them to
work properly, but this is a different topic).
(Measurements made by openssl speed aes-128-cbc aes-256-cbc)
No Crypto-hardware was used.

(Numbers are 1000s of bytes per second; columns are block sizes of 16, 64,
256, 1024 and 8192 bytes.)

OpenSSL 1.0.0e 6 Sep 2011
aes-128 cbc   11085.48  12129.82  12382.71  12483.78  12414.63
aes-256 cbc    8912.24   9502.18   9735.55   9782.76   9756.72

OpenSSL 0.9.8i 15 Sep 2008
aes-128 cbc    9707.48  10430.82  10618.17  10647.76  10549.85
aes-256 cbc    8080.35   8510.50   8635.44   8657.24   8578.82

Sadly I had to find that my MD5 speed dropped by roughly 20-25%, again
without using assembler optimization. (Measurements made by openssl speed md5)

OpenSSL 1.0.0e 6 Sep 2011
md5             714.68   2682.57   9292.63  24498.58  45192.34

OpenSSL 0.9.8i 15 Sep 2008
md5             886.73   3305.32  11176.01  27484.39  45989.39

To check whether the dropping MD5 speed was caused only by my device, I
also tested the MD5 speed on my Core i5 Ubuntu 11.04 machine. I used
./config no-asm no-hw and then make install.
But even on my Linux box the MD5 speed dropped from:

OpenSSL 0.9.8i 15 Sep 2008
built on: Fri Dec  9 10:18:39 CET 2011
options:bn(64,64) md2(int) rc4(ptr,int) des(idx,cisc,16,int) aes(partial) 
idea(int) blowfish(ptr2) 
compiler: gcc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 
-DL_ENDIAN -DTERMIO -O3 -Wall -DMD32_REG_T=int
md5  51627.43k   154049.41k   347742.01k   505215.23k   581588.16k

to: 
OpenSSL 1.0.0e 6 Sep 2011
built on: Fri Dec  9 10:07:25 CET 2011
options:bn(64,64) rc4(ptr,int) des(idx,cisc,16,int) aes(partial) idea(int) 
blowfish(idx) 
compiler: gcc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H 
-Wa,--noexecstack -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DMD32_REG_T=int
md5   29098.42k    97613.06k   260005.76k   450710.35k   575437.31k

I also tested with OpenSSL 1.0.0 29 Mar 2010 and OpenSSL 0.9.8r 8 Feb 2011;
the 1.x.x version was always the slower one.

So now I wonder why the speed of MD5 dropped so much.

Hopefully someone can give me insight on this.

Sincerely,
Marius Peschke


I don't know if attachments work, so I will post the table as text with
tabs.
Also a link to the ods file: http://www.sendspace.com/file/jd8mx1

(Numbers are 1000s of bytes per second; columns are block sizes of 16, 64,
256, 1024 and 8192 bytes. The change rows give the relative difference
between the versions: negative means 1.0.0e is faster, positive means
0.9.8i is faster.)

aes-128-cbc (network device, PPC)
                     16         64        256       1024       8192
PPC-1.0.0e     11064.49   12140.69   12422.89   12503.41   12411.11
PPC-0.9.8i      9760.35   10440.11   10626.10   10661.79   10605.18
change (%)       -13.36     -16.29     -16.91     -17.27     -17.03

aes-256-cbc (network device, PPC)
PPC-1.0.0e      8921.20    9560.90    9761.63    9715.21    9727.82
PPC-0.9.8i      8064.89    8521.63    8646.02    8664.67    8529.88
change (%)       -10.62     -12.20     -12.90     -12.12     -14.04

aes-128-cbc (Linux, x86-64)
x86-64-1.0.0e  213323.84  222858.19  223810.70  225289.93  225425.21
x86-64-0.9.8i  190333.68  207542.29  208195.90  209964.18  210956.33
change (%)        -12.08      -7.38      -7.50      -7.30      -6.86

aes-256-cbc (Linux, x86-64)
x86-64-1.0.0e  165721.82  169828.22  170073.13  170700.12  171155.26
x86-64-0.9.8i  153256.83  160208.50  161017.75  161599.53  163773.80
change (%)         -8.13      -6.00      -5.62      -5.63      -4.51

MD4 (PPC)
PPC-1.0.0e       892.37    3268.94   11430.01   30524.14   58415.32
PPC-0.9.8i      1301.05    4803.64   15950.70   37656.63   61056.64
change (%)        31.41      31.95      28.34      18.94       4.33

MD5 (PPC)
PPC-1.0.0e       712.89    2677.96    9119.80   24131.56   45450.03
PPC-0.9.8i       977.39    3629.45   12093.43   28810.73   47161.25
change (%)        27.06      26.22      24.59      16.24       3.63

x86-64-1.0.0e   33839.62

Re: CA chain file print text

2011-12-09 Thread gkout

Hi Steve,

Yes that did the job. Fortunately I only have 3-4 CAs in the chain so the
file size is relatively small.

Thank you for the valuable tip.

Cheers,
George



Dr. Stephen Henson wrote:
 
 On Thu, Dec 08, 2011, gkout wrote:
 
 
 Hello everybody,
 
 Nice to find you. My first post in the forum is about printing the text
 of all CA certificates in a chain file.
 
 openssl x509 -text -noout -in CA_chain_file will not do the job as it
 only
 prints the first cert in the chain and the rest seem to be ignored.
 
 Is this an openssl command limitation? 
 Do I need a script to print all the certificates in the chain?
 
 Thank you all in advance.
 
 
 This is technically a one-line script that makes use of a PKCS#7
 structure and may do what you want:
 
 openssl crl2pkcs7 -nocrl -certfile certs.pem \
   | openssl pkcs7 -print_certs -text
 
 Not recommended if you have a GB file of certificates as it stores the
 lot in memory.
 
 Steve.
 --
 Dr Stephen N. Henson. OpenSSL project core developer.
 Commercial tech support now available see: http://www.openssl.org
 
 



Re: large upload issue

2011-12-09 Thread MK
On Fri, 09 Dec 2011 15:10:47 +0100
Jakob Bohm jb-open...@wisemo.com wrote:

 Hi, nice code, I spot a few questionable details, but only Warn#5
 might cause missing bytes.

   if (!field) return newSV(0);
 Warn#1: It is probably more efficient to return PL_sv_undef, avoiding
 an allocation in a potential memory full situation

Point taken. 

 Bug #2: must be allocated as [len+1] because of Bug#7 below.

WRT #2 and #7, the caveat I've seen about this has to do with
handing a perl string to a C function that treats it like a C string.
Eg, length() and SvPV(data, 0) use strlen.  However, if you set a
length manually, = and .= will use that.  Otherwise putting binary data
into a perl string would be near pointless.

Ie, it's not essential to null-terminate here as long as you are aware
of the potential for runaway string ops.

However,  after exhausting the other possibilities, I tried this.  Kind
of a hassle because I then had to add chop() here and there.  But lo
and behold, it worked.  Considering each call of the sysread only got
16KB at most -- meaning it is called thousands of times for a 20 MB
upload -- I'm baffled as to why this would cause such a small and
irregular loss.  The POST is not accumulated using perl either, or C
string functions, just pointers.

I don't like baffled, I like to think I know why ;)  Good thing the
perlish are everywhere.  Thanks much -- I have a few more
comments/questions about your comments if you're interested.

 Warn#3: It is probably more efficient to do
 SvGrow(buf, len + 1);
 unsigned char *data = SvPV_nolen(buf);

Good idea; I then have to reset the length to the actual bytes read,
but this will save a copy.  Thanks.

 Warn#4: The calling perl code may need to distinguish between 
 SSL_ERROR_WANT_READ
 and SSL_ERROR_WANT_WRITE, because the needed select()
 call will be different

I haven't actually seen SSL_ERROR_WANT_WRITE happen here; initially
I was testing for them separately.  In the SSL_read man page, it says:

As at any time a re-negotiation is possible, a call to SSL_read() can
also cause write operations! The calling process then must repeat the
call after taking appropriate action to satisfy the needs of SSL_read
(). The action depends on the underlying BIO . When using a
non-blocking socket, *nothing is to be done*...

So, if I were to handle them differently, how would I do it?

 Warn#5: Remember to ensure the perl code passes the exact same 
 parameters on retry!

Yep.  I actually made the data buffer global temporarily to make sure
the address stays the same; no change.  And the length remaining (based
on bytes read and content-length) is used for the len arg; that will
not change if the last call returned nothing.

  // return buffer contents to perl
   sv_setpvn(buf, data, bytes);
 Bug#8: Note that if bytes==0 (a valid situation), then sv_setpvn() will
  act like sv_setpvn(buf, data, strlen(data)).
  So in addition to Bug#7 above, bytes==0 could turn into
  a variable number of random bytes getting put in buf.

Good catch, I should change that to prevent the wasted allocation.
However, it would not matter WRT to the collection of data, because
this function returns bytes (via SvIV).  Currently I'm
treating 0 the same way I would with a normal socket -- as an
indication that the client disconnected.  Does not seem to be a problem
thus far (ie, SSL_read never returns 0 during the transfer).  I left
the \0 out of the byte count so this does not get screwed up (but
allocated enough in SvCUR_set).

Anyway, seemingly the problem is solved!  Phew.  But if you think I'm
off base about anything here, I'm listening. :)

More thanks -- MK

-- 
Enthusiasm is not the enemy of the intellect. (said of Irving Howe)
The angel of history[...]is turned toward the past. (Walter Benjamin)



s_server option to send certificate chain

2011-12-09 Thread vivek here
Hi everybody,
Is there any command-line option for configuring s_server to send a
certificate chain?

Example: server cert (S)
 S was signed by CA certificate (S_CA).
  Now I want to send S (by -cert option) as well as S_CA.

Thanks in advance,
Vivek Patra

Senior Engineer,
Alumnus Software Ltd.


Re: s_server option to send certificate chain

2011-12-09 Thread Michael S. Zick
On Fri December 9 2011, vivek here wrote:
 Hi everybody,
 Is there any command-line option for configuring s_server to send a
 certificate chain?
 
 Example: server cert (S)
  S was signed by CA certificate (S_CA).
   Now I want to send S (by -cert option) as well as S_CA.


The server does not normally send the root certificate;
the root certificate is normally obtained by an out-of-band
method.

This is basically a third-party trust system,
eliminate the third party (by sending the root certificate)
and you have a no-trust system.  ;-)

Mike 
 Thanks in advance,
 Vivek Patra
 
 Senior Engineer,
 Alumnus Software Ltd.
 




Re: large upload issue

2011-12-09 Thread MK
On Fri, 9 Dec 2011 09:08:19 -0600
Michael S. Zick open...@morethan.org wrote:
 On Fri December 9 2011, MK wrote:
  What makes you believe I am not handling this correctly?  If the
  call returns WANT_WRITE or WANT_READ,  it gets called again with
  exactly the same parameters, which is exactly what that and all
  those other zillion posts recommend.  This is why I set the err to
  EAGAIN, because the same thing must be done with a regular
  non-blocking socket.
 
 
 Because the write action might return __either__ want_read or
 want_write and the read action might return __either__ want_read or
 want_write. 
 
 Just because the most current action was a write does not mean you
 can presume the return was want_write - it might be want_read.
 
 The same is true if the most current action was a read.

Yes, but WRT non-blocking sockets (it says non-blocking in the OP),
from the SSL_read man page:

As at any time a re-negotiation is possible, a call to SSL_read() can
also cause write operations! The calling process then must repeat the
call after taking appropriate action to satisfy the needs of SSL_read
(). The action depends on the underlying BIO . When using a
non-blocking socket, *nothing is to be done*...

You just call the read again, regardless of whether it is WANT_READ or
WANT_WRITE.  This is also quoted in the link you posted ;)

The actual problem is solved in one of the other replies, but thanks
for taking an interest.

Sincerely, MK

-- 
Enthusiasm is not the enemy of the intellect. (said of Irving Howe)
The angel of history[...]is turned toward the past. (Walter Benjamin)
