Hi, nice code. I spotted a few questionable details, but only Warn#5 looks likely to cause missing bytes.

On 12/9/2011 1:28 PM, MK wrote:
Hi!  I'm new to SSL and am having some problems.  I'm working on an
HTTP server; the interface is in Perl and the internals are in Perl and
C; the SSL module is in C.

Everything works fine except for large file uploads (using
"multipart/form-data"), where I lose a *variable* fraction of a percent
of the bytes (e.g., 1-10 bytes out of 20 MB) *in the middle* of the
transfer.  I.e., the bytes read do not match the Content-Length, and
while the multipart boundaries at the beginning and end are intact, the
file written out is too short.

The only errors I receive from OpenSSL are WANT_READ or WANT_WRITE,
which I handle like EAGAIN (the socket is non-blocking).  The code that
handles the upload is identical for SSL and non-SSL connections, except
for the read function below, and there is no such problem with non-SSL
transfers.

The read function uses the Perl API and is intended to provide the same
functionality as Perl's sysread (this is why the rest of the code is
identical to the non-SSL upload path):

SV *sysread (SV *objref, SV *buf, int len) {
// retrieve SSL object from perl
         HV *self = (HV*)SvRV(objref);
         SV **field = hv_fetch(self, "ssl", 3, 0);

         if (!field) return newSV(0);
Warn#1: It is probably more efficient to return &PL_sv_undef, avoiding
an allocation in a potential memory-full situation.
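Something like this (untested):

                   if (!field) return &PL_sv_undef;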

         SSL *ssl = (SSL*)SvIV(*field);

// set up buffer and read
         unsigned char data[len];
Bug#2: must be allocated as [len+1], so the terminating 0 from Bug#7
below has room even when bytes == len.

Warn#3: It is probably more efficient to read straight into buf's own
buffer instead of copying (SvGROW() returns the grown buffer, which may
have moved):

                   unsigned char *data = (unsigned char*)SvGROW(buf, len + 1);

         ERR_clear_error();
         int bytes = SSL_read(ssl, data, len);

// error handling
         if (bytes < 0) {
                 int err = SSL_get_error(ssl, bytes);
                 if (err == SSL_ERROR_WANT_READ
                     || err == SSL_ERROR_WANT_WRITE) err = EAGAIN;
                 else err *= -1;
Warn#4: The calling perl code may need to distinguish between
SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE, because the needed
select() call will be different.

Warn#5: Remember to ensure the perl code passes the exact same
parameters on retry!
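The pattern the caller needs is roughly this C sketch (untested;
read_retry() and the fd parameter are names I made up, and in your
server the equivalent loop would live in the perl code around
sysread()):

#include <sys/select.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Retry SSL_read() with the *same* data and len every time (Warn#5),
 * waiting on the direction openssl asks for (Warn#4). */
int read_retry(SSL *ssl, int fd, unsigned char *data, int len)
{
        for (;;) {
                ERR_clear_error();
                int bytes = SSL_read(ssl, data, len);
                if (bytes >= 0) return bytes;

                fd_set fds;
                FD_ZERO(&fds);
                FD_SET(fd, &fds);
                switch (SSL_get_error(ssl, bytes)) {
                case SSL_ERROR_WANT_READ:   /* wait until readable */
                        if (select(fd + 1, &fds, NULL, NULL, NULL) < 0)
                                return -1;
                        break;
                case SSL_ERROR_WANT_WRITE:  /* renegotiation wants to send */
                        if (select(fd + 1, NULL, &fds, NULL, NULL) < 0)
                                return -1;
                        break;
                default:
                        return -1;  /* real error */
                }
        }
}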
         // the error is made negative to prevent collision with EAGAIN
                 hv_store(self, "readerr", 7, newSViv(err), 0);
                 return newSV(0);    // perl undef
Warn#6: As in Warn#1, returning &PL_sv_undef here avoids an allocation
in a potential memory-full situation.
         }
Bug#7: Perl requires a 0 byte after the end of a string, even if it
holds binary data.  sv_setpvn() below adds one to its own copy, but if
you read into buf directly (Warn#3), add this line after a successful
read:

                    data[bytes] = 0;
// return buffer contents to perl
         sv_setpvn(buf, data, bytes);
Bug#8: Note that bytes==0 is a valid situation: SSL_read() returns 0
when the peer closes the connection cleanly.  Make sure the calling
perl code treats that as end-of-file, the way sysread() does, rather
than as data or as an error.

Warn#9: If you did the change in Warn#3 above, replace sv_setpvn() with

                   SvPOK_only(buf);
                   SvCUR_set(buf, bytes);
         return newSViv(bytes);
}
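
Putting the suggestions together, the whole function might look
something like this (an untested sketch; SvUPGRADE() and SvPOK_only()
are my additions, to make sure buf is a valid string scalar before
SvCUR_set()):

SV *sysread (SV *objref, SV *buf, int len) {
// retrieve SSL object from perl
         HV *self = (HV*)SvRV(objref);
         SV **field = hv_fetch(self, "ssl", 3, 0);

         if (!field) return &PL_sv_undef;        // Warn#1

         SSL *ssl = (SSL*)SvIV(*field);

// read straight into buf's own buffer, leaving room for the 0 (Warn#3, Bug#2)
         SvUPGRADE(buf, SVt_PV);
         unsigned char *data = (unsigned char*)SvGROW(buf, len + 1);

         ERR_clear_error();
         int bytes = SSL_read(ssl, data, len);

// error handling
         if (bytes < 0) {
                 int err = SSL_get_error(ssl, bytes);
                 if (err == SSL_ERROR_WANT_READ
                     || err == SSL_ERROR_WANT_WRITE) err = EAGAIN;
                 else err *= -1;
         // the error is made negative to prevent collision with EAGAIN
                 hv_store(self, "readerr", 7, newSViv(err), 0);
                 return &PL_sv_undef;            // Warn#6
         }

// terminate the string (Bug#7) and set buf's length (Warn#9)
         data[bytes] = 0;
         SvPOK_only(buf);
         SvCUR_set(buf, bytes);
         return newSViv(bytes);
}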

As stated, the only errors that actually occur are WANT_READ and
WANT_WRITE.

I can also post the ctx setup*, etc., though again, everything works
fine except for large uploads.  Large downloads are fine.  My test
client is Firefox 7 over a slow wireless connection; the loss is
smaller on local loopback but still occurs.  What have I missed?

Thanks -- MK

* I use SSL_set_fd and not a BIO.

