David Hartman wrote:
...
Index: crypto/aes/aes_cfb.c
===================================================================
RCS file:

/local/cvs/master/pspOpenSSL/Current/source/openssl/crypto/aes/aes_cfb.c
,v

retrieving revision 1.1.1.1
diff -u -b -r1.1.1.1 aes_cfb.c
--- crypto/aes/aes_cfb.c        30 Aug 2005 19:33:35 -0000      1.1.1.1
+++ crypto/aes/aes_cfb.c        29 Dec 2005 23:54:52 -0000
@@ -165,7 +165,7 @@
    int n,rem,num;
    unsigned char ovec[AES_BLOCK_SIZE*2];

-    if (nbits<=0 || nbits>128) return;
+    if (nbits<=0 || nbits>=128) return;

why ?


[[DSH]] I think there is a potential to overrun the array. I attached the
Coverity output as aes_cfb.txt.  My thoughts are as follows:
AES_BLOCK_SIZE = 16, so the total array size is 16*2 = 32, and n counts
up to 15. nbits can be up to 128, giving num = 128/8 = 16. Line 188 would
then access ovec[15 + 16 + 1], i.e. ovec[32], which is out of bounds by
one element.

ok, the interesting code is line 181 ff:
        /* shift ovec left... */

let's assume nbits == 128

        rem = nbits%8;
        num = nbits/8;

then we have num == 16 and rem == 0

        if(rem==0)
            memcpy(ivec,ovec+num,AES_BLOCK_SIZE);

as rem is 0 the branch with the memcpy is used

        else
            for(n=0 ; n < AES_BLOCK_SIZE ; ++n)
                ivec[n] = ovec[n+num]<<rem | ovec[n+num+1]>>(8-rem);

and the for-loop is never executed. Am I overlooking something?
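One way to check this mechanically (a quick sketch, not part of the patch,
assuming AES_BLOCK_SIZE == 16 as in aes.h): compute the highest ovec[]
index either branch can touch for each nbits the original guard accepts.

```c
/* Sketch only: max_ovec_index() mirrors the index arithmetic of the
 * shift code at line 181ff. and returns the highest ovec[] element
 * either branch can read for a given nbits. */
#include <assert.h>

#define AES_BLOCK_SIZE 16   /* as in aes.h */

int max_ovec_index(int nbits)
{
    int rem = nbits % 8;
    int num = nbits / 8;

    if (rem == 0)
        /* memcpy(ivec, ovec+num, AES_BLOCK_SIZE) reads up to ovec[num+15] */
        return num + AES_BLOCK_SIZE - 1;
    /* the for-loop reads ovec[n+num+1] with n at most AES_BLOCK_SIZE-1 */
    return (AES_BLOCK_SIZE - 1) + num + 1;
}
```

For nbits == 128 the memcpy branch tops out at ovec[31]; for nbits <= 127
we have num <= 15, so the for-loop also tops out at ovec[31]. With the
original "nbits > 128" guard, ovec[AES_BLOCK_SIZE*2] is never exceeded.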

Index: crypto/bio/b_print.c
===================================================================
RCS file:

/local/cvs/master/pspOpenSSL/Current/source/openssl/crypto/bio/b_print.c
,v

retrieving revision 1.1.1.1
diff -u -b -r1.1.1.1 b_print.c
--- crypto/bio/b_print.c        30 Aug 2005 19:33:35 -0000      1.1.1.1
+++ crypto/bio/b_print.c        29 Dec 2005 23:54:55 -0000
@@ -741,6 +741,7 @@
                *buffer = OPENSSL_malloc(*maxlen);
                if (*currlen > 0) {
                    assert(*sbuffer != NULL);

I guess this should be "assert(*buffer != NULL)"



[[DSH]] The only problem with assertions is that some groups may not compile with
assertions enabled for production-level code.

agree, returning an error would be better. I will look at it.
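A sketch of what the error-return version could look like (grow_buffer()
is a hypothetical stand-in, not the real doapr_outch() code path; the
real fix would also have to propagate the failure up through the callers):

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical sketch: report allocation failure to the caller instead
 * of relying on assert(), which disappears when the code is compiled
 * with -DNDEBUG. */
int grow_buffer(char **buffer, size_t maxlen)
{
    *buffer = malloc(maxlen);   /* stand-in for OPENSSL_malloc() */
    if (*buffer == NULL)
        return 0;               /* allocation failed: tell the caller */
    return 1;                   /* safe to copy the old contents now */
}
```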

Index: crypto/ec/ec_lib.c
===================================================================
RCS file:

/local/cvs/master/pspOpenSSL/Current/source/openssl/crypto/ec/ec_lib.c,v

retrieving revision 1.1.1.1
diff -u -b -r1.1.1.1 ec_lib.c
--- crypto/ec/ec_lib.c  30 Aug 2005 19:33:35 -0000      1.1.1.1
+++ crypto/ec/ec_lib.c  29 Dec 2005 23:55:02 -0000
@@ -124,7 +124,7 @@
        {
        if (!group) return;

-       if (group->meth->group_finish != 0)
+       if ((group->meth != NULL) && (group->meth->group_finish != 0))
                group->meth->group_finish(group);

group->meth should never be NULL


[[DSH]] The reason this was flagged as a concern is that a few lines down, the
original code checked whether group->meth was NULL.  So
group->meth->group_clear_finish was being dereferenced first, and the check
for a NULL group->meth was made afterwards.  See line 150 of the latest
snapshot. I also attached the Coverity output as ec_lib.txt.

agree, the current code is not really consistent here. If we assume
that EC_GROUP::meth cannot be NULL in a valid EC_GROUP object, the
check for "group->meth != NULL" is superfluous and misleading and
should be removed. Done.
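For the archives, the shape Coverity objects to looks like this (a reduced
sketch with made-up names, not the real ec_lib.c code):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced sketch of the pattern: meth is dereferenced on the first line,
 * so the later NULL check can never fire and is dead code.  In the real
 * file, EC_GROUP_free dereferenced group->meth unconditionally while
 * EC_GROUP_clear_free checked it afterwards. */
struct method { int (*finish)(void); };
struct grp { const struct method *meth; };

int clear_free_shape(const struct grp *g)
{
    if (g->meth->finish != 0)   /* assumes g->meth != NULL ... */
        return 1;
    if (g->meth == NULL)        /* ... so this branch is unreachable */
        return -1;
    return 0;
}
```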



Index: crypto/ec/ec_mult.c
===================================================================
RCS file:

/local/cvs/master/pspOpenSSL/Current/source/openssl/crypto/ec/ec_mult.c,
v

retrieving revision 1.1.1.1
diff -u -b -r1.1.1.1 ec_mult.c
--- crypto/ec/ec_mult.c 30 Aug 2005 19:33:35 -0000      1.1.1.1
+++ crypto/ec/ec_mult.c 29 Dec 2005 23:55:02 -0000
@@ -436,7 +436,19 @@
                {
                size_t bits;

-               bits = i < num ? BN_num_bits(scalars[i]) :

BN_num_bits(scalar);

+               if (i < num)
+                       {
+                       if (scalars[i] != NULL)
+                               bits = BN_num_bits(scalars[i]);
+                       else goto err;
+                       }

unless the user supplies a NULL pointer for a BIGNUM this
shouldn't be necessary

note: as the user-supplied array of BIGNUM pointers is not
delimited by a NULL pointer, a check for scalars[i] != NULL
isn't very useful



+               else
+                       {
+                       if (scalar != NULL)
+                               bits = BN_num_bits(scalar);
+                       else goto err;
+                       }

this shouldn't be necessary either: we have 'i >= num' if and only if
'num_scalar > 0', and that can only happen if 'scalar != NULL'.


[[DSH]] I think Coverity flagged this as an error because the existing code
checked for a null scalar at line 377, so it is possible scalar could be
null.

yep

Since num is a passed-in value, it is possible for it to be a
non-zero number, so the for loop could be entered and scalar
dereferenced.

however, the "i >= num" branch can only be reached if "num_scalar != 0",
and this can only happen if "scalar != NULL", so the second check for
"scalar != NULL" shouldn't be necessary either.
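The invariant can be sketched like this (simplified names; num_scalar is
set the same way as the code around line 377 of ec_mult.c):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the invariant: num_scalar is 1 exactly when scalar != NULL,
 * and the loop runs for i < num + num_scalar, so the i >= num branch is
 * only ever reached with a non-NULL scalar. */
int branch_needs_null_scalar(size_t num, const void *scalar)
{
    size_t num_scalar = (scalar != NULL) ? 1 : 0;
    size_t totalnum = num + num_scalar;
    size_t i;

    for (i = 0; i < totalnum; i++)
        if (i >= num && scalar == NULL)
            return 1;   /* would dereference a NULL scalar */
    return 0;           /* never happens: i >= num implies scalar != NULL */
}
```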

Granted, this is unlikely and would only happen because
of the caller supplying invalid inputs.  I attached the Coverity output as
ec_mult.txt.

I've backported the changes I've made so far to the 0.9.8 branch,
so please test a recent snapshot.

Thanks,
Nils
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [email protected]
Automated List Manager                           [EMAIL PROTECTED]
