Re: error found in file evp_test.c

2009-02-17 Thread dmj2718-09
The problem almost certainly is that your file a.txt ends in a newline.  Try 
getting rid of the newline and see what happens.

  -- David Jacobson


--- On Tue, 2/17/09, xh  wrote:

> From: xh 
> Subject: error found in file evp_test.c
> To: openssl-dev@openssl.org
> Date: Tuesday, February 17, 2009, 7:40 PM
> Hi everyone,
> 
> I found two errors in file evp_test.c in openssl version
> openssl-0.9.8j.
> 
> 223 if(outl+outl2 != cn)
> I think this line should be:
> if(outl+outl2 != pn)
> 224 {
> 225 fprintf(stderr,"Plaintext length mismatch got %d expected %d\n",
> 226 outl+outl2,cn);
> 227 test1_exit(8);
> 228 }
> 229
> 230 if(memcmp(out,plaintext,cn))
> this should be:
> if(memcmp(out,plaintext,pn))
> 231 {
> 232 fprintf(stderr,"Plaintext mismatch\n");
> 233 hexdump(stderr,"Got",out,cn);
> 234 hexdump(stderr,"Expected",plaintext,cn);
> 235 test1_exit(9);
> 236 }
> 
> BTW, I found another strange behaviour:
> take the lines at the beginning of the file evptests.txt
> for example,
> # SHA(1) tests (from shatest.c)
> SHA1:::616263:a9993e364706816aba3e25717850c26c9cd0d89d
> 
> I think we should get that digest via the following
> commands, but I failed:
> #echo 616263 > a.txt
> #cat a.txt
> 616263
> # openssl dgst -sha1 a.txt
> SHA1(a.txt)= 765ecbbdc9e459fee019c275fbdd589d2948a009
> 
> Could you please help me out of this problem?
> 
> Thanks in advance!
> 
> thanks,
> -Derek Wang
> __
> OpenSSL Project
> http://www.openssl.org
> Development Mailing List  
> openssl-dev@openssl.org
> Automated List Manager  
> majord...@openssl.org


Re: NULL pointer check before dereferencing

2009-02-16 Thread dmj2718-09
There are two cases.  Sometimes a null pointer is used to indicate that some 
value is not supplied or that the caller does not want some output stored.  In 
that case, the check must be done.

But many times a valid pointer must be supplied.  In that case, I don't think 
it is necessarily a bug not to check.  There are several reasons I hold this 
position:

1.  On most systems, the hardware is going to check anyway, and the program 
will die of a segfault.  

2.  As a developer or engineer, I'd much rather know exactly what went wrong 
than get some error 3 levels up that says "invalid parameter" and I have no 
idea what was invalid.

3.  A zero pointer is just one of about 4 billion possible invalid 
pointers.  What's so special about this one that we should spend cycles, 
coding time, and testing time checking for it?

It can be argued that if you are in a situation where you are expected to clear 
sensitive data, then you had better do the test so each level above can clear 
the sensitive data it is responsible for.  However, in most software 
environments, this is kind of weak.  If the caller or user can cause a null 
pointer to be dereferenced, he can probably also cause a non-null invalid 
pointer to be dereferenced and cause a segfault anyway.

  -- David Jacobson


--- On Mon, 2/16/09, Martin Kaiser  wrote:
From: Martin Kaiser 
Subject: NULL pointer check before dereferencing
To: openssl-dev@openssl.org
Date: Monday, February 16, 2009, 3:32 PM

Dear OpenSSL developers,

what is your policy regarding NULL pointer checks? Looking through the
code, I see some functions that receive a pointer parameter and
dereference it without checking for NULL first. Examples are
SSL_accept(SSL *s) or RSA_sign(..., RSA *rsa).

Do you consider such behaviour a bug? Or is it just too obvious that
calling these functions with a NULL argument makes no sense at all?

Best regards,

   Martin


Re: FIPS_selftest_rng fails on Solaris10 x86

2009-02-12 Thread dmj2718-09
I don't know why this has degenerated into an argument about use of 
non-FIPS-approved algorithms.  

For some reason I don't understand, there is a line about "Non-approved 
cryptographic operation test".  But that is not what caused the failure.  The 
failure came from the fips_rand_selftest.  

There are four sorts of test for FIPS random number generators.  At one time 
there were a bunch of statistical tests.  But those were removed from the 
requirements several years ago.  There is a test that no two consecutive values 
are the same.  Then there is a known answer test where you have to provide a 
known seed and then get the right sequence of random numbers.  And finally some 
testing labs will require, based on a very legalistic reading of the rules, 
that the "seed" and "seed key" be compaired and a failure reported if they are 
the same.  (In my opinion, the rule was intended to mean that you should not 
intentionally supply the same data for the intitial value of "seed" and 
"seed-key".)  

I don't know which case is causing the reported error.  But, as I said in a 
previous post, the fact that it works on Linux and not on Solaris x86 makes me 
suspect that the code or the test was written with the assumption that Solaris 
is always big-endian, but, in fact, Solaris x86 is little-endian, and that is 
causing it to fail a known-answer test.  (But I haven't looked at the code.)

  -- David Jacobson

--- On Thu, 2/12/09, RussMitch  wrote:
From: RussMitch 
Subject: Re: FIPS_selftest_rng fails on Solaris10 x86
To: openssl-dev@openssl.org
Date: Thursday, February 12, 2009, 11:49 AM

No, the test/fips_test_suite does not run correctly, here's the results:

FIPS-mode test application

1. Non-Approved cryptographic operation test...
a. Included algorithm (D-H)...successful
ERROR:2d072065:lib=45,func=114,reason=101:file=fips_rand_selftest.c:line=364:   <=
2. Automatic power-up self test...FAILED!   <=

/Russ


Dr. Stephen Henson wrote:
> 
> On Thu, Feb 12, 2009, RussMitch wrote:
> 
>> 
>> Hello,
>> 
>> I've built openssl-0.9.8j on Solaris10 Update 5 as follows:
>> 
>> ./config fipscanisterbuild
>> make clean
>> make
>> 
> 
> That's against the security policy.
> 
>> Next, I've created a simple program that calls FIPS_mode_set(1) and links
>> to the libraries in /usr/local/ssl/fips/lib.
>> 
>> The first two tests, FIPS_signature_witness() and
>> FIPS_check_incore_fingerprint() PASS.
>> 
>> The third test, FIPS_selftest_rng FAILS.
>> 
>> I've also tried the exact same procedure on a Fedora Core5 Linux-based
>> machine, and all of the tests PASS.
>> 
>> Anyone have an idea of what may be wrong?
>> 
> 
> Does test/fips_test_suite run correctly?
> 
> Steve.
> --
> Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
> OpenSSL project core developer and freelance consultant.
> Homepage: http://www.drh-consultancy.demon.co.uk
> 
> 

-- 
View this message in context:
http://www.nabble.com/FIPS_selftest_rng-fails-on-Solaris10-x86-tp21980325p21983578.html
Sent from the OpenSSL - Dev mailing list archive at Nabble.com.



Re: FIPS_selftest_rng fails on Solaris10 x86

2009-02-12 Thread dmj2718-09
This is only conjecture, but it is an educated conjecture.  I've done several 
implementations of  FIPS-approved RNGs, and once had trouble with the RNG test 
failing.

The algorithm we used was the one in FIPS 186-2 appendix 3. This algorithm 
involves taking an SHA-1 hash, and then treating it as a 160-bit integer and 
adding it (mod 2^160) to some other value.  The problem turned out to be that 
the test vectors were for the opposite endianness from the host.  (Sorry, I have 
done both little- and big-endian implementations, and I can't remember which 
one had the trouble.)  Thus we had to treat the hash output as 5 words, and 
byte reverse each word before considering it as 32 bits of the 160-bit integer.

Obviously, swapping bytes of a hash output is a waste of cycles, and does 
nothing for security.  But if you have to do it to pass, you have to do it.

  -- David Jacobson


--- On Thu, 2/12/09, RussMitch  wrote:
From: RussMitch 
Subject: Re: FIPS_selftest_rng fails on Solaris10 x86
To: openssl-dev@openssl.org
Date: Thursday, February 12, 2009, 11:49 AM

No, the test/fips_test_suite does not run correctly, here's the results:

FIPS-mode test application

1. Non-Approved cryptographic operation test...
a. Included algorithm (D-H)...successful
ERROR:2d072065:lib=45,func=114,reason=101:file=fips_rand_selftest.c:line=364:
2. Automatic power-up self test...FAILED!

/Russ


Dr. Stephen Henson wrote:
> 
> On Thu, Feb 12, 2009, RussMitch wrote:
> 
>> 
>> Hello,
>> 
>> I've built openssl-0.9.8j on Solaris10 Update 5 as follows:
>> 
>> ./config fipscanisterbuild
>> make clean
>> make
>> 
> 
> That's against the security policy.
> 
>> Next, I've created a simple program that calls FIPS_mode_set(1) and links
>> to the libraries in /usr/local/ssl/fips/lib.
>> 
>> The first two tests, FIPS_signature_witness() and
>> FIPS_check_incore_fingerprint() PASS.
>> 
>> The third test, FIPS_selftest_rng FAILS.
>> 
>> I've also tried the exact same procedure on a Fedora Core5 Linux-based
>> machine, and all of the tests PASS.
>> 
>> Anyone have an idea of what may be wrong?
>> 
> 
> Does test/fips_test_suite run correctly?
> 
> Steve.
> --
> Dr Stephen N. Henson. Email, S/MIME and PGP keys: see homepage
> OpenSSL project core developer and freelance consultant.
> Homepage: http://www.drh-consultancy.demon.co.uk
> 
> 




Re: ECC text encryption help in OpenSSL

2008-08-25 Thread dmj2718-09
You don't usually use ECC to encrypt large amounts of text (or any data).  You 
usually use it in key exchange protocols (ECDH, ECMQV) and in signature 
protocols (ECDSA).  

  -- David Jacobson

shizumi <[EMAIL PROTECTED]> wrote: 
Hi everybody, 
I need some sample code to encrypt/decrypt text using ECC cryptography
(OpenSSL) in VC6. Can anyone help me? Or can you tell me the steps to
encrypt/decrypt text in OpenSSL? Sorry, I am a newbie in OpenSSL.
-- 
View this message in context: 
http://www.nabble.com/ECC-text-encryption-help-in-OpenSSL-tp19142079p19142079.html
Sent from the OpenSSL - Dev mailing list archive at Nabble.com.



RE: Static global - bug? (Re: Two valgrind warnings in OpenSSL-possible bug???)

2008-01-24 Thread dmj2718-09
I'm only familiar with Solaris.  In that system the real stuff in a mutex is a 
byte about 12 bytes into the lock structure.  On SPARC the mutex_lock function 
accesses it with an LDSTUB instruction, which is a special atomic instruction 
that loads the old value into a register, and stores 0xff into it. If that byte 
is in the same cache line as some other frequently written variable, and 
another processor writes that variable, the cache line will be owned by that 
other processor, and an uncontended mutex_lock can be terribly expensive.

  -- David Jacobson

David Schwartz <[EMAIL PROTECTED]> wrote: 
> > Locking with no contention is not "pretty expensive", it's darn near
> > free.

> On systems with only one processor and nothing like hyperthreading.

Did you miss the "with no contention" part? An uncontended lock costs about
the same on an SMP system as on a UP system. AFAIK, hyperthreading doesn't
affect the cost of uncontended locks.

An uncontended lock typically results in one atomic operation (which doesn't
actually lock any buses anymore) and possibly one additional cache miss
(assuming the lock was last held by another CPU). Other costs are
totally drowned out by these two.

As for a contended lock, it's probably also typically less expensive on a
multi-CPU system (though contention is more likely, so it's kind of a bogus
comparison). Contention will typically result in more context switches on a
single CPU system, and the cost of context switches is enormous compared to
the other costs we're measuring here.

Hyperthreading likely reduces the cost of a contended lock because the other
virtual execution unit gains the use of the execution units the contending
thread is not using and contention across virtual execution units in the
same core is typically less expensive than across the FSB. I can't think of
any obvious reason hyperthreading would have any significant effect on the
cost of contention, unless you're talking about broken spinlocks that don't
properly relax the CPU (stealing execution resources from the virtual core
that's doing useful work). Obviously, broken spinlocks will cause problems
on an HT machine, but they should all be fixed by now.

In any event, he's talking about a situation where he does everything he can
to reduce contention to zero. So the only issues left are what the cost of
uncontended locks is (to decide whether it's worth eliminating the locks
entirely) and what the consequences are if he misses a case (disaster if he
removes the lock, nothing if he doesn't).

DS





Re: [openssl.org #1564] bug: FIPS module can't be built on Solaris

2007-08-07 Thread dmj2718-09
Try using /usr/ucb/echo instead of just echo.

This is what I found on Sun's web site:


The shells csh(1), ksh(1), and sh(1), each have an echo built-in
command, which, by default, will have precedence, and will be invoked
if the user calls echo without a full pathname. /usr/ucb/echo and
csh's echo() have an -n option, but do not understand back-slashed
escape characters. sh's echo(), ksh's echo(), and /usr/bin/echo, on
the other hand, understand the back-slashed escape characters, and
ksh's echo() also understands \a as the audible bell character;
however, these commands do not have an -n option.


  -- David Jacobson

Martin Simmons <[EMAIL PROTECTED]> wrote:
> On Tue, 7 Aug 2007 14:57:41 +0200 (CEST), Jan Pechanec via RT said:
> 
>  building the fips module ends with a tricky error:
> 
> /usr/ccs/bin/ld: illegal option -- n
> usage: ld [-6:abc:d:e:f:h:il:mo:p:rstu:z:B:CD:F:GI:L:M:N:P:Q:R:S:VY:?] 
> file(s)
> [-64]   enforce a 64-bit link-edit
> ...
> ...
> 
> 
>  the problem is that in general Solaris's echo commands don't have '-n', so 
> this is a problem in fips-1.0/Makefile:
> 
> fipscanister.o: fips_start.o $(LIBOBJ) $(FIPS_OBJ_LISTS) fips_end.o
> @FIPS_BN_ASM=`for i in $(BN_ASM) ; do echo -n "../crypto/bn/$$i " ; 
> done`; \
> 
>  not sure what the best fix is here, whether to test for Solaris and set 
> it to "printf", or to replace it with printf right away, or something 
> different. After the fix the module builds fine. For more information about 
> echo in Solaris, see:

Have you tried just removing the -n?  With luck the shell's word splitting
will discard the newlines inside the backquotes.

__Martin



Re: ECDSA verify fails when digest is all zeros in 0.9.8e

2007-05-17 Thread dmj2718-09
This is not a problem with the algorithm or the protocol.  It is a bug in the 
implementation.  Digest values that are zero are allowed by ANSI X9.62 (and 
there is no special case for them), and they work fine in other implementations.

The code is trying to compute u1 * P + u2 * Q, where u1 is the digest value, P 
is the curve's base point, Q is the public key, and u2 is something else we 
don't care about.  It eventually calls ec_wNAF_mul (ec_mult.c:324).  Apparently 
this function computes scalar * basepoint + sum(scalars[i] * points[i] over i 
from 0 to num-1).  scalar is u1, num is 1, and the arrays are one element long 
with scalars[0] being u2 and points[0] being Q.  

In the case in question u1 is zero.  In elliptic curve scalar multiplication, 0 
times anything is the infinite point, which is the additive identity.  So let's 
watch for this case as the algorithm proceeds.

In line 356 there is a quick exit: the infinite point is returned if scalar is 
the null pointer and num is zero.  But neither of these is satisfied, since 
scalar is a non-null pointer to zero, and num is 1.

Down in line 379 it is getting the base point P (called the generator).  Then it 
checks for some precomputed values.  I have no idea whether zero is in the 
precomputed set.  Let's follow the "not" branch---line 413.  It sets numblocks 
to 1 and num_scalar to 1, and effectively appends u1 and P to the end of the 
list.  (See the ?: operator in lines 439 and 443.)

In line 443 it calls compute_wNAF, and when i is 1 it is doing u1 * P, with u1 
zero in the case in question.

compute_wNAF is at ec_mult.c:188 and is called with scalar pointing to a zero 
BIGNUM.  But compute_wNAF, either by design or by accident, can't deal with a 
scalar that is zero.  It gets down to line 217.  Since the value is zero, 
scalar->top is zero, and it takes the error branch, 
ECerr(EC_F_COMPUTE_WNAF, ERR_R_INTERNAL_ERR).

So it looks like we need to either fix compute_wNAF to deal with scalar being 
zero, or discard scalar/point pairs with scalar pointing to a zero BIGNUM 
before it is called.  (Perhaps making sure zero is in the precomputed list 
would effectively keep the zero-valued scalars from getting to compute_wNAF.)

A simple fix that would work for this case only would be to change line 377 from
if (scalar != NULL)
to
if (scalar != NULL && !BN_is_zero(scalar))

But don't do that.  That would take care of the scalar arg to ec_wNAF_mul being 
zero, but would not take care of zeros in the *scalars array.  A better 
scheme would be to modify the loop body at ec_mult.c:436 to skip over entries 
with a zero multiplier.

Something like 
if (!BN_is_zero(i < num ? scalars[i] : scalar)) {
...
but I'm not sure what to put in the else branch, which has to put something 
into wNAF[i].  I'll leave it to experts who understand this code.



"Victor B. Wagner" <[EMAIL PROTECTED]> wrote: On 2007.05.16 at 12:35:37 -0700, 
[EMAIL PROTECTED] wrote:

>I'm running OpenSSL 0.9.8e.  If I set up an ECDSA verify with
>EC_KEY_new_by_curve_name(NID_X9_62_prime256v1) and call ECDSA_do_verify
>with dgst (first arg) an array of all zeros and dgst_len=1 (second arg), the
>call fails with error 16.

As far as I understand, the El Gamal signature scheme is not supposed to
work when the digest is all zeros. GOST signature algorithms (which are
similar to DSA/ECDSA) treat this as a
special case, and GOST R 34.10 specifies that if the digest (interpreted as a
BIGNUM) is zero, it should be explicitly set to one. I always wondered
why DSA doesn't have such a fallback.




ECDSA verify fails when digest is all zeros in 0.9.8e

2007-05-16 Thread dmj2718-09
I'm running OpenSSL 0.9.8e.  If I set up an ECDSA verify with 
EC_KEY_new_by_curve_name(NID_X9_62_prime256v1) and call ECDSA_do_verify with 
dgst (first arg) an array of all zeros and dgst_len=1 (second arg), the call fails 
with error 16.

There are two errors in the queue:

lib 16 reason 68 location ec_mult.c:219
lib 42 reason 16 location ecs_ossl.c:420