[openssl.org #1620] [PATCH] - OS390-Unix (EBCDIC) 0.9.8e

2007-12-12 Thread JBYTuna via RT
[PATCH] - OS390-Unix (EBCDIC) 0.9.8e

When an OpenSSL server built on z/OS is using client verification, the
following error is incurred:

0x140890b2 - error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no
certificate returned

From tracing, we found the correct certificate was being returned.  We found
the code in crypto/x509/x509_vfy.c will not work in an EBCDIC environment, as
the data is in ASCII.  The solution is to translate the ASCII to EBCDIC, prior
to the validation process.

John B. Young

Here's the patch, in diff -u form:

--- old/x509_vfy.c  2007-12-11 10:43:14.0 -0800
+++ new/x509_vfy.c  2007-12-11 10:38:18.0 -0800
@@ -1045,6 +1045,9 @@
 int X509_cmp_time(ASN1_TIME *ctm, time_t *cmp_time)
{
char *str;
+#ifdef CHARSET_EBCDIC
+   char strx[40];
+#endif
ASN1_TIME atm;
long offset;
char buff1[24],buff2[24],*p;
@@ -1052,7 +1055,12 @@
 
p=buff1;
i=ctm->length;
+#ifdef CHARSET_EBCDIC
+   ascii2ebcdic( strx, ctm->data, ctm->length );
+   str=strx;
+#else
str=(char *)ctm->data;
+#endif
if (ctm->type == V_ASN1_UTCTIME)
{
if ((i < 11) || (i > 17)) return 0;

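For readers without a z/OS system handy, the conversion the patch relies on can be illustrated with a toy translator covering only the characters an ASN.1 time string can contain (digits, 'Z', '+', '-', '.'). This is a hedged sketch using EBCDIC code page 037 values; it is not OpenSSL's actual ascii2ebcdic(), which translates the full byte range via a 256-entry table in crypto/ebcdic.c:

```c
#include <assert.h>
#include <stddef.h>

/* Toy ASCII -> EBCDIC (code page 037) translation for the limited
 * alphabet of an ASN.1 UTCTIME/GENERALIZEDTIME string.  The real
 * ascii2ebcdic() uses a full translation table; this sketch only
 * handles digits, 'Z', '+', '-' and '.'. */
static unsigned char a2e_char(unsigned char c)
{
    if (c >= '0' && c <= '9')
        return (unsigned char)(0xF0 + (c - '0')); /* EBCDIC digits are F0..F9 */
    switch (c) {
    case 'Z': return 0xE9;
    case '+': return 0x4E;
    case '-': return 0x60;
    case '.': return 0x4B;
    default:  return 0x3F; /* EBCDIC SUB for anything unexpected */
    }
}

static void a2e(unsigned char *dst, const unsigned char *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a2e_char(src[i]);
}
```

With such a translation applied first, the digit and range checks in X509_cmp_time see the byte values the EBCDIC runtime expects.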



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


[openssl.org #1621] [PATCH] - OS390-Unix (EBCDIC) 0.9.7m

2007-12-12 Thread JBYTuna via RT
When an OpenSSL server built on z/OS is using client verification, the
following error is incurred:

0x140890b2 - error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no
certificate returned

From tracing, we found the correct certificate was being returned.  We found
the code in crypto/x509/x509_vfy.c will not work in an EBCDIC environment, as
the data is in ASCII.  The solution is to translate the ASCII to EBCDIC, prior
to the validation process.

John B. Young

Here's the patch, in diff -u form:

--- old/x509_vfy.c  2007-12-11 10:41:37.0 -0800
+++ new/x509_vfy.c  2007-12-11 10:37:30.0 -0800
@@ -900,6 +900,9 @@
 int X509_cmp_time(ASN1_TIME *ctm, time_t *cmp_time)
{
char *str;
+#ifdef CHARSET_EBCDIC
+   char strx[40];
+#endif
ASN1_TIME atm;
long offset;
char buff1[24],buff2[24],*p;
@@ -907,7 +910,12 @@
 
p=buff1;
i=ctm->length;
+#ifdef CHARSET_EBCDIC
+   ascii2ebcdic( strx, ctm->data, ctm->length );
+   str=strx;
+#else
str=(char *)ctm->data;
+#endif
if (ctm->type == V_ASN1_UTCTIME)
{
if ((i < 11) || (i > 17)) return 0;






Re: OpenSSL FIPS Object Module v1.2

2007-12-12 Thread Steve Marquess

Kyle Hamilton wrote:

On Dec 2, 2007 4:31 PM, Steve Marquess [EMAIL PROTECTED] wrote:

...big snip...

c) I would like to know where to find the formal specification
documents for what must be met in a module boundary, ...


The module boundary is *the* key concept for FIPS 140-2.  It is also a 
very elusive concept in the context of software libraries -- like 
pornography (I know it when I see it) or the blind men meeting an 
elephant, everyone seems to have some notion of what it is but can't 
clearly articulate it.  I thought I understood it five years ago; now 
it's more of a puzzle to me than ever.  Here's my own personal 
perspective on some of the pieces of that puzzle.  Since the term 
"module" means something different here than it does in software 
engineering, I capitalize it in the following:


1) The Module boundary concept dates from the time when cryptography 
was done entirely in hardware.  The Module boundary for hardware is the 
metal enclosure for the hardware device (or the circuit board for a 
backplane device).  The buttons and switches on the case and the wires 
leading into it are the interface.  Easy enough to grasp so far.  When 
you start talking about software running on a modern multi-tasking, 
virtual memory computer it gets confusing.


2) There is the Module boundary (the most commonly heard term).  But 
you will also hear "algorithm boundary", "physical boundary", and 
"logical boundary".  None of those terms are entirely synonymous.  My 
best understanding is that the algorithm boundaries are entirely 
subsumed within the generic Module boundary, the logical boundary is 
kinda sorta the same as the Module boundary for a running process, and 
the physical boundary is roughly equivalent to an idealized computer 
representative of the one used for the validation testing.  But I've 
seen all these terms used in contexts that leave me wondering.


3) For software Modules the integrity test (a digest check) is performed 
over the contents of the Module.  Well, sort of.  There are two 
acceptable techniques that I know of.  One is to create the software as 
a runtime binary file (typically a shared library file for a crypto 
library, or a standalone application program) and call the file itself 
as it resides on disk the crypto Module.  In this case the Module is one 
contiguous string of bits; note that "string of bits" includes glue data 
for the run-time loader.  The reference digest value itself can't be 
included in that file, of course, so it is placed in a separate file. 
The second technique is to perform the digest over the text (machine 
code) and read-only data segments of the program as mapped into memory 
by the run-time loader.  In this case the Module is two (or more) 
separate contiguous strings of bits in memory (or memory mapped disk 
sectors).  Those reference digest values can be embedded in the runtime 
binary file, outside of the Module boundary.
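The first technique above can be sketched in a few lines: hash the module file on disk as one contiguous string of bits and compare against a reference value kept outside that file. This sketch substitutes a toy 64-bit FNV-1a hash for the HMAC-SHA-1 digest the real FIPS module uses, and the file name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the integrity digest (FNV-1a 64-bit over a buffer).
 * The real module uses HMAC-SHA-1; the principle is identical. */
static uint64_t fnv1a64(const unsigned char *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Technique 1: digest the module file as it resides on disk and compare
 * with `reference`.  Returns 1 on match, 0 on mismatch, -1 on I/O error.
 * The reference value must live in a separate file, since it cannot be
 * stored inside the very bits it covers. */
static int integrity_check(const char *path, uint64_t reference)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    uint64_t h = 0xcbf29ce484222325ULL;
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (unsigned char)c;
        h *= 0x100000001b3ULL;
    }
    fclose(f);
    return h == reference;
}
```

The second technique differs only in what gets hashed: the text and read-only data segments as mapped into memory, rather than the file image.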


4) The Module boundary is *not* every location the program counter 
visits during the execution of the code between the call to and return 
from a function in the Module.  That program counter will traverse code 
in the text segment(s) -- definitely in the boundary.  It will also 
traverse code in system (kernel) calls -- definitely *outside* the 
boundary.  It will traverse code in userspace shared libraries, and here 
it gets really fuzzy.  Calls from the Module to standard system library 
functions are ok, those libraries are not considered internal to the 
boundary.  So libc, for example, is not in the boundary.  The malloc 
functions are not, most interesting (because they export the CSPs -- 
secrets -- that are supposed to remain in the boundary).  What exactly 
is a standard system library?  Good question.  Is /usr/lib/libc.so? 
Certainly!  Is /usr/lib/libssl.so, which is provided by many vendors as 
part of the base operating system distribution, such a libary? 
Certainly not!  Is /usr/lib/libncurses.so?  /usr/lib/libm.so? 
/usr/lib/libstdc++.so?  Who knows?  The term cryptographically 
significant will arise in these discussions.  I won't even pretend to 
understand that term, that's one for scripture interpretation.


5) Physical consanguinity within the Module boundary is significant.  By 
this I mean that the sequence of machine instructions and read-only data 
that comprise the code of the Module must always occur in a fixed 
relationship with respect to ascending memory addresses.  When a 
build-time linker (ld) creates an executable binary from an archive 
library containing the object modules, A, B, C, it doesn't much care 
what order (in terms of ascending memory addresses) they appear in the 
output file: A-B-C, B-C-A, etc.  Even though the resulting executables 
are functionally identical, that is not acceptable for a FIPS 140-2 
Module; the runtime code must be guaranteed to be in the same sequence 
(this requirement inspired the fipscanister.o monolithic object module).


6) The digest does more than merely 

Re: [openssl.org #1621] [PATCH] - OS390-Unix (EBCDIC) 0.9.7m

2007-12-12 Thread Richard Koenning

JBYTuna via RT wrote:


When an OpenSSL server built on z/OS is using client verification, the
following error is incurred:

0x140890b2 - error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no
certificate returned


From tracing, we found the correct certificate was being returned.  We found
the code in crypto/x509/x509_vfy.c will not work in an EBCDIC environment, as
the data is in ASCII.  The solution is to translate the ASCII to EBCDIC, prior
to the validation process.

John B. Young

Here's the patch, in diff -u form:


The patch is already contained in #843: "EBCDIC patches for 0.9.7c" 
(http://rt.openssl.org/Ticket/Display.html?id=843&user=guest&pass=guest), 
which has been updated to 0.9.7j by Jeremy Grieshop. That patch also 
contains a second ASCII to EBCDIC conversion after the X509_time_adj in 
the region of line 960.

Ciao,
Richard
--
Dr. Richard W. Könning
Fujitsu Siemens Computers GmbH


[openssl.org #1623] Bug report: default CA certs file path ignored

2007-12-12 Thread Tomas Mraz via RT
The default CA certs file path is ignored in commands like s_client
because if you don't specify -CApath or -CAfile on s_client command line
the SSL_CTX_load_verify_locations() will return 0 and the code:

if ((!SSL_CTX_load_verify_locations(ctx,CAfile,CApath)) ||
(!SSL_CTX_set_default_verify_paths(ctx)))
will skip the SSL_CTX_set_default_verify_paths() call.

There are two possible ways to fix this problem: either do not call 
SSL_CTX_load_verify_locations() when CAfile and CApath are both NULL, or
fix the code of X509_STORE_load_locations() not to return 0 when both
path and file are NULL. 

The X509_STORE_load_locations() implementation is debatable in more ways
because it will return with 0 immediately when the file lookup load
fails and so it will not attempt to add the dir lookup at all in this
case. In my opinion it should at least try both (of course returning
failure if any load fails).
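The short-circuit behaviour described in the report can be reproduced with mock stand-ins for the two calls; the mocks and their return values are illustrative, not OpenSSL's actual implementations:

```c
#include <assert.h>
#include <stddef.h>

static int default_paths_loaded;

/* Mock of SSL_CTX_load_verify_locations(): per the report, it returns 0
 * when both CAfile and CApath are NULL. */
static int mock_load_verify_locations(const char *CAfile, const char *CApath)
{
    return CAfile != NULL || CApath != NULL;
}

/* Mock of SSL_CTX_set_default_verify_paths(): records that it ran. */
static int mock_set_default_verify_paths(void)
{
    default_paths_loaded = 1;
    return 1;
}

/* The s_client pattern quoted in the report: because || short-circuits,
 * a 0 from the first call makes the condition true before the second
 * call is ever evaluated, so the default paths are never loaded. */
static int buggy_setup(const char *CAfile, const char *CApath)
{
    default_paths_loaded = 0;
    if ((!mock_load_verify_locations(CAfile, CApath)) ||
        (!mock_set_default_verify_paths()))
        return 0;
    return 1;
}

/* First proposed fix: only call the location loader when the caller
 * actually supplied a CAfile or CApath; otherwise fall through to the
 * default verify paths. */
static int fixed_setup(const char *CAfile, const char *CApath)
{
    default_paths_loaded = 0;
    if ((CAfile != NULL || CApath != NULL) &&
        !mock_load_verify_locations(CAfile, CApath))
        return 0;
    return mock_set_default_verify_paths();
}
```

Running both variants with no CAfile/CApath shows the buggy version failing without ever touching the defaults, while the fixed version falls back to them.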

See also https://bugzilla.redhat.com/show_bug.cgi?id=421011

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb




[openssl.org #1622] BUG: darwin-ppc64 fails due to bug in Configure (patch attached)

2007-12-12 Thread Weissmann Markus via RT
Hi,

I've found a trivial bug in Configure that prevents 64-bit builds on  
darwin-ppc (darwin64-ppc): there are just two too many ":" in one  
line, creating havoc in the build flags. A patch is attached.


Regards,

-Markus

---
Dipl. Inf. (FH) Markus W. Weissmann
http://www.mweissmann.de/
http://www.macports.org/



patch-Configure
Description: Binary data


Re: [openssl.org #1623] Bug report: default CA certs file path ignored

2007-12-12 Thread Tomas Mraz via RT
On Wed, 2007-12-12 at 20:11 +0100, Tomas Mraz via RT wrote:
 The default CA certs file path is ignored in commands like s_client
...
 See also https://bugzilla.redhat.com/show_bug.cgi?id=421011

Oops, should have been
https://bugzilla.redhat.com/show_bug.cgi?id=418771
-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb




Re: OpenSSL FIPS Object Module v1.2

2007-12-12 Thread Steve Marquess
Kyle Hamilton wrote:

 I'll go out on a limb here and express my (certainly naive)
 extrapolations/interpolations:

 Module Boundary: That which contains the entire deliverable that
 implements the algorithms required by FIPS 140-2 and the glue to make
  them accessible.  (The physical string of bits.)  This should
 contain the smallest amount of code/data possibly required to fulfill
 the requirements of FIPS 140.

Well, not really.  You're allowed to define an entire application or
even turn-key system (software and/or hardware) as the Module, including
as many cryptographically extraneous components as you want.  Generally
no one wants to do that because the contents are then "frozen" and
you're unable to make even trivial changes.  With a lead time of many
months for validation approvals that is a non-starter for non-trivial
applications or systems.

 Algorithm Boundary: Once you get past the glue which makes it
 possible to interface, algorithms should be completely self-contained
  and not reuse code -- and the algorithm shouldn't be 'steppable',
 meaning you call the function with the data and it gives you the
 'transformed data', with no intermediary results.

Hmmm, I haven't heard that one before.  We made no particular effort to
segregate the algorithm code and in fact all the algorithm tests were
performed against the full Module library.

 3) For software Modules the integrity test (a digest check) is
 performed over the contents of the Module.  Well, sort of.  There
 are two acceptable techniques that I know of.  One is to create the
 software as a runtime binary file (typically a shared library file
 for a crypto library, or a standalone application program) and call
 the file itself as it resides on disk the crypto Module.  In this
 case the Module is one contiguous string of bits, note that string
 of bits includes glue data for the run-time loader.  The
 reference digest value itself can't be included in that file, of
 course, so it is placed in a separate file. The second technique is
 to perform the digest over the text (machine code) and read-only
 data segments of the program as mapped into memory by the run-time
 loader.  In this case the Module is two (or more) separate
 contiguous strings of bits in memory (or memory mapped disk
 sectors).  Those reference digest values can be embedded in the
 runtime binary file, outside of the Module boundary.

 The first technique is what OpenSSL uses to distribute the code.
 (The security policy defines the keyed hash that the code must match
 before it can be built.)  The second is what OpenSSL uses to embed
 the code.

I think you mean "OpenSSL FIPS Object Module" instead of "OpenSSL" --
they are NOT the same thing.  The FIPS Object Module (fipscanister.o) 
itself isn't very useful as it defines only cryptographic primitives. 
In practice an application will use the higher level OpenSSL APIs which
front-end the low-level API in the Module.

In our case the use of integrity check digests is actually more
complicated still.  In the User Guide you'll find discussions of the
separate *.sha1 files that are used with the special fipsld double
link that joins the monolithic object module Module to the application code.

 Another (incredibly naive) view:

 Cryptographically Significant: anything that Eve or Mallory could
 use in an attack.  This includes things to spy on the running state
 of the cryptographic process, and things to change the running state
 of the cryptographic process.  (This also includes the 'glue'...
 i.e., how to get the data into the structures necessary for the
 modules to operate on.  This would be a prime place to make such an
 attack.)

Umm, I'll have to disagree here.  FIPS 140-2 (at level 1) is not
primarily concerned with malicious attack.  I was told that in person in
my one and only face-to-face meeting with the CMVP.  They are aware that
a user-space process in a modern general purpose computer is subject to
many types of manipulations, and that a pure software implementation
cannot effectively guard against them.  Their primary concern is that it
is cryptographically correct (the algorithm tests) and that the code not
change (the runtime integrity checks).

 By that definition, malloc and friends are cryptographically
 significant.

No again.  I specifically asked about malloc long ago, and was informed
that use of an external malloc library by a Module is perfectly acceptable.

 What's fuzzy, though, is can I implement my own malloc and have it
 be able to be used by the FIPS module and have what I've implemented
 retain its FIPS validation?  Based on what you wrote above, I'm
 pretty sure the answer should be 'no' -- and my software
 implementation should also lose validation if there's anything that
 is LD_PRELOADed that overrides malloc.

Well, if the question is "can I do bad things with the express purpose
of subverting and corrupting a validated Module?" then of course the
answer is "no".  But if you ask the equivalent