Kyle Hamilton wrote:
>
> I'll go out on a limb here and express my (certainly naive)
> extrapolations/interpolations:
>
> Module Boundary: That which contains the entire deliverable that
> implements the algorithms required by FIPS 140-2 and the glue to make
>  them accessible.  (The physical string of bits.)  This should
> contain the smallest amount of code/data possible to fulfill the
> requirements of FIPS 140.

Well, not really.  You're allowed to define an entire application or
even turn-key system (software and/or hardware) as the Module, including
as many cryptographically extraneous components as you want.  Generally
no one wants to do that because the contents are then "frozen" and
you're unable to make even trivial changes.  With a lead time of many
months for validation approvals, that is a non-starter for non-trivial
applications or systems.

> Algorithm Boundary: Once you get past the "glue" which makes it
> possible to interface, algorithms should be completely self-contained
>  and not reuse code -- and the algorithm shouldn't be 'steppable',
> meaning you call the function with the data and it gives you the
> transformed data, with no intermediate results.

Hmmm, I haven't heard that one before.  We made no particular effort to
segregate the algorithm code and in fact all the algorithm tests were
performed against the full Module library.
>
>> 3) For software Modules the integrity test (a digest check) is
>> performed over the contents of the Module.  Well, sort of.  There
>> are two acceptable techniques that I know of.  One is to create the
>> software as a runtime binary file (typically a shared library file
>> for a crypto library, or a standalone application program) and call
>> the file itself as it resides on disk the crypto Module.  In this
>> case the Module is one contiguous string of bits; note that this
>> string of bits includes "glue" data for the run-time loader.  The
>> reference digest value itself can't be included in that file, of
>> course, so it is placed in a separate file. The second technique is
>> to perform the digest over the text (machine code) and read-only
>> data segments of the program as mapped into memory by the run-time
>> loader.  In this case the Module is two (or more) separate
>> contiguous strings of bits in memory (or memory mapped disk
>> sectors).  Those reference digest values can be embedded in the
>> runtime binary file, outside of the Module boundary.
>
> The first technique is what OpenSSL uses to distribute the code.
> (The security policy defines the keyed hash that the code must match
> before it can be built.)  The second is what OpenSSL uses to embed
> the code.

I think you mean "OpenSSL FIPS Object Module" instead of "OpenSSL" --
they are NOT the same thing.  The FIPS Object Module (fipscanister.o) 
itself isn't very useful as it defines only cryptographic primitives. 
In practice an application will use the higher level OpenSSL APIs which
front-end the low-level API in the Module.

In our case the use of integrity check digests is actually more
complicated still.  In the User Guide you'll find discussions of the
separate *.sha1 files that are used with the special "fipsld" double
link that joins the monolithic object module (the Module proper) to the
application code.
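For the curious, here's a rough sketch of what the first (file-based)
integrity check technique amounts to: compute a keyed digest over the
module file on disk and compare it against a reference value kept in a
separate file, outside the Module boundary.  The function name and the
key below are placeholders of my own invention, not the actual fipsld
or Module code:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/hmac.h>

    /* Hypothetical fixed HMAC key -- NOT the real key from the
     * security policy. */
    static const unsigned char hmac_key[] = "placeholder-key";

    /* Return 1 if the keyed digest of the file at module_path matches
     * the reference digest (e.g. as read from a separate *.sha1
     * file), 0 otherwise. */
    int check_module_file(const char *module_path,
                          const unsigned char *ref, unsigned int ref_len)
    {
        FILE *fp = fopen(module_path, "rb");
        unsigned char buf[4096], md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;
        size_t n;
        HMAC_CTX ctx;

        if (fp == NULL)
            return 0;
        HMAC_CTX_init(&ctx);
        HMAC_Init_ex(&ctx, hmac_key, sizeof(hmac_key) - 1,
                     EVP_sha1(), NULL);
        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
            HMAC_Update(&ctx, buf, n);
        HMAC_Final(&ctx, md, &md_len);
        HMAC_CTX_cleanup(&ctx);
        fclose(fp);
        return md_len == ref_len && memcmp(md, ref, md_len) == 0;
    }

The second technique works the same way except that the digest is
computed over the text and read-only data segments as mapped into
memory, with the reference values embedded in the binary outside the
digested regions.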
>
> Another (incredibly naive) view:
>
> "Cryptographically Significant": anything that Eve or Mallory could
> use in an attack.  This includes things to spy on the running state
> of the cryptographic process, and things to change the running state
> of the cryptographic process.  (This also includes the 'glue'...
> i.e., how to get the data into the structures necessary for the
> modules to operate on.  This would be a prime place to make such an
> attack.)

Umm, I'll have to disagree here.  FIPS 140-2 (at level 1) is not
primarily concerned with malicious attack.  I was told that in person in
my one and only face-to-face meeting with the CMVP.  They are aware that
a user-space process in a modern general purpose computer is subject to
many types of manipulations, and that a pure software implementation
cannot effectively guard against them.  Their primary concern is that it
is cryptographically correct (the algorithm tests) and that the code not
change (the runtime integrity checks).
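To make "cryptographically correct" a bit more concrete: the algorithm
tests boil down to known-answer checks of roughly the following flavor.
This is an illustrative sketch using the well-known AES-128 result for
an all-zero key and plaintext, not an actual CAVP test vector:

    #include <string.h>
    #include <openssl/aes.h>

    /* Known-answer test: encrypt a fixed plaintext with a fixed key
     * and compare against the precomputed ciphertext. */
    int aes_kat(void)
    {
        static const unsigned char key[16];  /* all zero */
        static const unsigned char pt[16];   /* all zero */
        static const unsigned char expect[16] = {
            0x66, 0xe9, 0x4b, 0xd4, 0xef, 0x8a, 0x2c, 0x3b,
            0x88, 0x4c, 0xfa, 0x59, 0xca, 0x34, 0x2b, 0x2e
        };
        unsigned char ct[16];
        AES_KEY aes;

        AES_set_encrypt_key(key, 128, &aes);
        AES_encrypt(pt, ct, &aes);
        return memcmp(ct, expect, sizeof(ct)) == 0;  /* 1 = pass */
    }

In a real Module a failed self-test of this sort puts the Module into
an error state where no cryptographic operations are permitted.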

> By that definition, malloc and friends are cryptographically
> significant.

No again.  I specifically asked about malloc long ago, and was informed
that use of an external malloc library by a Module is perfectly acceptable.

> What's fuzzy, though, is "can I implement my own malloc and have it
> be able to be used by the FIPS module and have what I've implemented
> retain its FIPS validation?"  Based on what you wrote above, I'm
> pretty sure the answer should be 'no' -- and my software
> implementation should also lose validation if there's anything that
> is LD_PRELOADed that overrides malloc.

Well, if the question is "can I do bad things with the express purpose
of subverting and corrupting a validated Module" then of course the
answer is no.  But if you ask the equivalent question "must I take
steps to prevent such subversion, such as not using calls to shared
malloc functions" then the answer is also no.  LD_PRELOAD is one of
many ways to subvert a Module.  You are not required to use only an
operating system that doesn't support those capabilities, nor are you
required to configure your Module (as some O/Ses permit) so that
LD_PRELOAD, etc., fail.
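For anyone unfamiliar with the mechanism, interposing on malloc is
trivially easy; a complete LD_PRELOAD shim is only a few lines.  This
is purely a sketch of the mechanism, of course, and nothing here is
part of any validated Module:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <dlfcn.h>

    /* Build:  gcc -shared -fPIC -o shim.so shim.c -ldl
     * Run:    LD_PRELOAD=./shim.so ./some_app
     * Every malloc() in the process now passes through here first. */
    void *malloc(size_t n)
    {
        static void *(*real_malloc)(size_t);

        if (real_malloc == NULL)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        /* Mallory could inspect or corrupt allocations here. */
        return real_malloc(n);
    }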

-Steve M.

-- 
Steve Marquess
Open Source Software Institute
[EMAIL PROTECTED]

______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
