On October 28, 2003 04:51 am, Otto Moerbeek wrote:
> Yeah, ever since I looked into the bn functions I have the feeling more
> bugs are hidden there, and some of these bugs could have been avoided
> by being more strict wrt the representation of BNs. (I also hate the use
> of the letter l as a variable name, but that sure is OT).

OpenSSL code style is not always what you would call an exact science. :-)

> > reasoning would suggest that x->top be zero. Does anyone out there
> > have some historical knowledge (SSLeay?) or some convincing
> > explanations for why the exceptional 1-word form for '0' should
> > be considered acceptable?
>
> I can imagine that intermediate results of some functions can have
> leading zeroes. But any number that is exposed should be normalized
> (have leading zeroes removed). That includes zero, IMO.

It's not just whether the number is exposed, it's also whether it is 
passed into any other functions in intermediate form. If you don't insist 
that BIGNUMs are consistent when passed *into* functions as well as on 
exit, then you end up needing bn_fix_top()s at both extremes of your code 
paths, which is messy and wasteful. What I'm looking at in the code right 
now is macro tricks so that, in a debug build, function inputs are 
"screened" rather than "fixed", and all API functions and macros have 
screen macros in their exit paths. I will also be looking for places to 
try to add "noise" to the unused words in the bignum data - I hope this 
might trip up any other bugs due to sloppy handling of bn->top.
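
Roughly what I have in mind for the debug build is something like the 
following - just a sketch, the macro names are made up, and it assumes 
the d/top/dmax fields as declared in bn.h:

#include <assert.h>
#include <stdlib.h>
#include <openssl/bn.h>

#ifdef BN_DEBUG /* or whatever debug define we settle on */
/* "Screen" rather than "fix": blow up if a caller hands over a BIGNUM
 * that isn't in canonical form, ie. one with a leading zero word or a
 * nonsensical 'top'. */
# define bn_screen_top(a) \
        do { \
                const BIGNUM *_bn = (a); \
                assert(_bn->top >= 0 && _bn->top <= _bn->dmax); \
                assert(_bn->top == 0 || _bn->d[_bn->top - 1] != 0); \
        } while (0)
/* Scribble garbage over the words above 'top' so that any code peeking
 * past the canonical length misbehaves loudly rather than silently. */
# define bn_add_noise(a) \
        do { \
                BIGNUM *_bn = (a); \
                int _i; \
                for (_i = _bn->top; _i < _bn->dmax; _i++) \
                        _bn->d[_i] = (BN_ULONG)rand(); \
        } while (0)
#else
# define bn_screen_top(a)
# define bn_add_noise(a)
#endif

In a non-debug build both collapse to nothing, so the screening costs 
nothing in production; the debug build would call bn_screen_top() on 
every input and in every exit path, and bn_add_noise() wherever a 
result is produced.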

> Perversity indeed :-) BNs are supposed to be opaque. Any app relying on
> the representation is already walking on thin ice. From the bn(3) man
> page:
>
>        The basic object in this library is a BIGNUM. It is used
>        to hold a single large integer. This type should be
>        considered opaque and fields should not be modified or
>        accessed directly.

Yeah, but there are two problems: (1) nobody follows that kind of advice 
anyway, partly because nobody takes such warnings seriously and partly 
because our own code examples tend to set bad examples in this respect. 
:-) (2) a lot of direct manipulation of BIGNUM structures happens through 
the use of API *macros*. So with (2), we ideally need to know if our 
exposed macros ever leave BIGNUMs in an inconsistent state too, because 
that is the kind of thing that leads to bug reports from people who try 
linking to updated libraries without recompiling.
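
To be concrete about (2) - this is purely illustrative, the real macros 
in bn.h differ in the details - the trouble is that a macro gets 
compiled into the application rather than the library:

#include <openssl/bn.h>

/* Illustrative only: a macro like this expands in the *application*, so
 * the application's idea of a "consistent" BIGNUM is frozen when the
 * application is compiled, not when the library is. */
#define MY_BN_is_zero(a)        ((a)->top == 0)

int app_thinks_its_zero(const BIGNUM *a)
{
        /* Misses the alternative 1-word form (top == 1, d[0] == 0), so
         * a library update that starts producing that form breaks this
         * application without a recompile. */
        return MY_BN_is_zero(a);
}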

> > bugs. (The "Principle of most surprise" anyone?) So the census I'm
> > looking for is this: even if no one has any convincing explanations
> > for *why* this alternative 1-word exception is tolerated, does anyone
> > have any convincing arguments against us phasing it out?
>
> Not me. Zero is already a boundary case in some algorithms. Having two
> representations of zero only makes things worse. Actually, there are
> already 4 representations of zero, since you can have +0 and -0. Don't
> know if this has consequences similar to the top thing.

The 'neg' element is less of a problem IMHO, because "+" is exactly as 
wrong as "-" when talking about zero :-) In essence, any treatment of 
zero is obliged to ignore "sign" because zero has none, and if you were 
to define a unique representation of zero that included the 'neg' 
element, then you would have to use some kind of tristate rather than a 
boolean type to handle that. 'top', on the other hand, has a fairly clear 
purpose and its "correct" value is uniquely defined for all BIGNUM values 
in an elegant canonical way. The fact that we explicitly allow ambiguity 
in the case of zero is just something we do to spite ourselves, AFAICS, 
as there's no other apparent justification for it.
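
Put another way, the canonical rule amounts to nothing more than this 
(hypothetical helper, field names as in bn.h):

#include <openssl/bn.h>

/* Strip leading zero words, and once the value is zero force the sign
 * too, so that exactly one representation of zero survives:
 * top == 0 and neg == 0. */
static void bn_canonicalise(BIGNUM *a)
{
        while (a->top > 0 && a->d[a->top - 1] == 0)
                a->top--;
        if (a->top == 0)
                a->neg = 0;     /* zero has no sign */
}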

> > All thoughts and feedback would be most welcome.
>
> I like this. The one thing I can suggest is using the new dc(1) and
> bc(1) in OpenBSD -current (I found the BN_add_word bug while
> implementing dc). Dc (and consequently bc) use the crypto/bn routines
> and might very well be used for testing.

Ah, "make test" uses "bc" to compare the results of test calculations 
against the internal library. This is normally a good idea because 'bc' 
is typically built using GMP, but if it's using crypto/bn on OpenBSD then 
that might become somewhat surreal :-) BTW: take a look around inside 
test/ if you feel like fiddling, that's where the self-tests are 
organised. FWIW, Ulf wrote some routines some time ago for generating 
random bignum values that "often enough, but not too often" generate 
corner-case values. That led to the discovery of some calculation bugs 
IIRC, and it might be the kind of thing that would be useful for this 
sort of testing too?
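
The idea was along these lines - a rough sketch from memory, the helper 
name is made up and rand() is only there because this is test data:

#include <stdlib.h>
#include <openssl/bn.h>

/* Usually return an ordinary random value, but with some probability
 * hand back a corner case instead: zero, one, a single-word value, or
 * a value with its top and bottom bits forced. */
static int test_rand_bn(BIGNUM *out, int max_bits)
{
        switch (rand() % 8) {
        case 0:
                BN_zero(out);           /* the boundary case itself */
                return 1;
        case 1:
                return BN_one(out);
        case 2:
                return BN_rand(out, 32, 0, 0);       /* one-word value */
        case 3:
                return BN_rand(out, max_bits, 1, 1); /* top/bottom bits set */
        default:
                return BN_rand(out, 1 + rand() % max_bits, 0, 0);
        }
}

Feeding values like that through the existing self-tests in test/ would 
presumably shake out the same class of bn->top bugs.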

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/
