Hi,

this refers to OpenSSL 0.9.8g and SNAP-20080215.

After a lot of painful wondering why things did not work as I expected,
I found that when OpenSSL encodes an ASN1_BIT_STRING, the padding octet
is set to the number of trailing zero bits in the last content octet.
This is done by i2c_ASN1_BIT_STRING(...) whenever ASN1_STRING_FLAG_BITS_LEFT
is not set in the ASN1_STRING flags (and the low three flag bits,
flags & 0x07, are zero). This appears to be the default.

X.690 defines the padding field as follows:
"8.6.2.2 The initial octet shall encode, as an unsigned binary integer
with bit 1 as the least significant bit, the number of unused bits in
the final subsequent octet. The number shall be in the range zero to
seven."

Section 8.6.2.4 specifies that trailing zero bits may be removed if
the BIT STRING is a NamedBitList.

If one sets the BIT STRING with ASN1_STRING_set(...) instead of
ASN1_BIT_STRING_set_bit(...), isn't it rather unlikely that it is a
NamedBitList? Is it intended that one has to state explicitly that
this trimming shall not be done? Especially since, in the NamedBitList
case, X.690 only says that "... a BER encoder ... *can* add or remove
trailing 0 bits from the value." - "can", not even "must".

I've attached an example patch solving this issue for me.

Best regards,
Martin
--- crypto/asn1/a_bitstr.c.SNAP-20080215	2008-02-15 16:49:25.559308000 +0100
+++ crypto/asn1/a_bitstr.c	2008-02-15 17:02:51.031027000 +0100
@@ -61,7 +61,11 @@
 #include <openssl/asn1.h>
 
 int ASN1_BIT_STRING_set(ASN1_BIT_STRING *x, unsigned char *d, int len)
-{ return M_ASN1_BIT_STRING_set(x, d, len); }
+	{
+	x->flags &= ~0x07;
+	x->flags |= ASN1_STRING_FLAG_BITS_LEFT;
+	return M_ASN1_BIT_STRING_set(x, d, len);
+	}
 
 int i2c_ASN1_BIT_STRING(ASN1_BIT_STRING *a, unsigned char **pp)
 	{
