I'm doing compatibility testing with the latest (1.0.1e) version of
OpenSSL, and I've noticed a difference that I can't reconcile.  When I send
the "client finished" message, say it's:

14 00 00 0c 37 ef c3 10 b4 76 45 6f 30 b4 45 bc

(that is, the client finished tag 0x14, three bytes of length data, and 12
bytes of verify data, for 16 bytes total), RFC 5246 (section 6.2.3.1)
says that the MAC should be:

MAC(MAC_write_key, seq_num +
                            TLSCompressed.type +
                            TLSCompressed.version +
                            TLSCompressed.length +
                            TLSCompressed.fragment);

In that case the TLSCompressed.length ought to be 16 (0x10) bytes, correct?
This matches RFC 2246 and is OpenSSL's behavior for TLS 1.0, but for TLS
1.2 OpenSSL appears to expect the TLSCompressed.length to be 12 (0x0c), as
if the 4-byte handshake header were excluded.  Is that correct?  I can't
find any language in RFC 5246 indicating that the handshake header should
be excluded from this length.
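For what it's worth, here is a standalone sketch of how I'm building the
MAC input, as I read that section.  The key and the choice of HMAC-SHA1 are
just placeholders (the real MAC_write_key comes from the key block and the
digest depends on the negotiated cipher suite); the point is the layout: an
8-byte sequence number, type, version, then a 2-byte length of 0x0010
covering the whole 16-byte fragment.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void)
{
    /* The client finished message from above: tag, 3-byte length,
     * 12 bytes of verify data, i.e. a 16-byte fragment. */
    unsigned char fragment[16] = {
        0x14, 0x00, 0x00, 0x0c, 0x37, 0xef, 0xc3, 0x10,
        0xb4, 0x76, 0x45, 0x6f, 0x30, 0xb4, 0x45, 0xbc
    };
    unsigned char seq_num[8] = { 0 };    /* first record under the new keys */
    unsigned char mac_key[20] = { 0 };   /* placeholder MAC_write_key */
    unsigned char input[13 + 16];
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len, i;

    memcpy(input, seq_num, 8);
    input[8]  = 0x16;                    /* TLSCompressed.type: handshake */
    input[9]  = 0x03;
    input[10] = 0x03;                    /* TLSCompressed.version: TLS 1.2 */
    input[11] = 0x00;
    input[12] = 0x10;                    /* TLSCompressed.length: 16, the full fragment */
    memcpy(input + 13, fragment, sizeof(fragment));

    HMAC(EVP_sha1(), mac_key, sizeof(mac_key),
         input, sizeof(input), mac, &mac_len);

    for (i = 0; i < mac_len; i++)
        printf("%02x ", mac[i]);
    printf("\n");
    return 0;
}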

I added the following to ssl/t1_enc.c, line 1009 (in the function tls1_mac):

{
  /* Dump the 13-byte MAC pseudo-header (sequence number, type,
     version, length) and the record bytes being MACed. */
  int x;
  printf( "MAC Computation begins.  13 byte header is: " );
  for ( x = 0; x < 13; x++ ) printf( "%02x ", header[ x ] );
  printf( "\ninput is: " );
  for ( x = 0; x < rec->length + md_size; x++ ) printf( "%02x ", rec->input[ x ] );
  printf( "\n" );
}

and I see this on my console (when specifying TLS 1.2):

MAC Computation begins.  13 byte header is: 00 00 00 00 00 00 00 00 16 03 03 00 0c
input is: 14 00 00 0c 37 ef c3 10 b4 76 45 6f 30 b4 45 bc ba 9d 25 73 21 61 e5 a2 f6 64 49 f3 5c bf 73 38

But when specifying TLS 1.0:

MAC Computation begins.  13 byte header is: 00 00 00 00 00 00 00 00 16 03 01 00 10
input is: 14 00 00 0c 22 b8 b0 05 a5 16 1f 14 ed 6b 0f eb 90 c8 61 93 1c 25 86 f7 67 59 4b 1b f9 2d e2 93 5b ef 4d d4

Notice that the length at the end of the MAC pseudo-header that OpenSSL
builds is 0x0c for TLS 1.2, but 0x10 for TLS 1.0.
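
Laid out as byte arrays (taken straight from the two dumps above), the
headers differ only in the version bytes and in that final length byte:

unsigned char header_tls12[13] = {
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,  /* seq_num */
    0x16,                                             /* type: handshake */
    0x03, 0x03,                                       /* version: TLS 1.2 */
    0x00, 0x0c                                        /* length: 12 */
};
unsigned char header_tls10[13] = {
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,  /* seq_num */
    0x16,                                             /* type: handshake */
    0x03, 0x01,                                       /* version: TLS 1.0 */
    0x00, 0x10                                        /* length: 16 */
};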

Is this correct per the specification?
