Recently I had my first direct contact with the Microsoft Outlook
MUA.  Many people have praised its integration with PGP, internal
passwords, scheduling features, and user interface.  I've always
associated Outlook with the numerous "macro exploits" discovered
and successfully exploited over the past few years, and figured I'd
avoid Outlook personally -- no big deal, since I also don't use
the Windows OS.

I saw a piece of news which increased my interest in Outlook -- allegedly,
Microsoft is preparing a version for UNIX as part of a US DoD contract
which specifies UNIX as a messaging platform (primarily for security
reasons).  I'm sure everyone is familiar with the "Rainbow Books",
published by the National Computer Security Center.
This is where they specify security divisions D, C1-C2, B1-B3, and A1,
standards for limiting bandwidth on covert channels, password generation guidelines,
etc.  They additionally develop several important definitions -- 
the difference between Mandatory and Discretionary Access Controls,
formal methods for qualifying systems at various security levels,
and IMO the most important, the concept of a "Trusted Computing Base".

Presumably, in modifying Outlook for this contract, Microsoft will 
redesign some parts of the application, in addition to simply making
it work on a new OS.  I also don't believe the military is using PGP as
a component of the field messaging system, so it is not directly relevant
to this particular use.  However, the number of MS Outlook users
in the world is very large, and I would hazard a guess that most
PGP users who use MS Outlook believe they have a fair level of security
as a result.

I do not believe this is the case, at least for PGP signature checking
in Outlook.  The problem is an obvious one when you think about the
Trusted Computing Base concept -- a secure conduit from user through
the application to underlying hardware used to process secure data
and back to the user.  This secure (confidential, integrity-protected)
conduit is necessary for all data, and is provided either by
hardware or trusted software (the TCB) or by cryptographic methods
(such as transmitting an encrypted, signed message over an insecure
medium).
What MS Outlook appears to do is display status information about
signature checking on messages in the mail message frame itself,
indistinguishable from ordinary text.  The obvious attack is to send
a user unsigned mail (it could be encrypted, to add additional
legitimacy to the attack) with text at the beginning of the message
simulating the output of signature checking on the recipient's 
computer.  This can be done fairly convincingly -- it is hard to get
the timestamp exactly correct, but few users check the details
thoroughly if the message appears normal.
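The attack above can be sketched in a few lines.  This is a minimal
illustration, not a claim about Outlook's actual banner text -- the
wording, layout, and addresses below are all invented for the example;
a real attacker would copy whatever status text the victim's client
actually displays:

```python
# Sketch of the spoofing attack: an ordinary, UNSIGNED message whose
# body simply BEGINS with text imitating a mail client's inline
# "signature verified" output.  The banner format here is made up.
from email.message import EmailMessage
from datetime import datetime, timezone

def spoofed_message(sender, recipient, body):
    fake_banner = (
        "*** PGP Signature Status: GOOD ***\n"
        "*** Signer: {signer} ***\n"
        "*** Signed: {ts} ***\n\n"
    ).format(signer=sender,
             ts=datetime.now(timezone.utc).strftime("%a %b %d %H:%M:%S %Y"))
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Wire transfer authorization"
    # The "banner" is just ordinary body text, indistinguishable from
    # real status output when rendered in the message frame.
    msg.set_content(fake_banner + body)
    return msg

msg = spoofed_message("cfo@example.com", "victim@example.com",
                      "Please transfer the funds as we discussed.")
print(msg.get_content())
```

Note that the message is plain text end to end: there is no
multipart/signed structure anywhere, yet a reader who trusts in-band
status text sees a "verified" signature.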

This problem does not affect only MS Outlook -- it is part of a
larger class of problems, including a universal weakness in every
fielded smartcard and security-authentication system I have seen
deployed: man-in-the-middle attacks on the user's authentication to
the device, or on the parameters passed to a trusted signing device,
which the user cannot inspect on secure hardware prior to signing.
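The parameter-substitution case can be modeled in miniature.  This is
a toy model, not any real smartcard API: the "card" is stood in for by
an HMAC over a key the host never sees, and the point is only that the
card faithfully signs whatever bytes the host software hands it, while
the user's only view of the transaction is through that same untrusted
host:

```python
# Toy model of MITM on parameters passed to a trusted signing device.
# The card signs blindly; it has no display, so the user cannot check
# what is actually being signed.
import hmac
import hashlib

CARD_KEY = b"secret-key-inside-tamper-resistant-card"

def card_sign(data: bytes) -> bytes:
    """The 'smartcard': signs whatever it is given, sight unseen."""
    return hmac.new(CARD_KEY, data, hashlib.sha256).digest()

def compromised_host(user_approved: bytes, attacker_wants: bytes):
    # The host software shows the user one transaction...
    print("Display:", user_approved.decode())
    # ...but feeds the card a different one.
    return attacker_wants, card_sign(attacker_wants)

signed_data, sig = compromised_host(b"PAY $10 TO ALICE",
                                    b"PAY $10000 TO MALLORY")
# The signature verifies perfectly -- over the attacker's transaction.
assert hmac.compare_digest(sig, card_sign(signed_data))
```

A display and keypad on the card itself closes exactly this gap: the
data to be signed, and the PIN that authorizes signing, never transit
the untrusted host in a form the host can substitute.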

The solution, of course, is to separate TCB-trusted from
crypto-trusted systems and components more rigorously, and to ensure
that the gateways between the trusted environment (user I/O devices,
at the very least, unless your users can do SHA-1 in their heads) and
the trusted cryptography are very carefully designed to take into
account possible incompatibilities between the two worlds' trust models.

For a system like Outlook, this would mean checking signatures in code
which has (guaranteed by the TCB) sole access to certain user I/O
equipment.  For a smartcard/key-management-device, it means allowing
users to verify both the smartcard's integrity and the integrity of
the data presented to the smartcard to be signed without depending 
insecurely on pieces of equipment outside the TCB -- put a display
on the smartcard itself, as well as user input, so PINs are entered
directly into tamper-resistant hardware, after the smartcard
authenticates itself to the user (even more a concern for POS terminals),
and allow the user to browse the details of the transaction to be
signed (perhaps dollar amounts, or text) on secure hardware.

After all, if you're going to all the trouble of implementing a
cryptosystem, key infrastructure, user training, etc., it's kind of
stupid to let the whole system be spoofed by anyone with a text
editor and an email account.

Ryan Lackey
[EMAIL PROTECTED]
+41 1 27 42491

(Interesting thought experiment: what piece of email could you send
with a faked signature for maximum humor value, financial reward,
and/or chaos?  The target must be an MS Outlook user or otherwise subject
to this attack.  If you submit entries to "[EMAIL PROTECTED]",
I'll put them on a web page and the best few will get
cypherpunks/coderpunks/cryptography/dbs/dcsb archive CDs when I finish 
making them.)