First off, you seem to have set forth a design without first stating
its objective. I suppose in this case it's pretty clear what your
implied objectives are, though.

Traditionally, executable or "code" signing is used to certify who compiled
a binary, and to prove that it wasn't tampered with since the time it was
compiled by that party. This is an effective and useful way of proving that
a binary hasn't been tampered with in the process of distribution from the
producer of that binary to the user of that binary (via the Internet, or a
CD/DVD and retail distribution, or whatever). In schemes like this, the
binary is verified only once, when it is installed. OpenBSD already has
a system similar to this: signify(1).
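
(For illustration, here's a minimal sketch of that verify-once model in
Python, using the third-party "cryptography" package and Ed25519 -- the
same primitive signify uses. The file contents and key handling are
invented for the example; this is not how signify itself works
internally.)

    # Verify-once-at-install model: the producer signs a release, and
    # the consumer checks the detached signature a single time before
    # installing. Requires the third-party "cryptography" package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Producer side: generate a keypair and sign the release bytes.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    release = b"contents of the distributed package"  # stand-in bytes
    signature = private_key.sign(release)

    # Consumer side: verify exactly once, at install time. verify()
    # raises InvalidSignature if the bytes were altered in transit.
    try:
        public_key.verify(signature, release)
        print("signature OK; safe to install")
    except InvalidSignature:
        print("download was tampered with or corrupted; refusing")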

But your scheme goes further than this. You seem to be advocating for a
system that also endeavors to protect binaries from tampering while
they are at "rest" on the system, by re-verifying them before every
execution. To me, this has dubious benefit at a very, very high
performance cost. And it goes without saying that you would sign hashes
of the binaries, not the binaries themselves. That's not optional or
some kind of optimization; it's an intrinsic part of every
cryptographic message-signing scheme that I know of. Regardless,
verifying a signature with a public key every time an executable is
launched will probably make launching executables *orders of magnitude*
slower.
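
(To make the hash-then-sign point concrete, here's a rough Python
sketch in the same vein. The 4 MB "binary" is fabricated, and the
timing only illustrates where the per-launch cost would come from; it
is not a real benchmark.)

    # Hash-then-sign: the signature covers a fixed-size SHA-256 digest
    # of the binary rather than the binary itself, so signatures stay
    # small regardless of executable size. Verification must still
    # re-hash the whole file, and that is the cost paid on every exec.
    import hashlib
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    key = Ed25519PrivateKey.generate()
    binary = b"\x7fELF" + b"\x00" * (4 << 20)  # fake 4 MB executable

    digest = hashlib.sha256(binary).digest()   # always 32 bytes
    sig = key.sign(digest)

    # Rough cost of one launch-time check: re-hash, then verify.
    start = time.perf_counter()
    key.public_key().verify(sig, hashlib.sha256(binary).digest())
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print("one verification took %.3f ms" % elapsed_ms)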

OK, so you are probably going to say that there are people who would
accept that performance hit in exchange for the putative security
benefits...

So now let's talk about a few of the "at rest" tampering scenarios you may
be trying to protect against:

1. An attacker modifies the disk of the system while it is halted, to add
malware to the system.

In this case, your scheme provides little value, because the attacker
could just as trivially disable the signed-executable verification
feature (probably by simply editing a text file). A stealthier attacker
could replace the system's verification function with one that spins
through a million iterations of a tight loop, doing nothing, and then
always returns "OK" (caricatured in the sketch below).
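
(A few lines of Python to caricature that last attack; the function
name and its arguments are invented for illustration:)

    # The attacker's replacement verifier: burn enough cycles to look
    # like real cryptographic work, then approve everything.
    def verify_signed_executable(path, signature, pubkey):  # hypothetical
        for _ in range(1_000_000):  # tight loop doing nothing
            pass
        return True  # always "OK", no matter what was asked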

2. An attacker sends a sequence of network packets to the system that
causes it to execute some of the attacker's code (i.e. a "remote exploit").
This presupposes the existence of a remote exploit vulnerability in the
system.

In this case, your scheme does nothing to prevent the attacker's code
from running, because it can only verify executable code at the moment
a new process is launched; code injected into an already-running
process is never checked (see the stub sketch below). If the attacker's
code is running in a privileged process, then it can disable the
signed-executable verification scheme. If the attacker's code is not
running in a privileged process, then the attacker probably isn't in
any position to tamper with a system binary in the first place,
rendering this scenario irrelevant.
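
(The structural problem, as a stub sketch -- every name below is
hypothetical, and the stubs stand in for real machinery:)

    # The scheme's only interception point is process launch. Nothing
    # ever re-checks code that is already running.
    PUBKEY = b"..."                            # stub public key

    def load_signature(path):                  # stub for illustration
        return b"..."

    def verify_signed_executable(path, signature, pubkey):
        return True                            # stub: pretend it checks

    def execve_hook(path):
        # The one and only checkpoint, hit when a new process starts.
        if not verify_signed_executable(path, load_signature(path), PUBKEY):
            raise PermissionError("refusing unsigned binary: " + path)
        # ... actually launch the program here ...

    # Code that a remote exploit injects into an already-running,
    # already-verified process never passes through execve_hook(), and
    # so is never examined by the scheme at all.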

3. A rogue administrator surreptitiously installs malware onto a system
that he or she manages jointly with other people.

Obviously in this case the rogue administrator could mount the same attack
as in #1.

I can't think of any other scenarios right now, but I'd be interested to
hear if there is something I'm not thinking of...

-Joe
