On Sep 3, 11:53 am, Nelson B Bolyard <nel...@bolyard.me> wrote:
> On 2010-08-30 11:04 PDT, Michael Smith wrote:
>
> > On Aug 28, 10:08 am, Nelson Bolyard <nonelsons...@nobolyardspam.me>
> > wrote:
> >> What is the real underlying objective of this?
> >> Is it to authenticate the individual user of the product to the servers?
> >> Is it to ensure that the client applications of the network service are
> >> genuinely those made by your "partner", and not some other client that
> >> has been made by some third party who reverse engineered your protocol?
> >> (e.g. as AOL used to try to ensure that only genuine AOL clients
> >> accessed the AOL Instant Messenger servers?)
> > Yes, the intent is to ensure that the client application is a
> > "legitimate" application, and to prevent others (even if the _user_ is
> > appropriately authenticated with username/password) from accessing the
> > servers.
>
> [snip]
>
> > Any advice you can give would be greatly appreciated!
>
> The "attack" against which you're trying to guard is that someone reverse
> engineers your protocols and creates a substitute client that talks to
> your servers.  Presumably, someone does that by reverse engineering your
> client.  Anyone who can do that can also find the private key and the
> client certificate (which will be embedded somewhere in your client
> binary, and aren't very big) and use them in their substitute client.
> So, embedding a key and cert in your binary really doesn't offer much
> protection, IMO.
>
> In some sense, the problem is that the info that the attacker must replicate
> is too small and too easily replicated, if it is merely a
> key and cert, or for that matter, if it is merely the static content
> of an executable program file.  I know of a company (:-) whose products
> had a protocol whereby the server asked the client "give me the contents
> of your memory starting at this address for this many bytes", which was
> an address in the code portion of the program, as a means of authenticating
> the client program.  The idea was that this made the entire program file
> the data the attacker would have to possess; no tiny subset of it was
> enough to fool the server.  The attackers simply shipped a copy of the
> original program along with
> their replacement, and their replacement program answered those requests by
> reading the original program file to find the answers.
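As an aside, that attack can be sketched in a few lines. This is purely illustrative (no real product's protocol): a byte string stands in for the client's code segment, and all names here are invented for the example.

```python
# Sketch of the memory-challenge scheme described above, and of the
# replay attack against it.  ORIGINAL_IMAGE stands in for the genuine
# client's code segment; in reality the server would hold a copy of
# the shipped binary to compute its expected answers.
ORIGINAL_IMAGE = b"...pretend this is the genuine client's code..." * 100

def server_challenge(offset, length):
    """Server picks a region of the client's code and the answer it expects."""
    expected = ORIGINAL_IMAGE[offset:offset + length]
    return (offset, length), expected

def genuine_client_answer(offset, length):
    # A genuine client reads the requested range of its own code.
    return ORIGINAL_IMAGE[offset:offset + length]

def rogue_client_answer(offset, length, shipped_original):
    # The attacker's replacement ships a copy of the original program
    # and answers challenges by reading that file instead of its own code.
    return shipped_original[offset:offset + length]

challenge, expected = server_challenge(40, 16)
assert genuine_client_answer(*challenge) == expected
# The rogue client passes exactly the same check:
assert rogue_client_answer(*challenge, ORIGINAL_IMAGE) == expected
```

The server cannot distinguish the two answers, which is the point of the anecdote: making the whole program file the secret doesn't help if the attacker can simply ship the file.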
>
> If you assume that the attacker has full access to every bit of data that
> the server shares with the client, then trying to distinguish between a
> "legitimate" client and a replacement becomes a game of testing the
> limits to which the attacker is willing to go to emulate the original.
> But you can go quite far in that direction, producing results that require
> quite a bit of emulation to replicate.  That means demanding results that
> depend on a good deal of dynamic data, rather than merely on static data
> that can be obtained from the original program file.  And it can all be
> overcome with enough reverse engineering.
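To make the argument concrete, here is a hypothetical variant (again, invented names, not any real protocol) where the server mixes a fresh nonce into each challenge so answers can't be replayed verbatim. Even so, an attacker who keeps a copy of the original image can compute the same answer:

```python
import hashlib

# Illustrative nonce-based challenge: the server demands a digest over
# (nonce || code region), so a recorded answer from an earlier session
# is useless.  But the computation depends only on static program bytes,
# so a rogue client holding a copy of the original image still wins.
ORIGINAL_IMAGE = b"pretend-code-segment" * 200

def challenge_response(nonce, offset, length, image):
    return hashlib.sha256(nonce + image[offset:offset + length]).hexdigest()

nonce = bytes(range(16))  # in practice, a fresh random value per connection
expected = challenge_response(nonce, 64, 128, ORIGINAL_IMAGE)

# The genuine client hashes its own code; the rogue hashes the shipped
# copy.  Both produce the same digest, so the server can't tell them apart.
assert challenge_response(nonce, 64, 128, ORIGINAL_IMAGE) == expected
```

Truly dynamic data (timing, runtime state, interaction with anti-debugging tricks) raises the emulation cost, but as the paragraph above says, enough reverse engineering overcomes all of it.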
>

Nelson,

Absolutely! I understand the limitations of this approach, and why
it's ultimately a waste of time and effort for all involved. I'm
unfortunately not in a position to replace the mechanism in use:
though we can do anything we want to our client, the servers we're
talking to are run by another company, and aren't going to be changed.
It's a silly design, but something I'm stuck with. I'm not enough of
an expert in this field to design a 'good' system myself, but I'm
capable of recognising something that is badly designed.

My question was, therefore, simply how to implement this using NSS,
rather than anything else - your description of what approaches could
be taken is interesting (and would be helpful if I were trying to
design something myself).

Thanks,

Mike
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
