Amir,

here are comments, not particularly well reviewed.

http://www.cs.biu.ac.il/~herzbea//Papers/ecommerce/spoofing.htm

Mozilla people (2nd try),

This is CC'd to Mozilla security group for the record, as it
is the most likely place where Amir and Ahmad's work will go
next.  But, it is *long* and intermediate - it can for the most
part be ignored unless you are following closely the debate on
securing the browser.

iang

====================================================

In spite of the use of standard web security measures, swindlers often clone sensitive web sites and/or present false credentials, causing substantial damage to individuals and corporations. We believe that, to a large extent, this is due to the difficulty of noticing when a sensitive web page has an incorrect location or is simply unprotected. In fact, we show that several of the largest web sites ask users for passwords in unprotected pages, making them easy targets; apparently, even the designers did not notice the lack of protection.

Only recently has it been pointed out that there are more frauds going on than what I will call the archetypal phish. In particular, someone posted on [EMAIL PROTECTED] a link to the "diaries" of two crooks. Each of these represents a phish, of sorts, in that the technique is the same. What differs is that they were not using another brand as their bait; they were inventing a whole new brand of some presumable excitement to their victims. I.e., porn sites, I think, were mentioned.

In each case, they made a profitable haul.  The question arises
whether this is phishing.  If so, how do we incorporate it;
if not, what do we do about it?

It substantially complicates our previous approach since that was based
on the misuse of a prior relationship.  In this case, there is no
stolen or perverted relationship - unless it be that of AOL.

I'm inclined to say that we should leave this issue aside for now.
I.e., consciously declare that the ideas don't cover it, but also
be very clear that this is not a reason to cease trying.  It's better
to get something done to sort out the brand theft phishing than to
worry about what we can't do, right now, and end up doing not as
much.


Several papers have presented web spoofing attacks, but mostly focused on advanced attacks trying to mislead even a careful expert; some also suggested countermeasures, mostly by improved browser user interfaces. However, we argue that these countermeasures are inappropriate for most non-expert web users; indeed, they are irrelevant to most practical web-spoofing attacks, which focus on naive users. In fact, even expert users could fall victim to these practical, simple spoofing attacks, resulting in identity theft or other fraud.


This is very important.  Yes.  It aligns with Mozilla's basic mission.

We present the trusted credentials area, a simple and practical browser UI enhancement, which allows secure identification of sites and validation of their credentials, thereby preventing web-spoofing, even for naïve users. The trusted credentials area is a fixed part of the browser window, which displays only authenticated credentials, and in particular logos, icons and seals. In fact, we recommend that web sites always provide credentials (e.g. logo) securely, and present them in the trusted credentials area; this will help users to notice the absence of a secure logo in spoofed sites. This follows the established principle of branding. Logos and credentials may be certified by trusted Certificate Authorities, or by peers using a PGP-like `web of trust`.

The existence of the TCA is something I've been pushing for a year now, and I wholeheartedly agree. Now, the contents of what I called the "branding box" are a matter of further debate.

In your paper you explore one
plausible set, others have suggested other sets.  I was recently
made aware of another very good idea, being Tyler Close's one
of petnames for certificated sites.  I've in the past suggested
visit counts, etc, and I think the CA logo is essential.

So this may be a suggestion to suggest that contents of the TCA
are open to experimentation.

In a related work, developed independently and after our own, Close [C04] proposes to allow users to select nicknames (called a `petname` in [C04]) for web sites. The browser maintains the mapping from the site's public key to the nickname (we deduce that they use SSL/TLS to confirm the public key, although this is not written explicitly). If we understand correctly, this proposal is a subset of the mechanism we propose, and in particular, it fails to provide a sensory input (image) as we do.


Right, that idea.  A couple of things - it's called a petname,
which has a defined meaning; you can probably google for the
defining paper.  It is a name that is explicitly not shared
with the rest of the world, so it is distinct by definition
from the nickname, which is shared.

SSL/TLS isn't used to confirm the public key.  Firstly, the
protocol itself is simply handed the key, already presumed
"confirmed", and cannot itself confirm it.  That's the job
of the PKI, which in this case is the CA or signing model.
Recall the discussion we had about separating out PKI from SSL?

Also, the key doesn't need any confirming anyway.  It is
sufficient that the key is cached and message-digested; the
digest is what gets petnamed.  This further highlights that
it is the presence of the persistent relationship that is
being protected by these techniques, and any CA signature is
just another input into that relationship equation, not a
defining one.

(I.e., in quick form, self-signed certs do just as well here.)
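That cache-and-digest idea is small enough to sketch. A minimal, hypothetical illustration (stdlib only; the key bytes stand in for a cert's DER-encoded public key, and note that no CA signature is consulted anywhere):

```python
import hashlib

# User-private map: digest of the raw public key -> petname.
# Persistence of the fingerprint, not any CA signature, is the anchor,
# which is why a self-signed cert works just as well here.
petnames = {}

def fingerprint(pubkey_bytes):
    """Message-digest the key; the digest is what gets petnamed."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

def assign_petname(pubkey_bytes, name):
    petnames[fingerprint(pubkey_bytes)] = name

def lookup(pubkey_bytes):
    """Petname if this exact key has been seen and named before, else None."""
    return petnames.get(fingerprint(pubkey_bytes))

assign_petname(b"--bank's key bytes--", "my bank")
print(lookup(b"--bank's key bytes--"))     # the remembered petname
print(lookup(b"--spoofer's key bytes--"))  # None: no prior relationship
```

A spoofer presenting a different key - however validly CA-signed - simply fails the lookup, because they cannot reproduce the persistent relationship.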


Existing web security mechanisms (SSL/TLS) may cause substantial overhead if applied to most web pages, as required for securing credentials (e.g. logo) of each page. We present a simple alternative mechanism to secure web pages and credentials, with acceptable overhead. Finally, we suggest additional anti-spoofing measures for site owners and web users, mainly until deployment of the trusted credentials area.

(and)

Unfortunately, most web pages are not protected by SSL. This includes most corporate and government web pages, and other sensitive web pages. The reason is mainly performance; the SSL protocol, while fairly optimized, still consumes substantial resources at both server and client, including at least four flows at the beginning of every connection, state in the server, and computationally-intensive public key cryptographic operations at the beginning of many connections. Later on, we present a more efficient mechanism for authenticating credentials of web pages and other objects.

Amir! This has to be the worst reason to present! It's 2004, most everybody has cycles sitting there doing nothing on their web servers, even your PDA could do SSL without you noticing the slowdown. Yet that old chestnut still hangs around from the days of Netscape's desperate desires to sell a differentiated server product.

The reluctance to protect pages with SSL is 99% due to the
cost of the certificate process.  No manager in his right
mind wants to futz around with that process, simply because
it's costly, messy, causes sysadmins to go grumpy, and
everyone knows that next year you've got to go through it
again.  Not to mention, we've all heard stories about how
the cert expired while the boss was on holiday and the
customer support desk went nuts.

I estimate the average corporate cost of "going SSL" to be
O($1000).  No average business worth less than a million
spends that sort of money on a whim.  And most above that
threshold got there by being more careful with money than
to waste on certs.  By the way, that's a conservative
cost, IMO.

The pure and utter reason why people don't use HTTPS is because
it doesn't bootstrap out of the box.  You will never ever hear
of people saying "oooo, don't do your 10Gb backup over SSH,
use ftp because it doesn't do any crypto."  Modern computers
suck up RSA like you and I suck up beer, at the bottom I'll
paste a benchmark of my laptop generated as I'm typing...

I-M-strongly-expressed-O !
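For what it's worth, the sign/verify asymmetry in that benchmark falls straight out of modular exponentiation: the public exponent is tiny (65537), the private exponent is full-width. A stdlib-only illustration (the random modulus and exponents are stand-ins, not a real RSA key, so only the ratio is meaningful, not the absolute times):

```python
import secrets
import time

BITS = 2048
n = secrets.randbits(BITS) | (1 << (BITS - 1)) | 1  # stand-in odd 2048-bit modulus
m = secrets.randbits(BITS - 1)                      # stand-in message representative
e_pub = 65537                                       # typical RSA public exponent
d_priv = secrets.randbits(BITS) | 1                 # stand-in full-width private exponent

def timed(exp, reps=20):
    """Average time of one modular exponentiation with the given exponent."""
    t0 = time.perf_counter()
    for _ in range(reps):
        pow(m, exp, n)
    return (time.perf_counter() - t0) / reps

t_pub, t_priv = timed(e_pub), timed(d_priv)
print(f"public-exponent op:  {t_pub * 1e3:.3f} ms")
print(f"private-exponent op: {t_priv * 1e3:.3f} ms")
print(f"ratio: {t_priv / t_pub:.0f}x")  # same shape as openssl's verify/s vs sign/s gap
```

The expensive private-key operation happens once per handshake on the server; clients mostly do the cheap public-exponent verify, which is why "SSL is too slow" stopped being true years ago.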


Customization: the visual representation of the different credentials should be customizable by the user. Such customization may make it easier for users to validate credentials, e.g. by allowing users to use the same graphical element for categories of sites, for example for `my financial institutions`. Similarly, a customized policy could avoid cluttering the trusted credentials area with unnecessary, duplicate or less important logos; e.g., it may be enough to present one or two of the credit card brands used by the user (and that the site is authorized to accept), rather than present the logos for all of them. In addition, customization could allow users to assign easy-to-recognize graphical elements (`logos`) to sites that do not (yet) provide such graphical identification elements securely (i.e. that do not yet adopt our proposals). Finally, as argued in [YS02], by having customized visual clues, spoofing may become harder.

OK, this is what Tyler's petname thing is about - storing into the browser things that only the user could have done. You may need to compare and contrast these variants, so that the correct attribution is applied.
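At bottom, the customization quoted above is a user-maintained map from key fingerprint to a chosen visual element, with the TCA falling back to a warning for unmapped keys. A hypothetical sketch (all names and paths invented for illustration):

```python
import hashlib

# User-maintained map: key fingerprint -> chosen visual element.
# Per-site logos and category icons ("my financial institutions") share
# one mechanism; only the user writes into this table.
visuals = {}

def customize(pubkey_bytes, icon_path, category=None):
    fp = hashlib.sha256(pubkey_bytes).hexdigest()
    visuals[fp] = {"icon": icon_path, "category": category}

def tca_display(pubkey_bytes):
    """What the trusted credentials area shows for this key, if anything."""
    fp = hashlib.sha256(pubkey_bytes).hexdigest()
    entry = visuals.get(fp)
    if entry is None:
        return "WARNING: no credentials for this site"
    return entry["icon"]

customize(b"--bank key--", "icons/piggy-bank.png",
          category="my financial institutions")
print(tca_display(b"--bank key--"))     # the user's chosen icon
print(tca_display(b"--unknown key--"))  # the warning fallback
```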

And, again, here:

2. To make it harder to spoof the trusted area, even for a program that can write on arbitrary locations in the screen, it may be desirable that the background of the trusted area will be a graphical element selected randomly from a large collection, or selected by the user.

And again, later:

... The browser extension could include a list of icons for typical sensitive sites, such as `my bank`, `my broker`, `my health-care provider`, and allow the user to select another icon by providing a graphical file. ...


So we now have a class of ideas where the USER tells the
BROWSER something about the certificate in question.  And
then, this:

In addition, our extension generates, upon installation, a private signature key, which it uses later on to sign logo certificates, linking public keys and logos, if the user (manually) specifies the use of the logo for the public key.

So we now have a local signing key generated on install to record those decisions of all user trust metrics. Perfect.
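The shape of that record can be sketched. Python's stdlib has no public-key signing, so an HMAC over a locally generated install-time secret stands in for the extension's private signature key here; the point is the binding being recorded, not the primitive:

```python
import hashlib
import hmac
import json
import secrets

# Generated once at install time; in the paper's design this is a private
# signature key, but an HMAC secret stands in (stdlib has no RSA signing).
local_key = secrets.token_bytes(32)

def sign_logo_binding(site_pubkey, logo_bytes):
    """Record the user's manual decision: this logo belongs to this key."""
    record = {
        "key_fp": hashlib.sha256(site_pubkey).hexdigest(),
        "logo_fp": hashlib.sha256(logo_bytes).hexdigest(),
    }
    msg = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(local_key, msg, hashlib.sha256).hexdigest()
    return record

def verify_logo_binding(record):
    body = {k: v for k, v in record.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(local_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

cert = sign_logo_binding(b"--site key--", b"--logo png bytes--")
print(verify_logo_binding(cert))   # True: the user's decision stands
cert["logo_fp"] = "0" * 64         # tampering: a swapped-in logo
print(verify_logo_binding(cert))   # False: binding no longer verifies
```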

(You may want to link this thread more clearly together.
Now back to the reading, skipping back again.)



To prevent these threats, whenever our browser extension detects that a web site is not SSL-protected, it displays a highly visible warning message in the trusted credentials area (see Figure 5 (a), (b)). We recommend that corporate and other serious web sites avoid this warning message, by protecting all of their web pages, and certainly all of their web forms, preferably presenting the corporate logo in the trusted credentials area. Having all web pages secure could cause a performance problem when security is using SSL or TLS; when this overhead is a problem, one could secure the web pages using the CLTLS protocol presented in the next section. By protecting all of their pages, such sites will make it quite likely that their users will quickly notice the warning message in the trusted browser area, when the user receives a spoofed version of a web page of such sites. Furthermore, this ensures that all the organization's web pages will present the logo and credentials of the site (and organization) in the trusted credentials area, using and re-enforcing the brand of the organization.

I would concur with the recommendation that all users
should be using SSL.  (But using different logic.)  The
important thing to realise is that the only way to
make this realistically happen - so that browsers can
start to protect based on certificate persistence - is
by using self-signed certs for the majority of sites,
and using CA-signed certs for the sites that benefit
from branding.

Once the TCA is in place, dealing with self-signed certs
is easy.  Until then, there is this lingering desire to
warn users that self-signed certs may not be quite right
for the user...  so we face a bit of a chicken-and-egg
problem there.

(Hence my hammering on the issue of MITM being not a
threat...)

Finally, popular browsers are pre-configured with a list of many certification authorities, and the liabilities of certificate authorities are not well defined;

Yup! Not only are all CAs different, about the only thing they concur on is that they all want to escape liability.

To validate the contents of the Trusted Credentials Area, we first use SSL/TLS to *ENSURE* that the web site has the private key corresponding to a given public key. The browser – or `our` secure browser extension – then uses the site’s public key to identify the logo. Notice this does not depend on the identity (domain name or URL) in the certificate; we use alternative mechanisms to link between the public key and the logos or credentials presented. One method is using a public key certificate linking between the public key of the site and the logo. We call this a logo certificate, and we use the term Logo Certification Authority (LCA) for an issuer that signs logo certificates.

(seems like a missing word above.)

This is where I'm getting confused.  The site has a
logo.  That much we are happy with as a consequence
of (other) branding.

Now, the logo can be signed.  Sure.  We could call
that a logo certificate.  Is that what is meant?

Then, the question is, who signs the logo and thus
creates the logo cert?

It would seem obvious at one level that the site cert
should do so.  That way they can do a dozen or more,
and have control over the site layout, etc.  Kind of
critical, really, for efficiency.

But, if a spoofer does acquire a cert validly signed
by a CA, then it can also create its own logo set,
just by copying the graphics.

Hence the notion of a Logo Certificate Authority,
I guess.  An LCA signs a logo and then acts as a
kind of extended-trademark-authority.  (E.g., it
could be Verisign signing the Coca-Cola logo, or it
could be the USPTO signing the Bob's bookstore logo.)

The question then is - which?  Is it one, or the other
or both?

I suspect both - neither will gain mass by itself.
Then the issue arises how to display a site-signed
logo alongside the "better" LCA-signed logo?

(Later I find there is a THIRD choice, the user
signature.  That's good, too... *that* presents
a much better starting point.)


Finally, for each trusted LCA, the browser extension also maintains its logo and name, to identify it to the user (as responsible for the matching between the site and the site’s logo).

Essential, yes.

We expect such predefined lists to contain a relatively small number of logo certifying authorities, known to many users and `almost universally trusted`, e.g. the USPTO. This is very different from the predefined (or, at least for Internet Explorer, remotely managed by the vendor) lists of certification authorities in browsers, which typically include over a hundred entities, as discussed in Section 2.1.

I don't see how that logic differs from the logic explaining the proliferation of CAs. At least with CAs there are informal country reasons which only exist because someone used a cert form with a country in it, but once you get into intellectual property concerns, there is no way of avoiding jurisdiction.

Every country has its USPTO.  In Europe, every
country has one, and they are only slowly working
to combine them in Brussels - search on CTM
(community trademark).

There are about 200 countries.... even most of the
Caribbean deals with trademarks now, and they (each)
require filing if you want protection.  So I'm
unclear how it is that LCAs will not be subject to
proliferation?

...One reason that the predefined lists of logo certifying authorities can be very short, is that we allow and expect sites to obtain multiple logo certificates, and logo certifying authorities could cross-certify each other,...

CAs don't cross-certify each other. Why should LCAs do so?

Does "obtain multiple logo certificates" refer to
a site going to each of the LCAs and getting a
cert for each of their logos?  What's the point of
that?  How are these logos then used?  What choice
does the site operator make in presentation?


Furthermore, the TCA-enabled browser should also display the logo of the LCA (or multiple logo certifying authorities).

Absolutely.  And for the CA...

For example, in Figure 6 (a), eforcity.com is certified by eBay, Square Trade and VISA, while in (b), Citibank™’s logo is certified by Verisign

Is it easy for the user to see which cert is being certified by which authority? Ah, yes, I see, there is a box there that encapsulates the authorities. Excellent!

To `bootstrap` the process of adoption of the Trusted Credentials Area (TCA) and of logo certificates, we propose that TCA-enabled browsers would also allow user-certified logos....

Yup. This is critical.

The following list of 4 points is interesting.
One thing - the whole environment of site access
is about users not being bothered by popup questions
and the browser making default security choices.

In this vein, I'd say that the 3rd choice is the
one to go with:  " Present the organization name and/or URL..."

Then, offer an easy button to do more, like adding
a menu option for "authenticate logo" on the right
click over graphics.  Also, say the certificate has
been seen 5 times, the browser could pop up and
suggest that a selection of logos be chosen from
amongst.

Unfortunately, many sites use multiple different public/private key pairs (and certificates), usually with no special reason, which increases this risk and inconveniences users; we became aware of this since the opportunistic identification pops up whenever encountering an unknown public key, and this often happens several times in some sites (e.g. Citibank), while not at all in others.

This is to be expected. Trying to think in terms of "one good cert" is a dead end. Corporates should be able to establish certs on the fly, and they should ideally be able to tie them back to a central signing CA internally, but that might be pushing things.

Either way, progress and web site production staff
should not hold back from deploying just because they
lack a cert.  Generate the thing, and lets get back
to work...

Web of trust...

Well, that's going further than I'd thought of, but yes, it's an obvious extension.

Notice, however, that the history is limited to access via this particular browser and computer.

Notice that this criticism applies to all of the ideas in this class - the browser has its private key and its petnames and signed logos. It doesn't necessarily share all these with every other browser, or even the ones that can be predicted. Now, the user knows this. If they see that in a different browser all the logos change, and the numbers change and so forth, they can quickly get the feeling that it is because they are accessing from a different place - and factor that uncertainty in.

4 Efficient Authentication of Response Credentials using CLTLS

I feel that this proposal sits oddly here. Not only are there the objections to the performance premise, as above, but few people will read both parts - it's beyond the paper's remit.

You may feel that the paper demands a "comprehensive"
answer to the performance issues - in that case that
would be it.  I think as a matter of practicality,
such a protocol might be better off in its own paper,
so the question arises as to whether to treat its
inclusion strategically or not.

( Also, note that there is an RFC or ID on a way to use
TLS in a lightweight hybrid connectionless protocol
that sits between UDP and TCP.  I think it was by Eric
Rescorla, but I can't recall for sure. )

5 Conclusions and Recommendations

I think the most important thing to come out of this is the placing of the user as the pre-eminent authenticator of local browser-cached info. Whether it be LCAs, logos, petnames, CAs, SSCs, or "My bank" icons, the user is the one that sets up the primary trust record.

Beyond that, LCAs, CAs and WoT can add very valuable
additional inputs to the trust decision.  But the placing
of the user first is what strikes me as the important
strategic goal.  It's what makes the rest of the stuff
work, and it gives an easy migration path in that direction.

Without it, for example, I just can't see how LCAs would,
institutionally speaking, succeed in rolling out into the
marketplace.  But I can see how they would roll out in
a differentiated or discriminated arrangement that went

  1. unauthenticated
  2. signed by user
  3. signed by site
  4. signed by LCA
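That four-step arrangement is an ordering, and a browser could key its TCA treatment directly off it. A sketch of how that might look (the tier names and display strings are my own invention, not the paper's):

```python
from enum import IntEnum

class Assurance(IntEnum):
    # Ordering from the list above: each tier subsumes the ones below it.
    UNAUTHENTICATED = 1
    USER_SIGNED = 2
    SITE_SIGNED = 3
    LCA_SIGNED = 4

def tca_treatment(level):
    """Illustrative TCA display policy for each assurance tier."""
    if level is Assurance.UNAUTHENTICATED:
        return "highly visible warning"
    if level is Assurance.USER_SIGNED:
        return "show the user's own petname/logo"
    if level is Assurance.SITE_SIGNED:
        return "show site logo, marked self-asserted"
    return "show site logo with the LCA's logo alongside"

for level in sorted(Assurance):
    print(f"{level.value}. {level.name:15s} -> {tca_treatment(level)}")
```

The user-signed tier sits below the CA-ish tiers in strength but above nothing-at-all, which is exactly the migration path: useful on day one, without waiting for any LCA to exist.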


5.3 - 4. Use cookies to personalize the main web page of each customer, e.g. include personal greeting by name and/or by a personalized mark/picture (e.g. see [PM04]). Also, warn users against using the page if the personal greeting is absent. This will foil many of the phishing attacks, which will be unable to present personalized pages.


Curious ... does that work?  Can a phisher get access
to a cookie?  I've never thought about it.

5.3 On the secure client requirement

Yes, this is inevitable. We've crossed that Rubicon in the last few months. But I don't think we should stop work on securing the browser because of it. It has to be treated as "not our problem", but an important limitation notwithstanding.


=============================================================
localhost$ openssl speed rsa
To get the most accurate results, try to run this program
when this computer is idle.
Doing 512 bit private rsa's for 10s: 2196 512 bit private RSA's in 9.63s
Doing 512 bit public rsa's for 10s: 24236 512 bit public RSA's in 9.64s
Doing 1024 bit private rsa's for 10s: 379 1024 bit private RSA's in 9.44s
Doing 1024 bit public rsa's for 10s: 7541 1024 bit public RSA's in 9.63s
Doing 2048 bit private rsa's for 10s: 63 2048 bit private RSA's in 9.72s
Doing 2048 bit public rsa's for 10s: 2163 2048 bit public RSA's in 9.65s
Doing 4096 bit private rsa's for 10s: 10 4096 bit private RSA's in 10.43s
Doing 4096 bit public rsa's for 10s: 559 4096 bit public RSA's in 9.13s
OpenSSL 0.9.7a Feb 19 2003
built on: Tue Sep 30 02:48:45 EDT 2003
options:bn(64,32) md2(int) rc4(idx,int) des(ptr,risc1,16,long) aes(partial) blowfish(idx)
compiler: cc
available timing options: USE_TOD HZ=128 [sysconf value]
timing function used: getrusage
                  sign    verify    sign/s verify/s
rsa  512 bits   0.0044s   0.0004s    228.0   2514.9
rsa 1024 bits   0.0249s   0.0013s     40.2    782.9
rsa 2048 bits   0.1543s   0.0045s      6.5    224.2
rsa 4096 bits   1.0431s   0.0163s      1.0     61.2
localhost$
=============================================================

_______________________________________________
Mozilla-security mailing list
[EMAIL PROTECTED]
http://mail.mozilla.org/listinfo/mozilla-security
