Re: Criteria for an antiphishing tool

2005-06-26 Thread Ian Grigg
Guys,

this will be my last post, for reasons that I hope are
clear.  If anyone wants to discuss phishing, let me
know.  I'm hopeful a specialist list for cross-fertilisation
of phishing efforts will pop up soon.



On Saturday 25 June 2005 23:07, Gervase Markham wrote:
 Ian Grigg wrote:
  On the notion of a common and consistent security
  UI policy - how is that any different from "follow the
  leader"?  It's synonymous as far as I can see.
 
 sigh
 
 The implication of the phrase follow the leader is that we are just 
 doing what others are doing simply because they are doing it.

The reality is, if Mozilla has decided on a common
and consistent security UI policy then that requires
MS to agree.  If they don't agree then you don't have
it;  if they do agree then you have it.  In short, whatever
they say is it.  That's just commercial reality.

 This is  
 clearly not the case - in partnership with the other browser vendors, we 
 are together working out the most appropriate UI and then all 
 implementing it.

This is news.  Are you intending to announce this,
or does it remain embargoed?  What is clear about it?
Who's in and who's out?

 If anything (given that I wrote the proposal) _we_ are  
 the leader.

Is it documented anywhere that this proposal has
been accepted?  By whom?  Who has put it down on
paper that they are accepting this proposal?  What
has staff said about it?

 Do you *oppose* a common and consistent security UI? If not, why am I 
 wasting my time typing this? I apologise for being short with you, but 
 this newsgroup has a great enough volume already without me having to 
 write things which are unnecessary.

You (Mozilla, you, everyone within) are not playing
fair.  There were a bunch of people trying to help.
Everything they've proposed has been knocked
back, ridiculed, or blocked.  Everything they've
asked to help with has been shunted to the left,
to the right, or wherever.

Now it transpires that a new policy is emerging,
one developed in a secret or private process to
which these people - regardless of their efforts,
their time, their applicability to the community,
or their credentials - were decidedly not invited.

Let's put this into the wider perspective of how
you're not playing fair, and that will answer the
question for everyone.

1.  This new policy - is it approved?  Recall how
Frank Hecker went to extreme lengths to create and
formulate a policy and debate it in the open with
(noisy) outsiders and insiders.  And then presented
it to staff for approval.  The word there was Leadership.

Has this been done with the policy for a common
and consistent security UI?  Are staff even aware
that Mozilla may be outsourcing their security UI to
Microsoft?

2.  This policy seems to have arisen alongside, or
from, a closed meeting a month or so ago.  Duane
(representing a CA of 2000 members) wasn't invited
to the closed meeting of CAs and browser
manufacturers.  No minutes, no agenda, no published
results.  There is only one word for that - compromised.

3. It turns out that something happened at that
meeting - a month ago? - and it might have
resulted in a new policy to do with security.  So
here we are, suggesting things about security that
happen to be antithetical to this new, secretly
evolving policy, and having to drag it out of you
before we can finally work out why everything tried
in the supposedly open forum is being rejected.  I'd
say the word here is woftam, thanks very much.

4. When I suggested there wasn't a security process,
you all rose up and said of course there is ... and
it's here, or there, or wherever.  But as soon as
we went there, it disappeared.  This is a 100%
screamingly important staff issue, and my impression
is that staff still doesn't even know it has an issue.
Which is an astounding statement to be able to make
in a society flooded with news on this very issue.

5. Tyler Close asked to join the security team and
got ignored.  That's the published procedure, and
after some hectoring, someone on this group said
that's what he should do - ask.  I chimed in and
presented some credentials for the people here,
because the team page specifically mentions
credentials, and that was ignored too (to put a
polite face on it).  You wanted coders, and the
code is there - in the plugins these guys knocked
up - but apparently still not good enough.

So it's a closed shop, right?  We don't want any
troublemakers in our security team, so we'll just
not help anyone join.  You're not even playing by
your own rules.  The word for that is bureaucracy.

6.  When Amir Herzberg briefly drops his normal
politeness and points out that the common and
consistent security UI clearly and blatantly
contradicts the Mozilla mission of preserving choice
and innovation, you manage to take umbrage at his
phrasing and thus ignore the central issue he was
raising.  That is called evasion, and it has its
place in politics, not security work.

7. There is no security process

Re: Strange mail received with Thunderbird

2005-06-25 Thread Ian Grigg
On Saturday 25 June 2005 12:16, Jeroen van Iddekinge wrote:
 Hi,
 
 I got the following mail in my Thunderbird (1.0, Linux) mailbox.
 What on earth is it?  It doesn't even have proper headers (no
 'Received' etc.).
 Is it a bug or a virus?

No - accidental usage, most likely.  Someone is
experimenting with a spam package, has mistyped a
command, and has accidentally mailed a script that
does some part of the job to all the recipients on
their spam list.

(That's a *guess*;  I've seen a number of cases where
people who don't know much buy cheap but simple
spam packages in order to do mass mailings, and the
results are ... chaotic.)

iang
-- 
Advances in Financial Cryptography, Issue 1:
   https://www.financialcryptography.com/mt/archives/000458.html
Daniel Nagy, On Secure Knowledge-Based Authentication
Adam Shostack, Avoiding Liability: An Alternative Route to More Secure Products
Ian Grigg, Pareto-Secure
___
Mozilla-security mailing list
Mozilla-security@mozilla.org
http://mail.mozilla.org/listinfo/mozilla-security


Re: Criteria for an antiphishing tool

2005-06-24 Thread Ian Grigg
 Amir Herzberg wrote:
  
  So, Mozilla plays `follow the leader`? Nice to know. Not exactly the 
  original goal of the project, was it?
 
 Up to this point, our discussions have been reasonably civil, but now 
 you are just throwing clearly ridiculous assertions around.
 
 Having a common and consistent security UI across browsers, no matter 
 who comes up with it, is not inconsistent with the goals of the project.
 Trying to provide the best user experience is not playing 'follow the 
 leader'.

On goals - I have never been able to identify
the goals of the project.  If you can point me
at them, I'd greatly appreciate it.

Trying to provide the best user experience
is not exactly a goal.  It's too imprecise, and
subject to rather different meanings for different
people (for some, it would mean turning on ActiveX
and delivering with Flash).


iang


Re: Criteria for an antiphishing tool

2005-06-24 Thread Ian Grigg
On Friday 24 June 2005 09:50, Gervase Markham wrote:
 Amir Herzberg wrote:
  
  So, Mozilla plays `follow the leader`? Nice to know. Not exactly the 
  original goal of the project, was it?
 
 Up to this point, our discussions have been reasonably civil, but now 
 you are just throwing clearly ridiculous assertions around.
 
 Having a common and consistent security UI across browsers, no matter 
 who comes up with it, is not inconsistent with the goals of the project.
 Trying to provide the best user experience is not playing 'follow the 
 leader'.

On the notion of a common and consistent security
UI policy - how is that any different from "follow the
leader"?  It's synonymous as far as I can see.

iang


Re: Criteria for an antiphishing tool

2005-06-22 Thread Ian Grigg
On Wednesday 22 June 2005 18:09, Gervase Markham wrote:
 Tyler Close wrote:
  A reasonable conclusion to draw from the MIT study is that if the user
  is not actively involved in the protection mechanism, he will ignore
  it. 
 
 How is that a reasonable conclusion from anything? A user isn't actively 
 involved in his car's airbag, but it still protects him in the event of 
 a crash.


This is the difference between 'safety' and 'security'.

In brief, a 'safety' good works statistically well and
generally does what it intends.  If it fails, it fails
in known, non-malignant ways, so little tricks like
making it bigger will help.

In contrast, a 'security' good has to face a malign
attacker who deliberately inserts the attack into the
gaps.  To puncture the analogy:  the airbag won't
protect when the driver has a spike for a nose...

For this reason we could consider that deploying
two agents working together works well:  a human
spots anomalies and can deal with suspicion, which
is good for spotting the between-the-cracks attacks,
whereas software is good at doing routine but boring
things like checking a cert.  Software just doesn't
ever know if it has been handed the wrong cert to
check;  for that it needs a suspicious person.  The
two strengths work well together and help to address
each other's weaknesses.

(But, to close the loop:  to spot those anomalies,
the user has to play their part.  Hence, making
them an active, albeit efficient, part of the
security model is necessary for high security in
the face of an active attack.)
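
To make the two-agent division of labour concrete, here is a minimal sketch of the software half - the routine, boring cert check - with any mismatch surfaced to the suspicious human rather than silently accepted.  This is an illustration only;  `check_cert` and the in-memory store are invented names, not any browser's actual code.

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_cert(store: dict, host: str, der_cert: bytes) -> str:
    """Routine check: compare this visit's cert against the one
    remembered for the host.  A MISMATCH is not an automatic
    block - it is the signal handed to the suspicious human."""
    fp = fingerprint(der_cert)
    known = store.get(host)
    if known is None:
        store[host] = fp   # first visit: remember the relationship
        return "first-visit"
    return "match" if known == fp else "MISMATCH"

store = {}
print(check_cert(store, "example.com", b"cert-bytes-1"))  # first-visit
print(check_cert(store, "example.com", b"cert-bytes-1"))  # match
print(check_cert(store, "example.com", b"cert-bytes-2"))  # MISMATCH
```

The point of the sketch is only that the software does the bookkeeping it is good at, while the judgement on a mismatch stays with the person.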

iang



Re: Calling for votes for and against

2005-04-27 Thread Ian Grigg

 Although Gerv's worked on me hard, it seems that the
 essence of this border crossing model idea has
 survived in this forum.

 Calling for votes for or against from all lurkers,
 which I'll take back to the relevant bug for
 consideration there.


What is the statement that people are voting for
or against?

iang


Re: Low security SSL sites

2005-04-25 Thread Ian Grigg

 Peter 128 128 128 128 128 128 128 128 128 128.

 [Snip]

Ignore the numbers, concentrate on the security.

iang 128 ^ 128 (my 128 is better than your 128)

 Actually you should have used 128+1, because real cryptographers' keys go to
 129.

LOL...  For those who do not understand the
reference, check out the cult classic film
_This Is Spinal Tap_.  Quite apt.

iang


Do Firefox browser bugs matter?

2005-04-25 Thread Ian Grigg
A much more reasonable article about the interplay
between Firefox and IE especially w.r.t. security.

http://news.bbc.co.uk/2/hi/technology/4472219.stm

Other than the obvious stuff about FOSS being good,
he suggests that the real impact is in forcing Microsoft
to address security.

This all makes sense;  the emphasis is not on being
secure - but on being better where it matters.  As
there is no such thing as absolute security, being
better and more secure than Microsoft is a useful
measure.

iang


Re: Problems with displaying Organisation field

2005-04-25 Thread Ian Grigg
 Ian Grigg wrote:
I am not suggesting that we make any assurances that the CA is not
making; I am suggesting we more clearly represent the CAs position in
the UI. As you know, CAs take different positions on this issue.

 Right.  So there needs to be an easy way to
 show the CA / position.

 Position, yes. CA, no. ;-)


Well, all I can suggest is that this is a hard
problem.  The only solution I know of is the
logos / branding / reputation approach, which
works for almost all retail markets.

Perhaps you could come up with some example
positions so we could play around with them?


[discussion on a CA that doesn't like his logo :]

 I explained why I didn't think putting logos on the chrome was a good
 idea, and he agreed absolutely.

OK!  Well, there's not much I can say to that,
other than:  put him in touch with me and I'll
discuss with him why he doesn't want to put
logos on the chrome of browsers.

(BTW, I wholly agree that users will face
some confusion.  For a while... and then it
will reach the point where they won't be
confused;  it will be as if it had always been
that way, and they'll be very upset if you dare
to take away the logos.  That's a necessary cost
of getting to the next level of security, IMHO.)

...
 Until she learns!  Nobody forces her to shop.  It's
 not our God given mission to make her buy those goods.

 I think a browser which said Hey, don't shop online until you've learnt
 the following 35 logos and assessed their trust levels by, I don't know,
 reading these Certificate Practice Statements wouldn't have much market
 share.

I agree.  That's not what is being suggested.

What is being suggested is that the browser
relate the statement this is X.com to the
person who made the statement, CA.com.  I
wouldn't suggest the browser say any more
than that, because it is not authoritative on
the question of whether a user should shop
online.

iang


Re: Problems with displaying Organisation field

2005-04-22 Thread Ian Grigg
 Ian G wrote:
 As a consumer you want someone else to promise you
 it's safe.  As a supplier, you would be utterly
 insane to do that without doing a lot of actuarial
 (insurance) calculations up front and taking twice
 the likely amount as a premium.

 I am not suggesting that we make any assurances that the CA is not
 making; I am suggesting we more clearly represent the CAs position in
 the UI. As you know, CAs take different positions on this issue.


Right.  So there needs to be an easy way to
show the CA / position.

 OK, but the chance goes down rapidly if it is a
 scam, and this applies both to Verisign's $1300
 platinums and to Dodgy Dan's $10 certs.  The only
 determining factor is that a scammer won't bother
 spending $1300;  but if you make that your measure,
 all historical evidence suggests that you are
 going to be shocked ...

 The way to reduce phishing is to increase the cost of setting up a
 phishing site, both in terms of cash and revealed info (pushing them
 onto SSL, forcing them to reveal info) and decrease the value of the
 site (OCSP). The closer you get the two average costs, the less phishing
 there will be, because it will make the bad guys less money, and they'll
 go back to shipping drugs instead.


Something like that.  The precise mix is open
to question;  what is clear is that we need to
force the phisher to use SSL, and we need to
show the user that the phisher has provided an
SSL cert with some weaknesses.

Both of these mean that providing transparent
SSL protection, as is currently done, is not
going to help.


 Actually, I think the CAs might have an answer to that question.

 They do.  Put their logo on the chrome and let
 them beat each other up in the marketplace on
 the question of brand versus quality.

 Actually, I explained my point about logo confusion to a representative
 of a big CA this week, and he agreed with me absolutely. But I agree
 there's a spectrum of opinion here.


Did he say that he didn't want his logo on the
chrome?

Of course he will agree about user confusion...
Where before there was nothing (including security
from phishing) and then there is something ...
well, it stands to reason that there will be some
confusion.

My point is that this confusion is exactly the same
sort of confusion that humanity has dealt with in
the past and overcome.  This is the sort of confusion
that users' brains are really wired for - recognising
images, and knowing when they don't recognise images.


 Try it!  You'll get a bunch of different opinions
 on what to do and never get anywhere, would be my
 suggestion.

 I will try it.

 So (just to be clear) a corollary of this position is that we should
 admit any root cert to the browser store without any sort of vetting
 or checking.

 Yes, technically that is a corollary!  I don't want to
 open old sores, but ..  Consider that the proposals
 and the way browsers work treat a dodgy cert or a
 bad CA or a low number of bits as all *worse* than
 unprotected HTTP (which is indeed much better for
 phishers);  then, actually, accepting any root cert
 without vetting would be an improvement in security
 terms over totally unprotected HTTP.

 I agree that the issue outlined in the first half of the sentence needs
 to be dealt with, but the second half is not a valid conclusion from that.


Well, let's agree that it's a perverse conclusion;
we really shouldn't need to make it at all.  What
needs to be recognised is that the way the browser
treats a large class of certificate-protected traffic
as worse than open HTTP can only produce perverse
results;  e.g., phishing.


 To which you say, if you don't know who GeoTrust is,
 then you shouldn't risk your credit card.

 So she won't be buying much, then!


Until she learns!  Nobody forces her to shop.  It's
not our God-given mission to make her buy those goods.

GeoTrust, on the other hand, is going to spend some
money advertising so that she does know who GeoTrust
is, and then she will shop.  Or not, in which case
GeoTrust goes out of business.

Not Mozilla's problem.  Not the user's problem.

 We're obviously going round in circles here. Time to stop, I think. I'm
 getting dizzy.


OK!

iang


Re: 2005 - The Year of the Snail

2004-12-09 Thread Ian Grigg
 Ian Grigg wrote:
 snip
 It's really easy to offer a solution:  download Firefox, and buy a Mac.
  But this is like asking a snail to become a hedgehog;  it is simply
 out of the budget of way too many users to rush out and buy a Mac.
 Those that can do so, do so!
 snip

 I'm probably going to regret replying, but here goes. :)  Why Mac?

I'll try and appeal to your regret ;)  The reason
I don't advise people to use Linux or *BSD is
that anyone who might benefit from those systems
already knows it, and doesn't need any advice.  They
already know enough to know that the core of the
problem is in the Microsoft OS, browser and related
apps.

But the vast majority of people out there are
faced with all sorts of conflicting opinions as to
what the problem is.  They chose Windows because it
was the simplest thing going.  They are not going to
install an open source solution even if you pay them
to;  they are the people who have real lives, real
jobs, and real families on which to spend their
short days.

(Personally, I use FreeBSD, and would use a Mac if
I could get a laptop with a Thinkpad keyboard.)

iang


Re: 2005 - The Year of the Snail

2004-12-09 Thread Ian Grigg
 I can
 see your point to some extent.  There's also the You'd be amazed what
 people will do to save a dollar factor, though, and for the vast majority
 of people that just browse and do email, a recent Linux distribution
 (with some MINOR tech support from son/daughter/friend/etc.) would let
 them keep browsing and emailing away and not know the difference.

Well, I think it is closer than it has been, but still
not ready.  Lindows is supposed to be that market.

 Personally I use Gentoo at home and would not recommend that for anyone
 straight off Windows.  I've fiddled with Fedora at work, and adding
 something like Synaptic makes managing software installs/upgrades a
 breeze.  But yeah... who am I kidding.  As much as I like and use Linux,
 I do realize it's still FAR (unfortunately) from invading the home,
 so Mac probably is the next best choice.  Of course none of this stops
 me from recommending Firefox/Mozilla to anyone I have any influence over.

Absolutely.  Download Firefox, Buy A Mac!

(You see how, with the people we are talking about,
if they do either of those things it helps them, and
if they do both, it doesn't matter that the Mac
browser is probably OK.)

Unfortunately until we crack phishing, it's not a
complete prescription.  Oh well, one thing at a
time.

iang


Re: 2004 - The Year of the Phish

2004-12-09 Thread Ian Grigg
Hi Nelson!

 1.  The reason there is a strong dominating player at
 the moment is because there is no way to compete.

 But the reason there's no way to compete is due to whose root certs are
 in the main browsers, not any other reason like branding or lack of it.

 What are you guys smoking?

 Stop saying Verisign has a monopoly unless you can show evidence of it.

By no measure has Verisign got a monopoly,
but it is common lingo to call the largest
player the monopoly.  It's incorrect, but
honestly, it's not worth correcting.

However, as for stats, those published here:

http://www.securityspace.com/s_survey/sdata/200411/certca.html

confirm that the market has resumed slow
growth, and continues to move slowly towards
a more regular free-market profile.  If trends
continue, Verisign will no longer be the largest
player within 6 months, at a guess.

 There is a large number (~100) of trusted root CA certs in mozilla.
 Some of the CAs there sell SSL server CA certs for WELL BELOW $100.
 Several of them give away email certs for free.
 mozilla has admitted more new CA certs to the trusted list in 2004 than
 in any year since the establishment of mozilla in 1998.
 The criteria for competing are pretty well established.
 Stop spreading FUD.

Ah, that's one sentence too far!

The problem with the CA certs market is that
it is artificially constrained by the browser's use
of the root list.  That makes for a barrier to
entry, which today is measured as the cost
of a WebTrust audit.  Hence, costly (anyone here
got a dollar figure on it?).

Now, if the root list were *not* the undisputed and
sole vector of trust, and we were also to employ
user-based techniques of trust - like Amir and
Ahmed's logo-signing ideas, or the other things
that have been discussed for putting the relationship
onto the chrome - we would *change* the market
for certs.

And thus change the criteria for competing.

We would actually open it up for more competition,
and also enlarge the market for more CAs to sell
more certs.  What's more, we'd also do something
about phishing by giving the user the tools needed
to protect themselves.  So we'd also be meeting
the security goals of Mozilla:  delivering a
product that helps the ordinary user fight their
threats - which, as far as Firefox is concerned,
means phishing, phishing, and also phishing.

Changing the market in this way has zero downside
that I can see, and lots and lots of upside.

All of that is the opposite of FUD - which happens
to be what the current system is based on:  Fear
of the MITM, Uncertainty in the notion that only
with a CA can you shop safely, and Doubt over
whether users will ever find anyone to take
responsibility for their losses over browsers
supposedly secured by CA-signed certs.

The main game right now is phishing.  What is
the plan to deal with phishing?  Anything else
is of secondary importance.

(Which is not to criticise the development crew,
as it is clearly a vexatious issue;  and Mozilla
does lead the way with its little domain sticker
down in the bottom right corner.)

iang


Re: 2004 - The Year of the Phish

2004-12-08 Thread Ian Grigg
 Ian Grigg wrote:
 1.  The reason there is a strong dominating player at
 the moment is because there is no way to compete.

 But the reason there's no way to compete is due to whose root certs are
 in the main browsers, not any other reason like branding or lack of it.

Yes, the reason there is no way to compete is that
the root certs are a fixed and hidden bucket of certs.
The solution is to unfix it and to unhide it.  One way
of unhiding it is to brand the CAs.  Another is to
petname the certs.  A third is to put usage counts
in, and a fourth is to treat self-signed certs as
simply certs with a less popular brand.  A fifth
is to have the user sign off on each cert and attach
logos to them.

All of these things are useful for competition, and
for that reason the CAs generally want them (because
only competition in a public space will let them grow
the market) ... they actually want to compete against
alternatives like self-signed certs, which are not
based on a fixed/hidden bucket of root certs.

But the real reason for them is security.  Currently,
the SSL browser security model is breached by
phishing because the cert is hidden from the user,
and not surfaced (via branding and other activities).
Competition and monopolies are side issues, really.

 3.  Incumbents don't currently do anything to justify
 their brands, basically because they don't have to.  If
 they had to, there would be a shakeup in the marketplace.
 Those that are small, lean and mean would be much better
 placed to deal with the new branding issues than those
 that are large, and encumbered with the baggage of past
 mistakes.

 But there can never be a proper market in certs, can there?

So, why should we keep the current un-proper one?

 If Amazon is
 secured by Verisign, I can say Well, I don't trust Verisign, but if I
 want to buy from Amazon, I don't have much choice, do I?

I'm not sure where to start!  You have plenty of choice.

If you don't trust Verisign, you examine the cert
that is presented and check that it is the one
Amazon uses.  Hopefully your browser will then allow
some relationship-building to assist in this.  Bear
in mind that you almost certainly won't be phished
the *first* time, only later, once you have a
relationship established (when it is a valuable
relationship...).

The reason you might think you have no choice is
that the browser doesn't give you any choice.
Hence phishing.  Solution:  have the browser give
you a choice!

(But you are right in one facet:  X.509 doesn't
support trust as we humans know it, which is with
multiple vectors.  The browser doesn't have to
follow that, though.)

 I don't see the problem.  The user sees that they
 are different sites.  She analyses the sites, and
 if they are bona fide, enters different pet names
 and carries on.  When she gets phished over to
 https://www.barclaysbank.com/ and finds that the
 site might not be right, she doesn't petname it.

 But surely the point about phishing is that https://www.barclaysbank.com
 looks totally genuine even if it's not.

Right.  So the user has to determine whether it is.

Verisign isn't going to help, because there is no
cert in use.  In fact, the only thing that will help
is that there is no cert in use;  but right now the
browser hides the presence or otherwise of the
cert, down in the lower-left corner where nobody
notices.

(Note that this prescription of fixes will also lead
to much more cert usage ... which will allow Verisign
to help, *iff* it can rely on its brand being obvious.)

 I assumed the point about petnames is that the user goes to their
 bank, but the petname doesn't appear, and they go Huh?. Was I wrong?

Yes, that is the first defence for the immediate
problem.  But what we have to do is migrate
all important sites across to SSL, and place
the browser in control of the relationship, so
that the information tied to Verisign, the cert,
and all the other personal information can be
integrated.

 to hide all the real information.  So the problem
 with the above is to stop assuming that they are
 valid, and to insist that the user authenticates
 them in some fashion or other.

 Using what evidence? Inspecting the website? The only sensible thing I
 can think of is comparing the URL with one printed in e.g. a magazine.

If you open a bank account online then, generally,
the bank takes some care to establish the
relationship.  Maybe it's the URL, maybe it's a
token sent in the mail or texted to your cell.
Personally, I'd rather see the fingerprint printed
on statements, but that might be a ways off in the
future.

If you are just shopping at Amazon, there would
be less of a burden, because it matters less;
but Amazon still has ways of reaching people,
and generally they have no difficulty finding
excuses to try.

Consumers and merchants do use a little bit of
common sense when establishing a relationship;
they just need to be given

Re: 2004 - The Year of the Phish

2004-12-03 Thread Ian Grigg
 Ian Grigg wrote:
 (Just briefly, the Certificate Authority needs to be shown.

 How exactly does this help the average user, who has no idea who
 Verisign are, and whether they should be trusted any more than
 VirtuaRoot (a name I just invented)?

Good question.  The answer:  branding.  VeriSign
and the other CAs would need to establish their
brands with the public.  Verisign would need to act
like Intel or Coke or Ford, and establish a brand
that speaks of trust.

The problem is foisted on us somewhat by the PKI
design.  At the moment, any cert signed by any CA
is assumed to be good by the software, but it's
pretty easy to see, and to show, that that is a
really bad assumption.  Now, if we are going to
have a PKI where a CA is expected to be trusted,
then that name must be known by whoever relies on
that trust (the user).

The alternative is that the CA never needs to stand
up to the trust that the user demands, and thus is
untrusted.  Which is the situation we have now:
CAs are essentially trusted in lip service only.
In reality, whether they are worthy of any trust is
a complete lottery, and neither should they bother
to earn that trust, because nobody knows who they
are anyway.  So they can't be punished if they do
the wrong thing.

 Further,
 the cert needs to be tracked by the browser, and a relationship built
 up.  I've suggested a usage count (100 times to this site, you must
 like it!).

 That's a reasonable idea - sort of like a history for certs. But still
 can't see how you can detect and warn the user of a problem. Do you pop
 up New secure site every time you visit a new SSL site?

No, this isn't an active popup programme, but a
passive display programme.

There needs to be an area on the chrome that shows
the credentials of the site.  The information
should be blatant and colourful - hence the ideas
about branding - so that the user can see that
there are problems in the *absence* of that
information.

It's a bit like if I were to sell you a can of
Coke that was coloured green.  I say it's Coke,
but you know something's wrong, because you've
always had familiar red cans.  That signal should
be sufficient to get the average user thinking a
bit more.

( Popups are not
going to help, we already know that, from the way
that users click through them without understanding
them.  What I call _click-thru-syndrome_ leads to
a fairly easy MITM, although I've only ever heard
of a phish doing this once (and it worked on me :-)
which makes sense, as it is much easier to just
ignore SSL altogether when phishing. )

(Note how these ideas are all designed to force
more websites to more blatantly show the use of
SSL!)

 Amir and Ahmad have suggested that the user sign off on
 the cert and even coded it up,

 Again, how on earth do you get the user to make a meaningful decision here?

Oh, this part is clear - it's based on the fact
that the user went to the site of their own volition
in order to open an account.  They typed in the URL,
hopefully from some safe place.  They have already
made a meaningful decision about their bank; all the
browser needs to do is relate that decision back to
the right site, time and time again.

The essence of phishing is to attack an already
existing relationship - your account with Citibank
for example.  It already exists, it's got money in
it, and the phisher wants it.

The essence of the defence is to surface the
existing relationship, preferably right back to
the start where it is of no value, so that going
forward as you build up your account into something
worth money, the browser shows you each time that
you are with the same account (by using the certs
to enable the coke can factor).
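The tracking idea above - remember the cert, count the visits, flag a change - needs very little machinery. A minimal sketch in Python; the class and method names are illustrative, not any real browser API:

```python
# Minimal sketch (assumed names, not a browser API) of the idea above:
# remember which cert each site presented, count visits, flag a change.
import hashlib
import time

class CertHistory:
    def __init__(self):
        # hostname -> {"fingerprint", "visits", "first_seen"}
        self.sites = {}

    def record_visit(self, hostname, der_cert):
        fp = hashlib.sha1(der_cert).hexdigest()
        entry = self.sites.get(hostname)
        if entry is None:
            # Only the browser can know a perfect copy was never seen before.
            self.sites[hostname] = {"fingerprint": fp, "visits": 1,
                                    "first_seen": time.time()}
            return "new"
        if entry["fingerprint"] != fp:
            return "changed"  # relationship broken: a different cert appeared
        entry["visits"] += 1
        return entry["visits"]  # "100 times to this site, you must like it!"
```

A real implementation would key on more than the hostname, but the coke-can effect is the same: a "new" or "changed" signal on a site the user thinks of as an old friend.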

 while Tyler has suggested the use of
 petnames for the user's idea of what each site is.

 We have that - it's called bookmark keywords.

Ah.  That's a very good point.  It's half way there!

Bookmarks take a user to her site.  Once there,
they disappear in relevance.  The petnames suggestion
is that the name that the user labelled their bookmark
would be displayed on the chrome, quite prominently.
Right now, the only user cue is the favicon, and that
perversely can be forged however you want (see my
silly forged padlock on http://iang.org/ssl/ for an
example).

The essence is to provide a *lot* of prominent info so
that the user's brain is tweaked when she is on a site
without the display.  Hence the idea that Verisign's
logo should be on the chrome, as well as Citibank's.
Also the petname, the count, and whatever else we can
think of.

Getting back to the bookmarks, if the keyword were
to appear on the chrome, that would be it!
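To show how little machinery the petname/keyword display needs, here is a sketch with hypothetical names, assuming the browser can hand over the site's DER cert:

```python
# Sketch of a petname table: a private, user-chosen label keyed by the
# site's certificate, displayed prominently in the chrome on each visit.
import hashlib

petnames = {}  # cert fingerprint -> the user's private label

def assign_petname(der_cert, label):
    petnames[hashlib.sha1(der_cert).hexdigest()] = label

def chrome_label(der_cert):
    # The absence of a label is itself the cue: a familiar site that
    # suddenly shows "(no petname)" should tweak the user's brain.
    return petnames.get(hashlib.sha1(der_cert).hexdigest(), "(no petname)")
```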

Yes, it would be a lot of extra stuff;  but given the
SSL signal - this site is important - and the amount
of money being lost to phishing, then a fairly big
change to the way browsers think about user interfaces
is indicated.

Luckily, for all its flaws, the certificates in
the browser make a perfect base for tracking site
relationships.  Without that, it would

2004 - The Year of the Phish

2004-12-02 Thread Ian Grigg
FTR (1)!  iang




(( Financial Cryptography Update: 2004 - The Year of the Phish ))

   December 01, 2004




http://www.financialcryptography.com/mt/archives/000262.html





Last year, 2003, was a depressing year.  We watched the phishing thing loom and 
rise, and for the most
part, security experts fudged, denied, shuffled and ignored while the phish was 
reeled in.  Now, 2004
can truly be said to be the Year of the Phish.

There is progress.  Firefox have made two small but nice additions to their
browser to address
phishing.  If you download Firefox (and if you haven't yet, you are now 
classified as too insecure to
be permitted to browse) you can see these when you go to your banking site.  On 
the bottom right, there
is a little box containing the domain that is seen by the browser.  Also, 
notice how the URL bar
changes colour.

Get used to these things, as they are about the only things protecting you from 
phishing.

More is needed, however, much much more.  Whilst I am somewhat ecstatic that 
Mozilla programmers have
started on this journey, the amount done so far is dwarfed by what would be 
required to fully address
phishing in the browser, and no other manufacturer of browsers seems to have 
even woken up yet.

(Just briefly, the Certificate Authority needs to be shown.  Further, the cert 
needs to be tracked by
the browser, and a relationship built up.  I've suggested a usage count (100 
times to this site, you
must like it!).  Amir and Ahmad have suggested that the user sign off on the 
cert and even coded it up,
while Tyler has suggested the use of
petnames for the user's idea of what each site is.  They all have their 
purposes and benefits, and a
solution that used all of these and more would be very powerful against 
phishing.  Oh, and all this
needs to be in the face and not discreetly hidden down in some forgotten
corner.)

Most of this was known in 2003, by one means or another.  But even though we 
have now to all intents
and purposes had a full year of
devastating losses due to phishing (more money lost than was ever spent on SSL 
certs) we still can't
say with any degree of confidence that people understand that the browser is 
being attacked and the
browser is where the defences should be placed.

-- 
Powered by Movable Type
Version 2.64
http://www.movabletype.org/




___
Mozilla-security mailing list
[EMAIL PROTECTED]
http://mail.mozilla.org/listinfo/mozilla-security


2005 - The Year of the Snail

2004-12-02 Thread Ian Grigg
FTR (2)!   iang


(( Financial Cryptography Update: 2005 - The Year of the Snail ))

   December 01, 2004




http://www.financialcryptography.com/mt/archives/000263.html





So if 2004 depressingly swims past us as the year of the Phish, what
then will 2005 bring?

Worse, much worse.  The issue is this: during the last 12 months, the
Internet security landscape changed dramatically.  A number of known,
theoretical threats surfaced, became real, and became
institutionalised.  Here's a quick summary:

1.  Viruses started to do more than just replicate and destroy:  they
started to steal.  The first viruses that scanned for valuable
information surfaced, and the first that installed keyloggers that
targeted specific websites and banking passwords.  Just this week, the
first attack on the root list of SSL browsers was being tracked by
security firms.

2.  Money started to be made in serious amounts in phishing.  This then
fed into other areas, as phishers *invested* their ill gotten gains,
which led to the next development:

3.  Phishers started to use other techniques to gather their victims:
viruses were used to harvest nodes for spam that were used to launch
phishing attacks.  Integration across all the potential threats was now
a reality.

4.  DDOS, which seemed to seriously take off in 2002, became a serious
*extortion* threat to larger companies in 2004.  Companies that had
something to lose, lost.

5.  In 2004, it now became clear that we were no longer dealing with a
bunch of isolated hackers who were doing the crack as much to impress
each other as to exercise their own skills.  There is now a market
phase for every conceivable tool out there, and mere hackers do not
purchase the factors of their production.

6.  Malware, spyware, and any other sort of ware turned up as infesting
average PCs with Windows at numbers quoted as 30 per machine.  And this
was just the mild and benign stuff that reported your every browse for
marketing purposes.

7.  Microsoft were shown to be powerless to stem the tide.  Their SP2
mid-life update caused as many problems as it might have solved.  No
progress was discernible overall, and 2004 might be marked as the year
when even the bubble-headed IT media started questioning the emperor's
nakedness.

How can I summarise the summary in one pithy aphorism?  For most
intents and purposes, the Internet was secure for Windows users until
about 2004.  From 2005 onwards, the Internet is not secure for Windows
users.  Are you depressed yet?

2005 will be the Year of the Snail.  Your machine will move slowly and
slipperily to a fate that you can't avoid.  The security of the Windows
system on which the vast majority of the net depends for its leaf nodes
will repeat the imagery of a snail's house.  Ever toiling, slithering
slowly across the garden with an immense burden on its back, and ever
fearful of an approaching predator.  The snail is quick to retreat into
its house, but all to no avail, as that crunching sound announces that
your machine just got turned into more phish compost.

I had hoped - foolish, I know - that Firefox and the like would have at
least addressed the phishing threat by now.  But now we are fighting a
two-front war:  phishing attacks the browser's security model and UI,
while all the rest attacks the Windows platform.

It's really easy to offer a solution:  download Firefox, and buy a Mac.
 But this is like asking a snail to become a hedgehog;  it is simply
out of the budget of way too many users to rush out and buy a Mac.
Those that can do so, do so!

Those that cannot, prepare for the Year of the Snail.  And check in
with us in a year's time to see how the two-front war is going.  The
good news is that statistically, a few snails always survive to
populate the garden for the next year.  The bad news is that it will
decidedly take more than a year for your house to evolve away from the
sound of the crunch.



Re: SHA1 within a firebird extension

2004-10-06 Thread Ian Grigg
Nelson Bolyard wrote:
I suspect there's been a misunderstanding here.  I took Ian's "One
supposes" remark as an unfinished sentence, and so did not attempt to interpret it.
I was thinking out aloud, and expecting to get
shot down in flames.  You were right to ignore
it :)
Jean-Marc seems to have interpreted it to mean that Ian was suggesting that
NSS will take a fingerprint value found in nsIX509Cert as a correct
fingerprint (hash) whether or not it is that.
That's actually what I was thinking, that the
fingerprint was in there, and just being
extracted...  but then I realised that this
was silly.
But IINM, the values returned through nsIX509Cert are computed by NSS from
the actual DER cert itself.  nsIX509Cert depends on NSS, not the other way
around.  IMO, NSS is right to trust its own computations as correct.
Yes, that makes sense.
I'm not sure what to make of the word authoritative.  Anyone can compute
a SHA1 hash of anything.
Right, as long as it is computable, that would
be the preferred way.  Which is what I assumed
SSLBar to do.
Where one gets into the issue of authoritative
would be if one were simply given the SHA1 hash
pre-computed.  In this case, SSLBar is given the
SHA1 hash pre-computed, and is thus asserting
that it has been told this is the hash, and it
has decided to accept the nsIX509Cert/NSS calculation
as authoritative.
From a security pov, this is less satisfactory
as to know for sure one would now need to audit
an extra module, and keep auditing it.
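For comparison, computing the fingerprint from the DER cert itself - the preferred, audit-free path - is tiny. A sketch:

```python
# Compute the SHA1 fingerprint directly from the DER-encoded cert,
# rather than accepting a pre-computed value from another module.
import hashlib

def sha1_fingerprint(der_cert):
    digest = hashlib.sha1(der_cert).hexdigest().upper()
    # Conventional colon-separated display form.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```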
(Although in the context of SSLBar, this is nit-
picking, I think.  The main thing about SSLBar
is that it demonstrates the concept.  How it
does it is less relevant than the experiment.)
iang


why do corporations require more than one cert?

2004-08-20 Thread Ian Grigg
Amir,
picking up a debate earlier this month around the forthcoming
paper on spoofing:
http://www.cs.biu.ac.il/~herzbea//Papers/ecommerce/spoofing.htm
Amir Herzberg wrote:
 I still don't see why the same corporation needs multiple SSL
 certificates. Why??
 ...
On this one point - I'm unsure whether you are asking from
a practical pov or an ideal pov.
Practically speaking, certs are required for each different
domain name.  There is supposed to be some wildcarding
in place such that *.mydomain.com is covered, but it doesn't
work so well, apparently.  This may be historical, it may be
that browsers now all handle the wildcarding, but maybe not,
as a) I've heard complaints about it not working well enough,
and b) I've seen sites that still duplicate certs for each
different subdomain (my pair web mail has a cert per host
for web1.pair.com, etc...).
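The wildcard rule in question is roughly: *.mydomain.com covers one extra label on the left and nothing deeper. A simplified sketch (the real matching rules have more corner cases):

```python
# Simplified sketch of certificate wildcard matching: "*.pair.com"
# matches "web1.pair.com" but not "a.b.pair.com" or "pair.com".
def wildcard_matches(pattern, hostname):
    pattern, hostname = pattern.lower(), hostname.lower()
    if not pattern.startswith("*."):
        return pattern == hostname
    suffix = pattern[1:]  # ".pair.com"
    # Must end with the suffix, with exactly one label in front of it.
    return hostname.endswith(suffix) and "." not in hostname[:-len(suffix)]
```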
In practical terms, any large corporation uses domains as labels
that are cheap and makes them into brands.  So, for example, I
have idea_one.org and biz_two.com.  Now, both of these require
certs.  They are both my corp, but I have to get a cert for
each.  Even if they are the same site, but with a different
domain name (to re-brand the same thing) I still need more
than one cert.
Likewise, even within a single-named secure operation, there
are always duplicates of a site.  For example, there are older
copies, there are test rigs, development rigs, and failovers.
In the banks that I know, the common thing is for them to
have 4 running systems for everything.  Each is a different
role:  production, test, development and something else I
forget.
Now, each requires web access and a domain.  It would be
plausible to use the same cert in each, but that means they
also have to be secured the same way.  Much better to create
a separate cert for each one, and that way if the test site
gets compromised, it matters not.  This frees the test system
to run on its own security regime, and get on with the serious
business of testing.
In structural terms, I think one reason why corporations require
many certs is that it was set up that way by the originators of
the x.509-based PKI software.  If you read through for example
Lynn Wheeler's historical comments, you will see repeated
references to "viable revenue model", which amounts to "we
scotched that because we couldn't sell it".  Recall back in
the mid 90s, there was huge hope put on the SSL server for
Netscape as it had finally found a product to sell.
So, I think it's fair to say that the original architects
deliberately or subconsciously encouraged a situation where
bigger companies could be sold more certs.  For money of
course.  The fact that this had little to do with security
or with the needs of the corporation was lost at the time.
This is almost certainly the reason why Netscape didn't
set out selling software for each corporation to run its
own CA internally - that would have reduced the pretended
market size for certs to something unviable, on paper.
Theoretically, just to repeat my earlier fundamental point,
I don't see the point in having a corporation bound to one
cert.  They are not bound to one car, one building, one set
of letterheads, or one secretary .. what's the deal with
one cert?
The only thing that a corporation is bound to singly that
I can think of is the rules listed by the incorporations
act and the tax people.  There should be one corporate seal,
for example.  The consequence of this is that, certainly
in the English common-law world, the corporate seal is unused,
because it is too hard to find the one.  Likewise, anything
that there is only one of is generally bypassed in real business.
iang


Re: more comments on the protecting naive browsers paper - petnames

2004-08-03 Thread Ian Grigg
Amir Herzberg wrote:
http://www.cs.biu.ac.il/~herzbea//Papers/ecommerce/spoofing.htm

Right, that idea.  A couple of things - it's called a petname
which has a defined meaning, you can probably google for the
defining paper.  It is a name that is explicitly not shared
with the rest of the world, so it is distinct by definition
with the nickname, which is shared.
I didn't find the definition and didn't quite understand the distinction 
you made.

A petname is a private name that never leaves the
local domain.  I.e., the browser in this case.
In contrast, a nickname is shared.  So, for example
amazon.com is a nickname for IP# 207.171.163.90
because it is shared.  But if a petname were used,
I couldn't tell you that my petname for that IP#
was "amazing books".
Here's some URLs.  I'm not sure what the primary
one out of these are:
http://zooko.com/distnames.html
http://www.erights.org/elib/capability/pnml.html
iang


Re: Can FavIcon Favor the Conmen?

2004-07-18 Thread Ian Grigg
Amir Herzberg wrote:
Ian Grigg wrote:

http://www.financialcryptography.com/mt/archives/000179.html

Yes, the FavIcon can become a real favorite with conmen and phishers... 
But I think the real use would not be to present SSL icon where it is 
not really used; as I found, many `serious` web sites such as Yahoo!, 
Chase, Microsoft's Passport, Ebay,... (see fig 5 of 
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm) already 
ask for passwords in a non-SSL-protected page.
...
The solution: allow a FavIcon only if it is properly approved by the 
user or someone trusted by the user (a peer, a-la-PGP, or a trustworthy 
Logo Certifying Authority). I.e., the FavIcon should be a part of the 
Trusted Logo and Credentials Area (see paper for details). While I must 
admit we didn't do this yet in our prototype, adding this functionality 
should not be too difficult (and we'll probably do it soon).
I think the real emphasis of the favicon attack is just
that it highlights how weak the padlock has become as
a security issue.  I don't see spoofers or phishers
adopting it in any seriousness, because they quite happily
ignore the padlock in most cases anyway - as do their
victims.
So as a point of clarification - I don't think there is
much point in Mozilla or anyone putting any effort into
protecting the favicon.  But there is a lot of point in
re-thinking the entire browser security display.
(As per your paper, as per the numerous discussions.)
iang
PS: BTW, FTR, it seems that IE is not vulnerable to this,
as an IE user has to add the site as a favourite.  Oddly,
this matches more or less what you are proposing in the
paper!  I don't see any evidence that Microsoft were
thinking that at the time, but presenting that line of
thinking in your paper may bear thinking about?


Financial Cryptography Update: New Attack on Secure Browsing

2004-07-15 Thread Ian Grigg
( Financial Cryptography Update: New Attack on Secure Browsing )
 July 15, 2004

http://www.financialcryptography.com/mt/archives/000179.html


Congratulations go to PGP Inc - who was it, guys, don't be shy this
time? - for discovering a new way to futz with secure browsing.
Click on http://www.pgp.com/ and you will see an SSL-protected page
with that cute little padlock next to the domain name.  And they managed
that over HTTP, as well!  (This may not be seen in IE version 5 which
doesn't load the padlock unless you add it to favourites, or some
such.)
Whoops!  That padlock is in the wrong place, but who's going to notice?
 It looks pretty bona fide to me, and you know, for half the browsers I
use, I often can't find the darn thing anyway.  This is so good, I just
had to add one to my SSL page (http://iang.org/ssl/ ).  I feel so much
safer now, and it's cheaper than the ones that those snake oil vendors
sell :-)
What does this mean?  It's a bit of a laugh, is all, maybe.  But it
could fool some users, and as Mozilla Foundation recently stated, the
goal is to protect those that don't know how to protect themselves.  Us
techies may laugh, but we'll be laughing on the other side when some
phisher tricks users with the little favicon.
It all puts more pressure on the oh-so-long overdue project to bring
the secure back into secure browsing.  Microsoft have befuddled the
already next-to-invisible security model even further with their
favicon invention, and getting it back under control should really be a
priority.
Putting the CA logo on the chrome now seems inspired - clearly the
padlock is useless.  See countless rants [1] listing the 4 steps needed
and also a new draft paper from Amir Herzberg and Ahmad Gbara [2]
exploring the use of logos on the chrome.
[1] SSL considered harmful
http://iang.org/ssl/
[2]  Protecting (even) Naïve Web Users,
or: Preventing Spoofing and Establishing Credentials of Web Sites
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm


Re: Protecting (even) Naïve Web Users from Spoofing and Phishing

2004-07-13 Thread Ian Grigg
Amir Herzberg wrote:
We have created a Mozilla extension that creates a secure, Trusted Logo 
and Credentials Area, which displays logos and other credentials of the 
site. We believe this helps protect web users, even naive users, against 
spoofing and phishing attacks. We are still playing with the code but 
hope to begin providing it to others soon; in the meanwhile, if you are 
interested, we'll love to hear your comments. The proposal is described at:

Protecting (even) Naïve Web Users, or: Preventing Spoofing and 
Establishing Credentials of Web Sites, by Amir Herzberg and Ahmad Gbara
PDF at http://eprint.iacr.org/2004/155/
HTML via http://AmirHerzberg.com, or directly from 
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm

It is the case that Mozilla's policy is to deliver
a browser that protects the default user, one who
does not know how to further secure themselves.
(ref:  Frank's CA policy discussions.)
Unfortunately, Mozilla's browsers, in common with
other browsers, as currently delivered, do not
protect the user against the biggest real threat
out there to their browsing:  phishing.
Protecting against phishing and 2 other MITM attacks
that secure browsing falls prey to is fairly simple:
1.  the browser caches and counts visits to each cert
protected site.  This is important because only the
browser can know that a perfect copy has never been
seen before.
2.  the browser displays the CA logo (from distro) on
the chrome.  See screen shots in the above paper.
See my earlier mail on why this is needed, and all
of the paper above.
3.  the browser displays self signed certs in the chrome
exactly as per 2.  No popup warnings!  This is critical
to ease servers over to using SSL-by-default.  Only when
large amounts of web traffic are protected by crypto
will it become routine to deal with cert fraud.
4.  for the same reasons, web servers should by default
install and operate with SSCs.
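Steps 1-3 boil down to one display decision, which can be sketched as follows (illustrative names; nothing here is Mozilla code):

```python
# Sketch of the chrome display decision from steps 1-3: self-signed certs
# get a plain label instead of a popup warning, and the visit count
# carries the history that exposes a never-seen-before copy of a site.
def chrome_display(ca_name, self_signed, visit_count):
    brand = "Self-Signed Cert" if self_signed else ca_name
    if visit_count == 0:
        return brand + " | NEW SITE: never seen before"
    return brand + " | visits: " + str(visit_count)
```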
Amir and Ahmad have coded up a version of the browser
logo display in Mozilla, building on earlier work by
Ye and Smith.  They actually go further than I have
and propose site logos, and haven't coded up the cert
counting AFAIK (step 1).
Mozilla Foundation are in the unusual position
(along with Konqueror I suppose) of not having
necessarily to deal with the liability of the
phishing epidemic, but that still doesn't obviate
the need to protect ordinary, default users from
ordinary, easy phishing attacks.
iang


Making VeriSign like CocaCola - How CA Branding works against Phishing, substitute CA attack, etc etc

2004-07-12 Thread Ian Grigg
[Guys, I've added the mozilla-security group to this thread.
We are discussing this proposal:
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm ]
Amir Herzberg wrote:
Ian, I mostly agree; in particular, I agree that the fact that 
(all/most/many/some...) browsers will display the CA logo together with 
the site's logo or name, will be an incentive to CAs (or others) to 
certify logos, i.e. become LCAs.
OK.  I have no objection to (for example) the TCA/branding
box also displaying the logo of the site.  I suppose what
one would propose is that a client certificate package
could include the cert, and logos according to some agreed
formula.
(See screen shots of Mozilla in above URL.)
Then, the browser could examine the cert as presented,
and request the standard logos from standard places (and
examine the sigs on those).
So it might be a thing of defining the potential package
of initial cert signing as including some logo formats.
(length and width in pixels for example).
Now, one thing that should be done from a marketing pov
is to clearly show that an LCA (logo cert authority) has
control over the nature of the package delivered, and
could happily sell a range of packages for different
prices [1].  This allows the LCA to differentiate.  A
bit like airlines, selling the same seat for 3 times as
much because it is at the front of the plane.
(I've written elsewhere on why the market for CA-signed
certs is as dead as a dodo [2].  While last month's
figures show some overall growth in the market, Verisign's
grip continues to slip due to lack of branding and
commoditisation [3].  I wonder when they'll get it?)
 But notice this by itself does not make
an incentive to actually confirm the logos... the incentive to this 
should be the usual incentive of review and certification bodies, i.e., 
that if their certification is given without proper validation, it will 
become meaningless (and not accepted into the TCA of all/most/many/... 
browsers).
I'm only going to address the CA's logos here:  I
think these will be easily confirmable.  Here is
what I propose and why I believe there is no problem
whatsoever in confirming the CA logo set:
Each CA's root is expanded to include the logo set.
E.g., the default root list of Mozilla includes for
each CA a root cert, a small logo and a big logo [4].
The root cert and the logos are cryptographically
tied together for convenience of checking although
this is not a big deal [5].
Then, the package for each CA is published on the
Mozilla web pages.  That is, we can go to a site
page such as mozilla.org/root_certs/CACert/ and
on that page will be presented the logos in a
standard fashion.  There should be a mailing list
to which new proposed packages are submitted, and
I'd even go so far as to send out the logos as
attachments (!) so that no subscriber has any
excuse.
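One cheap way to do the cryptographic tying mentioned above is a manifest of hashes over the package parts, so the browser (or any subscriber watching the list) can check that the logos belong with the root. A sketch, with a hypothetical package layout:

```python
# Sketch: tie a CA's logos to its root cert with a manifest of hashes.
# The package layout (root cert plus two logos) is hypothetical.
import hashlib

def _h(data):
    return hashlib.sha256(data).hexdigest()

def package_manifest(root_cert, small_logo, big_logo):
    return {"root": _h(root_cert),
            "small_logo": _h(small_logo),
            "big_logo": _h(big_logo)}

def verify_package(manifest, root_cert, small_logo, big_logo):
    # Any substituted logo (a VeliSlime lookalike, say) breaks the match.
    return manifest == package_manifest(root_cert, small_logo, big_logo)
```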
What does all this mean?  Within the community of
100 or so CAs that are honoured in Mozilla's default
root list, it would be relatively easy for them all
to watch the new proposed root packages and spot
any problems.  E.g., if some Ugandan CA started
proposing VeliSlime logos and certs, that just
happened to bear an uncanny resemblance to VeriSign,
then the latter CA would simply raise the red flag,
and all would stop.  MF would convene a dispute
resolution process, await the results and then kick
out the VeliSlime pretender [6].
When the default root list goes out, the browser
includes on disk the set of CA certs *and* the CA
logos.  The browser displays the CA logos on the
branding box on the chrome, and does all the other
good stuff that browsers already do (don't forget
to count the cert visits and present the number to
the user as well).

What does this get the user?  Hey presto, MITM
attacks that now easily breach secure browsing are
addressed [7].  The space of MITM attacks can be
divided four ways, thusly:
1.  If the scammer gets a real VeriSign cert, then
VeriSign now has an incentive to monitor the usage
of the cert.  That is, if the scammer succeeds in
ripping off a bunch of book-buying mamas, then VeriSign
is on the hook for BOTH legs of the transaction (so to
speak).  So VeriSign polices its space *within* its set
of customers.
2.  If the scammer gets an Anazom cert from the
ChineseMinistryOfIntellectualProperty, a known and
accepted CA, and then deploys that in an attack,
the user should notice the change in brand of CA.  Of
course, Amazon, having purchased the premium Verisign
package, will have been displaying the VeriSign logos
in that branding box for years.
The users can see that change.  They can see that
change like they see a change in adverts on the TV,
because it is branding and it is designed to give
the user a sense of comfort when there, a sense of
worry when not [8].
So, the users now police the space *between* the CAs.
3.  If the scammer uses a self-signed cert (SSC),
then the branding box shows Self-Signed Cert in
very boring, non-logo grey.  The user can notice
this as