Re: Another protection layer for the current trust model

2010-03-04 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

it has exposed an unrelenting amount of accusation without
evidence.  Show us a single falsified certificate.  Anything less is
unworthy of this forum.


There has been a large amount of that, but not necessarily exclusively.

There is one fact in what has been reported that I think merits 
examination: the report that Google's automated site inspection tools 
show that CNNIC is involved, as an entity, in the distribution of 
software that deliberately installs itself on users' computers without 
their consent, using a security vulnerability.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Another protection layer for the current trust model

2010-02-22 Thread Eddy Nigg

Hi Kurt,



I think it's more subtle than that. Some of the problems, in brief:

1) Mozilla/Firefox either trusts a CA 100% or not at all.


Correct.



3) It's very difficult even for technical users to find out who 
exactly signed a certificate. For example a certificate is signed by 
"valicert", who is that? (Tumbleweed bought Valicert and then Axway 
bought Tumbleweed, who the heck is Axway and what exactly do they 
do?). Or a certificate is signed by beTrust, who is that? (which 
joined up with Baltimore cybertrust to form Cybertrust, and in turn 
Verizon purchased the whole thing.).


Correct observation.

4) CAs are generally not restricted in whom they can issue certs to, 
i.e. governmental CAs (Turkey, Holland, Denmark, etc.) are not 
restricted to issuing certs within .tr, .nl, .dk for example (there 
are good arguments for and against this, but I think it should at 
least be discussed, and I'd love to see a bit more user control over 
this).


We've discussed this previously here and it's a much-wanted feature. 
Unfortunately, NSS doesn't support it at the moment.
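
To make the wished-for feature concrete: a per-CA restriction could amount to a simple suffix check mapping each root to the namespaces it may issue for. The sketch below is hypothetical (the CA names and suffix table are illustrative, and this is not how NSS works; X.509 itself defines a name constraints extension in this spirit):

```python
# Hypothetical per-CA name constraints: map a root CA to the DNS
# suffixes it is allowed to issue certificates for. Illustrative
# names only, not real trust-list entries.
CA_CONSTRAINTS = {
    "Example Gov CA NL": [".nl"],
    "Example Gov CA DK": [".dk"],
}

def issuance_allowed(ca_name: str, hostname: str) -> bool:
    """Return True if ca_name may vouch for hostname.

    A CA with no recorded constraints is treated as unconstrained,
    which matches today's all-or-nothing behaviour.
    """
    suffixes = CA_CONSTRAINTS.get(ca_name)
    if suffixes is None:
        return True  # unconstrained, as all roots are today
    return any(hostname.endswith(suffix) for suffix in suffixes)
```

With such a table, "Example Gov CA NL" would be accepted for a .nl host but rejected for a .com host, while any root not listed would behave exactly as it does now.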


5) There is no way for an end user to really verify the CPS/CS stuff, 
most CAs seem to publish them online, quite a few are out of date by 
several years


That shouldn't happen. If you know of such cases where the CA's policy 
is no longer adequate, please let us know.


6) There appears to be no re-evaluation for CA's that are bought out 
or merge with other CAs


That's also correct. However, Kathleen very recently started tracking 
audit reports.


7) There are several suspicious and questionable looking CA's 
in Mozilla/Firefox, e.g.: Internet Publishing Services from Spain, 
they have 7 certificates, what possible need is there for 7 certificates?


They are on the way out - as per request of ipsCA.

8) The CA approval protocol appears to be largely fail open, they 
submit paperwork showing they comply with certain standards/etc at a 
certain time point and then there is a public comment period (where 
exactly?) and if no-one objects they are in.


It happens at mozilla.dev.security.policy and we have been doing it for 
several years already. Please join us in reviewing CAs on that list.


9) There is no formal process to revoke certificates for a CA that 
violates the rules. Heck, there's no official set of rules for them to 
break (is the threshold one signed piece of malware, or one hundred? A 
provably weak domain authentication process that reliably lets people 
buy certificates for domains they don't own? Etc.).


I believe this is work in progress at the moment. See 
https://wiki.mozilla.org/CA:Root_Change_Process


10) I'm not even sure whom exactly to contact about these issues, or 
where to report security problems involving a CA doing bad things (so 
I've been lurking on the list for a bit and am now posting).


All the action happens at mozilla.dev.security.policy (and this list is 
actually the wrong one for this discussion).


I've also seen these topics raised in this forum, Bugzilla, etc., and 
nothing much came of them, which sadly is what I expect to happen here 
too. One simple question I'd love to see answered: who exactly is in 
charge of this and what exactly do they do? (It seems that certificate 
approval duty floats around between a few people.)




Currently Kathleen Wilson is the module owner and in charge of CA issues. 
A few other Mozilla employees are involved from time to time, along with 
a couple of volunteers who perform the reviews.


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg



Re: Another protection layer for the current trust model

2010-02-22 Thread Martin Paljak
Hello Kurt and others.

This is something I'd like to see a thorough answer to from someone in 
charge of these things at Mozilla.

TIA,
Martin.

On Feb 22, 2010, at 23:25 , Kurt Seifried wrote:
> [Kurt's list of ten problems with the current CA trust model, quoted in full; his original message appears elsewhere in this thread.]
> 
> -Kurt


-- 
Martin Paljak
http://martin.paljak.pri.ee
+3725156495




Re: Another protection layer for the current trust model

2010-02-22 Thread Kurt Seifried
>
>
>
> This does not mean that the certificate verification mechanics are at
> fault;
> it only means that CA selection protocol has not been thought out properly:
> it limped along with a handful of CAs, it is showing the serious symptoms
> of the malaise with hundreds. In the meantime, does anybody here have any
> estimate of the number of CAs we expect to be around in the foreseeable
> future? And what was the number of CAs anticipated when the current
> anointment protocol was conceived?
>

I think it's more subtle than that. Some of the problems, in brief:

1) Mozilla/Firefox either trusts a CA 100% or not at all.
2) Since I can't adjust trust, or have Firefox warn me that I'm viewing a
site using a certificate I don't completely trust, I can either remove the
root certificate and then deal with every unknown certificate I encounter,
or I can manually look at EACH certificate I encounter and figure out who
signed it and whether or not I trust them enough (I might trust a site
enough to simply read it, but not enough to enter my credit card number,
for example).
3) It's very difficult even for technical users to find out who exactly
signed a certificate. For example, a certificate is signed by "valicert";
who is that? (Tumbleweed bought Valicert, and then Axway bought Tumbleweed;
who the heck is Axway and what exactly do they do?) Or a certificate is
signed by beTrust; who is that? (It joined up with Baltimore CyberTrust to
form Cybertrust, and Verizon in turn purchased the whole thing.)
4) CAs are generally not restricted in whom they can issue certs to, i.e.
governmental CAs (Turkey, Holland, Denmark, etc.) are not restricted to
issuing certs within .tr, .nl, .dk for example (there are good arguments
for and against this, but I think it should at least be discussed, and I'd
love to see a bit more user control over this).
5) There is no way for an end user to really verify the CPS/CP documents;
most CAs seem to publish them online, but quite a few are out of date by
several years.
6) There appears to be no re-evaluation for CAs that are bought out or
merge with other CAs.
7) There are several suspicious and questionable-looking CAs in
Mozilla/Firefox, e.g. Internet Publishing Services from Spain: they have
7 certificates; what possible need is there for 7 certificates?
8) The CA approval protocol appears to largely fail open: CAs submit
paperwork showing they comply with certain standards at a certain point in
time, then there is a public comment period (where, exactly?), and if
no one objects they are in.
9) There is no formal process to revoke certificates for a CA that violates
the rules. Heck, there's no official set of rules for them to break (is the
threshold one signed piece of malware, or one hundred? A provably weak
domain authentication process that reliably lets people buy certificates
for domains they don't own? Etc.).
10) I'm not even sure whom exactly to contact about these issues, or where
to report security problems involving a CA doing bad things (so I've been
lurking on the list for a bit and am now posting).

I've also seen these topics raised in this forum, Bugzilla, etc., and
nothing much came of them, which sadly is what I expect to happen here too.
One simple question I'd love to see answered: who exactly is in charge of
this and what exactly do they do? (It seems that certificate approval duty
floats around between a few people.)
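
One small mitigation for point 3: the issuer can at least be displayed programmatically. Python's ssl module, for instance, returns the peer certificate's issuer from SSLSocket.getpeercert() as nested RDN tuples; here is a sketch that flattens them (the certificate dict below is hand-written for illustration, in the shape getpeercert() uses):

```python
def issuer_string(cert: dict) -> str:
    """Flatten the nested RDN tuples of a getpeercert()-style dict."""
    parts = []
    for rdn in cert.get("issuer", ()):
        for name, value in rdn:
            parts.append(f"{name}={value}")
    return ", ".join(parts)

# Illustrative dict in the shape ssl.SSLSocket.getpeercert() returns:
cert = {
    "issuer": (
        (("countryName", "US"),),
        (("organizationName", "ValiCert, Inc."),),
        (("commonName", "ValiCert Class 2 Policy Validation Authority"),),
    )
}
print(issuer_string(cert))
# prints: countryName=US, organizationName=ValiCert, Inc., commonName=ValiCert Class 2 Policy Validation Authority
```

Of course, this only names the issuer; it still can't tell you who owns that name today, which is the real complaint.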

-Kurt

Re: Another protection layer for the current trust model

2010-02-22 Thread makrober

Nelson B Bolyard wrote:

On 2010/02/22 02:11  PST, makrober wrote:

The CNNIC controversy has exposed the fallacy of the current SSL 
implementation premise, 


Rather, it has exposed an unrelenting amount of accusation without
evidence.  Show us a single falsified certificate.  Anything less is
unworthy of this forum.


Personally, I have no view of that particular CA; I am interested in
the somewhat abstract concept of trust, and what has to be done to
model it properly in a computer system.

It appears to me that what we have here is a clash between the concepts
of trust held by two sides: in the world of crypto product architects,
trust is created by a promise, and it takes proven malfeasance for it
to expire. In the real world, a promise is not enough to create trust;
there, trust is earned by actions and can be lost on mere suspicion.

MakRober



Re: Another protection layer for the current trust model

2010-02-22 Thread Nelson B Bolyard
On 2010/02/22 02:11  PST, makrober wrote:
> Nguyễn Đình Nam wrote:
>>> What you're trying to do is a "who is watching the watchers" kind thing...
> 
>> ...Every existing CA [...] made a promise to comply to the universal PKI 
>> trust policy, we just need a scheme to enforce their promise.
> 
> If we need a scheme to enforce some TTP's promise of uncorruptibility, he
> evidently does not qualify as a Trusted Third Party.
> 
> The CNNIC controversy has exposed the fallacy of the current SSL
> implementation premise, 

Rather, it has exposed an unrelenting amount of accusation without
evidence.  Show us a single falsified certificate.  Anything less is
unworthy of this forum.

Re: Another protection layer for the current trust model

2010-02-22 Thread Martin Paljak
On Feb 22, 2010, at 13:03 , Nguyễn Đình Nam wrote:
>> 
> I agree with you that the CA selection protocol should be revived, but
> we should also add one auditing layer on top of it anyway; it's an
> independent problem.
CAs are audited; AFAIK that's one of the basic requirements. If your problem 
is auditing, or not trusting the processes inside the CA, implement a 
multi-party CA.


>> Otherwise (as it was correctly observed in one of the previous messages),
>> we can add layers upon layers of "watcher watchers" without ever addressing
>> the fundamental problem.
> We don't need and don't want (near) absolute security; one auditing
> layer is reliable enough.
> Have you considered my argument about the financial report and the
> auditor? Even the most prestigious public company needs an auditor,
> but one layer is enough for general use, not countless layers upon
> layers. Of course, there are still layers of law above all of them.

Financial auditing has been shown to fail miserably, especially in the past 
few years (think: Enron, the US banking industry, credit rating companies, 
and so on).

Yes, it is better than nothing; they learn from their mistakes, and the 
overall level is not that bad. But even though their core business is 
selling trust, business per se can come first. It is a matter of personal 
beliefs, experiences, and differences in cultures.

But if asked to improve or re-design existing trust models (nothing to do 
with basic cryptography), I would not patch one central point of failure 
with another one.
For me the real solution is basic education for users (there are fools who 
give out their PIN codes to the first one who asks, but that generation 
will soon be gone) and a web-of-trust kind of model, and/or at least having 
the trust decision made by human beings, not by the software.

The only real "upgrade" the X509/CA/SSL business ever had reveals itself 
only via a minor UI improvement: a green bar. If you claim that the trust 
mechanism provided by TLS is not good enough, and your solution claims to 
fix it with "By default, there is no new user interface feature; for the 
users, it just works. Relevant parties will watch over the problem," then I 
would say that you have failed.

Why?
Because you can't fix a thing that is in fact a personal decision made by a 
living person with a solution that the user will never notice and which 
"relevant parties" will deal with.

Long story short: if you think it would be useful to users, implement it as 
an extension and see how it does. For me, extensions that trick the SSL 
layer or send out requests to the world wide web without me noticing are a 
NO-NO. Implementing this as a core service will probably not happen.

At the same time, it seems there are many (more than three) people on this 
list who think the current trust model could be re-designed and more 
control given to the user. What about joining efforts and picking up or 
forking PSM/NSS to work in a different way?


-- 
Martin Paljak
http://martin.paljak.pri.ee
+3725156495


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: Another protection layer for the current trust model

2010-02-22 Thread Nguyễn Đình Nam
On Feb 22, 5:11 pm, makrober  wrote:
> > ...Every existing CA [...] made a promise to comply to the universal PKI
> > trust policy, we just need a scheme to enforce their promise.
>
> If we need a scheme to enforce some TTP's promise of uncorruptibility, he
> evidently does not qualify as a Trusted Third Party.
Why are you thinking in such absolutes in the world of cryptography?

> This does not mean that the certificate verification mechanics are at fault;
> it only means that CA selection protocol has not been thought out properly:
> it limped along with a handful of CAs, it is showing the serious symptoms
> of the malaise with hundreds. In the meantime, does anybody here have any
> estimate of the number of CAs we expect to be around in the foreseeable
> future? And what was the number of CAs anticipated when the current
> anointment protocol was conceived?
> If the above is correct - and I just don't think how one could argue
> otherwise - the ONLY solution is to put the selection of TTPs back into
> the hands of communicating parties. And not as an option, but as a default.
I agree with you that the CA selection protocol should be revived, but
we should also add one auditing layer on top of it anyway; it's an
independent problem.

> Otherwise (as it was correctly observed in one of the previous messages),
> we can add layers upon layers of "watcher watchers" without ever addressing
> the fundamental problem.
We don't need and don't want (near) absolute security; one auditing
layer is reliable enough.
Have you considered my argument about the financial report and the
auditor? Even the most prestigious public company needs an auditor, but
one layer is enough for general use, not countless layers upon layers.
Of course, there are still layers of law above all of them.


Re: Another protection layer for the current trust model

2010-02-22 Thread makrober

Nguyễn Đình Nam wrote:

What you're trying to do is a "who is watching the watchers" kind thing...


...Every existing CA [...] made a promise to comply to the universal PKI 

> trust policy, we just need a scheme to enforce their promise.

If we need a scheme to enforce some TTP's promise of uncorruptibility, he
evidently does not qualify as a Trusted Third Party.

The CNNIC controversy has exposed the fallacy of the current SSL
implementation premise, i.e., that there can exist a large (and growing!)
number of TTPs that are selected by a software vendor and then trusted by
the whole population of users of their family of computer communication
applications.

This does not mean that the certificate verification mechanics are at fault;
it only means that the CA selection protocol has not been thought out
properly: it limped along with a handful of CAs, and it is showing serious
symptoms of malaise with hundreds. In the meantime, does anybody here have
an estimate of the number of CAs we expect to be around in the foreseeable
future? And what number of CAs was anticipated when the current anointment
protocol was conceived?

If the above is correct - and I just can't see how one could argue
otherwise - the ONLY solution is to put the selection of TTPs back into
the hands of the communicating parties. And not as an option, but as the
default.

Otherwise (as it was correctly observed in one of the previous messages),
we can add layers upon layers of "watcher watchers" without ever addressing
the fundamental problem.

MakRober


Re: Another protection layer for the current trust model

2010-02-22 Thread Nguyễn Đình Nam
> What you're trying to do is a "who is watching the watchers" kind thing and 
> as you described, you do this by adding another central piece of machinery to 
> the picture where another central piece of machinery is easily manipulated 
> into rogue actions. I don't see how this would make anything better.
I think it's much better. Every existing CA, including CNNIC, made a
promise to comply with the universal PKI trust policy; we just need a
scheme to enforce their promise. It's quite easy for a single person
to breach the trust, but it's much harder for two independent
organizations to operate a conspiracy to breach trust.
It's like how people shouldn't trust the self-made financial report of a
public company, but if it's reviewed by an independent auditor, it's
considered trustworthy enough for serious use. Of course there may be
exceptions, but cryptography itself is not absolute anyway.

> If you're talking about a country level PKI (probably supported by law) and 
> the need to bring some bad guys operating in that system to justice under the 
> same law environment This should be fixed on that local level, not as an 
> addon software piece.
If an auditing scheme is not implemented, almost no bad guys will be
detected, so they'll be laughing all the way to the bank.

> The same problem haunts OCSP or all central services.
The proposed scheme reveals much less information than CRL or OCSP,
revealing only the first access instead of every access, so as long as
OCSP exists, the proposed "auditing scheme" is not a significant
additional privacy threat.


Re: Another protection layer for the current trust model

2010-02-21 Thread Martin Paljak
On Feb 22, 2010, at 05:20 , Nguyễn Đình Nam wrote:
> On Feb 22, 3:56 am, Eddy Nigg  wrote:
>> On 02/21/2010 09:34 AM, Nguyễn Đình Nam:
>> 
>>> The way to solve it is not to inform people of each potential attack,
>>> because there will be too many false positive, pushing people to just
>>> ignore it, rendering the scheme ineffective. The way to solve it is to
>>> let a small number of relevant and knowledgable people aware of the
>>> incident...
>> 
>> Chances that this will happen are almost nil I think.
> I googled your name and I found
> https://bugzilla.mozilla.org/show_bug.cgi?id=470897
> So it did happen. A CA actually abused the trust.
> The proposed scheme is explicitly meant to prevent this case.
Your e-mail subject describes your intention well: "everything (in software) 
can be fixed by adding another layer". Yet you can't (easily) fix a broken 
trust issue with another layer, especially if the added layer has the same 
(broken) traits as the original one (a vulnerable centre of gravity).

I don't trust a random CA (or, for example, the CA of my country) any more 
than I would trust a bunch of (possibly well-chosen and knowledgeable) 
people, chosen not by me, to "guard and direct" my trust decisions. 

What you're trying to do is a "who is watching the watchers" kind of thing, 
and as you described, you do it by adding another central piece of machinery 
to a picture in which a central piece of machinery is already easily 
manipulated into rogue actions. I don't see how this would make anything 
better.

If you're talking about a country-level PKI (probably supported by law) and 
the need to bring bad guys operating in that system to justice under the 
same legal environment, this should be fixed at that local level, not with 
an add-on software piece.

Probably some sound multiparty control/public verification mechanism, 
backed by cryptography and implemented by the central CA and/or enforced by 
local laws, would give better results.


>> there are privacy issues involved too if this would
>> be in a default build. I guess it's not feasible.
> I think it should be in the default build instead of an add-on. Yes
> there is a small privacy issue: if the intrusion detection server is
> malicious, it'll know each time a user establishes a secured
> connection to somewhere else the first time, but not following
> accesses.
The same problem haunts OCSP or all central services.


> If the intrusion detection server is managed by the creator
> of browser itself (in this case, it's Mozilla), the privacy issue is
> solved.
How come? Some people are OK with their browser sending "check if this URL 
contains something bad" kinds of messages out to the internet (to Google, 
to an antivirus provider, to Microsoft, or to anyone else); others are not. 
The fact that the big brother happens to be the browser vendor does not 
solve the privacy issue for those who care.

The scheme would be similar in nature and function to the "URL scanners" 
installed by software such as AVG. Some people install them, by accident or 
knowingly, and are OK with that. Others disable them ASAP. Implementing 
something like this in the core browser would be like implementing a big 
brother agent.


-- 
Martin Paljak
http://martin.paljak.pri.ee
+3725156495



Re: Another protection layer for the current trust model

2010-02-21 Thread Nguyễn Đình Nam
On Feb 22, 3:56 am, Eddy Nigg  wrote:
> On 02/21/2010 09:34 AM, Nguyễn Đình Nam:
>
> > The way to solve it is not to inform people of each potential attack,
> > because there will be too many false positive, pushing people to just
> > ignore it, rendering the scheme ineffective. The way to solve it is to
> > let a small number of relevant and knowledgable people aware of the
> > incident...
>
> Chances that this will happen are almost nil I think.
I googled your name and I found
https://bugzilla.mozilla.org/show_bug.cgi?id=470897
So it did happen. A CA actually abused the trust.
The proposed scheme is explicitly meant to prevent this case.

> there are privacy issues involved too if this would
> be in a default build. I guess it's not feasible.
I think it should be in the default build instead of an add-on. Yes,
there is a small privacy issue: if the intrusion detection server is
malicious, it will learn each time a user establishes a secured
connection to somewhere for the first time, but not subsequent
accesses. If the intrusion detection server is managed by the creator
of the browser itself (in this case, Mozilla), the privacy issue is
solved.

Re: Another protection layer for the current trust model

2010-02-21 Thread Eddy Nigg

On 02/21/2010 10:56 PM, Eddy Nigg:

On 02/21/2010 09:34 AM, Nguyễn Đình Nam:

The way to solve it is not to inform people of each potential attack,
because there will be too many false positive, pushing people to just
ignore it, rendering the scheme ineffective. The way to solve it is to
let a small number of relevant and knowledgable people aware of the
incident...


Changes that this will happen are almost nil I think.


s/changes/chances/ :-)

--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg



Re: Another protection layer for the current trust model

2010-02-21 Thread Eddy Nigg

On 02/21/2010 09:34 AM, Nguyễn Đình Nam:

The way to solve it is not to inform people of each potential attack,
because there will be too many false positive, pushing people to just
ignore it, rendering the scheme ineffective. The way to solve it is to
let a small number of relevant and knowledgable people aware of the
incident...
   


Changes that this will happen are almost nil I think. It's perhaps a 
noble effort, but not many would bother installing your add-on in the 
first place; second, there are privacy issues involved too if this were 
in a default build. I guess it's not feasible.


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg



Re: Another protection layer for the current trust model

2010-02-20 Thread Nguyễn Đình Nam
> If this solution would solve the problem in such an easy way, why isn't
> it already in use for more than a decade? Recent studies going at task
> with those accessing SSH servers have shown that most users simple edit
> their known_hosts file - those people are way more knowledgeable than
> the casual users. It doesn't work...
Probably most of you are thinking of how to prevent MITM attacks in
general, especially for self-signed certificates or their equivalent, SSH.

What I want is different: I want to prevent the case where a trusted
CA abuses its power. Currently, if a CA decides to create a rogue
certificate to mount a MITM attack against a few selected people, that
CA will most likely get away undetected and unpunished. This kind of
attack is the real-life threat raised to awareness by the CNNIC
controversy.

The way to solve it is not to inform people of each potential attack,
because there will be too many false positives, pushing people to just
ignore them and rendering the scheme ineffective. The way to solve it is
to make a small number of relevant and knowledgeable people aware of the
incident, so the public can bring the violator to justice.


Re: Another protection layer for the current trust model

2010-02-20 Thread Eddy Nigg

On 02/21/2010 04:11 AM, Nguyễn Đình Nam:

I think you didn't look closely at my description.
The intrusion detection servers track the changes over time of the
certificates belonging to a host name, as reported by user agent
software around the world; this is just like "Perspectives". If the
legitimate certificate from the web server reaches the web browser even
once, it will be recorded.
   


This will work out much as with SSH keys or anything else that changes 
fairly often: people will simply ignore it and take it as a fact of life 
that this happens from time to time. It just takes a little longer: first 
they examine the certificate, perhaps, convince themselves that it's a new 
certificate, and allow it through. So does your tracking server, and over 
time you are back at square one; people will click through like with 
anything else.


If this solution could solve the problem in such an easy way, why hasn't 
it already been in use for more than a decade? Recent studies of users 
accessing SSH servers have shown that most users simply edit their 
known_hosts file, and those people are far more knowledgeable than casual 
users. It doesn't work...


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg



Re: Another protection layer for the current trust model

2010-02-20 Thread Nguyễn Đình Nam
> 1. How do you secure the connection to the perspectives server?
The software will be released with predefined intrusion detection
servers, each with its own X.509 certificate, which should be self
signed. This is an "Auditive" mechanism: by using it, we are being
suspicious of every CA, so we must not rely on the same CAs we are
trying to audit. The connection should be HTTPS for easy implementation.
I don't see a description of how "Perspectives" deals with this issue;
can you explain?
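A minimal sketch of pinning the intrusion detection server's
self-signed certificate in this way (Python; the fingerprint constant
and function names are illustrative only, not part of the proposal -
the example value is simply sha256(b"test")):

```python
import hashlib
import hmac

# Hypothetical pinned SHA-256 fingerprint of the intrusion detection
# server's self-signed certificate, shipped with the software.
PINNED_FINGERPRINT = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def is_pinned_server(cert_der: bytes, pinned: str = PINNED_FINGERPRINT) -> bool:
    """Accept only the exact pre-shipped certificate, bypassing the CA list."""
    return hmac.compare_digest(fingerprint(cert_der), pinned)
```

A real client would take cert_der from the TLS handshake and refuse
the connection when the check fails.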

> 2. How do you avoid false reports for the multiple servers that legitimately
> claim to be the same server (same DNS name) in a content distribution
> network (e.g. akamai)?
I don't know why this "Auditive" scheme has to avoid that problem;
what is the threat? BTW, IMHO, a CDN is used to distribute popular
content, so the connection to a CDN would normally be in plain text.

> 3. This scheme doesn't help when the MITM places himself close to the server
> under attack (e.g. the server's ISP), such that all the clients everywhere
> (except at the server's own point of presence) see the attacker's MITM'ed
> cert chain.   Isn't that a likely scenario for attacks in situations where
> the ISP is controlled by the hostile party?
I think you didn't look closely at my description.
The intrusion detection servers track the changes of the certificates
belonging to a host name over time, as reported by user agent software
around the world, just like "Perspectives". If the legitimate
certificate from the web server reaches the web browser even once, it
will be recorded.
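The tracking described above can be modeled very roughly as follows
(Python sketch; the CertTracker class and its method names are
invented for illustration):

```python
class CertTracker:
    """Toy model of the server-side tracking: record every
    (hostname, fingerprint) pair reported by user agents and flag
    fingerprints the server has not seen before for that host."""

    def __init__(self):
        self.seen = {}  # hostname -> set of known fingerprints

    def report(self, hostname: str, fp: str) -> bool:
        """Return True if this fingerprint is new for the host (suspicious)."""
        known = self.seen.setdefault(hostname, set())
        if fp in known:
            return False  # already recorded, nothing to do
        known.add(fp)
        return True
```

Once the legitimate certificate has been reported once, any later
fingerprint for the same host stands out as a potential MITM.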

The difference between "Perspectives" and this "Auditive" scheme is
that Auditive is intended to inform the system admin of the potential
intrusion, not the user. This is an advantage. Imagine that Gmail
changes its certificate when the existing one is about to expire: many
millions of "Perspectives" users would be alerted and panic, because
consumers are unlikely to be able to interpret the differences, and
many unnecessary support requests would be generated.


Re: Another protection layer for the current trust model

2010-02-20 Thread Nelson B Bolyard
On 2010-02-20 08:46 PST, Nguyễn Đình Nam wrote:
[yet another promotion of "perspectives"]

Questions/issues:

1. How do you secure the connection to the perspectives server?
   (This is a recursive problem)

2. How do you avoid false reports for the multiple servers that legitimately
claim to be the same server (same DNS name) in a content distribution
network (e.g. akamai)?

3. This scheme doesn't help when the MITM places himself close to the server
under attack (e.g. the server's ISP), such that all the clients everywhere
(except at the server's own point of presence) see the attacker's MITM'ed
cert chain.   Isn't that a likely scenario for attacks in situations where
the ISP is controlled by the hostile party?



Re: Another protection layer for the current trust model

2010-02-20 Thread Nguyễn Đình Nam
I forgot to mention that I am aware of two similar mechanisms:
"Perspectives": http://www.cs.cmu.edu/~perspectives/firefox.html
"Certificate Patrol": https://addons.mozilla.org/en-US/firefox/addon/6415

According to my analysis, my proposed mechanism has the following
advantages:
* Easier to use: no user interaction is required
* A better chance of catching a rogue certificate
* Informs the general public of the incident and provides evidence to
punish the rogue CA

But I may be biased, so please comment on the idea; if you think it's
really better, I'll implement it.


Another protection layer for the current trust model

2010-02-20 Thread Nguyễn Đình Nam
Background
Recently I read about the problem between Mozilla and CNNIC. Many
years ago I was a cryptography researcher, and I worked on this
problem when my country, Vietnam, started working on a central PKI.
Vietnam is similar to China: the possibility of being deceived by
rogue certificates created under government pressure is a risk people
must anticipate. I designed a mechanism that adds another protection
layer to the current trust model and may solve this problem quite
elegantly.

The mechanism
* When the user agent software (usually a web browser) obtains a
certificate it has never seen, it uses encrypted communication to
report the fingerprint of that certificate to a central intrusion
detection server. If the server determines that the certificate is
suspicious, it requests that the user agent send the certificate and
additional information to the server as evidence of the violation. If
this communication fails, the failure can be treated the same way as a
failure of CRL or OCSP.
* The user agent software caches the fingerprint of the certificate,
similarly to OpenSSH's known_hosts file, to bypass this process on
further visits and save bandwidth. The encrypted communication may be
HTTPS, to maximize reuse of the existing code base, or a more
lightweight protocol to save bandwidth.
* The intrusion detection server should treat a certificate as
suspicious the first time it sees that certificate. If the server
doesn't want to store too many certificates, it may choose to be
suspicious only about sensitive domains, which are more likely to be
targeted by eavesdroppers. The server may also have a mechanism to
inform the owner of a certificate if other certificates are issued
with similar information, such as host name, company name, etc.
* If a CA creates a rogue certificate, the evidence will be clear,
allowing relevant parties to adequately punish the violator; users
should not have to care about this issue. But there may be an optional
"paranoid mode", informing the user each time the intrusion detection
server determines that a certificate is suspicious.
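The client-side behaviour in the first two bullets can be sketched
like this (Python; UserAgentCache and report_func are hypothetical
names, and the encrypted transport to the server is abstracted away):

```python
import hashlib

class UserAgentCache:
    """Sketch of the client side: a local known-fingerprints cache
    (analogous to OpenSSH's known_hosts) so that only never-before-seen
    certificates are reported to the intrusion detection server."""

    def __init__(self, report_func):
        self.known = set()         # cached certificate fingerprints
        self.report = report_func  # e.g. an HTTPS POST to the server

    def on_certificate(self, cert_der: bytes) -> None:
        fp = hashlib.sha256(cert_der).hexdigest()
        if fp in self.known:
            return            # seen before: no bandwidth spent
        self.known.add(fp)
        self.report(fp)       # first sighting: report the short fingerprint only
```

Only the fingerprint crosses the wire on a first sighting, which is
why the overhead stays comparable to, or below, that of OCSP.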

Analysis of the mechanism:
* By default there is no new user interface feature; for the users, it
just works. Relevant parties will watch over the problem.
* It strengthens the existing trust model. Even a prestigious CA with
a perfect process comes with risks: usually the weakest link in a
cryptography system is the people using it, and the people managing
the CA may be corrupt, under pressure from the authorities, or
personally tempted to abuse their power.
* The fingerprint is very short, so the overhead is very low, likely
lower than OCSP's. The difference between using and not using this
mechanism probably won't be noticeable to users.

Conclusion
I believe this mechanism will add a missing link that improves our
trust model. "The love of money is the root of all evil": people have
not forgotten the case where VeriSign corrupted DNS and abused its
trusted status. We are putting too much trust in the list of CAs; it's
time to add a protection layer that allows us to punish potential
violators.

Please comment