Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-13 Thread erythrocyte
On Sat, Mar 13, 2010 at 1:00 PM, Robert J. Hansen r...@sixdemonbag.org wrote:

  I'm a little confused as to how that makes it any different from
 using the Pidgin OTR method.

 It's a question of degree, not kind.

  I simply open up an OTR session, ask my friend a question the answer to
 which is secret (only known to him)

 How do you know the secret is known only to him?  Most secrets really
 aren't; a good investigator can discover an awful lot of secret
 information about someone.  Shared-secret authentication is one of the
 weakest forms out there.  It's better than nothing, but it's not something
 that ought to be relied upon.  People tend to vastly overestimate how secret
 their secrets are.

 As an example, a few years ago I saw in a spy novel (set in the modern day)
 two protagonists negotiating a phone number over an insecure line.  "Hey,
 that guy we know who did X?  Take his phone number, subtract this number
 from it.  The resulting phone number is what you need to call."

 It sounds great and reliable: it's a shared secret.  The problem is it's
 totally bogus.  Phone numbers aren't random.  In the United States, for
 instance, phone numbers follow the NPA-NXX format.  That reduces this
 question down to a glorified Sudoku: a skilled investigator could figure it
 out in just a few minutes.



Thanks for the explanation.  Makes sense :-) . I think I understand the
pitfalls much better now.
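
To make the glorified-Sudoku point concrete, here is a toy Python sketch
of the eavesdropper's position (all numbers are invented, and the
investigator's shortlist is my own assumption, not anything from the book):

   # The offset is spoken over the insecure line, so the eavesdropper
   # hears it; the only secret left is the base number of "the guy who
   # did X", which an investigator can often shortlist.
   overheard_offset = 1_234_567

   # Hypothetical shortlist of plausible 7-digit base numbers:
   candidate_bases = [5_551_234, 5_550_199, 9_876_543, 2_345_678]

   def plausible_exchange(number: int) -> bool:
       """NXX-XXXX sanity check: the exchange digit N must be 2-9."""
       return 2_000_000 <= number <= 9_999_999

   for base in candidate_bases:
       target = base - overheard_offset
       if plausible_exchange(target):
           print(f"candidate target: {base} - {overheard_offset} = {target}")

Only a handful of subtractions, and the NANP structure prunes the rest.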


Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-13 Thread erythrocyte
On Sat, Mar 13, 2010 at 1:14 PM, Robert J. Hansen r...@sixdemonbag.org wrote:

 Even then — so what?  Let's say the Type II rate is 25%.  That's a very
 high Type II rate; most people would think that failing to recognize one set
 of fake IDs per four is a really bad error rate.  Yet, if you're at a
 keysigning party where there are five people independently applying a
 25%-faulty test, the likelihood of accepting a fake ID is under 1%.


It really depends on how you're calculating the combined probability. If you
take an example of 4 individuals at a keysigning party:

The combined probability that all individuals would accept a fake ID would
be 1/4 * 1/4 * 1/4 * 1/4 = 0.00390625.

However, the combined probability that at least one of the encounters would
result in accepting a fake ID would be 1/4 + 1/4 + 1/4 + 1/4 = 1.

Please do correct me if I've made a mistake. I'm not a math guru by any
means.

But all that aside, I'm pretty sure news reports, etc. of human traffickers,
smugglers, spies, etc. all confirm the fact that national IDs such as
passports can be forged and do in fact slip by immigration authorities
pretty commonly.


I think I've gotten the answer to the question in thread. Thanks.


Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-13 Thread erythrocyte
2010/3/13 Ingo Klöcker kloec...@kde.org
 Sorry, but your calculation is wrong. If the calculation was correct
 then with 5 encounters the probability would be 1.25 which is an
 impossibility. Probability is never negative and never > 1. (People say
 all the time that they are 110 % sure that something will happen, but
 mathematically that's complete nonsense.)

 The probability that the fake ID is rejected by all individuals is
  (1 - 1/4)^4.
 Consequently, the probability that the fake ID is not rejected by all
 individuals (i.e. it is accepted at least by one individual) is
  1 - (1 - 1/4)^4.

Ah yes. That makes sense :-) . Thank you.
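
For anyone who wants to double-check the arithmetic, both calculations fit
in a few lines of Python (the 1/4 per-encounter figure is the one from the
thread):

   p_accept = 1 / 4   # chance a single examiner accepts the fake ID

   # Probability that all four examiners accept it:
   print(p_accept ** 4)              # 0.00390625

   # Probability that at least one accepts it -- the complement of
   # "all four reject it", not the sum of the four probabilities:
   print(1 - (1 - p_accept) ** 4)    # 0.68359375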



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-13 Thread erythrocyte
On Sat, Mar 13, 2010 at 10:04 PM, Robert J. Hansen r...@sixdemonbag.org wrote:

 99.6%; a little different.  The binomial theorem gives us the correct numbers.

 0 failures: 31.6%
 1 failure: 42.2%
 2 failures: 21.1%
 3 failures: 4.7%
 4 failures: 0.4%

Alrighty... :-) . So the combined probability that there would be >= 1
failures would be 68.4%.
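
For reference, RJH's table falls straight out of the binomial formula; a
minimal Python sketch that reproduces it:

   from math import comb

   p, n = 0.25, 4   # per-examiner error rate, independent examiners

   for k in range(n + 1):
       prob = comb(n, k) * p**k * (1 - p)**(n - k)
       print(f"{k} failures: {prob:.1%}")

   # 0 failures: 31.6%    2 failures: 21.1%    4 failures: 0.4%
   # 1 failures: 42.2%    3 failures: 4.7%
   # P(>= 1 failure) = 1 - 31.6% = 68.4%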


 Anyway. [...] someone at the keysigning party will say, "hey, that's weird!"
 and show it to everyone else at the keysigning party.

Umm.. if I understand the nature of the probability tests or
calculations just mentioned above, the results have to be accepted as
they are. They either got it wrong or right. Those individuals who got
it wrong might have actually had that thought, "hey, that's weird,"
but eventually they did go ahead and make that wrong decision. I'm
just recollecting some probability concepts and hypothesis testing
concepts I learned a long time ago.

And besides, even if the above weren't true, how do I know that
someone who does have that thought will make sure to check with others
at the keysigning party?

 ...assuming there's not some deep systemic reason for the failure (i.e., all
 trials are independent), you still have nothing to worry about

I guess depending on one's security policy or requirements that's a
pretty weighty assumption to make.

Also, there's a difference between deciding a stranger's identity
solely based on a passport/national ID versus checking his/her ID
_and_ getting to know them a little better. And that decision lies in
the hands of the user. It's a more social issue I guess.

Anyhow, I've learned so much from this great discussion over the past
few days. Let me thank all who've cared to enlighten a new user such
as me about these things. This is definitely a great community! :-)



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-13 Thread erythrocyte
On Sun, Mar 14, 2010 at 8:08 AM, Robert J. Hansen r...@sixdemonbag.org wrote:
 On 3/13/10 8:06 PM, erythrocyte wrote:
 Umm.. if I understand the nature of the probability tests or
 calculations just mentioned above, the results have to be accepted as
 they are. They either got it wrong or right. Those individuals who got
 it wrong might have actually had that thought, "hey, that's weird,"
 but eventually they did go ahead and make that wrong decision. I'm
 just recollecting some probability concepts and hypothesis testing
 concepts I learned a long time ago.

 You don't.

 If person A and person B disagree on whether something is fake, the
 operating assumption is that it's fake.  The burden is on the person
 claiming it's *not* fake to persuade the person claiming it *is* fake
 that they're wrong.

 Alan: Hey, Bill, did you see this ID?  It looks fishy.
 Bill: It looked good to me.
 Alan: It doesn't look good to me.
 Bill: Okay.  Let me show you why I thought it was good, and let's take
       it from there...

Hmmm...I know this is already getting off-topic. But let me qualify
that by saying that it really depends on what error you're calculating
here. From my understanding, the probabilities calculated give you
random error. That is, given a population of 4 people, there is a
68.4% chance that there would be >= 1 failures purely by random effects,
regardless of what actions they may or may not take to influence their
chances of making a mistake.

These calculations do not give you the effects of systematic error or
bias. Systematic error would be what you're referring to. That can be
different.

The sum error would be a combination of random and systematic error.
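
A rough Monte Carlo sketch of that distinction, with the common-mode
failure modeled purely for illustration:

   import random

   TRIALS, EXAMINERS, P_MISS = 100_000, 4, 0.25

   def all_fooled_random() -> bool:
       # Random error: each examiner errs independently.
       return all(random.random() < P_MISS for _ in range(EXAMINERS))

   def all_fooled_systematic() -> bool:
       # Systematic error: one common-mode failure (say, a forgery that
       # defeats the checklist everyone was taught) fools all of them.
       return random.random() < P_MISS

   rnd = sum(all_fooled_random() for _ in range(TRIALS)) / TRIALS
   sysm = sum(all_fooled_systematic() for _ in range(TRIALS)) / TRIALS
   print(f"independent errors: P(all fooled) ~ {rnd:.2%}")    # ~0.39%
   print(f"systematic error:   P(all fooled) ~ {sysm:.2%}")   # ~25%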

Of course, all of this gives us a picture of the average chances of
error. When it comes to individual people, like you and me, we are not
averages. Some of us will be more adept than others at not making
mistakes and that in turn will depend on a whole slew of other
factors. Now all of that should be taken into account when thinking
about one's security policy.

And I might add that all of this also depends on what your perspective
is. I for one did not envision a scenario where Alan and Bill, from
your example, would discuss their ruminations with each other. Of
course that might happen. But not necessarily always. That's just
human behavior perhaps.

 besides, even if the above weren't true, how do I know that
 someone who does have that thought will make sure to check with others
 at the keysigning party?

 There is a word for someone who keeps their mouth shut about fake IDs at
 keysigning parties.  That word is *conspirator*.

Or *incompetent*, *stupid*, *lazy*, *not learned*, *unsure*,
*unaware*, etc. It could be any combination of the above :-) .



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
 a Security Now episode.
BTW Schneier did a nice interview discussing some SSL pitfalls here
http://www.v3.co.uk/v3/news/2258899/rsa-2010-q-bruce-schneier .


-- 
erythrocyte



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On 3/13/2010 2:14 AM, Doug Barton wrote:
 You posited a scenario where you are using OTR communications to verify
 a PGP key. My assumption (and pardon me if it was incorrect) was that
 you had a security-related purpose in mind for the verified key.

Yes :-) .



-- 
erythrocyte



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On 3/13/2010 1:01 AM, Robert J. Hansen wrote:
 Sure.  But the problem here isn't spoofed emails.  The problem here is living 
 in an area where basic human rights aren't respected.  The spoofed emails 
 didn't get them convicted: the spoofed emails were cooked up to provide 
 political cover for a conviction that was preordained.
 
 So I think the statement, "people get convicted ... based on spoofed emails
 ... all the time" is overreaching.  The basis for their conviction is they're
 members of a persecuted minority -- not spoofed emails.

Sure, that is such a valid point. I'm a completely new user to GPG, so
do pardon some of my ruminations :-) . I realize they might not be
completely correct.

I guess what I'm trying to say here is that the fact that regular people
don't understand what spoofing actually is, is itself a security hole.
The only way to correct something like that is to educate people and to
educate oneself. I also think the word 'spoofing' could apply not just
to emails, but to other things, such as forged real-life identities like
passports, as well. There's no way I could be trained enough to
recognize spoofing of the latter kind even at a keysigning party. So as
I begin to use GPG, I'm becoming more and more aware of the limitations
that one comes across - be they technological or social.

 Which leaves the question unanswered: since OTR exists to provide PFS/R, and 
 you ignore PFS/R, why use OTR?

I actually use Pidgin OTR because

a. it gives me the PFS/R option if and when I do think that might
   help (realizing its limitations nevertheless).
b. I just think the ease with which users can authenticate makes it
   a good choice. The secret answer method of authenticating is
   easy for most of my friends to understand.

 If you live in a place that does things like this, they can already throw you 
 in the gulag under any pretense they want...

Well, I do think that's such a relative thing. Just because you don't
notice these kinds of things going on in the place where you live
doesn't mean they don't happen. How many people actually bother to look?

I guess what I'm saying here is that human rights abuses can occur
anywhere and everywhere.

 ... but only by helping you keep information safe between the endpoints... 
 This does not mean GnuPG is defective.  It means you need to understand your 
 problem, your solution, and what tools you need to enact your solution.

I think that makes perfect sense. :-)


-- 
erythrocyte



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On 3/13/2010 1:10 AM, MFPA wrote:
  Each of these adds a given amount of risk that really should be
 made transparent to end-users IMHO.
 
 
 I think you might mean the risk should be made *clear* to end-users?
 Security is already *transparent* to end users visiting a secure website
 whose root certificate the browser already trusts.

I guess you could think of it that way. What I'm trying to say is
that there might be instances where your security requirements aren't
in line with what your browser already trusts. And there has to be a
method to improve that and make it more clear / transparent / etc.

 Some belong to well known CAs, while others belong to less reputable
 ones.
 
 A lot there that I've not heard of. Could be perfectly reputable, but
 I am unaware of their reputation...

Again 'repute' in this context is relative. People's gold-standards can
vary. I might be comfortable in trusting CA-A because they've actually
never ever screwed up in the past, while I wouldn't feel the same way
with CA-B because they actually have. It all goes back to how you define
your security requirements. Steve Gibson on his podcast, Security Now,
once talked about how a certificate from a well-known CA was spoofed
because a weak hash algorithm was used in signing.

-- 
erythrocyte



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On 3/12/2010 5:33 PM, Robert J. Hansen wrote:
 The question isn't whether you can.  The question is whether it's wise.  The 
 principle of using one credential to authorize the use of another credential 
 is about as old as the hills.  The ways to exploit this are about as old as 
 the hills, too. 

That actually got me thinking. Aren't keysigning parties based on that
model anyway?

You have an existing credential - a passport.
You then use that credential to verify another - a PGP key.

-- 
erythrocyte



Re: Re[2]: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On Sat, Mar 13, 2010 at 2:44 AM, MFPA expires2...@ymail.com wrote:

 I would question whether the defence solicitor was fit to practice if
 he didn't produce expert witnesses who could explain this sufficiently
 clearly for the jury to understand.



LOL ...Easier said than done, IMHO :-) :-P .

--
erythrocyte


Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On Sat, Mar 13, 2010 at 11:40 AM, Robert J. Hansen r...@sixdemonbag.org wrote:

  You have an existing credential - a passport.
  You then use that credential to verify another - a PGP key.

 The passport isn't used to verify the OpenPGP key.  The passport is used to
 verify *identity*.  The key fingerprint is used to verify the OpenPGP key.

 A signature is a statement of "I believe this person is associated with
 this OpenPGP key."  To do that, you have to first verify the person is who
 you think they are (the passport); you have to verify the key is what you
 think it is (the fingerprint); and then you make a statement about the two
 being associated.


I'm a little confused as to how that makes it any different from using
the Pidgin OTR method.

I simply open up an OTR session, ask my friend a question the answer to
which is secret (only known to him) and thereby authenticate that it is in
fact him that I'm talking to. Next, over this secure connection, we exchange
our email-encryption key fingerprints, verify them, and then sign them, in
effect stating, like you mentioned: "Yes, I believe this person is associated
with this OpenPGP key."


Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-12 Thread erythrocyte
On Sat, Mar 13, 2010 at 11:30 AM, Robert J. Hansen r...@sixdemonbag.org wrote:

  There's no way I could be trained enough to
  recognize spoofing of the latter kind even at a keysigning party.

 A serious question here -- have you considered writing Immigration and
 Customs Enforcement or the Border Patrol (or equivalent groups, wherever you
 are) and asking them for information on how to distinguish real passports
 from forgeries?

 Most governments are very willing to tell people what to look for.  It's in
 their best interests for official identity documents to not be forged, and
 for forgeries to be discovered as quickly as possible.  When I've asked the
 United States government about this they've always been cooperative.

 You'd be amazed what you can learn just by having the chutzpah to walk up
 to someone who knows and saying, "hi, could you share?"  :)



The reason I think it's still difficult is that even immigration
officials get duped all the time.


  b. I just think the ease with which users can authenticate makes it
  a good choice. The secret answer method of authenticating is
  easy for most of my friends to understand.

 It is also a far weaker form of authentication than is often recommended
 for OpenPGP keys.  Not that this makes the technique invalid, but the weaker
 authentication needs to at least be considered.



Okay. What weakness(es) do I need to be wary of?



  Well, I do think that's such a relative thing. Just because you don't
  notice these kinds of things going on in the place where you live
  doesn't mean they don't happen. How many people actually bother to look?

 The United States has 1400 independent daily newspapers, each of which
 employs a large number of people whose job it is to look.  On top of that you
 have groups like the Innocence Project that look for abuses in criminal
 courts, you have groups like ACCURATE that look for abuses in voting, you
 have...

 The Western tradition of government usually involves a lot of people
 looking.  This is certainly not to say that abuses don't happen -- they
 clearly do -- but they do not occur at the frequency many fear.



Pardon me for being skeptical about all of that. I realize that this is a
controversial issue and I'm respectful of what you believe.

--
erythrocyte


Off-The-Record Email

2010-03-11 Thread erythrocyte
I'm a user of Pidgin with the off-the-record plugin:

   http://www.cypherpunks.ca/otr/help/3.2.0/levels.php?lang=en
   http://www.cypherpunks.ca/otr/help/3.2.0/authenticate.php?lang=en

Is there a way to have off-the-record email conversations
with GPG technology? It would definitely be a terrific thing. Email is
traditionally supposed to mirror paper mail and paper mail is usually
not thought of as being off-the-record, so I guess that's probably why
no one really thinks about it.

But if the technology exists, it would be fantastic to have OTR email
conversations every now and then.

I came across some interesting articles and papers worth checking
out:

  http://circle.ubc.ca/handle/2429/18249
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.2823

but nothing out there for users to be able to use.



Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-11 Thread erythrocyte
I'm a user of Pidgin with the off-the-record plugin:

   http://www.cypherpunks.ca/otr/help/3.2.0/levels.php?lang=en
   http://www.cypherpunks.ca/otr/help/3.2.0/authenticate.php?lang=en

In order to use GPG-based email encryption properly, it's important for
users to authenticate with each other and verify that the public keys
downloaded from the keyservers have fingerprints that match the ones on
their respective computers. Typically the most secure way to cross-check
fingerprints is via a secure channel such as an in-person meeting. But a
phone call comes pretty close too (assuming it would be difficult to
mount a voice man-in-the-middle attack).

But what if there were no way to meet in person, make a phone call, or
make a VoIP call? I was wondering if using Pidgin with the OTR plugin (and
authenticating the OTR session using the QA method; see above link)
could be considered a secure channel to exchange and cross-check GPG key
fingerprints in such a case.
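
If the answer is yes, the mechanical part of the cross-check is trivial; a
minimal sketch, assuming the OTR channel itself can be trusted (the
fingerprint below is made up):

   def normalize(fpr: str) -> str:
       """Strip the spacing 'gpg --fingerprint' inserts; ignore case."""
       return "".join(fpr.split()).upper()

   # Pasted by my friend inside the authenticated OTR session:
   from_chat = "90AF 2BBE 216A 8C66 3D05  298A 2DD3 3726 90FB E11B"
   # Shown by 'gpg --fingerprint' for the key fetched from a keyserver:
   from_keyserver = "90af2bbe216a8c663d05298a2dd3372690fbe11b"

   if normalize(from_chat) == normalize(from_keyserver):
       print("fingerprints match: the key is the one my friend holds")
   else:
       print("MISMATCH: do not sign or use this key")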

Any thoughts?



Re: Implications Of The Recent RSA Vulnerability

2010-03-11 Thread erythrocyte
On 3/11/2010 3:29 PM, Dan Mahoney, System Admin wrote:
 On Thu, 11 Mar 2010, erythrocyte wrote:
 Ref:
 http://www.engadget.com/2010/03/09/1024-bit-rsa-encryption-cracked-by-carefully-starving-cpu-of-ele/

 
 Okay, let me sum up this article for you:
 
  Researchers who had enough physical access to be able to rewire the
 private-key-holder's system's power supply were able to compromise that
 system.
 
 If you're at that point, I don't think key length is your problem.

Alrighty. But doesn't this compromise the layer of security offered by
the passphrase? What's the point of having a passphrase at all, if it's
so easy to compromise a private key?




Re: Implications Of The Recent RSA Vulnerability

2010-03-11 Thread erythrocyte
On 3/11/2010 9:13 PM, Robert J. Hansen wrote:
 OpenPGP assumes the endpoints of the communication are secure.
 If they're not, there's nothing OpenPGP can do to help you make it
secure.
 ...All tools have preconditions: the existence of a precondition doesn't mean 
 the tool is broken.
 The precondition for a microwave oven is, the food must need heating.
 The precondition for OpenPGP is, the endpoints must be secure. 


How very eloquently put :-) . Makes sense. Thanks :-) .



Re: Implications Of The Recent RSA Vulnerability

2010-03-11 Thread erythrocyte
On 3/11/2010 9:15 PM, David Shaw wrote:
 Basically, no, and for several reasons.  There are a few things that need to 
 be understood about the new attack.  Briefly, this is an attack that relies 
 on manipulating the power supply to the CPU, in order to cause it to make 
 errors in RSA signatures.  If you process a lot of these errored signatures, 
 you can recover the secret key.
 
 In practice, and with GPG, however, it's a pretty hard attack to mount.  
 First of all, you have to have access to and the ability to manipulate the 
 power supply to the CPU.  If someone had that kind of access to your machine, 
 there are better attacks that can be mounted (keyboard sniffer, copying the 
 hard drive, etc.)   Secondly, your 4096 bit key is much larger than the 
 1024-bit keys the researchers were able to break.  Thirdly, the attacker 
 needs thousands and thousands of signatures with errors in them.  This takes 
 time to gather, increasing the amount of time that the attacker needs to be 
 manipulating your power supply.  Lastly, and perhaps most significantly, GPG 
 has resistance to this particular attack anyway: it checks all signatures 
 after creation to make sure that nothing like this happened.  If an attacker 
 managed to make the CPU hiccup and make an error when generating the 
 signature, the signature check would see the signature was invalid and cause 
 GPG to exit with an error.

Thanks for the explanation. Makes sense :-) .
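
For the curious, the verify-after-sign countermeasure David describes can
be pictured with a toy sketch: textbook RSA with tiny made-up parameters,
nothing like GnuPG's actual implementation:

   p, q = 61, 53
   n = p * q                # 3233
   e, d = 17, 2753          # textbook exponents: e*d = 1 (mod phi(n))

   def sign(m: int) -> int:
       return pow(m, d, n)  # the fault-prone private-key operation

   def release(m: int, sig: int) -> int:
       # The countermeasure: re-verify before letting the signature out.
       if pow(sig, e, n) != m:
           raise RuntimeError("self-check failed: possible fault")
       return sig

   m = 65
   release(m, sign(m))      # a good signature passes the self-check

   faulty = sign(m) ^ 1     # simulate a power glitch flipping one bit
   try:
       release(m, faulty)
   except RuntimeError as err:
       print(err)           # the corrupted signature never escapes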



Re: Using the OTR plugin with Pidgin for verifying GPG public key fingerprints

2010-03-11 Thread erythrocyte
On 3/12/2010 10:54 AM, Doug Barton wrote:
 "Secure" in this context is a relative term. (Note, I'm a long-time user
 of pidgin+OTR and a longer-time user of PGP, so I'm actually familiar
 with what you're proposing.) If you know the person you're IM'ing well
 enough, you can do a pretty good job of validating their OTR
 fingerprint. But how secure that is depends on your threat model. Are
 you going to be encrypting sensitive financial data? Fruit cake recipes?
 Blueprints for nuclear weapons? Is the security of your communication
 something that you're wagering your life (or the lives of others) on?


Hmmm...if I understand it correctly, if and when the OTR session is
fully verified/authenticated it doesn't matter what the content of the
data you transmit is. It could be any of the above - fruit cake recipes,
financial data, etc.


 Is your communication of high enough value that your associate could have a
 gun to their head held by someone who is forcing them to answer your OTR
 questions truthfully? (Remember, you can't see them, or hear stress in
 their voice, you can only see what they type.) Have you and your
 associate pre-established a code question to handle the gun-to-the-head
 scenario?
 
 Hopefully that's enough questions to illustrate the point. :)


I don't think OTR technology can claim to solve the gun-to-the-head
scenario. Although it claims to give users the benefit of
perfect forward secrecy and repudiation, I think such things matter
little in a court of law. People get convicted, either wrongly or
rightly, based on spoofed emails and plain-text emails all the time.

I think the same goes for GPG-based email encryption as well.
GPG-encryption doesn't protect end-points. It only protects the channel
between them. The more end-points there are, the more vulnerable such
encrypted emails become.

The only scenario I see that minimizes end-point vulnerability is to
encrypt data to oneself. One end-point, one source of potential
compromise. Even that is susceptible to a rubber hose attack. In some
countries people are required to decrypt data if asked by law
enforcement and refusal to comply means jail time.

Bottom line, IMHO: you can't let out your inner demons just because
there's encryption technology. That isn't what it was built for, AFAIK.

The safest possible place for data to reside is within the confines
of one's own brain.

So I envision myself using OTR-based IM and GPG-based email encryption
only with a prior understanding of these deficiencies. If I'm confident
enough that the end-points are secure during an OTR-IM session that has
been authenticated, can I use such an IM session to exchange and
cross-check my friend's GPG public key fingerprint that I've downloaded
from a keyserver for email encryption purposes?


PS: Despite the much-hyped security behind SSL-based websites such as
online banking, if you care to look around you'll soon realize that even
that isn't as bullet-proof as one would like to think. There have been
instances where unscrupulous people have gotten digitally signed
certificates from TTPs/CAs (reputed ones, I might add) for businesses
that don't exist, etc. And with companies like Thawte, which besides
their traditional for-profit CA business model also provide individual
users free SSL certificates using email-based authentication, a lay
person who doesn't recognize the different kinds of Thawte certificates
could well trust that a given bank website is genuine when in fact it
might be a fraud.

All in all, encryption isn't the panacea that we'd like it to be. At
least not yet. There are multiple attack vectors that crop up all the
time - from social engineering to mathematical/technological.


--
erythrocyte



Changing & verifying the --max-cert-depth in Windows

2010-03-04 Thread erythrocyte
Hi,

I have installed the CLI version of GPG.

I understand that GPG options have to be set in a configuration file.
The configuration file can be created if it doesn't exist as per a
previous thread here

 http://lists.gnupg.org/pipermail/gnupg-users/2008-December/035146.html

I added the following line in my gpg.conf:

   max-cert-depth 3

And then ran:

   gpg --update-trustdb

And then:

   gpg --check-trustdb

And here's the output of the last command:

  gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
  gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
  gpg: next trustdb check due at 2011-03-03

It mentions that the --marginals-needed option is set to 3 and the
--completes-needed option is set to 1, which I think I'm okay with.
But the depth mentioned is 0!

Why hasn't it changed? And how do I verify my current --max-cert-depth value?



Re: Changing & verifying the --max-cert-depth in Windows

2010-03-04 Thread erythrocyte
On 3/4/2010 11:15 PM, Daniel Kahn Gillmor wrote:
 On 03/04/2010 08:18 AM, erythrocyte wrote:
 And here's the output of the last command:

   gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
   gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
   gpg: next trustdb check due at 2011-03-03

 It mentions that the --marginals-needed option is set to 3. And
 --completes-needed option is set to 1. Which I think I'm okay with.
 But the depth mentioned is 0!

 Why hasn't it changed? And how do I verify my current --max-cert-depth value?
 
 I think you're not reading that data the way that it was intended to be
 read.  (this is not your fault, the docs are pretty thin).
 
  That line says "of the certificates that are depth 0 from you (meaning
  they effectively *are* you), there is exactly one valid OpenPGP cert,
  and it has been granted ultimate ownertrust" -- this is a description of
  *your own key*, actually.  the "signed: 0" bit suggests that your key
  has made no certifications over the userIDs of any other OpenPGP key.
 
 When i run gpg --check-trustdb, i get an additional line of output:
 
 0 d...@pip:~$ gpg --check-trustdb
 gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
 gpg: depth: 0  valid:   1  signed:  83  trust: 0-, 0q, 0n, 0m, 0f, 1u
 gpg: depth: 1  valid:  83  signed: 128  trust: 70-, 1q, 1n, 10m, 1f, 0u
 gpg: next trustdb check due at 2010-03-07
 0 d...@pip:~$
 
 So my first line (depth: 0) looks similar to yours, but points out that
 my key has made certifications over the userIDs of 83 other keys.
 
 that second line (depth: 1) says:
 
   of the certificates that are 1 hop away from you, 83 of them are known
 to be valid (these are the same 83 that i've personally certified).
 none of them have ultimate ownertrust (otherwise that key would be
 listed in the depth: 0 line), one of them has full ownertrust ("1f"), 10
 have marginal ownertrust ("10m"), 1 has explicitly *no* ownertrust
 ("1n"), 70 i've never bothered to state ownertrust ("70-"), and 1 has
 explicitly-stated undefined ownertrust ("1q" -- i'm not really sure
 how this is different).
 
  I'm also not sure what the "signed: 128" suggests in the "depth: 1"
  line.  Surely of all 83 keys i've certified, they have collectively
 issued more than 128 certifications themselves.  maybe someone else can
 explain that bit?
 
 
 so, your max-depth is being respected -- you're nowhere near 3 hops away
 from your key.  in fact, it looks like you've issued no ownertrust to
 any key other than yourself, so changing the max depth won't have any
 current effect.
 

Thanks! That makes perfect sense :) .

 
 
 Here's my understanding:
 
  * when you certify the userID of a key, you're saying you believe that
 the real-world entity referred to by the User ID does in fact control
 the secret part of the key.
 
  * in particular, you say *nothing* about whether you feel you can rely
 on certifications made by that key.
 
  * internally to GPG, you can also assign a level of ownertrust to any
 given key -- this tells your OpenPGP toolset how much you are
 willing to believe certifications made by that key.
 
  * Your own key is marked by default as having ultimate ownertrust,
 which means that any userID/key combo certified by your key will be
 considered to be valid.
 
  * Note that GPG will not apply ownertrust to a key (even if you've
 specified it) unless it already believes that at least one User ID on
 that key is valid.
 
 
 
 So to reach a depth of 2, you'd have to have assigned ownertrust to at
 least one key that you had not personally certified (but was certified
 by other keys in which you've placed ownertrust).  To reach a depth of
 3, you'd have to have assigned ownertrust to one of the keys that are
 depth 2 from you, etc.
 
 hope this helps,
 
   --dkg
 


Thanks for the explanation. I think some bits of this could be added to
the GnuPG Handbook. The section on the web of trust lacks some much-needed
clarity.


Going over what you said, I think I'll be happy with a --max-cert-depth
of 2 :) .
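
To check my understanding, here's a simplified model of the validity
computation you describe -- my own toy approximation of the classic trust
model, not GnuPG's actual code:

   MARGINALS_NEEDED, COMPLETES_NEEDED, MAX_CERT_DEPTH = 3, 1, 2

   certs = {                  # signer -> keys whose user IDs it certified
       "me":    ["alice", "bob", "carol", "dave"],
       "alice": ["evan"],
       "bob":   ["evan"],
       "carol": ["evan"],
       "evan":  ["frank"],
   }
   ownertrust = {"me": "ultimate", "alice": "marginal", "bob": "marginal",
                 "carol": "marginal", "evan": "full"}

   valid = {"me": 0}          # key -> depth at which it became valid
   changed = True
   while changed:
       changed = False
       for key in {k for signed in certs.values() for k in signed}:
           if key in valid:
               continue
           full = marginal = 0
           depths = []
           for signer, signed in certs.items():
               # A certification counts only if the signer is valid,
               # trusted, and below the maximum certification depth.
               if (key in signed and signer in valid
                       and valid[signer] < MAX_CERT_DEPTH):
                   trust = ownertrust.get(signer)
                   if trust in ("ultimate", "full"):
                       full += 1
                   elif trust == "marginal":
                       marginal += 1
                   else:
                       continue
                   depths.append(valid[signer] + 1)
           if full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED:
               valid[key] = min(depths)
               changed = True

   print(sorted(valid.items(), key=lambda kv: (kv[1], kv[0])))
   # [('me', 0), ('alice', 1), ('bob', 1), ('carol', 1), ('dave', 1),
   #  ('evan', 2)] -- 'frank' never becomes valid: evan sits at depth 2,
   #  so with max-cert-depth 2 his certifications can't extend the chain.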

-- 
erythrocyte
