Hi synx,

On 02/03/2010 07:12, synx wrote:
>
> Trust networks are difficult to get working right, especially in a
> decentralized fashion. Trying to figure out what exactly I mean by
> "trust network" is hard in of itself, even for me! But here's my best
> shot at what I think we would want in a trust network, and what would
> make it most likely to work.


The classical problem of "trust" in computing/IT is that companies 
successfully managed to reverse the meaning, and nobody noticed.  So 
every time we look at it, we trip over the contradictions.

Trust is what I ascribe to you.  However, in the "trust business", a TTP 
(or CVP) tells me to trust you.  Instead of me being able to trust you, 
I can do nothing but accept you, even if I don't trust you.

That's not trust as humans know it, that's something else.

> What I want to do is make a trust network that gets separated into
> semantic categories or "actions". For instance a man who brought me
> flowers a thousand times might not be a man I trust with my life. But a
> man who saved my life I would be inclined to trust more to do so a
> second time. Conversely, a man who saved my life I would still have no
> reason to trust to be a reliable source of flowers. So trust itself
> is a categorical sort of thing. It depends not just on how much they did
> for you in the past, but on how much of what they did for you in the
> past. If a man murdered your friend with an axe, you would have very
> good reason to trust that this man will now murder the rest of you, and
> would hopefully take measures to take his axe away. You certainly would
> want to preserve that trust relationship (to avoid being axe-murdered)
> and yet you don't want it to spill over into your other trust
> relationships, as an axe murderer may be a very untrustworthy person to
> loan money to.


It is possible to categorise ... but that doesn't mean it is useful to 
do so.  Libraries catalogue books, but that tells us where to find a 
book, not how good it is.

Also, there is a sort of top-ten winners effect as soon as you succeed 
in delivering a useful metric for quality.  Once the metric is 
established, the ones in the top ten sell disproportionately to the ones 
off the top-ten list.  Also, capturing a top-ten spot becomes more an 
issue of money than of quality.
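
Still, to pin down what synx's per-action lists would even look like as 
data, here is a rough sketch in Python; every name and field in it is 
invented for illustration, it is not the format of any existing system:

    from dataclasses import dataclass, field

    @dataclass
    class TrustStatement:
        subject: str      # who is being trusted, e.g. a key fingerprint
        action: str       # the semantic category, e.g. "repay-loan"
        weight: float     # how strongly past evidence supports a prediction
        evidence: list = field(default_factory=list)  # pointers to past events

    @dataclass
    class TrustList:
        owner: str
        statements: list = field(default_factory=list)

        def trust_for(self, subject, action):
            # trust is looked up per (subject, action) pair, never globally
            matches = [s.weight for s in self.statements
                       if s.subject == subject and s.action == action]
            return max(matches, default=0.0)

    mine = TrustList(owner="me")
    mine.statements.append(TrustStatement("bob", "save-my-life", 0.9))
    print(mine.trust_for("bob", "save-my-life"))     # 0.9
    print(mine.trust_for("bob", "deliver-flowers"))  # 0.0

The only point of the sketch is that the lookup key is the pair of 
person and action, never the person alone.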


> I think there would have to be therefore some kind of semantic language
> behind any trust network. Each person would have a list, not just of who
> they trust, but who they trust to perform what action. If the action is
> beneficial, such as someone repaying your loan, or if the action is
> malignant, such as someone skipping out on payments, it's still a matter
> of trust. You're trying to predict how they will act in the future, and
> the more reliable your predictions can be, the less you get fooled by
> scams and con artists.
>
> Newsgroups have one such language. Each group is named by hierarchical
> topic. A poster in one group might act trollish and brusque, but while
> posting in another group would act proper and modest, and they might use
> the same PGP key for both groups. That happens a lot actually, that the
> environment of the group determines what attitude a poster will bring to it.
>
> But the PGP Web of Trust has no such categories. In fact it doesn't
> refer to trust at all, but is merely a way to extend already centralized
> identity tracking systems.


Right, the PGP Web of Trust is a network in name, but trust isn't quite 
what it delivers.  Rather, it delivers a sense of "who met whom" and 
therefore of likely shared interests.  But that isn't trust; it is more 
like a loose community.

What I like to do with these things is ask:  what goes wrong?  What 
happens when someone breaches trust?  Another way of asking is, how much 
money do I get if I am tricked (this is the notion of insurance).  With 
the PGP web of trust, the answer is "probably nothing", so the strength 
of the web of trust is, in my eyes, "probably worthless".  Fun, but not 
trust as I know it.

Let me drift over to a web of trust network I am involved with; and you 
choose which of the contrary definitions this is:  CAcert.

CAcert has a large body of Assurers (3401 yesterday) who run around the 
planet checking your "identity" and other things, p2p but also 
face2face.  Now, over the last N years it tried to do an audit, and the 
nasty auditor asked nasty questions like "what happens if it goes 
wrong?"  To cut a long story short, CAcert put in place two things:  a 
community agreement that all are signed up for, and a dispute resolution 
mechanism.  So.... the answer to the question of what goes wrong is 
simple:  file a dispute (they have about 10 overworked arbitrators 
because people keep filing disputes...).

Then, in the arbitration, the Arbitrator looks at all the evidence, and 
makes a ruling.  The ruling has some teeth, because the Arbitrator can 
award a fine of up to 1000 euros, not that this has happened as yet.

This is in place, and it helps to set the scene for Assurance, which is 
just that technical identity blah blah that people tell us we need. 
What happens next is far more interesting:

We have established a thing called the CAcert Assurer Reliable 
Statement, or CARS for short.  If we request some form of "proof" or 
evidence, we can simply ask any Assurer to go and research or do 
something, then report back and add CARS to the end, signifying that the 
author will stand by those words.  (We also often sign these things 
digitally.)

So, for example, when a sysadm uploads a new security patch, he can 
report back "I uploaded the necessary debian security patch, iang, CARS" 
and this statement can now be relied upon.
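
To make the mechanics concrete: a CARS message is nothing more than a 
plain statement of fact, an identified Assurer standing behind it, and 
often a digital signature on top.  A minimal sketch in Python, with 
invented field names and a placeholder check in place of real signature 
verification:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CarsStatement:
        text: str                    # the claim itself
        assurer: str                 # who stands behind it, e.g. "iang"
        signature: Optional[bytes] = None   # optionally a detached signature

        def wire_form(self):
            # the convention above: the claim, then the author, then "CARS"
            return "%s, %s, CARS" % (self.text, self.assurer)

    def carries_weight(stmt, known_assurers):
        # placeholder check: the author must be a known Assurer; in
        # practice one would also verify any attached signature
        return stmt.assurer in known_assurers

    report = CarsStatement(
        text="I uploaded the necessary debian security patch",
        assurer="iang")
    print(report.wire_form())                 # "..., iang, CARS"
    print(carries_weight(report, {"iang"}))   # True

The weight comes from the named Assurer being answerable under the 
community agreement and arbitration, not from anything clever in the 
data structure.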

Primarily we want these things so that we can present them to the 
auditor.  No longer are we talking about some volunteer with a penguin 
t-shirt; we are now talking about a statement that has some weight.  
With enough of these, and enough practice, we can present a solid 
statement.

And we use these things, these CARS messages, for all sorts of purposes. 
Last night a group of us badgered a volunteer into joining the formal 
association, and he agreed.  But not in writing.  So, the people at the 
table were now able to send in a statement "he agreed, CARS" to the 
secretary.  That's solid, although novel.

The result isn't your classical idea of web of trust, but it is a web, 
and you can trust the statements coming out, so I think we meet the real 
definition.  The reason it works is partly because we didn't try to 
catalogue what trust meant; instead we created a vector, a message, that 
can be used for anything ... but carries weight.


iang