current version of the text:
https://nusenu.github.io/tor-relay-operator-ids-trust-information/
> Some comments, in no particular order:
> Why not just put the keys in directly, or even a magnet link to your
> latest web of trust? That would remove the need to trust SSL CAs.
Since the spec does not mention keys, which keys do you mean?
Note that the level of indirection
trust information -> operator ID -> relay ID
is crucial. Anything that requires assigning trust to
individual relays does not really scale well,
and we trust relays largely by trusting their operators (and less on other factors).
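To illustrate that indirection with a rough sketch (hypothetical names and data, not the spec's actual format): a consumer assigns trust to operator IDs only, and a relay inherits that decision through the operator ID it belongs to, e.g. in Python:

# Hypothetical example of the indirection; operator IDs and fingerprints are made up.
# Trust is assigned per operator ID (domain), never per individual relay.
trusted_operator_ids = {"example-operator.org", "another-operator.net"}

# relay fingerprint -> operator ID the relay has proven to belong to
relay_to_operator = {
    "AAAA1111": "example-operator.org",
    "BBBB2222": "unknown-operator.example",
}

def relay_is_trusted(fingerprint):
    # a relay is trusted iff its operator ID is trusted; there are no per-relay entries
    return relay_to_operator.get(fingerprint) in trusted_operator_ids

print(relay_is_trusted("AAAA1111"))  # True
print(relay_is_trusted("BBBB2222"))  # False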
> What problems does this solve, specifically, and how? If I - me
> personally, not the generic I - wanted to spin up a relay, how would
> I do that?
> Would I go on this mailing list and ask random people to sign my
> relay? If so, it's not very useful.
> Or would I just run it without any signatures at all? If so, it's not
> very useful.
> The basic problem, I think, is the same as for PGP: it's not really
> clear what you're attesting to.
I've tried to make this clearer now and added a second point:
a TA asserts that
(1) a given operator ID is running relays without malicious intent, and
(2) they have met at least once in a physical setting (not just online)
https://github.com/nusenu/tor-relay-operator-ids-trust-information/blob/main/README.md#trust-anchor-ta
> If I sign my mate's
> relay,
Note: there is no manual signing and no trust at the individual relay level in
the spec.
> and then that relay turns out to be dodgy, do I also lose my
> relay operation privileges?
No, but you will likely lose people's trust in your ability to assert a third party's trust
level.
So if you were a TA for someone before, you will probably lose that ability, but it
is up to the consumer
of trust information to define their rules for which TAs to trust and how to respond to TA
"errors".
Thanks to your input I added support for negative trust configurations:
https://nusenu.github.io/tor-relay-operator-ids-trust-information/#negative-trust-configuration
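To make the consumer-side rules concrete, here is a rough sketch (made-up operator IDs and TA names, not the spec's file format) of how a consumer could combine positive trust information from multiple TAs with a negative trust entry that overrides it:

# Hypothetical consumer policy; the actual trust information format is defined in the spec.
positive = {
    "ta1.example": {"example-operator.org", "another-operator.net"},
    "ta2.example": {"example-operator.org"},
}

# negative trust configuration: operator IDs this consumer explicitly distrusts,
# no matter how many TAs list them
negative = {"another-operator.net"}

def operator_is_trusted(operator_id, min_tas=1):
    # consumer-defined rule: listed by at least min_tas TAs and not explicitly distrusted
    if operator_id in negative:
        return False
    return sum(operator_id in ids for ids in positive.values()) >= min_tas

print(operator_is_trusted("example-operator.org", min_tas=2))  # True
print(operator_is_trusted("another-operator.net"))             # False (negative entry wins)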
> If you're going to do it in a "machine-friendly" manner, then I
> suppose you have to come up with some kind of formalized notion of
> what trust represents, maybe have some numerical scale so you can
> define (just as an example) 100 = "I've personally audited the
> hardware", 70 = "This is an organization I trust", 10 = "I know who
> this person is, it's not just a fresh hotmail".
Currently, by publishing an operator ID (= domain) a TA
only claims that "this operator runs relays without malicious intent" and that
they have met at least once.
It does not say anything about the operational security practices of an
operator.
Having a granularity of 100 steps to denote the trust level is too much in my
opinion.
Let's keep it simple.
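To make the contrast concrete, a small sketch (made-up names; the consumer-side counting is only one possible convention, not part of the spec): the claim a TA publishes per operator ID is effectively boolean, and any finer granularity would have to be derived by the consumer, for example by counting how many independent TAs list the same operator:

# What a TA effectively asserts per operator ID: no numeric score, just the two claims.
ta_claim = {
    "operator_id": "example-operator.org",
    "no_malicious_intent": True,
    "met_in_person": True,
}

# One possible consumer-side convention (not part of the spec): count how many
# independent TAs list the same operator ID and use that as a coarse level.
listed_by = {"ta1.example": True, "ta2.example": True, "ta3.example": False}
consumer_level = sum(listed_by.values())  # 2
print(consumer_level)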
> Anyway, if you're going to do that, it might also be reasonable to
> hook into a pre-existing web of trust, like GPG or something. That
> way, we can encode stuff like "I trust my mate Alice, she isn't a
> relay operator, she trusts Bob, who is, therefore I transitively
> trust Bob."
I don't think there is much benefit in using existing GPG signatures, because
signatures on GPG keys only make claims about identities; they do not make any
claims about
non-malicious relay operator intentions. Malicious operators are willing to go
quite far, as we see in practice. Finding a poor person who is willing to go to
the next GPG key signing event for money would be trivial for them, I guess.
kind regards,
nusenu
--
https://nusenu.github.io