Still trying to understand this in detail:

On Sat, Jul 10, 2010 at 9:46 PM, Blaine Cook <[email protected]> wrote:

> On 10 July 2010 13:26, Ted Smith <[email protected]> wrote:
> > It means that if your server (to be precise, your
> > core) is cracked, or subpoenaed by the MAFIAA/ACTA-Empowered Sharing
> > Police, it can give up no data that you haven't already decided is
> > public.
> >
> > I don't think that StatusNet GNU Social makes that guarantee, even when
> > it comes to private messaging. I would be very happy to be wrong.
>
> It doesn't, though servers are free to encrypt the data before and/or
> after it's sent. The same applies for email. Two thoughts:
>
> 1. I welcome experiments using P2P networks for social networks, but
> consider the human-level usability concerns. No matter what the
> underlying technology is, you need a human-level addressing system
> (the acid test for a good addressing scheme is the ability for one
> person to be able to write down on a scrap of paper an address at
> which someone else can contact them later). If you use webfinger (re:
> email-like addresses), you can maintain compatibility with mainline
> GNU Social, Status.net, Diaspora (i.e., OStatus), and Google Buzz
> while providing forwards-compatibility to stronger privacy-based
> networks*.
>

From: GNU social - Privatemessaging - Open wiki - Gitorious
http://gitorious.org/social/pages/Privatemessaging:

>
>    - If Bob hasn’t authenticated against Alice’s server, then Bob’s server
>    goes through the Webfinger auth process, generating a shared secret. If he
>    already has, he’ll already have such a secret.
>    - Bob’s server uses the shared secret from the Webfinger auth process
>    to retrieve Alice’s message.
>
So, as I understand it, this shared secret is simply a way of ensuring that
Bob is really Bob and Alice is really Alice, and that they know each other,
not a key that is used to encrypt messages between Alice and Bob. Correct?
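To make the distinction concrete, here is a minimal sketch of what an authentication-only shared secret buys you. The wiki page doesn't specify the exact mechanism, so the function names and request format below are assumptions; the point is that an HMAC over the request proves the requester holds the secret without the message payload ever being encrypted:

```python
import hashlib
import hmac


def sign_request(shared_secret: bytes, resource_url: str, nonce: str) -> str:
    """Sign a retrieval request with the Webfinger-derived shared secret.

    This proves possession of the secret (authentication), but the message
    Bob's server then retrieves is NOT encrypted by this step.
    """
    payload = f"{resource_url}\n{nonce}".encode()
    return hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()


def verify_request(shared_secret: bytes, resource_url: str,
                   nonce: str, signature: str) -> bool:
    """Alice's server recomputes the HMAC and compares in constant time."""
    expected = sign_request(shared_secret, resource_url, nonce)
    return hmac.compare_digest(expected, signature)
```

Anyone who later obtains the stored message (by cracking or subpoenaing the server) can read it; the secret only gates who may fetch it.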

If you go this far, why not take the extra step of encryption? Is that a
whole lot more complicated to do? What process are you using to
authenticate? Are you making use of public keys shared through Webfinger?
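On the last question: OStatus implementations did publish signing keys through Webfinger via the Salmon "magic-public-key" link relation, so discovering a correspondent's key for encryption could reuse the same lookup. A sketch of extracting such a key from a Webfinger JRD response (the account, key string, and JRD content below are made up for illustration):

```python
import json
from typing import Optional

# Hypothetical JRD (Webfinger) response for acct:[email protected].
# OStatus servers exposed a Salmon signing key under the
# "magic-public-key" rel; the key string here is a dummy placeholder.
jrd = json.loads("""
{
  "subject": "acct:[email protected]",
  "links": [
    {"rel": "magic-public-key",
     "href": "data:application/magic-public-key,RSA.mVgY.AQAB"}
  ]
}
""")


def find_key(jrd: dict, rel: str = "magic-public-key") -> Optional[str]:
    """Return the href of the first link with the given rel, if present."""
    for link in jrd.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None
```

Whether that signing key should double as an encryption key is a separate design question (reusing one key pair for both is generally discouraged), but the discovery path already exists.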


> 2. Your threat model is incomplete. The data you've shared is private
> not until *you* decide it's public, but until *someone you've shared
> the data with* decides it's public (or is forced to make it public).
> It's certainly true that the approach you describe is *more* secure
> than the default approach, but it's important to remember that it's
> not *completely* secure. Another way to think about this issue is to
> consider what (deployment / payload) approaches provide strong
> security over the default (OStatus-esque) approach, providing a local
> maximum of utility AND security?
>
> b.
>
> * There are approaches to using DHTs and either webs-of-trust or
> bootstrapping methods to provide trusted DNS-independent lookups for
> email addresses (and other addresses). See VIPR, MonkeySphere, and
> RedPhone for ideas.
>
>