Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Mike Perry
Thus spake Robert Ransom (rransom.8...@gmail.com):

> On Thu, 23 Jun 2011 11:19:45 -0700
> Mike Perry  wrote:
> 
> > So perhaps Torbutton controlled per-tab proxy username+password is the
> > best option? Oh man am I dreading doing that... (The demons laugh
> > again.)
> 
> If you do this, you will need to give the user some indication of each
> tab's ‘compartment’, and some way to move tabs between compartments.
>
> Coloring each tab to indicate its compartment may fail for anomalous
> trichromats like me and *will* fail for more thoroughly colorblind
> users.  Putting a number or symbol in each tab will confuse most users.
> 
> I suggest one compartment per browser window.  (Of course, you can and
> should leave more detailed hooks in the browser's source if possible,
> in case someone wants to experiment with a different scheme.)

As soon as I sent the previous email, I wanted to edit it to change
"per-tab" to something else.  I think any kind of per-tab and
per-window isolation does not correspond to how people have been
trained to use their existing browsers.

In fact, I think we should also treat this linkability just like we
treat window.name and the referer. So, how about we set the Proposal 171
SOCKS username to a function of the hostname in the referer header
(possibly caching the first referer for subsequent link navigation)? If
the referer is blank, use the request URL hostname. This policy should
effectively give us the top-level origin isolation we want for other
identifiers.
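
(To make that concrete, here is a rough Python sketch of the
username-selection rule, with the first-referer caching left out. It
assumes only what Proposal 171 proposes, namely that streams carrying
different SOCKS usernames land on separate circuits; the function itself
is illustrative, not an existing Torbutton interface.)

    from urllib.parse import urlsplit

    def socks_username(request_url, referer_url=None):
        """Pick the per-stream SOCKS username (illustrative sketch).

        Under Proposal 171-style isolation, streams that carry
        different usernames would be kept on different circuits."""
        source = referer_url or request_url  # blank referer: use request URL
        return urlsplit(source).hostname or "unknown"

    # Third-party fetch embedded in a page on news.example.org:
    print(socks_username("http://cdn.example.com/img.png",
                         referer_url="http://news.example.org/story"))
    # Direct navigation, no referer:
    print(socks_username("http://cdn.example.com/img.png"))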


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs




Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Robert Ransom
On Thu, 23 Jun 2011 11:19:45 -0700
Mike Perry  wrote:

> So perhaps Torbutton controlled per-tab proxy username+password is the
> best option? Oh man am I dreading doing that... (The demons laugh
> again.)

If you do this, you will need to give the user some indication of each
tab's ‘compartment’, and some way to move tabs between compartments.

Coloring each tab to indicate its compartment may fail for anomalous
trichromats like me and *will* fail for more thoroughly colorblind
users.  Putting a number or symbol in each tab will confuse most users.

I suggest one compartment per browser window.  (Of course, you can and
should leave more detailed hooks in the browser's source if possible,
in case someone wants to experiment with a different scheme.)


Robert Ransom


Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Mike Perry
Thus spake Robert Ransom (rransom.8...@gmail.com):

> On Thu, 23 Jun 2011 10:10:35 -0700
> Mike Perry  wrote:
> 
> > Thus spake Georg Koppen (g.kop...@jondos.de):
> > 
> > > > If you maintain two long sessions within the same Tor Browser Bundle
> > > > instance, you're screwed -- not because the exit nodes might be
> > > > watching you, but because the web sites' logs can be correlated, and
> > > > the *sequence* of exit nodes that your Tor client chose is very likely
> > > > to be unique.
> > 
> > I'm actually not sure I get what Robert meant by this statement. In
> > the absence of linked identifiers, the sequence of exit nodes should
> > not be visible to the adversary. It may be unique, but what allows the
> > adversary to link it to actually track the user? Reducing the
> > linkability that allows the adversary to track this sequence is what
> > the blog post is about...
> 
> By session, I meant a sequence of browsing actions that one web site
> can link.  (For example, a session in which the user is authenticated
> to a web application.)  If the user performs two or more distinct
> sessions within the same TBB instance, the browsing actions within
> those sessions will use very similar sequences of exit nodes.
> 
> The issue is that two different sites can use the sequences of exit
> nodes to link a session on one site with a concurrent session on
> another.

Woah, we're in the hinterlands, tread carefully :).

When performed by websites, this attack assumes a certain duration of
concurrent use that is sufficient to disambiguate the entire user
population. It also assumes exact concurrent use, or the error starts
to go up at an unknown, population-size-dependent rate.

However, when performed by the exits, this linkability is a real
concern. Let's think about that. That sounds more like our
responsibility than the browser makers'. Now I think I see what Georg
was getting at. We didn't mention this because the blog post was
directed towards the browser makers.

I've actually been pondering the exit side of this attack for years,
but, for various reasons, we've never come to a good conclusion about
which solution to deploy. There are impasses in every direction.

Observe:

Does this mean we want a more automatic version of Proposal 171,
something like Robert Hogan proposed? Something per-IP or per
top-level domain name? That is what I've historically argued for, but
I keep getting told it will consume too many circuits and help
bittorrent users (though we have recently discovered how to throttle
those motherfuckers, so perhaps we should just do that).

Or does this mean that Torbutton should be handing different SOCKS
usernames+passwords down to the SOCKS proxy per tab? This latter piece
is very hard to do, it turns out. SOCKS usernames and passwords are not
supported by the Firefox APIs, but that is actually the easy part, now
that we have control over the source.
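
(For the record, passing the credentials is plain RFC 1929 on the wire;
the only Tor-specific idea is that, per Proposal 171, the
username/password pair doubles as an isolation key whose value Tor
otherwise ignores. The sketch below is illustrative Python, not code
from Tor or Torbutton, and it glosses over partial reads.)

    import socket, struct

    def socks5_connect(proxy_host, proxy_port, dest_host, dest_port,
                       username, password):
        """Open a SOCKS5 stream, authenticating with RFC 1929
        username/password; the credentials act as the isolation key."""
        s = socket.create_connection((proxy_host, proxy_port))
        s.sendall(b"\x05\x01\x02")       # offer username/password auth only
        _ver, method = s.recv(2)         # sketch: assumes full replies
        if method != 0x02:
            raise OSError("proxy refused username/password auth")
        u, p = username.encode(), password.encode()
        s.sendall(b"\x01" + bytes([len(u)]) + u + bytes([len(p)]) + p)
        if s.recv(2)[1] != 0x00:
            raise OSError("authentication failed")
        d = dest_host.encode()           # CONNECT by domain name (ATYP 0x03)
        s.sendall(b"\x05\x01\x00\x03" + bytes([len(d)]) + d +
                  struct.pack(">H", dest_port))
        if s.recv(10)[1] != 0x00:
            raise OSError("connect request refused")
        return s

    # e.g. one tab's streams:  socks5_connect("127.0.0.1", 9050,
    #                                         "example.com", 80, "tab-42", "x")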

The harder problem is the FoxyProxy API problem. The APIs to do this
type of proxy tracking don't exist, and they don't exist because of
Firefox architectural problems. But maybe there's a bloody hack to
the source that we can do because we just don't give a damn about
massively violating their architecture to get exactly what we want in
the most expedient way. Maybe.

I still think Tor should just do this, though. Every app should be
made unlinkable by a simple policy there by default, and we should
just rate limit it if it gets too intense (similar to NEWNYM rate
limiting).
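
(A minimal sketch of what "rate limit it" could mean: allow a circuit
rotation for a given isolation key at most once every ten seconds or so,
roughly the way NEWNYM requests are already rate limited. The class and
the exact interval are illustrative, not anything Tor implements for
this today.)

    import time

    class MinIntervalLimiter:
        """Allow an expensive action (here: rotating circuits for an
        isolation key) at most once per `interval` seconds per key."""
        def __init__(self, interval=10.0):
            self.interval = interval
            self._last = {}

        def allow(self, key):
            now = time.monotonic()
            last = self._last.get(key)
            if last is not None and now - last < self.interval:
                return False             # too soon; keep existing circuits
            self._last[key] = now
            return True

    limiter = MinIntervalLimiter()
    print(limiter.allow("example.com"))  # True: rotate
    print(limiter.allow("example.com"))  # False: rate limited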


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs




Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Mike Perry
Thus spake Mike Perry (mikepe...@fscked.org):

> Thus spake Robert Ransom (rransom.8...@gmail.com):
> 
> > On Thu, 23 Jun 2011 10:10:35 -0700
> > Mike Perry  wrote:
> > 
> > > Thus spake Georg Koppen (g.kop...@jondos.de):
> > > 
> > > > > If you maintain two long sessions within the same Tor Browser Bundle
> > > > > instance, you're screwed -- not because the exit nodes might be
> > > > > watching you, but because the web sites' logs can be correlated, and
> > > > > the *sequence* of exit nodes that your Tor client chose is very likely
> > > > > to be unique.
> > > 
> > > I'm actually not sure I get what Robert meant by this statement. In
> > > the absence of linked identifiers, the sequence of exit nodes should
> > > not be visible to the adversary. It may be unique, but what allows the
> > > adversary to link it to actually track the user? Reducing the
> > > linkability that allows the adversary to track this sequence is what
> > > the blog post is about...
> > 
> > By session, I meant a sequence of browsing actions that one web site
> > can link.  (For example, a session in which the user is authenticated
> > to a web application.)  If the user performs two or more distinct
> > sessions within the same TBB instance, the browsing actions within
> > those sessions will use very similar sequences of exit nodes.
> > 
> > The issue is that two different sites can use the sequences of exit
> > nodes to link a session on one site with a concurrent session on
> > another.
> 
> Woah, we're in the hinterlands, tread carefully :).
>
> I still think Tor should just do this, though. Every app should be
> made unlinkable by a simple policy there by default, and we should
> just rate limit it if it gets too intense (similar to NEWNYM rate
> limiting).

Arg. The demons in my head just told me that there exists a stupid
mashup web-app out there just waiting to ruin our day if we do this in
Tor without browser interaction. The demons tell me at least one
stupid banking or shopping-cart site checks to make sure both the IP
address and the cookies match for all pieces of the app to work
together across domains. I think the demons are right. I think this is
why we created TrackHostExits, but the demons just laugh and tell me
that the hosts are not the same in this case.

So perhaps Torbutton controlled per-tab proxy username+password is the
best option? Oh man am I dreading doing that... (The demons laugh
again.)


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs




Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Robert Ransom
On Thu, 23 Jun 2011 10:10:35 -0700
Mike Perry  wrote:

> Thus spake Georg Koppen (g.kop...@jondos.de):
> 
> > > If you maintain two long sessions within the same Tor Browser Bundle
> > > instance, you're screwed -- not because the exit nodes might be
> > > watching you, but because the web sites' logs can be correlated, and
> > > the *sequence* of exit nodes that your Tor client chose is very likely
> > > to be unique.
> 
> I'm actually not sure I get what Robert meant by this statement. In
> the absence of linked identifiers, the sequence of exit nodes should
> not be visible to the adversary. It may be unique, but what allows the
> adversary to link it to actually track the user? Reducing the
> linkability that allows the adversary to track this sequence is what
> the blog post is about...

By session, I meant a sequence of browsing actions that one web site
can link.  (For example, a session in which the user is authenticated
to a web application.)  If the user performs two or more distinct
sessions within the same TBB instance, the browsing actions within
those sessions will use very similar sequences of exit nodes.


> Or are we assuming that the predominant use case is for a user to
> continually navigate only by following links for the duration of their
> session (thus being tracked by referer across circuits and exits), as
> opposed to entering new urls frequently?
> 
> I rarely follow a chain of links for very long. I'd say my mean
> link-following browsing session lifetime is waay, waay below the Tor
> circuit lifetime of 10min. Unless I fall into a wikipedia hole and
> don't stop until I hit philosophy... But that is all the same site,
> which can link me with temporary cache or session cookies.

The issue is that two different sites can use the sequences of exit
nodes to link a session on one site with a concurrent session on
another.


Robert Ransom


Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Mike Perry
Thus spake Georg Koppen (g.kop...@jondos.de):

> > If you maintain two long sessions within the same Tor Browser Bundle
> > instance, you're screwed -- not because the exit nodes might be
> > watching you, but because the web sites' logs can be correlated, and
> > the *sequence* of exit nodes that your Tor client chose is very likely
> > to be unique.

I'm actually not sure I get what Robert meant by this statement. In
the absence of linked identifiers, the sequence of exit nodes should
not be visible to the adversary. It may be unique, but what allows the
adversary to link it to actually track the user? Reducing the
linkability that allows the adversary to track this sequence is what
the blog post is about...

Or are we assuming that the predominant use case is for a user to
continually navigate only by following links for the duration of their
session (thus being tracked by referer across circuits and exits), as
opposed to entering new urls frequently?

I rarely follow a chain of links for very long. I'd say my mean
link-following browsing session lifetime is waay, waay below the Tor
circuit lifetime of 10min. Unless I fall into a wikipedia hole and
don't stop until I hit philosophy... But that is all the same site,
which can link me with temporary cache or session cookies.

Are my browsing habits atypical?

> Ah, okay, I did not know that. Thanks for that information. I was just
> wondering how the proposed changes to the private browsing mode would
> keep users from being tracked by exit mixes (as the blog post claimed).

See my other reply for a response to this question.



-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs




Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Mike Perry
Thus spake Georg Koppen (g.kop...@jondos.de):

> >> And why having again add-ons that can probably be toggled on/off and
> >> are thus more error-prone than just having an, say, Tor anon mode?
> >> Or is this already included in the Tor anon mode but only separated
> >> in the blog post for explanatory purposes?
> > 
> > If we operate by upgrading private browsing mode, we'll effectively
> > have the "toggle" in a place where users have already been trained by
> > the UI to go for privacy. Torbutton would become an addon that is only
> > active in private browsing mode. 
> 
> Okay. That means there is no additional toggling of Torbutton in this
> enhanced private mode. The user just enters it, and Torbutton is running
> and doing its job; if the user does not want it anymore, she does not
> toggle anything but simply leaves this enhanced private browsing mode, and
> that's it, right?

That's correct. If the user wants their regular private browsing mode
back, they would presumably uninstall the extension.

> >> If one user requests
> >> google.com, mail.google.com and other Google services within the
> >> 10-minute interval (I am simplifying here a bit) without deploying TLS,
> >> the exit is still able to connect the whole activity and "sees" which
> >> services that particular user is requesting/using. Even worse, if the
> >> browser session is quite long, there is a chance of recognizing that
> >> user again if she happens to have the same exit mix more than once.
> >> Thus, I do not see how that helps avoid linkability for users who
> >> need/want strong anonymity while surfing the web. It would be good to
> >> get that
> >> explained in some detail. Or maybe I am missing a point here.
> > 
> > We also hope to provide a "New Identity" functionality to address the
> > persistent state issue, but perhaps this also should be an explicit
> > responsibility of the mode rather than the addon..
> 
> Hmmm... If that is the answer to my questions, then the concept offered
> in the blog post does nothing to avoid being tracked by exit mixes.
> Okay.

That is not entirely true. Because identifiers would be linked to the
top-level urlbar domain, gone are the days when exits could insert an
iframe or web bug into an arbitrary page and use that to track the user
for the duration of the session, regardless of which pages are viewed.

Instead, they would be pushed back to doing some sort of top-level
redirect (which we hope would be way more visible), or maybe not even
that, depending on how we define redirects with respect to
"top-level".

So no, we are not completely abandoning exits as an adversary with
this threat model. If I'm wrong about something, or you think there
are still attacks exits can perform that we should address somehow,
let me know.
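
(A toy illustration of the keying, not Torbutton's actual storage code:
if every identifier is stored under a (top-level urlbar domain, origin)
pair, whatever a third party writes while embedded in one site simply
does not exist when it is embedded in another.)

    class DoubleKeyedJar:
        """Identifier storage keyed by top-level site plus setting origin."""
        def __init__(self):
            self._store = {}

        def set(self, top_level, origin, name, value):
            self._store[(top_level, origin, name)] = value

        def get(self, top_level, origin, name):
            return self._store.get((top_level, origin, name))

    jar = DoubleKeyedJar()
    # A web bug injected into a page on site-a.com stores an ID...
    jar.set("site-a.com", "tracker.example", "id", "12345")
    # ...but sees nothing when injected into site-b.com.
    print(jar.get("site-b.com", "tracker.example", "id"))   # None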

> How should the "New Identity" functionality work? Is
> that identity generated automatically after a certain amount of time has
> passed or does a user have to click manually on a button every time?

I don't know the answer here. This may vary by browser and use case.
For a communications-suite style use case, I think we probably want to
detect inactivity and ask the user if they want to clear state,
because communications-suites are heavy and a pain to relaunch (hence
once opened, they probably will stay open).

For something lighter, like Chrome's Incognito, we may just rely on
the user to leave the mode. This divergence is one of the reasons I
didn't mention the feature in the blog post. 

If you want to track what solution we ultimately deploy for TBB, here
is the ticket you should follow:
https://trac.torproject.org/projects/tor/ticket/523
 
> >> Assuming I understood TorButton's
> >> Smart-Spoofing option properly: Why is it not applied to the
> >> referer/window.name anymore? In other words: Why is the referer (and
> >> window.name) not kept if the user surfs within one domain (let's say
> >> from example.com to foo.example.com and then to foo.bar.example.com)?
> > 
> > I don't really understand this question. The referer should be kept in
> > these cases.
> 
> That sounds good. Then we probably had just different concepts of SOP in
> mind. I was thinking about
> http://tools.ietf.org/html/draft-abarth-origin-09 (see: section 3 and
> 4). That would treat http://example.com, http://foo.example.com and
> http://foo.bar.example.com as different origins (let alone mixing
> "http://"; and "https://"; and having different ports).

Yeah. The reality is we're basically picking an arbitrary heuristic
for squelching this information channel to find some sweet spot that
minimizes breakage for maximal gain. True same-origin policy may or
may not be relevant here.

Since I personally believe any heuristic squelch is futile against bad
actors, I haven't thought terribly hard about the best "sweet spot"
policy. I just took what Kory Kirk came up with for a GSoC project and
tweaked it slightly to make it symmetric:
https://trac.torproject.org/projects/tor/ticket/2148
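
(Purely for illustration, one reading of a "symmetric" policy is
sketched below: the referer survives only when source and destination
share a base domain, whichever direction the navigation goes. This is
not the actual #2148 patch, and the base-domain check is naive; a real
one would consult the public suffix list.)

    def base_domain(host):
        # Naive eTLD+1, good enough for a sketch.
        parts = host.lower().rstrip(".").split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    def referer_to_send(source_host, dest_host, referer):
        """Suppress the referer across base domains, keep it within one."""
        if base_domain(source_host) == base_domain(dest_host):
            return referer
        return ""                          # squelched

    print(referer_to_send("foo.example.com", "example.com",
                          "http://foo.example.com/page"))   # kept
    print(referer_to_send("example.com", "other.net",
                          "http://example.com/page"))       # suppressed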

This policy will appe

Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Georg Koppen
> Additionally, we expect that fingerprinting resistance will be an
> ongoing battle: as new browser features are added, new fingerprinting
> defenses will be needed. Furthermore, we'll likely be inclined to
> deploy unproven but better-than-nothing fingerprinting defenses (so
> long as they don't break much), whereas the browser vendors may be
> more conservative on this front, too.

Yes, that seems likely.

>> And why having again add-ons that can probably be toggled on/off and
>> are thus more error-prone than just having an, say, Tor anon mode?
>> Or is this already included in the Tor anon mode but only separated
>> in the blog post for explanatory purposes?
> 
> If we operate by upgrading private browsing mode, we'll effectively
> have the "toggle" in a place where users have already been trained by
> the UI to go for privacy. Torbutton would become an addon that is only
> active in private browsing mode. 

Okay. That means there is no additional toggling of Torbutton in this
enhanced private mode. The user just enters it, and Torbutton is running
and doing its job; if the user does not want it anymore, she does not
toggle anything but simply leaves this enhanced private browsing mode, and
that's it, right?

> We also expect that if browser vendors become serious enough about
> privacy, they will be the ones who deal with all the linkability
> issues between the private and non-private states, not us.

Yes, that would be really helpful.

>> If one user requests
>> google.com, mail.google.com and other Google services within the
>> 10-minute interval (I am simplifying here a bit) without deploying TLS,
>> the exit is still able to connect the whole activity and "sees" which
>> services that particular user is requesting/using. Even worse, if the
>> browser session is quite long, there is a chance of recognizing that
>> user again if she happens to have the same exit mix more than once.
>> Thus, I do not see how that helps avoid linkability for users who
>> need/want strong anonymity while surfing the web. It would be good to
>> get that
>> explained in some detail. Or maybe I am missing a point here.
> 
> We also hope to provide a "New Identity" functionality to address the
> persistent state issue, but perhaps this also should be an explicit
> responsibility of the mode rather than the addon..

Hmmm... If that is the answer to my questions, then the concept offered
in the blog post does nothing to avoid being tracked by exit mixes.
Okay. How should the "New Identity" functionality work? Is
that identity generated automatically after a certain amount of time has
passed or does a user have to click manually on a button every time?

>> Assuming I understood TorButton's
>> Smart-Spoofing option properly: Why is it not applied to the
>> referer/window.name anymore? In other words: Why is the referer (and
>> window.name) not kept if the user surfs within one domain (let's say
>> from example.com to foo.example.com and then to foo.bar.example.com)?
> 
> I don't really understand this question. The referer should be kept in
> these cases.

That sounds good. Then we probably had just different concepts of SOP in
mind. I was thinking about
http://tools.ietf.org/html/draft-abarth-origin-09 (see: section 3 and
4). That would treat http://example.com, http://foo.example.com and
http://foo.bar.example.com as different origins (let alone mixing
"http://"; and "https://"; and having different ports).

> Neither of these properties is really an identifier (yes yes,
> window.name can store identifiers, but it is more than that). Both are
> more like cross-page information channels.

Agreed, although the distinction is somewhat blurred here.

> Hence it doesn't make sense to "clear" them like cookies. Instead, it
> makes more sense to prohibit information transmission through them in
> certain cases.

I am not sure about that, as "clearing" them for *certain contexts* seems
a good means to prohibit information transmission *in those contexts*:
if there isn't any information, it cannot be transmitted (at least not by
referer or window.name).

> I believe the cases where you want to prohibit the
> information transmission end up being the same for both of these
> information channels.

Yes, that's true.

> To respond to your previous paragraph, it is debatable exactly how
> strict a policy we want here, but my guess is that for Tor, we have
> enough IP unlinkability such that the answer can be "not very", in
> favor of not breaking sites that use these information channels
> legitimately.
> 
> The fact is that other information channels exist for sites to
> communicate information about visitors to their 3rd party content. If
> you consider what you actually *can* restrict in terms of information
> transmission between sites and their 3rd party elements, the answer is
> "not much".
> 
> So in my mind, it becomes a question of "What would you be actually
> preventing by *completely disabling* referers (and window.name)

Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Georg Koppen
> If you maintain two long sessions within the same Tor Browser Bundle
> instance, you're screwed -- not because the exit nodes might be
> watching you, but because the web sites' logs can be correlated, and
> the *sequence* of exit nodes that your Tor client chose is very likely
> to be unique.

Ah, okay, I did not know that. Thanks for that information. I was just
wondering how the proposed changes to the private browsing mode would
keep users from being tracked by exit mixes (as the blog post claimed).

Georg


