Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-10 Thread Yawning Angel
On Mon, 10 Apr 2017 19:35:24 +0400
meejah  wrote:
> Obviously as per my other post I agree with fragmented / limited views
> given to "real" applications of the control-port. However, personally
> I don't see the point of implementing this in 'tor' itself -- existing
> control-port filters are "fairly" limited code, typically in "safer
> than C" languages anyway. So then you have the situation where
> there's a single trusted application (the filter) connected to the Tor
> control-port.

I agree with this, because it's basically required to do certain
things, and for certain adversarial models.

> Ultimately, it would probably be best if there was "a" robust
> control-port filter that shipped as part of a Tor release. So if that
> means "must implement it in C inside Tor" I guess so be it.

I moderately disagree with this.  It's not clear to me that a
one-size-fits-all solution (one that supports all "first class
platforms" and use cases) would be easy to develop initially, and it
would take continuous love and care to support everything that people
want to do.

By "first class" platforms in this context (since it's more client
facing) I'll start off with "Whatever Tor Browser happens to be
packaged for" as a first pass narrow definition.

Even if this were shipped, I'm trying to keep the external dependencies
required for correct sandbox functionality to a minimum, and something
that's part of the bundle the sandbox downloads/auto-updates doesn't
feel great to me.

> Maybe this would be a good target for "experiment with Rust" if
> anyone's excited about writing control-port code in Rust...?

I disagree with this, but since it'll never be used by the sandbox, my
disagreement shouldn't stop anyone.

-- 
Yawning Angel




Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-10 Thread meejah
anonym  writes:

>> It allows "GETINFO onions/current", which can expose a list of every
>> onion service locally hosted, even those not launched through
>> onionshare.

> I think this can be disallowed; in fact, when looking at the
> onionshare and stem sources I don't see why this would ever be used by
> onionshare.

I may have said this already, but I think the original comment is wrong:
this only lists ephemeral onions created by the current control
connection, so I don't believe there's any information leak here anyway.

> BTW, I guess a `restrict-onion-view` would also make sense for HS_DESC
> events [..]

Yes, I think this would be good. To determine if a control-connection
owns an onion or not, I think you could either use "GETINFO
onions/current" (to ask Tor) or just remember the answers from any
ADD_ONION on "this" connection (and then match against the args in the
HS_DESC event).
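
As a rough Python sketch of that second option (hypothetical helper
names, not OnionShare's or any filter's actual code): remember the
ServiceIDs from ADD_ONION replies on "this" connection, then drop
HS_DESC events for any other onion.

    owned_onions = set()

    def record_add_onion_reply(reply_text):
        # Successful ADD_ONION replies contain "250-ServiceID=<address>".
        for line in reply_text.splitlines():
            if line.startswith('250-ServiceID='):
                owned_onions.add(line.split('=', 1)[1])

    def allow_hs_desc_event(event_line):
        # HS_DESC events look like "650 HS_DESC <action> <address> ...";
        # pass the event through only if the onion is one "we" created.
        parts = event_line.split()
        return len(parts) >= 4 and parts[3] in owned_onions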

If the filter is re-started, all the control connections will be lost,
at which point any non-"detached" onions will vanish anyway.

> Imagine that ControlPort can take a "RestrictedView" flag. When set,
> controllers will get a view of Tor's state (streams, circuits, onions
> etc) restricted to what "belongs" to them, e.g. it only sees streams
> for connections itself made via the SocksPort. Tor would then have to
> internally track who these things belong to, which could be done by
> PID, which is pretty weak, but I bet there are more convincing ways.

Obviously as per my other post I agree with fragmented / limited views
given to "real" applications of the control-port. However, personally I
don't see the point of implementing this in 'tor' itself -- existing
control-port filters are "fairly" limited code, typically in "safer than
C" languages anyway. So then you have the situation where there's a
single trusted application (the filter) connected to the Tor
control-port.

Ultimately, it would probably be best if there was "a" robust
control-port filter that shipped as part of a Tor release. So if that
means "must implement it in C inside Tor" I guess so be it.

Maybe this would be a good target for "experiment with Rust" if anyone's
excited about writing control-port code in Rust...?

-- 
meejah


Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-10 Thread anonym
Nick Mathewson:
> Hi!
> 
> As you may know, the Tor control port assumes that if you can
> authenticate to it, you are completely trusted with respect to the Tor
> instance you have authenticated to.  But there are a few programs and
> tools that filter access to the Tor control port, in an attempt to
> provide more restricted access.
> 
> When I've been asked to think about including such a feature in Tor in
> the past, I've pointed out that while filtering commands is fairly
> easy, defining a safe subset of the Tor control protocol is not.  The
> problem is that many subsets of the control port protocol are
> sufficient for a hostile application to deanonymize users in
> surprising ways.
> 
> But I could be wrong!  Maybe there are subsets that are safer than others.
> 
> Let me try to illustrate. I'll be looking at a few filter sets for example.
[...]
> Filters from 
> https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d

Small note: we've renamed tor-controlport-filter to onion-grater, to not 
infringe on the Tor trademark. :) 

> 1. onioncircuits.yml
> 
> See onioncircuits.json above; it allows the same GETINFO stuff.

The whole point of onioncircuits is to present all Tor circuit/stream state to 
the users since they (IIRC) feel that Tor is too opaque without this (and I'm 
sure the Tor Browser added its per-tab circuit view for similar reasons). In 
other words, the point of onioncircuits *is* to expose this information. Hence 
I guess this all boils down to balancing the security consequences of this (e.g. 
user compromise => full Tor state leak) vs the desired transparency.

As for Tails, my impression of our current threat model here is that we don't 
protect against the main user being compromised, so we certainly won't 
sacrifice the transparency desired by our users to block this leak -- there are 
probably equally bad leaks around already, so that sacrifice would be pointless. 
But we are incrementally working towards this by limiting information leaks 
(e.g. control port filtering) and sandboxing applications to protect against 
full user compromise, so we *do* care about these things. Once we feel we can 
start caring about this for real, we'll have to revisit this point.

> 2. onionshare.yml
> 
> As above, appears to allow HS_DESC events.

Explanation: modern (ADD_ONION instead of SETCONF HiddenService{Dir,Port}) 
onionshare uses stem's create_ephemeral_hidden_service() with 
`await_publication = True`, which means waiting for the corresponding HS_DESC 
event. I believe the roflcoptor filter was written for the "old" onionshare 
only. 
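
For reference, the stem call in question looks roughly like this (a
sketch; the ports and usage are made up, but
`create_ephemeral_hidden_service` and `await_publication` are stem's
real API):

    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Blocks until the descriptor is published, which is why
        # onionshare's filter must let HS_DESC events through.
        service = controller.create_ephemeral_hidden_service(
            {80: 8080}, await_publication=True)
        print(service.service_id)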

> It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.

I think this can be disallowed; in fact, when looking at the onionshare and 
stem sources I don't see why this would ever be used by onionshare.

> 3. tor-browser.yml
> 
> As "tbb.json" above.

Not quite! As intrigeri pointed out, this filter sets `restrict-stream-events: 
true` which gives what meejah called a "limited view" of the STREAM events, 
namely only those "belonging" to the client/controller (implementation: for 
each event, look up which PID has opened the socket with the event's source 
address/port, then match PIDs to determine whether the event should be 
suppressed; a sketch of that lookup follows).

So, how bad is "GETINFO circuit-status" with only the "limited" STREAM view)? 
Well, by knowing all circuits' exit nodes an attacker that also observes the 
traffic of these exit nodes knows a bit more than what we are comfortable with. 
:/

I guess treating "GETINFO circuit-status" specially with a 
`restrict-circuit-status` option that, when set, suppresses circuits that 
doesn't have any stream belonging to the client. But the same goes for CIRC 
events and "GETINFO stream-status", so, in fact, what about these options:

* restrict-circuit-view: when enabled:
  - "GETINFO circuit-status" will only show circuits that has some stream 
attached that belongs to the controller.
  - CIRC events are dropped unless some stream attached to the circuit in 
question belongs to the controller.
* restrict-stream-view (replacing the current `restrict-stream-events`):
  - "GETINFO stream-status" will only show streams belonging to the controller.
  - STREAM events are dropped unless they belong to the controller.
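
As a small sketch of what `restrict-circuit-view` could do (hypothetical
Python, whether it lives in the filter or in tor itself), given a set of
circuit IDs that carry the controller's streams (maintainable from the
already-filtered STREAM events, whose third field is the circuit ID):

    def filter_circuit_status(reply_lines, owned_circuit_ids):
        # Each circuit-status entry starts with the circuit ID, e.g.
        # "5 BUILT $fp1,$fp2,$fp3 PURPOSE=GENERAL"; keep only circuits
        # with at least one stream belonging to the controller.
        return [line for line in reply_lines
                if line.split(' ', 1)[0] in owned_circuit_ids]

    def allow_circ_event(event_line, owned_circuit_ids):
        # "650 CIRC <id> <status> ..." -- the same ownership test.
        parts = event_line.split()
        return len(parts) >= 3 and parts[2] in owned_circuit_ids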

Does this make sense? What other bits of sensitive internal Tor state 
accessible to controllers have I missed?

BTW, I guess a `restrict-onion-view` would also make sense for HS_DESC events 
and "GETINFO onions/current", but I see no general way to learn what 
application an onion "belongs" to. The filter could keep track of it, but such 
tracking would be lost if restarted (and not tracked at all if the onion was 
added before the filter started). A general solution would depend on little-t 
tor tracking this information, e.g. the PID of the controller that asked for an 
onion to 

Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-08 Thread dawuud

> Yes, that is necessary.  I question, however, whether it is sufficient.

Sufficient for what purpose?

It *is* sufficient for the purpose of preventing Subgraph sandboxed
applications from escaping their sandbox via the Tor control
port. Actually, one of the Subgraph guys figured this out, and that's
why they wanted a Tor control port filter.

I can see how our intentions for this tool (roflcoptor) could have
been misleading since we never explicitly/publicly stated the above as
the motivation for tor control port filtration.

Now that the other "Tor integrated Linux distributions" have more or
less caught up with Subgraph, I feel comfortable telling people how
easy it is to get tor to run arbitrary programs via the control port.

Looks like, as per usual, Yawning Angel did the exact correct thing and
made the hardened Tor Browser bundle filter the control port to
disallow SETCONF.  Further, he mentioned to me on IRC that the tor
process is also sandboxed... so yeah, that sounds thorough and proper.


cheers from Montreal!

David Stainton





Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-04 Thread Damian Johnson
Hi Nick. Just a quick note that something I've wanted from time to
time is a 'make the control port read-only' option so only GETINFO,
GETCONF, events, etc. would work. Yes, these could be used to
deanonymize a user, but it could provide assurance that the controller
doesn't tamper with tor. This has been of interest to me since nyx
(aka arm) is primarily a read-only monitor, and this could provide
users with an assurance that it's not doing anything to their tor
instance.

Besides that, 'make the control port read-only' is a pretty
straightforward, simple-to-understand capability for a torrc option to
have.
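
As a minimal sketch (mine, not anything tor implements today) of what
"read-only" could mean at the filter level, only command verbs that
merely read state would get through:

    # SETEVENTS only changes which events the controller receives, so
    # it is arguably still "read-only" in effect.
    READ_ONLY_VERBS = {'GETINFO', 'GETCONF', 'SETEVENTS', 'PROTOCOLINFO',
                       'AUTHENTICATE', 'AUTHCHALLENGE', 'QUIT'}

    def is_read_only(command_line):
        # The verb is the first word of a control-port command line.
        verb = command_line.strip().split(' ', 1)[0].upper()
        return verb in READ_ONLY_VERBS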

Cheers! -Damian

On Mon, Apr 3, 2017 at 11:41 AM, Nick Mathewson  wrote:
> Hi!
>
> As you may know, the Tor control port assumes that if you can
> authenticate to it, you are completely trusted with respect to the Tor
> instance you have authenticated to.  But there are a few programs and
> tools that filter access to the Tor control port, in an attempt to
> provide more restricted access.
>
> When I've been asked to think about including such a feature in Tor in
> the past, I've pointed out that while filtering commands is fairly
> easy, defining a safe subset of the Tor control protocol is not.  The
> problem is that many subsets of the control port protocol are
> sufficient for a hostile application to deanonymize users in
> surprising ways.
>
> But I could be wrong!  Maybe there are subsets that are safer than others.
>
> Let me try to illustrate. I'll be looking at a few filter sets for example.
> =
> Filters from https://github.com/subgraph/roflcoptor/filters :
>
> 1. gnome-shell.json
>
> This filter allows "SIGNAL NEWNYM", which can potentially be used to
> deanonymize a user who is on a single site for a long time by causing
> that user to rebuild new circuits with a given timing pattern.
>
> 2. onioncircuits.json
>
> Allows "GETINFO circuit-status" and "GETINFO stream-status", which
> expose to the application a complete list of where the user is
> visiting and how they are getting there.
>
> 3. onionshare-gui.json
>
> Allows "SETEVENTS HS_DESC", which is exposes to the application every
> hidden service which the user is visiting.
>
> 4. ricochet.json
>
> Allows "SETEVENTS HS_DESC", for which see "onionshare-gui" above.
>
> 5. tbb.json
>
> Allows "SETEVENTS STREAM" and "GETINFO circuit-status", for which see
> "onioncircuits" above.
>
> =
> Filters from 
> https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d
> :
>
> 1. onioncircuits.yml
>
> See onioncircuits.json above; it allows the same GETINFO stuff.
>
> 2. onionshare.yml
>
> As above, appears to allow HS_DESC events.  It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.
>
> 3. tor-browser.yml
>
> As "tbb.json" above.
>
> 4. tor-launcher.yml
>
> Allows setconf of bridges, which allows the app to pick a hostile
> bridge on purpose.  Similar issues with Socks*Proxy.  The app can also
> use ReachableAddresses to restrict guards on the .
>
> Allows SAVECONF, which lets the application make the above changes
> permanent (for as long as the torrc file is persisted)
> =
>
> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.
>
>   * Many restrictive filters block SETCONF and SAVECONF.  These two
> changes together should be enough to make sure that a hostile
> application can only deanonymize _current_ traffic, not future Tor
> traffic. Is that the threat model?  It's coherent, at least.
>
>   * Some applications that care about their own onion services
> inadvertantly find themselves informed about everyone else's onion
> services.  I wonder if there's a way around that?
>
>   * The NEWNYM-based side-channel above is a little scary.
>
>
> And where do we go forward from here?
>
> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tack: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.
>
> Here are a few _possible_ models we could think about, but I'd like to
> hear from app developers and filter authors and distributors more
> about what they think:
>
>  A. Completely trusted controller.  (What we have now)
>
>  B. Controller is untrusted, but is blocked from exfiltrating information.
> B.1. Controller can't connect to the network at all.
> B.2. Controller can't connect to the network except over tor.

Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-04 Thread Nick Mathewson
On Mon, Apr 3, 2017 at 6:39 PM, dawuud  wrote:
>
>
> It's worth noting that controllers able to run SETCONF can ask the tor
> process to execute arbitrary programs:
>
> man torrc | grep exec
>
> So if you want a controller to have any less privileges than the tor
> daemon does, you need a control port filter for SETCONF at the very
> least.

Yes, that is necessary.  I question, however, whether it is sufficient.

> Without a control port filter, what is the threat model of the
> ControlSocketsGroupWritable and CookieAuthFileGroupReadable options?

The same as with the rest of the control port: all authorized
controllers have full control over the Tor process.

(Not saying it's a _good_ threat model, but there it is.)

-- 
Nick


Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread dawuud


It's worth noting that controllers able to run SETCONF can ask the tor
process to execute arbitrary programs:

man torrc | grep exec

So if you want a controller to have any less privileges than the tor
daemon does, you need a control port filter for SETCONF at the very
least.
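
To make that concrete, an illustrative stem snippet (do not run this;
the transport name and payload path are made up) showing how a
controller with SETCONF can point tor at an attacker-controlled
"pluggable transport" binary, which tor will then execute:

    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # tor itself launches the named transport binary once a bridge
        # using that transport is configured.
        controller.set_options({
            'UseBridges': '1',
            'ClientTransportPlugin': 'evil exec /tmp/attacker_payload',
            'Bridge': 'evil 192.0.2.1:443',
        })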

Without a control port filter, what is the threat model of the
ControlSocketsGroupWritable and CookieAuthFileGroupReadable options?

Maybe the torrc documentation for those options should recommend using
one?


On Mon, Apr 03, 2017 at 02:41:19PM -0400, Nick Mathewson wrote:
> Hi!
> 
> As you may know, the Tor control port assumes that if you can
> authenticate to it, you are completely trusted with respect to the Tor
> instance you have authenticated to.  But there are a few programs and
> tools that filter access to the Tor control port, in an attempt to
> provide more restricted access.
> 
> When I've been asked to think about including such a feature in Tor in
> the past, I've pointed out that while filtering commands is fairly
> easy, defining a safe subset of the Tor control protocol is not.  The
> problem is that many subsets of the control port protocol are
> sufficient for a hostile application to deanonymize users in
> surprising ways.
> 
> But I could be wrong!  Maybe there are subsets that are safer than others.
> 
> Let me try to illustrate. I'll be looking at a few filter sets for example.
> =
> Filters from https://github.com/subgraph/roflcoptor/filters :
> 
> 1. gnome-shell.json
> 
> This filter allows "SIGNAL NEWNYM", which can potentially be used to
> deanonymize a user who is on a single site for a long time by causing
> that user to rebuild new circuits with a given timing pattern.
> 
> 2. onioncircuits.json
> 
> Allows "GETINFO circuit-status" and "GETINFO stream-status", which
> expose to the application a complete list of where the user is
> visiting and how they are getting there.
> 
> 3. onionshare-gui.json
> 
> Allows "SETEVENTS HS_DESC", which is exposes to the application every
> hidden service which the user is visiting.
> 
> 4. ricochet.json
> 
> Allows "SETEVENTS HS_DESC", for which see "onionshare-gui" above.
> 
> 5. tbb.json
> 
> Allows "SETEVENTS STREAM" and "GETINFO circuit-status", for which see
> "onioncircuits" above.
> 
> =
> Filters from 
> https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d
> :
> 
> 1. onioncircuits.yml
> 
> See onioncircuits.json above; it allows the same GETINFO stuff.
> 
> 2. onionshare.yml
> 
> As above, appears to allow HS_DESC events.  It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.
> 
> 3. tor-browser.yml
> 
> As "tbb.json" above.
> 
> 4. tor-launcher.yml
> 
> Allows setconf of bridges, which allows the app to pick a hostile
> bridge on purpose.  Similar issues with Socks*Proxy.  The app can also
> use ReachableAddresses to restrict guards on the .
> 
> Allows SAVECONF, which lets the application make the above changes
> permanent (for as long as the torrc file is persisted)
> =
> 
> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.
> 
>   * Many restrictive filters block SETCONF and SAVECONF.  These two
> changes together should be enough to make sure that a hostile
> application can only deanonymize _current_ traffic, not future Tor
> traffic. Is that the threat model?  It's coherent, at least.
> 
>   * Some applications that care about their own onion services
> inadvertently find themselves informed about everyone else's onion
> services.  I wonder if there's a way around that?
> 
>   * The NEWNYM-based side-channel above is a little scary.
> 
> 
> And where do we go forward from here?
> 
> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tack: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.
> 
> Here are a few _possible_ models we could think about, but I'd like to
> hear from app developers and filter authors and distributors more
> about what they think:
> 
>  A. Completely trusted controller.  (What we have now)
> 
>  B. Controller is untrusted, but is blocked from exfiltrating information.
> B.1. Controller can't connect to the network at all.
> B.2. Controller can't connect to the network except over tor.
> 
>  C. Controller is trusted wrt all current private information, but
> future private information must remain secure.

Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread Yawning Angel
For what it's worth, since there's a filter that's shipped and
nominally supported "officially"...

On Mon, 3 Apr 2017 14:41:19 -0400
Nick Mathewson  wrote:
> But I could be wrong!  Maybe there are subsets that are safer than
> others.

https://gitweb.torproject.org/tor-browser/sandboxed-tor-browser.git/tree/src/cmd/sandboxed-tor-browser/internal/tor

The threat model I used when writing it was, "firefox is probably owned
by the CIA/NSA/FBI/FSB/DGSE/AIVD/GCHQ/BND/Illuminati/Reptilians; the
filter itself is trusted".  There's a feature vs anonymity tradeoff,
so it's up to the user to enable the circuit display if they want
firefox to have visibility into certain things.

Allowed (Passed through to the tor daemon):

 * `SIGNAL NEWNYM`.  If both `addressmap_clear_transient();`
   and `rend_client_purge_state();` aren't important, the filter could
   disallow the call, because it already rewrites the SOCKS isolation
   for all connections to the SOCKSPort (see the sketch after this
   list).

   At one point this was entirely synthetic and not propagated.  It's
   only a huge problem if people are not using the containerized tor
   instance.

   It's worth noting that even if I change the behavior to just change
   the SOCKS auth, a misbehaving firefox can still force new circuits
   for itself.

   The sandbox code could pop up a modal dialog box asking if the user
   really wants to "New Identity" or "New Tor Circuit for this Site",
   so that "scary" behavior requires manual user intervention (since
   torbutton's confirmation is probably subverted and not to be
   trusted).

 * (Optional) `GETCONF BRIDGE`.  The Tor Browser circuit display uses
   this to filter out Bridges from the display.  Since the circuit
   display is optional, this only happens if the user explicitly
   decides that they want the circuit display.

 * (Optional) `GETINFO ns/id/`.  Required for the circuit display.
   Mostly harmless.

 * (Optional) `GETINFO ip-to-country/`.  Required for the circuit
   display.  Harmless.  Could be handled by the filter.
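
A sketch of the synthetic variant mentioned in the `SIGNAL NEWNYM` item
(illustrative Python only; the sandbox itself is written in Go): the
filter answers the NEWNYM itself and merely rotates the SOCKS isolation
credentials it injects, so firefox gets fresh circuits without touching
global tor state:

    import binascii
    import os

    class IsolationState:
        def __init__(self):
            self.rotate()

        def rotate(self):
            # Fresh random SOCKS auth means tor isolates subsequent
            # connections onto new circuits (IsolateSOCKSAuth is on by
            # default).
            self.socks_auth = binascii.hexlify(os.urandom(16)).decode()

    def handle_signal_newnym(state):
        state.rotate()
        return '250 OK\r\n'  # synthetic success; tor never sees the NEWNYM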

Synthetic (Responses generated by the filter):

 * `PROTOCOLINFO`.  Not used by Tor Browser, even though it should be.
   Everything except the tor version is synthetic.

 * `AUTHENTICATE`.  Just returns success since the filtered control
   port does not require authentication.

 * `AUTHCHALLENGE`.  Just returns an error.  See `AUTHENTICATE`.

 * `QUIT`.  Only prior to the `AUTHENTICATE` call.  Not actually used
   by Tor Browser ever.

 * `GETINFO net/listeners/socks`.  torbutton freaks out without this.
   The response is synthetically generated to match what torbutton
   expects.

 * (Optional) `SETEVENTS STREAM`.  Required for the circuit display.
   Events are synthetically generated to only include streams that
   firefox created.

 * (Optional) `GETINFO circuit-status`.  Required for the circuit
   display.  Responses are synthetically generated to only include
   circuits that firefox created.

Denied:

 * Everything else.
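
Put together, the dispatch boils down to something like this sketch
(hypothetical Python; the real implementation is the Go code linked
above, and the socks listener value here is made up):

    SYNTHETIC = {
        'AUTHENTICATE': '250 OK\r\n',
        'GETINFO net/listeners/socks':
            '250-net/listeners/socks="127.0.0.1:9150"\r\n250 OK\r\n',
    }
    PASSTHROUGH = {'SIGNAL NEWNYM', 'GETCONF BRIDGE'}

    def handle_command(line, send_to_tor):
        cmd = line.strip()
        if cmd in SYNTHETIC:
            return SYNTHETIC[cmd]      # answered without involving tor
        if cmd in PASSTHROUGH:
            return send_to_tor(cmd)    # forwarded to the real control port
        return '510 Command filtered\r\n'  # everything else is denied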

> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.

  "The only truly secure system is one that is powered off, cast in a
   block of concrete and sealed in a lead-lined room with armed guards -
   and even then I have my doubts." -- spaf

>   * The NEWNYM-based side-channel above is a little scary.

I don't think this is solvable while giving the application the ability
to re-generate circuits.  Maybe my modal doom dialog box should run
away from the user's mouse cursor, and play klaxon sounds too.

The use model I officially support is "sandboxed-tor-browser launches a
tor daemon in a separate container dedicated to firefox".  People who
do other things, get what they deserve.

> And where do we go forward from here?

If it were up to me, I'd re-write the circuit display to only show the
exit(s) when applicable, since IMO firefox is not to be trusted with
the IP address of the user's Guard.

But the circuit display when running sandboxed defaults to off, so
people that enable it, presumably fully understand the implications of
doing so.

> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tactic: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.

"Via the control port a subverted firefox can get certain information
about what firefox is doing, if the user configures it that way,
otherwise, all it can do is repeatedly NEWNYM" is what I think I ended
up with.

Though I have the 

Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread meejah
Nick Mathewson  writes:

> But I could be wrong!  Maybe there are subsets that are safer than
> others.

So, I guess the "main" use-case for this stuff would be the current
users of control-port filters (like Subgraph and Whonix; others?).

It seems what these things *really* want is a "limited view" of the One
True Tor. So for example, you don't want to filter on the "command" or
"event" level, but a complete coherent "version" of the Tor state.

As in: see "your" STREAM events, or "your" HS_DESC events etc. Probably
the same for BW or similar events. This is really kind of the
"capability system" you don't want, though ;)

Also, I really don't know exactly what the threat-model is, but it does
seem like a good idea to limit what information a random application has
access to. Ideally, it would know precisely the things it *needs* to
know to do its job (or at least has been given explicit permission by a
user to know). That is, a user might click "yes, OnionShare may add onion
services to my Tor", but in reality you have to enable ADD_ONION, (some)
HS_DESC events, DEL_ONION (but only for onions you added), etc. If you
really wanted an "on-disk" one (i.e. via HiddenServiceDir, not ADD_ONION),
then you would have to allow (at least some) access to SETCONF etc.
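
One way to write down such per-app grants, sketched as a Python
structure (hypothetical, though loosely the shape the existing YAML
filters already have):

    PROFILES = {
        'onionshare': {
            'commands': {'ADD_ONION', 'DEL_ONION'},
            'events': {'HS_DESC'},  # ideally only for onions it added
        },
        'circuit-visualizer': {
            # Read-only view; assumes the app is sandboxed with zero
            # network access of its own.
            'commands': {'GETINFO circuit-status', 'GETINFO stream-status'},
            'events': {'STREAM', 'CIRC', 'BW'},
        },
    }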

Or, maybe you're happy to let that cool visualizer-thing have access to
"read only" events like STREAM, CIRC, BW, etc if you know it's sandboxed
to have zero network access.

> As above, appears to allow HS_DESC events.  It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.

Doesn't this just show "onions that the current control connection has
added"?

>   * Some applications that care about their own onion services
> inadvertantly find themselves informed about everyone else's onion
> services.  I wonder if there's a way around that?

HS_DESC events include the onion (in args) so could in principle be
filtered by a control-filter to only include events for certain onions
(i.e. those added by "this" control connection). In practice, this is
probably exactly what the application wants anyway.

>  E.  Your thoughts here?

Maybe this is a chance to play with a completely different, but ideally
much better "control protocol for Tor"? The general idea would be that
you have some "trusted" software (i.e. like existing control-port
filters) that on the one side connects to the existing control-port of
Tor (and is thus "completely trusted") but then exposes "the Next Great
Control Protocol" to clients.

Nevertheless, there's still the question of what information to expose
(and how) -- i.e. the threat model, and use-cases.

Of course, the same idea as above could be used except it speaks "Tor
Control Protocol" out both sides -- that is, 'just' a slightly fancier
filter.

> signing-off-before-this-turns-into-a-capabilities-based-system,

Aww, that's what I want ;)

-- 
meejah


[tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread Nick Mathewson
Hi!

As you may know, the Tor control port assumes that if you can
authenticate to it, you are completely trusted with respect to the Tor
instance you have authenticated to.  But there are a few programs and
tools that filter access to the Tor control port, in an attempt to
provide more restricted access.

When I've been asked to think about including such a feature in Tor in
the past, I've pointed out that while filtering commands is fairly
easy, defining a safe subset of the Tor control protocol is not.  The
problem is that many subsets of the control port protocol are
sufficient for a hostile application to deanonymize users in
surprising ways.

But I could be wrong!  Maybe there are subsets that are safer than others.

Let me try to illustrate. I'll be looking at a few filter sets for example.
=
Filters from https://github.com/subgraph/roflcoptor/filters :

1. gnome-shell.json

This filter allows "SIGNAL NEWNYM", which can potentially be used to
deanonymize a user who is on a single site for a long time by causing
that user to rebuild new circuits with a given timing pattern.

2. onioncircuits.json

Allows "GETINFO circuit-status" and "GETINFO stream-status", which
expose to the application a complete list of where the user is
visiting and how they are getting there.

3. onionshare-gui.json

Allows "SETEVENTS HS_DESC", which is exposes to the application every
hidden service which the user is visiting.

4. ricochet.json

Allows "SETEVENTS HS_DESC", for which see "onionshare-gui" above.

5. tbb.json

Allows "SETEVENTS STREAM" and "GETINFO circuit-status", for which see
"onioncircuits" above.

=
Filters from 
https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d
:

1. onioncircuits.yml

See onioncircuits.json above; it allows the same GETINFO stuff.

2. onionshare.yml

As above, appears to allow HS_DESC events.  It allows "GETINFO
onions/current", which can expose a list of every onion service
locally hosted, even those not launched through onionshare.

3. tor-browser.yml

As "tbb.json" above.

4. tor-launcher.yml

Allows setconf of bridges, which allows the app to pick a hostile
bridge on purpose.  Similar issues with Socks*Proxy.  The app can also
use ReachableAddresses to restrict guards on the .

Allows SAVECONF, which lets the application make the above changes
permanent (for as long as the torrc file is persisted)
=

So above, I see a few common patterns:
  * Many restrictive filters still let the application learn enough
about the user's behavior to deanonymize them.  If the threat model is
intended to resist a hostile application, then that application can't
be allowed to communicate with the outside world, even over Tor.

  * Many restrictive filters block SETCONF and SAVECONF.  These two
changes together should be enough to make sure that a hostile
application can only deanonymize _current_ traffic, not future Tor
traffic. Is that the threat model?  It's coherent, at least.

  * Some applications that care about their own onion services
inadvertently find themselves informed about everyone else's onion
services.  I wonder if there's a way around that?

  * The NEWNYM-based side-channel above is a little scary.


And where do we go forward from here?

The filters above seem to have been created by granting the
applications only the commands that they actually need, and by
filtering all the other commands.  But if we'd like filters that
actually provide some security against hostile applications using the
control port, we'll need to take a different tack: we'll need to
define the threat models that we're trying to work within, and see
what we can safely expose under those models.

Here are a few _possible_ models we could think about, but I'd like to
hear from app developers and filter authors and distributors more
about what they think:

 A. Completely trusted controller.  (What we have now)

 B. Controller is untrusted, but is blocked from exfiltrating information.
B.1. Controller can't connect to the network at all.
B.2. Controller can't connect to the network except over tor.

 C. Controller is trusted wrt all current private information, but
future private information must remain secure.

 D. Controller is trusted wrt a fraction of the requests that the
clients are handling. (For example, all requests going over a single
SOCKSPort, or all ADD_ONION requests that it makes itself.)

 E.  Your thoughts here?




signing-off-before-this-turns-into-a-capabilities-based-system,
-- 
Nick