Re: [tor-dev] Temporary hidden services

2018-10-19 Thread Leif Ryge
On Wed, Oct 17, 2018 at 07:27:32PM +0100, Michael Rogers wrote:
[...] 
> If we decided not to use the key blinding trick, and just allowed both
> parties to know the private key, do you see any attacks?
[...]

If I'm understanding your proposal correctly, I believe it would leave
you vulnerable to a Key Compromise Impersonation (KCI) attack.

Imagine the scenario where Alice and Bob exchange the information to
establish their temporary rendezvous onion which they both know the
private key to, and they agree that Bob will be the client and Alice
will be the server.

But, before Bob connects to it, the adversary somehow obtains a copy of
everything Bob knows (but they don't have the ability to modify data or
software on his computer - they just got a copy of his secrets somehow).

Obviously the adversary could then impersonate Bob to Alice, because
they know everything that Bob knows. But, perhaps less obviously, in the
case that Bob is supposed to connect to Alice's temporary onion which
Bob (and now the adversary) know the key to, the adversary can also
now impersonate Alice to Bob (by overwriting Alice's descriptors and
taking over her temporary onion service).
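
For context, here is a minimal sketch (Python, following the v3 onion
address derivation in rend-spec-v3; the function name is mine) of why
holding the key is sufficient: the address is computed purely from the
key pair, so anyone with the private key can recompute the address and
publish descriptors for it.

    import base64, hashlib

    def onion_address(ed25519_pubkey: bytes) -> str:
        # per rend-spec-v3:
        # CHECKSUM = SHA3-256(".onion checksum" | PUBKEY | VERSION)[:2]
        version = b"\x03"
        checksum = hashlib.sha3_256(
            b".onion checksum" + ed25519_pubkey + version).digest()[:2]
        # onion_address = base32(PUBKEY | CHECKSUM | VERSION) + ".onion"
        return base64.b32encode(
            ed25519_pubkey + checksum + version).decode().lower() + ".onion"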

In this scenario, if Bob hadn't known the private key for Alice's
temporary onion that he is connecting to, impersonating Alice to Bob
would not have been possible.

And of course, if the adversary can successfully impersonate both
parties to each other at this phase, they can provide their own long-term
keys to each of them, and establish a long-term bidirectional MITM -
which, I think, would otherwise not be possible even in the event that
one party's long-term key was compromised.

Bob losing control of the key material before using it (without his
computer being otherwise compromised) admittedly seems like an unlikely
scenario, but you asked for "any attacks", so, I think KCI is one (if
I'm understanding your proposal correctly).

~leif


Re: [tor-dev] design for a Tor router without anonymity compromises

2015-05-04 Thread Leif Ryge
On Sat, May 02, 2015 at 08:37:17PM -0700, coderman wrote:
> a friend and i are working on a Tor router design that doesn't
> compromise anonymity for convenience. [0][1][2][3][4]

So, unlike a transparent tor router, this system is not intended to prevent
malicious software on client computers from being able to learn the client
computer's location, right? An attacker who has compromised some client
software just needs to control a single relay in the consensus, and they'll be
allowed to connect to it directly?

It is unclear to me what exactly this kind of tor router *is* supposed to
protect against. (I haven't read the whole document yet but I read a few
sections including Threat Model and I'm confused.)

~leif



Re: [tor-dev] RFC: Ephemeral Hidden Services via the Control Port

2015-02-28 Thread Leif Ryge
On Sat, Feb 28, 2015 at 02:40:29PM +0100, carlo von lynX wrote:
> Thanks "Angel", appreciate your effort.
> 
> On Thu, Feb 26, 2015 at 09:29:05AM +0100, Andreas Krey wrote:
> > On Wed, 25 Feb 2015 13:51:59 +, carlo von lynX wrote:
> > ...
> > > What is useful here is if I can use existing $app with existing
> > > tor router and just have a shell script drop the glue instructions
> > > into the tor unix socket.
> > 
> > One way to do that would be to tie the hidden service to the existence
> > of the PID of your app - just exec the app in the script after setting
> > up the HS. (I seem to remember that being an option in some form already.)
> 
> Not exactly the intended behaviour when somebody has to restart the web
> server and doesn't expect Tor to stop servicing it... or when the web
> server is written in $occasionalcoredumpstyle.

I think this is an important point I hadn't considered - at the very least, it
will be necessary to make sure that Tor gracefully handles the case where the
same HS is destroyed and then immediately recreated.

> > Alternatively tor could check whether the listener the HS is accessing
> > is still open, and discard the HS when that is no longer the case.
> > (Possibly new idea.)
> 
> Yes, and then hope not for a race condition.
> 
> > (And hopefully your application isn't giving extended authority to
> > clients connecting from 127.0.0.1.)
> 
> Depends on the specific constellation.. if no one is web browsing
> on the same system.. if processes are not separated by uid anyway,
> because that actually takes some work, and finally nobody else has
> a login, warning about unsecured control ports or suchlike is crying
> wolf and educating users to ignore such warnings.
> 
> The current default way to run the Tor router is with the same uid as the 
> user herself, right? Putting an authentication method on the control
> port is pretty pointless - if an attacker manages to break into her
> browser he doesn't have to look very far for her Tor state. So all
> the warnings about localhost being not safe enough yet even though for
> the majority of users it is the factual configuration appears somewhat
> counter-productive to me. We should first introduce a habit of having
> Tor neither launched by TBB nor by vidalia nor as root but using its
> own isolated uid.

FWIW this is already how Debian's (and presumably other distros') tor packages
work: tor runs as a dedicated user. It is already possible to grant other users
access to the control port (from which they can create and remove hidden
services). The reason why HS applications that create their own HSes generally
run their own instance of tor under their own uid is that the hidden service
data (key and hostname) written by tor is currently only readable by the tor
user. There is another patch to address this issue (in progress or possibly
already merged, sorry I'm not looking up the ticket right now) which would
allow this data to be written with permissions for another group to read it,
but this ephemeral HS plan of delivering the information over the control port
is obviously much better and more flexible.
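
To illustrate the dedicated-user setup, here is a rough torrc sketch of
how a distro package can grant a group access to the control interface
(these are real torrc options, but the values and paths are illustrative):

    User debian-tor
    DataDirectory /var/lib/tor
    ControlSocket /var/run/tor/control
    ControlSocketsGroupWritable 1
    CookieAuthentication 1
    CookieAuthFileGroupReadable 1

Users in tor's group can then talk to the control socket without running
their own tor instance.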

From Valencia,
~leif

> Then again, whichever way you give the user a way to control the Tor
> router opens up an attack vector for somebody who managed to break into
> a faulty client application. So to me the entire lets-not-trust-localhost
> logic doesn't work out in my head. It either produces bureaucratic
> complications or false positives in the warning log.
> 
> Maybe I overlooked something.
> 
> 
> -- 
>   E-mail is public! Talk to me in private using Tor.
>   torify telnet loupsycedyglgamf.onion  DON'T SEND ME
>   irc://loupsycedyglgamf.onion:67/lynX  PRIVATE EMAIL
>  http://loupsycedyglgamf.onion/LynX/OR FACEBOOGLE


Re: [tor-dev] RFC: Ephemeral Hidden Services via the Control Port

2015-02-16 Thread Leif Ryge
On Mon, Feb 16, 2015 at 03:47:07PM +, Yawning Angel wrote:
> On Mon, 16 Feb 2015 10:17:51 -0500
> David Goulet  wrote:
> [snip]
> > A hidden service is created using the key and list of
> > port/targets, that will persist till configuration reload or the
> > termination of the tor process.
> > 
> > Now, an HS bound to a control connection might be a good idea, I'm not
> > 100% sure but I can see issues with this. Let's say
> > "ControlListenAddress" is used, this means a TCP socket and it can
> time out if no activity, so if that happens, I lose my HS?
> 
> That's correct, though unless tor or the controller library has code to
> stomp on long dormant connections (which a casual look says there
> isn't), this shouldn't happen, because TCP/IP in itself has no idle
> timeout (See RFC 1122 4.2.3.6 regarding keep alives, which would also
> not cause HS loss, since the OS will respond to the probe).
> 
> There may be certain broken middleboxes (loadbalancers etc) that stomp
> on long idle TCP connections, but anyone that is running a control port
> connection through such a thing, and sending RSA keying material in the
> clear, probably has bigger things to worry about.
> 
> > This also puts quite a requirement on the user side to add an HS on its
> > tor-ramdisk for instance but has to use a client that keeps the
> > control connection open for its lifetime?... How does that work with
> > stem, it would have to keep the control connection open and the
> > script using it can't quit else the socket gets closed by the OS?
> 
> Yup, I don't see "you need to leave stem running" as being all that
> bad, since this is mostly targeted at managed applications
> (chat, filesharing, global leaks, etc).
> 
> If someone has a suggestion for an alternative interface that can
> handle applications crashing (possibly before they persist the list of
> HSes they need to clean up), applications that are just poorly written
> (and not cleaning up all the ephemeral HSes), and (optionally, though
> lacking this would be a reduction in features) limiting cross
> application HS enumeration, I'd be more inclined to change things.

First, thanks for doing this! I think it's a great feature which will make it
much easier to create new hidden service applications.

I like the idea of tying HS lifetime to the control port connection for the
reasons you state, namely that cleanup is automatic when applications crash.

However, it seems like in the case of applications which are not HS-specific
this will necessitate keeping another process running just to keep the HS
alive. I'd rather see two modes: one as you describe, and another in which the
ephemeral HS stays running until a new control port connection requests that it
be stopped. To avoid allowing enumeration of running services, the "stop"
command could require that the requestor already knows some details of the HS -
either a cookie generated at creation time, or perhaps just the private key
that was provided when it was started.
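
As a purely hypothetical sketch of that second mode (the command names
and syntax here are invented for illustration, not taken from any actual
spec):

    ADD_ONION NEW:BEST Flags=Detach Port=80,127.0.0.1:8080
    250-ServiceID=exampleaddress
    250-PrivateKey=RSA1024:<blob>
    250 OK
        (controller disconnects; the HS keeps running)

    DEL_ONION exampleaddress RSA1024:<blob>
    250 OK
        (a later control connection stops the HS by presenting the key)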

This of course wouldn't result in crashed applications' HSes being cleaned up
automatically, but having a few stale HSes sitting around isn't the end of the
world. One approach to cleaning them up would be for tor to remove them
automatically after it sees connection refused a few times.

~leif


Re: [tor-dev] Proposal xxx: Consensus Hash Chaining

2015-01-10 Thread Leif Ryge
On Tue, Jan 06, 2015 at 05:51:53PM +, Andrea Shepard wrote:
> Here's a proposal Nick Mathewson and I just wrote for ticket #11157.
> 
> --- Begin proposal body ---
> Filename: xxx-consensus-hash-chaining.txt
> Title: Consensus Hash Chaining
> Author: Nick Mathewson, Andrea Shepard
> Created: 06-Jan-2015
> Status: Draft
> 
> 1. Introduction and overview
> 
> To avoid some categories of attacks against directory authorities and their
> keys, it would be handy to have an explicit hash chain in consensuses.
> 
> 2. Directory authority operation
> 
> We add the following field to votes and consensuses:
> 
> previous-consensus ISOTIME [SP HashName "=" Base16]* NL
> 
> where HashName is any keyword.
> 
> This field may occur any number of times.
> 
> The date in a previous-consensus line in a vote is the valid-after date of
> the consensus the line refers to.  The hash should be computed over the
> signed portion of the consensus document. A directory authority should
> include a previous-consensus line for a consensus using all hashes it supports
> for all consensuses it knows which are still valid, together with the two
> most recently expired ones.
> 
> When this proposal is implemented, a new consensus method should be allocated
> for adding previous-consensus lines to the consensus.
> 
> A previous-consensus line is included in the consensus if and only if a line
> with that date was listed by more than half of the authorities whose votes
> are under consideration.  A hash is included in that line if the hash was
> listed by more than half of the authorities whose votes are under
> consideration.  Hashes are sorted lexically with a line by hashname; dates
> are sorted in temporal order.
> 
> If, when computing a consensus, the authorities find that any
> previous-consensus line is *incompatible* with another, they must issue a
> loud warning.  Two lines are incompatible if they have the same ISOTIME, but
> different values for the same HashName.
> 
> The hash "sha256" is mandatory.
> 
> 3. Client and cache operation
> 
> All parties receiving consensus documents should validate previous-consensus
> lines, and complain loudly if a hash fails to match.
> 
> When a party receives a consensus document, it SHOULD check all
> previous-consensus lines against any previous consensuses it has retained,
> and if a hash fails to match it SHOULD warn loudly in the log mentioning the
> specific hashes and valid-after times in question, and store both the new
> consensus containing the mismatching hashes and the old consensus being
> checked for later analysis.  An option SHOULD be provided to disable
> operation as a client or as a hidden service if this occurs.
> 
> All relying parties SHOULD by default retain all valid consensuses they
> download plus two; but see "Security considerations" below.
> 
> If a hash is not mismatched, the relying party may nonetheless be unable to
> validate the chain: either because there is a gap in the chain itself, or
> because the relying party does not have any of the consensuses that the latest
> consensus mentions.  If this happens, the relying party should log a warning
> stating the specific cause, the hashes and valid-after time of both the
> consensus containing the unverifiable previous-consensus line and the hashes
> and valid-after time of the line for each such line, and retain a copy of
> the consensus document in question.  A relying party MAY provide an option
> to disable operation as a client or hidden service in this event, but due to
> the risk that breaks in the chain may occur accidentally, such an option
> SHOULD be disabled by default if provided.
> 
> If a relying party starts up and finds only very old consensuses such that
> no previous-consensus lines can be verified, it should log a notice of the
> gap along the lines of "consensus (date, hash) is quite new.  Can't chain back
> to old consensus (date, hash)".  If it has no old consensuses at all, it
> should log an info-level message of the form "we got consensus (date, hash).
> We haven't got any older consensuses, so we won't do any hash chain
> verification"
> 
> 4. Security Considerations:
> 
>  * Retaining consensus documents on clients might leak information about when
>the client was active if a disk is later stolen or the client compromised.
>This should be documented somewhere and an option to disable (but thereby
>also disable verifying previous-consensus hashes) should be provided.
> 
>  * Clients MAY offer the option to retain previous consensuses in memory only
>to allow for validation without the potential disk leak.
> --- End proposal body ---

Thank you Andrea and Nick for working on this! I think it's very important, but
I'm not a fan of the design specified above.

I think it would be preferable to do something similar to what other
blockchains do: have a header structure in the consensus document which
contains the hash of the body and the hash of the header of the previous
consensus.
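
As a minimal sketch (Python) of that header-chaining idea - the field
layout here is made up for illustration:

    import hashlib

    def make_header(body: bytes, prev_header: bytes) -> bytes:
        # the header commits to this consensus body and to the previous
        # header, so a verifier can walk the chain keeping only headers
        body_hash = hashlib.sha256(body).hexdigest()
        prev_hash = hashlib.sha256(prev_header).hexdigest()
        return ("body-hash sha256=%s\nprev-header-hash sha256=%s\n"
                % (body_hash, prev_hash)).encode()

Relying parties could then verify the chain while retaining only the
small headers rather than full consensus documents.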

Re: [tor-dev] Making and distributing custom TBB with a new "home-page"

2014-09-21 Thread Leif Ryge
On Sun, Sep 21, 2014 at 04:12:00PM +0200, Fabio Pietrosanti (naif) wrote:
> Hi all,
> 
> for a very interesting deployment of GlobaLeaks in the area of Human
> Rights defense, we will have the need to distribute a customized Tor
> Browser Bundle to the sources.
> 
> The "customization" requirement is simple: Have as a default home-page
> the GlobaLeaks .onion site .
> 
> We must go that way because:
> - the "target country" where the sources are cannot download TBB due to
> torproject.org being censored
> - the sources are absolutely non-technologically savvy (average 60yo
> lawyers doing human rights defense)
> 
> The website where there will be the leaking instructions and the
> download of such custom TBB will be "privately distributed" trough word
> of mouth and trusted connections, with no public solicitation.
> 
> So we must do some piece of software that will:
> - Download TBB in specific languages (2-3 specific languages) for each
> platform
> - Unpack TBB (in all formats for Windows, OSX, Linux)
> - Apply the customization (set the home-page, with slightly different
> parameters depending on the language)
> - Check periodically if a new version is available and, in that case,
> re-execute the process described above to release updated version of TBB
> 
> The questions are:
> a) Which is a simple/stable/resilient way to check which is the latest
> version of TBB
> b) Does someone have already done that kind of customization-process?
> c) Can everything be done from Linux, like a cron-job, in a fully
> automatic way?
> d) Which other customization / ideas / concern are there regarding this
> process?
> 
> I'd personally love if the customization would enable me to completely
> disable the "URL Bar" and all of the Browser Button in order to make it
> useful only to use it as a console to send information being a source,
> without the possibility to go browse other sites.
> 
> Waiting for comments before writing some quick specs

If I remember correctly, I heard some GlobaLeaks people discussing the idea of
rebranding TBB a long time ago, but they eventually concluded that it was
generally undesirable for potential whistleblowers to have GlobaLeaks-specific
bytes sitting around on their storage devices.

Also, it seems like whatever private distribution mechanism you plan to use for
a modified TBB could also be used just as well for a standard TBB, or Tails.

Have you considered just distributing Tails USB sticks along with the .onion
address on a piece of paper?

As for a TBB updater, until TBB's own updater is released, for GNU/Linux users
there is Tor Browser Launcher: https://github.com/micahflee/torbrowser-launcher

Unfortunately I'm not aware of anything similar for Mac or Windows. Tails'
incremental upgrader generally works these days, if you have enough RAM and a
good clock battery.

~leif


[tor-dev] only one signature on TBB 3.6.4

2014-08-15 Thread Leif Ryge
Looking at https://www.torproject.org/dist/torbrowser/3.6.4/ I see that there
is currently only one signature on sha256sums.txt for this release. As far as I
can remember, every other stable release in the 3.x series has had signatures
from at least 3 people.

Is this an aberration or should users not be expecting multiple signatures in
the future?

I was just advocating for torbrowser-launcher to require multiple signatures
when this happened:
https://github.com/micahflee/torbrowser-launcher/issues/113#issuecomment-51734515

~leif


Re: [tor-dev] carml: tasty treats from your Tor

2014-08-04 Thread Leif Ryge
Thanks for writing this, meejah! Awesome tool. I'm seeing some rather strange
things in its "monitor" output though, indicating either bugs in it or in tor,
or that something is wrong with my system, or perhaps that Tor has some
behavior I don't know about :/

For instance:

Circuit 398 () is LAUNCHED for purpose "GENERAL"
Circuit 398 (tornodenl) is EXTENDED for purpose "GENERAL"
Circuit 398 (tornodenl->Kaarli) is EXTENDED for purpose "GENERAL"
Circuit 398 (tornodenl->Kaarli->CompSciR0x) is EXTENDED for purpose "GENERAL"
Circuit 398 (tornodenl->Kaarli->CompSciR0x) is BUILT for purpose "GENERAL"

the above seems normal, but then some time later...

Circuit 398 (tornodenl->Kaarli->CompSciR0x->FlappyBird) is EXTENDED for purpose 
"GENERAL"
Circuit 398 (tornodenl->Kaarli->CompSciR0x->FlappyBird) is BUILT for purpose 
"GENERAL"
Stream 2509 to 217.23.4.123.$EABB28C6030D78A98B0D8E3BF583463F49C04C59.exit:9001 
attached via circuit 398

I've seen this happen several times: four hop circuits, followed by streams to
the last hop using the .exit notation (that IP and fingerprint are for the
relay FlappyBird, according to Atlas). I don't have AllowDotExit enabled in my
torrc, fwiw.

I'm also occasionally seeing single-hop circuits in the output of "circ -L",
though I haven't noticed one being used yet.
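
In case anyone wants to watch for the same thing without carml, here's a
rough equivalent using stem (assuming a system tor with its control port
on 9051):

    from stem.control import Controller, EventType

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # print raw CIRC and STREAM events as they arrive
        controller.add_event_listener(print, EventType.CIRC, EventType.STREAM)
        input("listening for circuit/stream events; press enter to stop\n")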

Any ideas?

~leif

On Sun, Aug 03, 2014 at 09:32:18PM +0400, meejah wrote:
> 
> I've got a first super-alpha release of this thing that's been sitting
> around for a while. Turns out "sanitize a bit" turns into "refactor some
> things" and so forth...
> 
> Anyway, carml does various command-line things with Tor and I thought it
> might be useful to others (plays nicely with grep, pipes, etc).
> 
> I would really love feedback on whether the "downloadbundle" command is
> doing the right thing with certificate-checks.
> 
> https://github.com/meejah/carml
> https://carml.readthedocs.org/en/latest/
> 
> You can "pip install carml" to try it out. Recommend doing this in a
> virtualenv:
> 
>virtualenv trycarml
>./trycarml/bin/pip install carml
>./trycarml/bin/carml help
> 
> To check signatures first, instead download the WHL file and associated
> signature from PyPI, gpg --verify it and then replace "install carml"
> with "install path/to/.whl" above.
> 
> Some other things to try:
> 
>carml downloadbundle --extract --system-keyring
>echo "hello darkweb" | carml pastebin
> 
> wait for a new consensus to be published, dump it and exit:
> 
>carml events --once NEWCONSENSUS
> 
> Currently, the defaults work with a system Tor (i.e. localhost port
> 9051). Probably I'll change this to be TBB defaults. To connect to a Tor
> Browser Bundle instance, do this:
> 
>carml --connect tcp:localhost:9151 monitor
> 
> It is written using Twisted and txtorcon.
> 
> Thanks,
> meejah






Re: [tor-dev] Cute Otter == Tahoe-LAFS + Tor?

2014-05-16 Thread Leif Ryge
I think the idea would be to have a web publishing app which doesn't
necessarily expose Tahoe-LAFS to users directly, but rather just has a
"Publish" button which uploads to it. The only user exposure to Tahoe-LAFS
would be that the URLs contain lengthy cryptographic identifiers (read
capabilities). For instance, this is the URL to a page about the Onion Grid:
http://etg4ersbwhmvoywb.onion/uri/URI:DIR2-RO:j7flrry23hfiix55xdakehvayy:pn7wdmukxulpwxc3khdwqcmahdusgvfljjt4gx5oe4z35cyxngga/Latest/index.html
This is certainly not something you'd write on paper and expect users to type
in, whereas the current 80-bit .onion address might be. But, I believe the
hidden service improvements plan includes onion addresses getting a lot longer
anyway, so this loss of typability isn't a unique problem here.

What properties does stormy provide? Can I read about it somewhere?

~leif

On Fri, May 16, 2014 at 12:12:40PM -0400, Griffin Boyce wrote:
>   I'm working on a project with the same goals (Stormy), but not
> sure what the status is for formalized Torstuff.
> 
>   For me at least I'm not interested in using Tahoe because it adds
> unnecessary complexity.  My work with users typically shows that
> people have learned or been taught how to use PGP/OTR, but don't
> have experience as sysadmins and don't have consistent access to
> advanced technical help.  It's also far beyond what most sysops
> actually need.  For WikiLeaks, it might make sense.  But for The
> Dubai Times, it might not and the complexity is more likely to
> confuse/demoralize people.
> 
> ~Griffin
> 
> On 2014-05-16 01:56, David Stainton wrote:
> >Hi, What is going on with that cute otter hidden service
> >publishing project?
> >
> >What do people think about having it use the Tahoe-LAFS Onion Grid and
> >lafs-rpg instead of telling users to run their own webservers?
> >Tahoe-LAFS could help to greatly increase the security and censorship
> >resistance of the data being published.
> >
> >If the people involved with this were interested in using Tahoe-LAFS
> >as the data store then I would be more than happy to help out with
> >this. (I don't do any web development at all)
> >
> >
> >Sincerely,
> >
> >David


Re: [tor-dev] Attentive Otter: Analysis of xmpp-client

2013-10-07 Thread Leif Ryge
On Mon, Oct 07, 2013 at 07:21:42PM +0200, Jurre van Bergen wrote:
> [...]
> *Is traffic sent over Tor?*
> Yes, xmpp-client has support for sending all traffic over Tor, this
> includes connecting to onions. When you connect to jabber.ccc.de or the
> riseup.net jabber service, you are automatically connected over Tor
> through their onion address (hidden service), if Tor is running. SRC
> lookups are not proxied.

I assume you mean SRV lookups? To clarify, they aren't proxied when Tor is used
because they aren't sent at all, correct? (I haven't checked to see, but
assumed this is the case since the onion addresses are hardcoded for
jabber.ccc.de and riseup.net.)

> [...]
> * XMPP in Go - https://github.com/mattn/go-xmpp

Note that xmpp-client does not use that xmpp library, it uses this one:
https://github.com/agl/xmpp

> [...]
> *OTR*
> OTR support comes from the Go crypto package:
> https://code.google.com/p/go.crypto/
> This library only has support for OTRv2 and not the latest OTRv3 
> specification. If we want to be resistant to several attacks[1]  on the
> OTR protocol, we need to reimplement the OTR protocol and update it to
> the latest version or, we use Cgo, which binds into libotr. (Open
> questions: OTR by default?, )

OTR by default (or, outright refuse to send non-OTR messages) is a feature I
would very much like to see and have been meaning to add myself.

~leif

> [...]




Re: [tor-dev] entry guards and linkability

2013-09-13 Thread Leif Ryge
On Wed, Sep 11, 2013 at 11:20:59AM -0400, Nick Mathewson wrote:
> On Wed, Sep 11, 2013 at 10:57 AM, Leif Ryge  wrote:
> > Is the following statement correct?
> >
> > When a user connects to Tor from multiple locations where the network is
> > monitored by the same adversary, their persistent use of the same set of 
> > entry
> > guards uniquely identifies them and reveals their location to the adversary.
> 
> To avoid confusion, I would phrase that as not as "reveals their
> location to the adversary" but as "shows the adversary that
> connections are all coming from the same user."  But yes.
> 
> (If you want to avoid this, you also need to make sure that your MAC
> address is randomized whenever you move networks, that you make
> absolutely no non-Tor connections, and so on.)

Is this tradeoff of using entry guards documented somewhere? I suspect that
there may be many users changing their MAC address to protect themselves
against this exact threat while not understanding that their entry guard set
uniquely identifies them. Perhaps the man page text about UseEntryGuards and
NumEntryGuards should mention it? A FAQ entry would be nice too.
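
For reference, the torrc options in question, with what I believe are the
current defaults:

    UseEntryGuards 1
    NumEntryGuards 3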

~leif

> > Assuming this is an accurate assessment, wouldn't it make sense to maintain
> > separate sets of entry guards for each network that the user connects from?
> 
> This is indeed a desirable feature, I think, although you'd want to be
> quite careful in how you tell what a "network" is.   You would *not*,
> for example, want to maintain a different set of entry guards for
> every IP that you receive, since if you did, a hostile DHCP server
> could feed you new IPs until you picked a hostile guard. Similarly, if
> you are a busy traveller who changes your view of what network you are
> on hundreds or thousands of times, your chance of picking a hostile
> guard would rise accordingly.
> 
> We'd also need to figure out the storage issues here. Having a record
> in your state file of every network you have visited is not
> necessarily the best idea either.
> 
> 
> As an alternative solution, Roger has been advocating for reducing the
> default number of client guards to 1, to avoid the property of letting
> guard choices identify Tor clients.  I for one am hoping that there
> will be some good solution that partitions guards into N sets of m,
> such that clients will fall into N classes rather than Nm choose 3...
> but it's hard to design such a solution in a way that makes the
> partitions secure against an adaptive attacker.  So perhaps Roger's
> idea is best here.
> 
> 
> best wishes,
> -- 
> Nick




[tor-dev] entry guards and linkability

2013-09-11 Thread Leif Ryge
Is the following statement correct?

When a user connects to Tor from multiple locations where the network is
monitored by the same adversary, their persistent use of the same set of entry
guards uniquely identifies them and reveals their location to the adversary.

Assuming this is an accurate assessment, wouldn't it make sense to maintain
separate sets of entry guards for each network that the user connects from?

~leif

