Re: [tor-talk] Tor and solidarity against online harassment
On Thu, Dec 11, 2014 at 11:04 PM, Ted Smith wrote:
> This sounds like a very flaky reason to be not okay with a denouncement
> of online harassment. You might want to reconsider the communities
> you're a member of, if they have to look so hard for reasons why a
> commitment against harassment of women on the Internet is such a bad
> thing.

I expressed no view that I had any issue with denouncing harassment. Without context I'm at a bit of a loss as to what exactly it all means, but I am certainly not okay with harassment.

I am upset that you thought it appropriate to accuse me (or people I associate with) of being okay with harassment, simply because I asked for clarification that this new (?) intolerance of harassment didn't extend to undermining prior values.

The statement of support was vague-- how would things change? How was harassment tolerated before? What can I do to help? etc.-- no doubt justifiably, in the interest of not exacerbating the problems. Some people read it as meaning things I was sure it didn't mean. It was very helpful to get a clarification to help put people at ease.
Re: [tor-talk] Tor and solidarity against online harassment
On Thu, Dec 11, 2014 at 10:07 PM, Roger Dingledine wrote:
> I'd like to draw your attention to
>
> https://blog.torproject.org/blog/solidarity-against-online-harassment
> https://twitter.com/torproject/status/543154161236586496
>
> One of our colleagues has been the target of a sustained campaign of
> harassment for the past several months. We have decided to publish this
> statement to publicly declare our support for her, for every member of
> our organization, and for every member of our community who experiences
> this harassment. She is not alone and her experience has catalyzed us to
> action. This statement is a start.
>
> I'd love to get your feedback on this post, and your thoughts on how to
> turn it into something more. This is a bigger struggle than just Tor's
> piece of it.

The intensity of the language, such as "Further, we will no longer hold back out of fear or uncertainty from an opportunity to defend a member of our community online", immediately caused people in two different communities I'm a member of to express concern that this was basically a declaration of war, and that the Tor Project and the signing parties might engage in activities like releasing backdoored software in an effort to return fire.

I was only able to respond that I didn't think that was the case, but nothing in the document provides a strong basis to support that... I also pointed out that these people could always be coerced, so that risk exists regardless of any statements of intent, and so we must audit and count on the auditing of others. A counter-argument given was that auditing by others is not worthwhile when they are also part of the "war".

Is there a similarly strong statement that the software will never be intentionally backdoored by the same parties that I can point people to?

I don't wish to deflect from the serious concern about online harassment, but it seems this statement can easily be misconstrued (perhaps maliciously) as a statement of abandoning prior values.
Re: [tor-talk] Proposal for decentralization of the Tor network
On Mon, Nov 24, 2014 at 3:03 AM, Cari Machet wrote:
> prove decentralization creates vulnerability to a larger degree than
> centralization

You haven't specified the decentralization mechanism. So I guess I get to pick?

Okay. Instead of believing the directory authority signatures, you have nodes connect out to as many nodes as they can find, and add any entry returned by a majority of nodes to their local directory.

Oops. The attacker is the local network and only lets them connect out to the attacker's own nodes, which perform a sybil attack and limit the Tor client's view to just the attacker's hosts. Client security is lost completely. Q.E.D. (A toy sketch below makes this concrete.)

...

There are many ways you can go about trying to be "decentralized"; most are _profoundly_ insecure in an active adversary's attack model. Usually the main failure mode is inadequate sybil resistance.

This isn't to say that I don't think useful things are possible, I don't know. I have not seen a proposal which even makes an argument for its own security for this application. Saying "decentralized" is easy; tendering a concrete proposal which achieves useful security properties is much harder. And "decentralized" isn't something that can be deployed or analyzed for its security; specific concrete proposals are.

Incidentally,

> Ruh-roh, this is now necessary: This email is intended only for the
> addressee(s) and may contain confidential information. If you are not the
> intended recipient, you are hereby notified that any use of this
> information, dissemination, distribution, or copying of this email without
> permission is strictly prohibited.

If you don't want your emails being made public you should consider not sending them to a public mailing list.
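To make the eclipse failure described above concrete, here is a toy simulation (a hypothetical sketch, not any real proposal or Tor code): honest nodes list the honest relay population, sybils list only sybils, and the client accepts whatever a majority of the peers it managed to reach agree on.

```python
import random

HONEST = [f"honest{i}" for i in range(1000)]  # real relay population
SYBILS = [f"sybil{i}" for i in range(50)]     # attacker-run relays

def listing_of(node):
    # Sybils vouch only for other sybils; honest nodes list honest relays.
    return SYBILS if node in SYBILS else HONEST

def fetch_directory(reachable):
    """Accept any relay listed by a strict majority of reachable peers."""
    votes = {}
    for node in reachable:
        for relay in listing_of(node):
            votes[relay] = votes.get(relay, 0) + 1
    return {r for r, v in votes.items() if v > len(reachable) / 2}

# Open network: mostly-honest peers reachable; the directory comes out honest.
open_view = fetch_directory(random.sample(HONEST, 20) + SYBILS[:5])
print(open_view == set(HONEST))        # True

# Censored network: the local adversary only permits its own nodes.
eclipsed_view = fetch_directory(SYBILS[:20])
print(eclipsed_view <= set(SYBILS))    # True: total eclipse
```

Signed consensus documents prevent exactly this: the local attacker can delay or withhold the real directory, but cannot substitute one of its own.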
Re: [tor-talk] Proposal for decentralization of the Tor network
On Mon, Nov 24, 2014 at 1:07 AM, wrote:
> I have carefully checked trac and the torproject.org website for proposals,
> seen many interesting ones but not a single one to decentralize the Tor
> network away from the directory authorities. There are many ways to
> accomplish this apparently, and it's the only way to guarantee full
> independence and anonymity.
>
> Are there even plans to make this change? Or does the current system, which
> offers full control to a few people, seem good enough to you?

It's far from clear to me that substantially stronger decentralization is practically possible for this application; at least not without additional assumptions and exposure to new and concerning attack vectors.

I think a better short-term goal would be improving review and auditability... which I think can be done. E.g. better tools for providing convincing evidence that the directory authorities are not misbehaving, additional protections against misbehaving, and better automatic handling should authorities misbehave. (E.g. moving authority signing into an HSM which at least enforces the constraint that only a single signature will be given for a particular time period, or the like; making proof of a misbehaving authority forever ban that authority; beyond a threshold of misbehaving authorities, shutting down the network until manually overridden; etc.)
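A minimal sketch of the single-signature-per-period constraint such an HSM would enforce (hypothetical interface, Ed25519 via the pyca/cryptography package; the point is that equivocating -- signing two different consensuses for one period -- is refused outright rather than merely detected after the fact):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SigningGuard:
    """Signs at most one distinct document per time period."""
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()  # never leaves the device
        self._signed = {}                         # period -> document bytes

    def sign_consensus(self, period: str, document: bytes) -> bytes:
        prior = self._signed.get(period)
        if prior is not None and prior != document:
            raise RuntimeError(f"equivocation refused for period {period}")
        self._signed[period] = document
        return self._key.sign(document)

hsm = SigningGuard()
hsm.sign_consensus("2014-11-24T01", b"consensus A")   # fine
hsm.sign_consensus("2014-11-24T01", b"consensus A")   # re-signing same doc: fine
try:
    hsm.sign_consensus("2014-11-24T01", b"consensus B")
except RuntimeError as e:
    print(e)   # the second, conflicting consensus is refused
```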
Re: [tor-talk] Bitcoin over Tor isn’t a good idea (Alex Biryukov / Ivan Pustogarov story)
On Mon, Oct 27, 2014 at 11:19 PM, Seth David Schoen wrote:
> First, the security of hidden services among other things relies on the
> difficulty of an 80-bit partial hash collision; even without any new
> mathematical insight, that isn't regarded by NIST as an adequate hash

So? 80 bits is superior to the zero bits of running over the open internet? (You should be imagining me wagging my finger at you for falling into the trap of seeming to advance not using cryptographic security at all because it's potentially not perfect.)

> service user and the hidden service operator is not as trustworthy in
> some ways as a modern TLS implementation would be.

Hah. Well, here modern TLS seems to be mostly a cluster@#$@ of complexity and resulting protocol and implementation failure. :) But that's neither here nor there, because it isn't actually a choice being offered.

> Second, a passive attacker might be able to distinguish Bitcoin from other
> protocols running over Tor by pure traffic analysis methods. If a new
> user were downloading the entire blockchain from scratch, there would
> be a very characteristic and predictable amount of data that that user
> downloads over Tor (namely, the current size of the entire blockchain --
> 23394 megabytes as of today).

Sure, though that's a one-time transfer common to all Bitcoin users, which the user may already have had most of previously, or obtained from some other source. At worst, that traffic has just identified you as someone who has started up a Bitcoin node.

> Not many files are exactly that size, so it's a fairly strong guess that
> that's what the user was downloading. Even submitting new transactions
> over hidden services might not be very similar to, say, web browsing,
> which is a more typical use of Tor. The amount of data sent when
> submitting transactions is comparatively tiny, while blockchain updates
> are comparatively large but aren't necessarily synchronized to occur
> immediately after transaction submissions. So maybe there's a distinctive
> statistical signature observable from the way that the Bitcoin client
> submits transactions over Tor. It would at least be worth studying
> whether this is so (especially because, if it is, someone who observes
> a particular Tor user apparently submitting a transaction could try to
> correlate that transaction with new transactions that the hidden services
> first appeared to become aware of right around the same time).

Bitcoin Core intentionally obscures the timing of its transaction relaying and batches with other transactions flowing through. It could do much better; the existing behavior was designed before we had good Tor integration and so didn't work as hard at traffic analysis resistance as it could have.

In some senses Bitcoin transaction propagation would be a near-ideal application for a high-latency privacy overlay on Tor, since transactions are small, relatively uniform in size, high value, frequent... and already pretty private, and so are unlikely to gather nuisance complaints like email remailers do.

> Third, to take a simpler version of the attacks proposed in the new
> paper, someone who _only_ uses Bitcoin peers that are all run by
> TheCthulhu is vulnerable to double-spending attacks, and even more
> devious attacks, by TheCthulhu. (You might say that TheCthulhu is [...]

Bitcoin has a fair degree of robustness against network sybils, and even if all your peers are a single malicious party, their ability to attack is gated by the several-thousand-dollar-per-block cost (and the risk that the receiver will realize something is wrong when it takes days to get six confirmations). (New client software comes with foreknowledge of the work in the real network, so you cannot even provide a replacement alternative history without doing enormous amounts of computation, e.g. 2^82 SHA-256 operations currently to replicate the history.)

More mechanisms to reduce sybil risk are important for onion peers and IPv6, where address scarcity is unavailable, and people have been experimenting with various ideas to address those and related concerns, e.g. https://bitcointalk.org/index.php?topic=310323.0 and https://en.bitcoin.it/wiki/Identity_protocol_v1, but the system already generally assumes that the peers are attackers.

> but that does at least
> undermine the decentralization typically claimed for Bitcoin because
> you have to trust a particular hidden service operator

As above, at least the "trusted" operator faces considerable costs to attack you... This is arguably a much stronger security model than using Tor in the first place, due to Tor's complete reliance on directory authorities: for all you know you're being given a cooked copy of the directory and are only selecting among compromised Tor nodes.

This is one of the reasons that some amount of work has gone into supporting multi-stack network configurations in Bitcoin, so that you can have peers on each of several separate transports.

> Using Bitcoin over T[...]
Re: [tor-talk] High-latency hidden services (was: Re: Secure Hidden Service (was: Re: ... Illegal Activity As A Metric ...))
On Sun, Jun 29, 2014 at 5:58 PM, Seth David Schoen wrote:
> I wonder if there's a way to retrofit high-latency hidden services
> onto Tor -- much as Pond does, but for applications other than Pond's
> messaging application.
[...]
> Then a question is whether users would want to use a service that takes,
> say, several hours to act on or answer their queries (and whether the
> amount of padding data required to thwart end-to-end traffic analysis
> is acceptable).

If such a facility existed, e.g. "mailbox delivery to a hidden service", we would use it in Bitcoin, at least optionally, for broadcasting new transactions (we already use hidden services). Many seconds of delay are basically always acceptable for transaction broadcasts, and privacy-conscious users would probably not mind hours in a reliable system, at least if they could reduce the time when they wanted to.

Designing this well may be tricky. To prevent DoS attacks (e.g. I send you a gigabyte of messages and the network dutifully keeps delivering them to you) you may want to have the recipient pre-approve incoming traffic... but if that would allow linking otherwise independent messages from users, it would break some applications. This is fixable, e.g. connect once to get the recipient to blind-sign a bunch of return envelopes with the current HS key... then use them to authorize delivery (sketched below), but that's getting into non-trivial cryptographic design.

Have you seen http://freehaven.net/anonbib/cache/alpha-mixing:pet2006.pdf ? Part of the argument is that having a store of high-latency messages to send through can improve privacy for everyone (e.g. by padding out links with useful traffic to make passive traffic analysis harder on the realtime flows).

> One important problem is what counts as "thwarting end-to-end traffic
> analysis". Right now with Pond, the goal is to prevent anyone from
> knowing which Pond users communicate with which other Pond users and
> when, not necessarily prevent them from knowing who is a Pond user.

Nor does it prevent the Pond server operators from learning general traffic matrix data about their pseudonymous users... e.g. it doesn't use PIR techniques to check the mailbox. Bitmessage, which uses flooding (and thus can be seen as the simplest kind of PIR), could have complete privacy in the receive direction (but doesn't right now due to braindead protocol design that lets you send parties messages that they'll automatically respond to), but in the send direction users are fairly vulnerable to traffic analysis inherently. So both Pond and Bitmessage— systems which have traffic analysis resistance— would still be improved by this kind of tool.

> Satisfying this property might have implausibly high padding requirements.

If the recipient of messages has a way to rate-limit them, he should be able to choose to not allow traffic levels that would be conspicuous (e.g. out of line with the level of padding that they/the network can support).
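To illustrate the return-envelope idea, a sketch of textbook RSA blind signatures with a deliberately tiny, insecure toy key (hypothetical, not a worked-out protocol): the recipient blind-signs envelope tokens once, and later honors deliveries bearing a valid unblinded token, without being able to link a token back to the session that obtained it.

```python
import secrets
from math import gcd

# Toy RSA key (insecure size, illustration only).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

# Sender: pick a random envelope token and blind it.
token = secrets.randbelow(n - 2) + 2
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# Recipient: signs the blinded value, learning nothing about `token`.
blind_sig = pow(blinded, d, n)

# Sender: unblind; (token, sig) is now an unlinkable delivery credential.
sig = (blind_sig * pow(r, -1, n)) % n

# Recipient's check when a message later arrives bearing the envelope:
assert pow(sig, e, n) == token
print("delivery authorized")
```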
Re: [tor-talk] Satori (this crazy app thing I've been working on)
On Sun, May 4, 2014 at 5:14 PM, Griffin Boyce wrote:
> Hey all,
>
> So Satori is this app for Google Chrome that distributes circumvention
> software in a difficult-to-block way and makes it easy for users to check if
> it's been tampered with in-transit.

You might be interested in some of the ideas that have been floating around in Bitcoin land about better tools for distributing software updates. I've collected the ones I think are most important here:

https://en.bitcoin.it/wiki/User:Gmaxwell/update_checking_requirements

Note that it's not about automatic updates, it's about automatic update staging— the user stays in control... but the goal is to advance the art so that users aren't just pulling updates from some website in a way that any MITM could too easily compromise, without introducing centralized gate-keeping either.

I think some of these ideas might be pretty important when distributing software specifically to "interesting targets"— e.g. it would pay pretty good dividends to rubber-hose the guy who can issue updates to a bunch of activists, so for both the users' and the operators' safety something more robust ought to be done.
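One idea in that vein, sketched (a hypothetical 2-of-3 maintainer-signature check using Ed25519 from pyca/cryptography; not Satori's actual mechanism, and not any item from the linked list verbatim): an update is only staged once a threshold of independent signers have approved it, so rubber-hosing any single release manager isn't enough.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

maintainers = [Ed25519PrivateKey.generate() for _ in range(3)]
trusted_pubkeys = [m.public_key() for m in maintainers]

def stage_ok(update: bytes, sigs, pubkeys, threshold=2) -> bool:
    """Stage the update only if `threshold` distinct trusted keys signed it."""
    signers = set()
    for sig in sigs:
        for i, pk in enumerate(pubkeys):
            try:
                pk.verify(sig, update)
                signers.add(i)       # count each trusted key at most once
                break
            except InvalidSignature:
                continue
    return len(signers) >= threshold

update = b"update-2.0.tar.gz contents"
sigs = [m.sign(update) for m in maintainers[:2]]    # 2 of 3 approve
print(stage_ok(update, sigs, trusted_pubkeys))      # True: stage it
print(stage_ok(update, sigs[:1], trusted_pubkeys))  # False: refuse
```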
Re: [tor-talk] Improved HS key management
On Sat, Dec 28, 2013 at 1:15 PM, grarpamp wrote:
> On Sat, Dec 28, 2013 at 6:46 AM, Gregory Maxwell wrote:
>> One of the current unfortunate properties of hidden services is that
>> the identity of the hidden service is its public key (or the
>
>> This is pretty bad for prudent key management— the key is very high
>> value because it's difficult to change, and then stuck always online
>
> It's not difficult to change, you just change it.
> I'm pretty sure there's a ticket open involving most of this key
> management stuff, you could add any missing concepts to it.
> It's been on the list before too. And there's a second gen draft
> proposal on tor-dev/torspec.

It absolutely is difficult to change— you can only "just change it" if no one uses it. Otherwise you're chasing people to change addresses on websites and in software, and the static addresses in people's bookmarks are vulnerabilities— both if the key falls into an attacker's hands, but also because if users become used to the URL just changing, they'll believe it when an attacker DoS-attacks the URL while publishing a new one. Copies of the old name lurk around for years hitting unsuspecting people, etc.

Sure, it's not the end of the world. Life goes on, and even with good key management possible, many won't use it.
[tor-talk] Vanity onion attacks
With the advent of super-fast onion address generators it's become not too uncommon for hidden services to use vanity addresses, but this seems to have brought about some vanity attacks, where people grind out lookalike addresses to set up fake sites. People then do a poor job of visually comparing them, as vanity naming practically invites.

I've heard from some people who were tricked by these, but I don't have any idea how common it is in general. I wonder if anyone is enumerating hidden services and can post some stats on how many low edit-distance names they're seeing in the directory?
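Such a survey could be as simple as pairwise edit distance over enumerated addresses; a minimal sketch (the addresses below are made up for illustration, not real onions):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical enumerated directory (16-char v2-style names, made up).
directory = ["marketxyzvb5pizr", "marketxyzvb5pizx",
             "tormailq6xj3a2b7", "blogabcdef123456"]

for i, a in enumerate(directory):
    for b in directory[i + 1:]:
        d = levenshtein(a, b)
        if d <= 2:
            print(f"{a} ~ {b}: edit distance {d} -- possible lookalike pair")
```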
[tor-talk] Improved HS key management
One of the current unfortunate properties of hidden services is that the identity of the hidden service is its public key (or the equivalent hash, in the current setup), and this key must always be available for signing on an online host (usually the HS itself, though potentially on a bastion host).

This is pretty bad for prudent key management— the key is very high value because it's difficult to change, and it is then stuck always online, constantly being signed with— even on demand by a hostile attacker. The matter is made even worse by there being no systematized mechanism for revocation.

It would be preferable if it were possible to have a HS master key which was kept _offline_ and which could be used to authorize use for some time period and/or revoke usage. The offline key could be used to create an online key which is good for a year or until superseded by a higher sequence number, and every 6 months the online key could be replaced. Thus if an old copy of the HS media were discovered it couldn't be used to impersonate the site.

Sadly the homomorphism proposed to prevent HSDIR enumeration attacks cannot be used to accomplish this, as knowledge of the ephemeral private key and the public blinding factor yields the original private key (spelled out below).

I can describe a scheme to address this but I'm surprised to not find any discussion of it.
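To spell out why the blinding homomorphism can't serve as delegation (standard key-blinding notation assumed, not taken from any specific proposal text): with base point $B$, group order $\ell$, master secret $a$, and a blinding factor $t$ that is by construction derivable by anyone who knows the address,

```latex
\[
  a' \equiv t \cdot a \pmod{\ell}, \qquad A' = [a']B = [t]A,
\]
% so anyone who compromises the online host and learns the ephemeral
% secret a' simply computes
\[
  a \equiv a' \cdot t^{-1} \pmod{\ell}
\]
% and holds the master secret itself.
```

The blinded key is therefore not a delegated key at all; real delegation needs a separate keypair plus a certificate from the offline master key.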
Re: [tor-talk] x.509 for hidden services
On Sun, Oct 27, 2013 at 1:08 PM, Andrea Shepard wrote:
> For defense in depth on the HS side, it's best to run the HS Tor on a
> different machine, or at least a different VM, than the HS server, so
> that if the HS server software is owned, the HS private key isn't
> compromised. The setup you describe would prevent that configuration;
> consider allowing, in addition to self-signed certs, a certificate chain
> where the root is a CA certificate matching the HS key in the manner you
> describe and signing a certificate using a different key for the service
> to use.

Fantastic point, and just as easily done. As obvious as it is, I'd forgotten about people keeping the HS keys on separate hosts.

It also raises the point that perhaps future Tor HS should also support delegation, so that the HS master identity key could be kept offline. E.g. you have a HS identity key, and it delegates to a short-term HS key which has a lifetime of only 1 month, and perhaps has some kind of priority scheme such that a key with a higher sequence number takes precedence. E.g. if someone compromises your key you can instantly throw up a new service which people will connect to instead... If your HS (bastion) host is compromised you wouldn't completely lose control of your HS identity.

It might even be useful to pre-define a maximum sequence number such that an announcement with that sequence number blocks access. So if your site is compromised you can announce a pre-signed HS revocation which forever kills the address, so long as someone keeps periodically rebroadcasting it to RPs. (A sketch of this logic follows below.)

> As for the migration to elliptic curves, I think the most serious problem
> you'll encounter is that the curve we end up using may not be one that has
> a standardized OID or is widely supported in X.509 implementations - e.g.,
> Curve25519.

Yeah, I'm not too worried about that. If the Tor usage becomes very common we could simply extend the protocol with a Tor-specific extension that supports them. My thinking here is that at least for today I can do something to make existing HS identities work with very little effort.
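A sketch of the delegation-plus-revocation logic described above (hypothetical certificate format, Ed25519 via pyca/cryptography; nothing here is an actual Tor design): the offline master signs (sequence, expiry, online-pubkey) records, clients honor the highest valid sequence number, and a reserved maximum sequence acts as a pre-signable kill switch.

```python
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

MAX_SEQ = 2**64 - 1   # announcing this sequence permanently kills the address

def delegate(master, online_pub: bytes, seq: int, lifetime_s: int) -> bytes:
    body = (seq.to_bytes(8, "big")
            + int(time.time() + lifetime_s).to_bytes(8, "big")
            + online_pub)
    return body + master.sign(body)          # Ed25519 sigs are 64 bytes

def current_delegation(master_pub, announcements):
    """Keep the valid announcement with the highest sequence number."""
    best = None
    for ann in announcements:
        body, sig = ann[:-64], ann[-64:]
        try:
            master_pub.verify(sig, body)
        except InvalidSignature:
            continue
        seq = int.from_bytes(body[:8], "big")
        if best is None or seq > best[0]:
            best = (seq, body)
    if best and best[0] == MAX_SEQ:
        raise RuntimeError("address revoked by its master key")
    return best[1] if best else None

master = Ed25519PrivateKey.generate()        # kept offline
online = Ed25519PrivateKey.generate()        # lives on the (bastion) host
pub = online.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
anns = [delegate(master, pub, seq=1, lifetime_s=30 * 24 * 3600)]
print(current_delegation(master.public_key(), anns)[:8].hex())   # seq 1 wins
```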
Re: [tor-talk] x.509 for hidden services
On Sat, Oct 26, 2013 at 12:57 AM, grarpamp wrote:
>> I believe torchat does this
>
> IIRC, torchat is just doing a bidirectional secret passing
> pingpong between clients behind the HS addresses, no
> actual x509 stuff. There's a good paper on it.

Link please. :)

At least in one (early) version it needed to access the HS keys so it could sign with them and identify itself on outgoing connections. I didn't mean to imply it used x.509, but rather just that something else had used a HS identity key for some application-level auth.
[tor-talk] x.509 for hidden services
==Background== (you can skip to the Tor section if you don't care)

The Bitcoin universe is in the process of creating a specification for digital invoices called "the Bitcoin payment protocol". (More info: https://bitcointalk.org/index.php?topic=300809.msg3225143#msg3225143)

The payment protocol allows someone to request that someone else pay Bitcoin for specific things, instructing them to pay specific amounts in specific ways, and allows the receiver to provide things like instructions for sending a refund if the transaction is for some reason aborted... all sorts of extra metadata which doesn't belong in the public Bitcoin network for scalability and privacy reasons.

One of the things these invoices have is an optional signing mechanism for authentication and non-repudiation. Normally these requests would be sent over an authenticated and encrypted channel which provides confidentiality and authentication, but not non-repudiation. The non-repudiation will provide cryptographic evidence to the participants which can be used to resolve disputes: e.g. "He didn't send me my alpaca socks!" "That's not the address I told you to pay!" "He told me he'd send my 99 red balloons, not just one!" "No way, that was the price for 1 red balloon!"

The payment protocol is extensible and may someday commonly support many kinds of signatures, but the initial implementations only support signing with internal x.509 certificates and verifying those certificates with standard CAs. As _horrible_ as this is, it's better than nothing, and the primary users asking for this functionality have SSL websites today. We don't believe that any other PKI mechanism is actually functional enough to be usable today (e.g. as evidenced by the fact that downloads of our GPG signatures are on the order of 1% of the downloads of the Bitcoin software, and probably only a small portion of those users have actually done anything to verify the signing keys), so other options haven't been a priority.

However, the need to use the known-insecure CA infrastructure for this (optional!) feature has seriously spazzed out some people. A lot of this is pure confusion, e.g. people thinking that all payment requests would have to go via the CA (no kidding!), but it's surprisingly hard to convince people who are responding emotionally of the subtle tradeoffs involved, especially when they have the luxury of saying "it's your problem to go figure out, figure it out and go write a bunch of extra software for me". So having some alternative on day one would be useful in helping the more conspiracy-minded understand that this isn't some effort to cram the use of CAs down their throat.

==Where Tor comes in==

One of the downsides of using x.509 certs for non-repudiation here is that sites hosted as Tor hidden services can't participate.

It occurred to me that this could be fixed with very little code: Take the HS pubkey, pack it into a self-signed x.509 cert with cn=pubkeybase32.onion, and specify that .onion certs have a special hostname validation rule that hashes and encodes the key. Then the whole process would work for .onion, we'd have a non-CA option available and working, etc.

I'm aware that HS pubkeys have been used for application-level authentication in Tor elsewhere (e.g. I believe torchat does this) so it's not entirely unprecedented. I'm not aware of anyone packing them in x.509 certificates. If anyone has, I'd like to use the same encoding style for greater compatibility.
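A minimal sketch of the proposed packing (assumptions: a v2-style 1024-bit RSA HS key, the usual derivation of the onion name as base32 of the first 80 bits of the SHA-1 of the PKCS#1-encoded public key, and the pyca/cryptography library):

```python
import base64, datetime, hashlib
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

hs_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)

# Derive the .onion name from the key the way Tor does for v2 services.
der = hs_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.PKCS1)
onion = base64.b32encode(hashlib.sha1(der).digest()[:10]).decode().lower() + ".onion"

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, onion)])
now = datetime.datetime.utcnow()
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)        # self-signed
        .public_key(hs_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(hs_key, hashes.SHA256()))

# The special validation rule: recompute the onion name from the cert's
# own public key and require it to match the CN -- no CA involved.
cert_der = cert.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.PKCS1)
derived = base64.b32encode(hashlib.sha1(cert_der).digest()[:10]).decode().lower() + ".onion"
print(derived == onion)   # True
```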
The biggest reason I can see not to do this is that it will not be compatible with future editions of hidden services which aren't based on RSA keys (e.g. the EC point-addition stuff to protect against enumeration attacks wouldn't fit into this model). I don't think this is a serious concern: if HS x.509s do become widely used, we could add a new authentication type for the new onion addresses when those are introduced.

Does anyone else see any other reasons not to do this? Are there other applications which would benefit from having x.509 certs for onion names?
Re: [tor-talk] NIST approved crypto in Tor?
On Sat, Sep 7, 2013 at 8:09 PM, Gregory Maxwell wrote:
> On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward wrote:
>> Bruce Schneier recommends *not* to use ECC. It is safe to assume he
>> knows what he says.
>
> I believe Schneier was being careless there. The ECC parameter sets
> commonly used on the internet (the NIST P-xxxr ones) were chosen using
> a published deterministically randomized procedure. I think the
> notion that these parameters could have been maliciously selected is a
> remarkable claim which demands remarkable evidence.

Okay, I need to eat my words here.

I went to review the deterministic procedure because I wanted to see if I could reproduce the SECP256k1 curve we use in Bitcoin. They don't give a procedure for the Koblitz curves, but they have far less design freedom than the non-Koblitz ones, so I thought perhaps I'd stumble into it with the "most obvious" procedure.

The deterministic procedure basically computes SHA1 on some seed and uses it to assign the parameters, then checks the curve order, etc... wash, rinse, repeat.

Then I looked at the random seed values for the P-xxxr curves. For example, P-256r's seed is c49d360886e704936a6678e1139d26b7819f7e90. _No_ justification is given for that value. The stated purpose of the "verifiably random" procedure is that it "ensures that the parameters cannot be predetermined. The parameters are therefore extremely unlikely to be susceptible to future special-purpose attacks, and no trapdoors can have been placed in the parameters during their generation".

Considering the stated purpose, I would have expected the seed to be some small value like... "6F", and for all smaller values to fail the test (illustrated below). Anything else suggests that they tested a large number of values, and thus the parameters could embody any undisclosed mathematical characteristic whose rareness is only bounded by how many times they could run SHA1 and test.

I now personally consider this to be smoking evidence that the parameters are cooked. Maybe they were only cooked in ways that make them stronger? Maybe.

SECG also makes a somewhat curious remark:

"The elliptic curve domain parameters over (primes) supplied at each security level typically consist of examples of two different types of parameters — one type being parameters associated with a Koblitz curve and the other type being parameters chosen verifiably at random — although only verifiably random parameters are supplied at export strength and at extremely high strength."

The fact that only "verifiably random" parameters are given at export strength would seem to make more sense if you cynically read "verifiably random" as "backdoored to all heck". (Though it could be more innocently explained: the performance improvement of Koblitz curves wasn't so important there, and/or they considered those curves weak enough not to bother with the extra effort required to produce the Koblitz curves.)
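A toy illustration of that expectation (emphatically not the actual X9.62 routine, whose details differ; a made-up acceptance test stands in for the real curve checks): in a genuinely nothing-up-my-sleeve derivation, the published seed would be the first counter value that passes, so every smaller value auditably fails.

```python
import hashlib

def candidate_param(seed: int) -> int:
    # Derive a candidate parameter deterministically from the seed.
    return int.from_bytes(hashlib.sha1(seed.to_bytes(20, "big")).digest(), "big")

def acceptable(c: int) -> bool:
    # Stand-in for the real acceptance tests (prime order, cofactor,
    # embedding degree, ...), tuned to pass roughly 1 in 1000 tries.
    return c % 1009 == 0

seed = 0
while not acceptable(candidate_param(seed)):
    seed += 1
print(f"first passing seed: {hex(seed)}")
```

A seed published that way leaves no room to grind for curves with some secret rare property; an arbitrary, unexplained 160-bit seed leaves all the room in the world.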
Re: [tor-talk] NIST approved crypto in Tor?
On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward wrote:
> Bruce Schneier recommends *not* to use ECC. It is safe to assume he
> knows what he says.

I believe Schneier was being careless there. The ECC parameter sets commonly used on the internet (the NIST P-xxxr ones) were chosen using a published deterministically randomized procedure. I think the notion that these parameters could have been maliciously selected is a remarkable claim which demands remarkable evidence.
Re: [tor-talk] HS drop
On Sun, Aug 11, 2013 at 5:20 PM, Griffin Boyce wrote:
> And if you spider them based on links and onion search engines, you
> can get a decent idea of active hidden services. But I'd still like to

No need to do this. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
Re: [tor-talk] HS drop
On Sun, Aug 11, 2013 at 2:53 PM, mirimir wrote:
> Have you accumulated a list of all hidden services using spiders etc?
> The address space of all possible hidden services (36! = 3.72e+41) is
> far^N too large to scan, right? ;)

Unfortunately, due to mild design limitations in hidden services, you don't need to scan the 80-bit HS space in order to monitor which hidden services are active.
Re: [tor-talk] Is Tor still valid?
On Mon, Aug 5, 2013 at 11:41 PM, intrigeri wrote:
> mirimir wrote (06 Aug 2013 05:46:37 GMT) :
>> If this exploit had included a Linux component, Tails would not have
>> protected you.
>
> I've not studied the attack code but this appears to be mostly
> correct.

I believe it would have had to also include a local privilege escalation exploit and Tails-specific code to do the bypass. This is basically the threat model that Whonix's isolation is intended to address; it would be good to see Tails improve with respect to this.
Re: [tor-talk] BitMail 0.1 - p2p Email
On Tue, Jul 30, 2013 at 6:07 AM, krishna e bera wrote:
> On 13-07-30 12:47 AM, Thomas Asta wrote:
>> http://bitmail.sourceforge.net/
>
> No design, no specs, no discussion, no docs.
> A feature list that looks remarkably like GoldBug,

And source code that looks remarkably like GoldBug. It is also being promoted on various lists for crypto/privacy-minded people by parties who seem to be pretending that they just "found" it and are curious about it.
Re: [tor-talk] Network diversity [was: Should I warn against Tor?]
On Fri, Jul 19, 2013 at 8:35 AM, Jens Lechtenboerger wrote:
> [For those who are confused about the context of this: I started the
> original thread. A write-up for my motivation is available at [0].]
>
> Links to my code and a README.txt clarifying necessary prerequisites are
> available at [0].
>
> Best wishes
> Jens
>
> [0] https://blogs.fsfe.org/jens.lechtenboerger/2013/07/19/how-i-select-tor-guard-nodes-under-global-surveillance/

It's _very_ hard to reason about this subject and act safely. It is common for ISPs to use segments in their network which are provided by third-party providers; even providers who are almost entirely facilities-based will have some holes or redundancy gaps. Because these are L1 (wave) and L2 (e.g. ethernet transport) services, they are utterly invisible from the L3 topology.

You can make some guesses which are probably harmless: a guard that is across the ocean is much more likely to take you across a compromised path than one closer—but going much further than that may well decrease your security.

These concerns should be reminding us of the importance of high-latency mix networks... they're the only way to start getting any real confidence against a global passive observer, and they are mostly a missing item in our privacy toolbelt.
Re: [tor-talk] Network diversity [was: Should I warn against Tor?]
On Fri, Jul 19, 2013 at 9:45 AM, adrelanos wrote:
> Seems like high latency mix networks failed already in practice. [1]
>
> Can't we somehow get confidence even against a global active adversary
> for low latency networks? Someone start a funding campaign?

So have low-latency ones; some things fail. Today you'd answer that concern by running your high-latency mix network over Tor (or integrated into Tor), so it cannot be worse. That answers the "you need users first, and low-latency networks are easier to get users for" concern.

The point remains that if you're assuming a (near) global adversary doing timing attacks, you cannot resist them effectively using a low-latency network. Once you've taken that as your threat model you can wax all you want about how low-latency networks get more users and so on... it's irrelevant, because they're really not secure against that threat model. (Not that high-latency ones are automatically secure either— but they have a fighting chance.)

On Fri, Jul 19, 2013 at 10:03 AM, Jens Lechtenboerger wrote:
>> but going much further than that may well decrease your security.
>
> How, actually? I’m aware that what I’m doing is a departure from
> network diversity to obtain anonymity. I’m excluding what I
> consider unsafe based on my current understanding. It might be that
> in the end I’ll be unable to find anything that does not look unsafe
> to me. I don’t know what then.

Because you're lowering the entropy of the nodes you are selecting: maybe all the hosts themselves are simply NSA-operated, or if not now, they'd be a smaller target to compromise. Maybe it actually turns out that they all use a metro fiber provider in Munich which is owned by an NSA shell company.

In Germany this may not be much of a risk. But if your logic is applied to someplace that is less of a hotbed of Tor usage, it wouldn't be too shocking if all the nodes there were run by some foreign intelligence agency.
Re: [tor-talk] CloudFlare
On Thu, Apr 18, 2013 at 2:57 PM, Jacob Appelbaum wrote:
> It is possible to request a special flag on a Wikipedia account that is
> granted by way of some special handshake. It is possible to take an
> already created account and use it for edits as the flag overrides the
> Tor block. The flag is called ipblock-exempt

You can see the list of users on English Wikipedia that have it here:
http://en.wikipedia.org/w/index.php?title=Special%3AListUsers&username=&group=ipblock-exempt&limit=500
(bot accounts and administrators also inherit this ability without the ipblock-exempt flag)

(As an aside, your own account was previously flagged this way (by Wikimedia's chairman of the board), but the flag has since been removed because your account has been inactive:
http://en.wikipedia.org/w/index.php?title=Special%3ALog&type=&user=&page=User%3AIoerror&year=&month=-1&tagfilter= )

[snip]

> I think we should ensure that Wikipedia understands that the account was
> created with Tor and that the user may be using this to circumvent
> censorship, to protect what they are reading or editing from their local
> network censors or surveillance regime as well as to protect IP address
> information that the US currently doesn't really protect (see USA vs.
> Appelbaum; re: my Twitter case). Since the US can see a lot of the
> traffic to Wikipedia, I'd guess that this is important worldwide.

I've been generally unable to convince people that surveillance of Wikipedia access is both happening and actually important.

The people participating in the creation and administration of Wikipedia (and likewise those employed by the Wikimedia Foundation) enjoy the privilege of having the greatest intellectual freedom that has ever been enjoyed by anyone anywhere. This is unsurprising: people without substantial freedom of all kinds are not the most likely to go about assembling a Free Encyclopedia. Like any other privilege, it's not always obvious to the beholder. The idea that someone's Wikipedia editing (or, much less, _reading_) habits might be highly private and personal, and likely to cause harm if monitored, isn't really appreciated by people who find that kind of monitoring hard to believe (even, ironically, when it's currently happening to them— the illusion of intellectual freedom is greater than the actual intellectual freedom).

I was unsuccessful, in the last major datacenter reworking, in convincing the technical staff to adopt an architecture which could reasonably scale to supporting always-on SSL for all readers (one where SSL wasn't handled by a separate cluster but was instead run in parallel on the existing non-SSL frontends). Unfortunately, I think it will probably take someone being killed, for reasons considered unjust by western standards, before the considerable expenditure necessary to HSTS the entire site will be justified. Pressure on this front needs to come from activists, not from technology people.

> A workable solution would be to continue to use such a list to detect
> Tor usage and then to ensure that we now allow new accounts to be
> created over Tor. The MediaWiki should ensure that HSTS is sent to the
> user and that the user only ever uses HTTPS to connect to Wikipedia.

Account creation via Tor is explicitly and intentionally disabled.
> If the user is abusive and an IP block would normally apply, Wikipedia
> would not block by IP but would rather use the normal Wikipedia process
> to resolve disputes (in edits, discussions, etc)

The blocking of Tor (and other IP) addresses is never intended to be a part of the regular "disagreeable behavior from otherwise well-meaning and sane contributors" process. It doesn't aid in that process. In theory, blocking is really only a measure against people who are malicious or (temporarily?) mentally ill. Wikipedia will try to reason you out of doing something, and if that fails, _tell_ you to stop doing something, and only then block you if you don't listen.

> and if the account is
> just being used for automated jerk behavior, I think it would be
> reasonable to lock the account, perhaps even forcing the user to solve a
> captcha, or whatever other process is used when accounts are abused in
> an automated fashion.

Mostly the really automated behavior is not that huge of an issue— the thousands of wiki administrators have access to sophisticated automated behavioral blocking tools (I think the rule expression language in AbuseFilter is Turing-complete), account creation requires solving a captcha... and marketers have discovered that spamming Wikipedia can have certain unexpected negative effects once caught (like completely disappearing from search engine indexes), so only idiot marketers spam overtly.

But what is an issue is _non-automated_ or semi-automated jerk behavior. A single bored kid or irate mentally ill person can easily fully saturate the time of ten or more Wikipedia volunteer editors with a barrage of fake identities making su[...]
Re: [tor-talk] CloudFlare
On Thu, Apr 18, 2013 at 2:51 PM, grarpamp wrote:
> Though sure, I do suggest and accept that Tor may present a
> different *class* of abuse than other categories of abusable
> IP's.

Tor exits were not banned prior to their use for abuse. At the point automated exitlist banning was deployed, a substantial portion were already manually blocked. (Which had the three-way bad effect of not completely blocking the trolls, while blocking most use by non-free users, while also blocking ex-exits and punishing people for even trying out being an exit.)

There is no particular blocking-efficiency gain that comes from using exitlists relative to other kinds of abuse sources. The site can and does block /16's all by itself.
( http://en.wikipedia.org/wiki/Special:BlockList?wpTarget=&wpOptions[]=addressblocks&limit=5000 )

>> not have a high deployment or operating cost
>
> I think cost is largely what they think about. Just a... 'Really? You mean we
> can turn a flag and whack 2^8 at zero cost, sweet, we just eliminated a
> help desk drone's worth of salary from our costs'.

That's pretty cold. Your approach is why the Tor community will make absolutely no progress on this subject. Telling me that you don't think the problem is imaginary doesn't help when everything else you say shows that you believe it is. You might think you're being only slightly insensitive to other people's needs, but I am here to tell you, as someone inside both communities, that you are coming off as a clueless jerk.

This is actually hard and it involves real trade-offs. This attitude of "oh, it's easy and you're just being a reactionary" is embarrassing to people who know better... and to people who care less about enabling access than I do, it's so completely misguided that it will just get you ignored.

> Nyms wouldn't be usable by legitimate anons unless they are
> free from linkable properties.

I suggest you familiarize yourself with the previously proposed solutions before responding.

[snip]

> On the other hand, a little development cost by a site can put up some
> pretty big walls against abuse in the form of time delayed accounts,
> captchas now and then, good filters on your i/o, etc. And often cost
> less than whatever service you pay to keep you 'safe'.

The purpose of any anti-badness system must be to distinguish between good and bad users. Things like time delays actually select for _bad_ users: good users are unlikely to tolerate the delay, while bad users can just pipeline to hide the latency. That things like this reduce badness at all is only an artifact of the fact that they reduce everything.

> And honestly, if you're so fucking tight that you can't pony up for
> a proper abuse desk, then both your business model and you
> should expect failure.

I'm not sure where to begin here. All I can say is that if the Tor community allows people to approach this issue with this kind of response, it "should expect failure".
Re: [tor-talk] CloudFlare
On Thu, Apr 18, 2013 at 1:01 PM, Matthew Finkel wrote:
> Wikimedia is actually willing to discuss an alternative setup if a
> usable one is found. Their current implementation is not really
> acceptable, but there also isn't really a working/implemented alternative
> solution, at this point (and it's not exactly at the top of their list
> to implement their own).

It's the same old story: there are persistent, highly annoying trouble-makers— not even many of them— who are effectively deterred by blocking whatever proxies they use. Eventually they hit Tor, and thus Tor must be blocked from editing. This abuse isn't imaginary.

The various magical nymtoken ideas would probably be acceptable— they just need to make it so that an unbounded supply of identities is not any cheaper than it already is— but they need to be implemented, and not have a high deployment or operating cost.

There are some people who hold the position that the instant doubling of identities (with and without Tor) that attackers would get is not acceptable, but with things like http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2013-04-08/News_and_notes and Tor's effectiveness at evading censorship, I expect that most can be convinced that it's worth it.

Harder is the fact that English Wikipedia (and many other larger Wikipedias) blocks most data centers and VPS services with large rangeblocks, as they get used as account multipliers by socks, and an obvious nym implementation would partially defeat that.
Re: [tor-talk] Deterministic Builds - was: Bridge Communities?
On Sat, Apr 13, 2013 at 8:44 PM, adrelanos wrote:
> I assume you're the Gregory Disney who is also one builder of those
> Bitcoin deterministic builds? Since you're involved in Tor as well, it
> seems to me you could be a great help by providing some information
> about the Bitcoin build process.

There is no Gregory Disney involved with Bitcoin as far as I know.

> Where are the instructions how I (or someone else) not involved in
> Bitcoin development can produce bit-identical builds of Bitcoin to match
> the hash sums which are also distributed on sourceforge? If there are
> none, could you provide them please?

They're included with the source:
https://github.com/bitcoin/bitcoin/blob/master/doc/release-process.txt
and
https://github.com/bitcoin/bitcoin/tree/master/contrib/gitian-descriptors

> Can their system be applied for Tor as well or are there any differences?

Yes. It may take a little jiggling to get the builds to actually be deterministic for any particular package, but the tools should be applicable to anything.
Re: [tor-talk] NSA supercomputer
On Sun, Apr 7, 2013 at 4:31 PM, Mike Perry wrote:
> However, it would be interesting to have some benchmarks for high-bit
> ECC implementations. It seems to me they should still be faster than
> modular exponentiation at the same bitwidth, no?

For signing, if you are willing to have large amounts of data (and you can almost always move public key bytes into the signature by making the "public key" a hash of the real public key):

(1) You can use Merkle signatures, which have stronger security properties than the common asymmetric schemes (simply because those schemes already all use a hash function in a way where a second pre-image is a complete break of the signature). They're also stupid fast, and as a class are generally secure against hypothetical quantum computers.

and/or

(2) You could use multiple schemes, e.g. RSA && Ed25519 && Merkle && lattice, such that the composition is no less secure than the strongest component... and even if all of the schemes can be attacked, the cost of building the distinct attacks may be prohibitive.
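For a feel of why hash-based signing is "stupid fast" and rests only on the hash, here is a minimal Lamport one-time signature (the textbook construction; each key pair may sign exactly one message -- Merkle trees are what turn a pile of these into a many-time key):

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg: bytes, sk):
    # Reveal one preimage per bit of the message digest.
    digest = int.from_bytes(H(msg), "big")
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(msg: bytes, sig, pk) -> bool:
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(b"hello tor-talk", sig_key := None) if False else sign(b"hello tor-talk", sk)
print(verify(b"hello tor-talk", sig, pk))   # True
print(verify(b"tampered", sig, pk))         # False
```

Forging a signature on a new message requires finding a preimage (or second preimage) for at least one of the published hashes, which is exactly the assumption the common asymmetric schemes already lean on.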
Re: [tor-talk] NSA supercomputer
On Fri, Apr 5, 2013 at 6:51 AM, Andrew F wrote:
> I would love to see an analysis of a 128 bit AES encryption VS a 10 exaflop
> computer. How long to crack it? Anyone got the math on this?
[...]
> So what does this mean? Any article that suggest that brute forcing
> present day encryption is not possible should be taken with a grain of
> salt. While the article may be correct today, come September 2012, Utah
[...]

You really should take just a _moment_ to do a little figuring before posting to a public list and consuming the time of hundreds or thousands of people.

Let's assume that decrypting with a key and checking the result is one "floating point operation" (since you're asking us to reason about apples and oranges, I'll just grant you that one apple stands for all the required oranges). To search a 128-bit keyspace on a classical computer you would expect that on average the solution will be found in 2^127 operations.

2^127 'flops' / 10 exaflops = 2^127 flops / 10*10^18 flops/second = 17014118346046923173 seconds = 539,152,256,819 years.

...Or, about 39x the currently believed age of the universe.

Surely with a lot of computing power there are many very interesting attacks— particularly in the domain of traffic analysis, weak user-provided keys, discovering new faster-than-brute-force attacks, etc. But to suggest that they're going to classically brute-force a 128-bit block cipher is laughable, even with very generous thinking. Honestly, these other things are arguably far more worrisome, but they're all just handwaving... which is all any of this discussion is...
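Writing the arithmetic out in full (one "flop" per trial decryption granted as above; roughly 3.156 x 10^7 seconds per year):

```latex
\[
  \frac{2^{127}\ \text{trials}}{10^{19}\ \text{trials/s}}
  \approx \frac{1.7014 \times 10^{38}}{10^{19}}\ \text{s}
  = 1.7014 \times 10^{19}\ \text{s}
  \approx 5.39 \times 10^{11}\ \text{years}
  \approx 39 \times \bigl(1.38 \times 10^{10}\ \text{years}\bigr).
\]
```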
[tor-talk] Reduced latency transport for TOR
This work could be _very_ productive for future transport for TOR:

https://www.usenix.org/conference/nsdi12/minion-unordered-delivery-wire-compatible-tcp-and-tls

As opposed to a raw datagram transport, it still gets through the firewalls and NATs that TCP/TLS does, and still looks like HTTPS to censorware.
Re: [tor-talk] registration for youtube, gmail over Tor - fake voicemail / sms anyone?
On Tue, Oct 16, 2012 at 12:55 PM, Andrew Lewman wrote:
> I guess $20 is more than $1 for 1000 CAPTCHA breaks, but I guess that's
> because the survivor isn't criminal minded enough to steal/clone
> someone's phone for the sms message.

It isn't just the phone— the effort required to perform that set of activities was a non-trivial cost, but one acceptable to a person with an earnest need of increased anonymity— and it also created geographic restrictions which limited the use of cheap labor in other locations. Not to mention the cost of the knowledge of how to do whatever workaround you provided, the cost of convincing an internet privacy expert to help them, etc... Maybe at some point someone will build an industrial infrastructure that abuses the ability to buy disposable phones and resell them, and then Google will have to adapt. But at the moment…

Fundamentally, all of these attacks and their defenses are operating in the space of a constant linear work factor. You must do one unit of "effort" to send a valid email; the attacker must do N units _or less_ to send N units of spam/crapflood/etc. No system that puts an attacker at merely a simple linear disadvantage is going to look "secure" from a CS-ish/cypherpunk/mathematical basis. And in consideration of the total cost, the attacker often has great advantages: he has script coders in cheap labor markets while your honest activist is trying to figure out where the right mouse button is...

But the fact of the matter is that the common defense approaches are, in general, quite effective. Part of their effectiveness is that many service providers (including community-driven/created ones like Wikimedia) are broadly insensitive to overblocking. Attackers are far more salient: they are adaptive, seek out all the avenues, and are often quite obnoxious; and there are plenty of honest fish in the sea if you happen to wrongfully turn away a few. When considering the cost-benefit tradeoffs, one attacker may produce harm greater than the benefit of ten or a hundred honest users— and almost always appears to be causing significant harm— and so it can be rational to block a hundred honest users for every persistent attacker you reject (especially if your honest users' 'value' is just a few cents of ad income). This may be wrong— in terms of their long-term selfish interests and in terms of social justice— but that's how it is, and it's something that extends _far_ beyond TOR. For example, English Wikipedia blocks a rather large portion of the whole IPv4 internet from editing, often in big multi-provider /16 swaths at a time. Tor is hardly a blip against this background of overblocking. Educating services where their blocking is over-aggressive may be helpful, but without alternatives which are effective it will not go far beyond just fixing obvious mistakes.

And the effectiveness is not just limited to the fact that the blocking rejects many people (good and bad alike): there are many attacks, like spamming, which are _only_ worthwhile if the cost— considering all factors like time discounting, knowledge, and geographic restrictions— is under some threshold... but below that threshold there is a basically infinite supply of demand. The fact that the level of abuse is highly non-smooth in the level of defense makes it quite attractive to make some blunt restrictions that screw a considerable number (if only a small percentage) of honest users while killing most of the abuse.
On Tue, Oct 16, 2012 at 1:51 PM, k e bera wrote:
> Why are anonymous signups assumed guilty of abuse before anything happens?
> How about limiting usage initially, with progressive raising of limits based
> on time elapsed with non-abusive behaviour (something like credit card
> limits)? People should be able to establish good *online* reputations that
> are not tied to their physical identity.

I think this is the common but flawed thinking that prevents progress on this front. You're thinking about this in terms of justice. In a just world there wouldn't be any abusers... and all the just rules you can think of to help things won't matter much, because the abusers won't follow them, and we don't know how to usefully construct rules for this space that can't be violated. (...and some alternative ideas like WoTs/reputation systems have serious risks of deeper systematic Kafkaesque injustice...). And of course, _anonymous_ is at odds with _reputation_ by definition.

The whole thinking of this in terms of justice is like expecting rabbits and foxes to voluntarily maintain an equilibrium population so that neither dies out. That just isn't how it works.

Is it possible that all the communication advances we've made will be wiped out by increasing attacker sophistication, to the point where Turing-test-passing near-sentient AIs are becoming best friends with you just to trick you into falling for a scam, and we all give up this rapid worldwide communication stuff? Will we confine ourselves to the extre[...]
Re: [tor-talk] Hidden Services
On Wed, Sep 19, 2012 at 1:36 AM, grarpamp wrote: >> People use robots.txt to indicate that they don't want their site to >> be added to indexes. > And if a site is so concerned about someone else publishing a link, > however obtained, then they should name it something innocent and > password protect it or use better operational security to begin with. And they should all move to places where they won't be killed for disfavored political views, and we should all personally audit the source that we run, and we should anticipate any attack or abuse... It seems to me that there is a common expectation that onion URLs provide a degree of name privacy— generally, if someone doesn't know your name they can't find you to connect to you. If someone violates that expectation it risks harming people until the new risks are well known (and still even then some, as no matter how well known it is some people will miss the fact that something enumerates the darn things). Perhaps the convention is dumb. But that doesn't make it right to act in a way that can be expected to harm people when you know better and can avoid it. Hopefully some kind of NG onion would include additional data in the link which is used for introduction, so that rendezvous collection couldn't get usable addresses (e.g. something as simple as an additional secret used to complete a challenge-response knock with the end host; or, more elaborately, the link could pack in a small ECDSA private key: the onion site provides the RP with the corresponding public key, and for a connection to proceed the connecting host must sign a permission slip to get past the RP, before even getting to knock). Though this wouldn't do anything to prevent a service like tor2web from data harvesting. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
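For what it's worth, the simple variant of that knock might look something like the following sketch (Python; the link secret, the framing, and the integration point are all hypothetical, since in reality this would have to live inside the hidden service protocol itself):

    # Challenge-response knock using an extra secret carried in the onion link.
    # Both sides already share LINK_SECRET because it was part of the address.
    import hashlib, hmac, os

    LINK_SECRET = bytes.fromhex("aa" * 32)  # hypothetical extra link material

    # Service side: issue a fresh challenge for each connection attempt.
    challenge = os.urandom(16)

    # Client side: prove knowledge of the link secret.
    response = hmac.new(LINK_SECRET, challenge, hashlib.sha256).digest()

    # Service side: drop the connection unless the knock verifies.
    expected = hmac.new(LINK_SECRET, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)

A harvester that learns only the rendezvous-visible part of the address never learns LINK_SECRET, so its collected addresses would be unusable for connecting.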
Re: [tor-talk] How dangerous are DNS leak?
[bouncing back to the list because I think it's useful] On Tue, Sep 18, 2012 at 12:10 PM, Paul Syverson wrote: > On Tue, Sep 18, 2012 at 11:21:13AM -0400, Gregory Maxwell wrote: >> On Tue, Sep 18, 2012 at 11:01 AM, Paul Syverson >> wrote: >> > Logic persnicketiness: 'IFF' is not a more emphatic version of 'if'. >> >> I meant if and only if, thank you very much. >> I grouped my clauses poorly indeed. It is dangerous if and only if >> (the warning is not spurious && your threat model cares about them). >> > > Ah. Then I disagree with your content not your use of logical > terminology. Just because there is (nonspurious) danger doesn't mean > it's in your threat model. That's one of the fundamental problems for > security in general, and for Tor in particular. So I agree with the > only if part but not the if part. > >> In the future grammatical criticisms are best delivered off-list, so >> that one need not feel defensive and waste everyone's time with yet >> another tangent email. :P. > > Hmmm. I don't think of this as a grammar point. I took you to be > making a logical error. I thought it worth clarifying to the list > since I have often seen such mistakes lead to misunderstandings. I > agree that it is not constructive to be pointing out every wording > mistake someone makes. I actually took this to be a substantive > confusion-engendering mistake. (Perhaps an occupational hazard from my > training as a logician.) I added the "persnicketiness" to try to avoid > overstating its significance. > This is different from grammatical mistakes where it is clear what > mistake was being made and what was intended---for example, my doing > a bad edit and thus using "you're" where 'your' was correct in the > message to which you're responding. > > And indeed there was misunderstanding, but it was mine. I was wrong > in taking you to be using the connective incorrectly. You were using > it correctly to make a statement that I disagree with. I assumed you > couldn't have meant that and thus drew the wrong conclusion. So there > is a substantive and relevant disagreement after all. > > I apologize for making you feel defensive. That was not my > intent. Even though I believe I was making a substantive response to > your comment (all the more so now), I'm sending this response off-list > in case you disagree and see it as a further offense and/or waste > time. If you think it is worth responding to the list, please feel > free to do so. If you want to end it here or continue off-list, that's > OK too. You're absolutely right. I was talking about the platonic ideal user with a Rational and Informed grasp on their threat models. I absolutely should know better, since I'm often chiding people that they can't know their real threats in privacy areas until it's too late; the very nature of most surveillance means that it's _secret_. My only defense: my primary reason for posting was to point out that Tor's leak alarms are sometimes false. It's a frequent spurious complaint for Bitcoin, because the p2p component of it connects by address for obvious reasons. The sorts of errors myopia can cause are funny. I agree that it's good and important to not let assumptions of non-existent ideal users with non-existent complete information stand. Thanks for your patience. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] How dangerous are DNS leak?
On Tue, Sep 18, 2012 at 11:01 AM, Paul Syverson wrote: > Logic persnicketiness: 'IFF' is not a more emphatic version of 'if'. I meant if and only if, thank you very much. I grouped my clauses poorly indeed. It is dangerous if and only if (the warning is not spurious && your threat model cares about them). In the future grammatical criticisms are best delivered off-list, so that one need not feel defensive and waste everyone's time with yet another tangent email. :P. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] How dangerous are DNS leak?
On Tue, Sep 18, 2012 at 9:13 AM, adrelanos wrote: > Jerzy Łogiewa: >> How dangerous are the DNS leak for some user? > > Very dangerous! > > http://www.howdoihidemyip.com/dnsleak.htm > "The DNS leak provides your ISP name and location to the website that > you are visiting, thus undermining your ability to stay anonymous on the It's very dangerous IFF it's real, and IFF you're expecting Tor to hide what sites you're visiting and services you're using from someone who can watch your network connection. But it's not always real: some services connect by IP, and Tor will produce spurious warnings about these. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
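The distinction, roughly, in code (a sketch using the third-party PySocks library; SOCKS5 can carry either a hostname, which Tor resolves safely at the exit, or a literal IP, which some applications use on purpose):

    # Two ways an application can reach a destination through Tor's SOCKS port.
    import socks  # PySocks

    # 1) By hostname: the name travels inside the SOCKS request and is resolved
    #    at the exit -- no local DNS query happens, so there is nothing to leak.
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)
    s.connect(("example.com", 80))

    # 2) By literal IP: no DNS is involved at all. Peer-to-peer software such
    #    as Bitcoin does this by design, yet Tor may still log a warning since
    #    a bare IP *can* indicate the app resolved the name insecurely first.
    s2 = socks.socksocket()
    s2.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)
    s2.connect(("192.0.2.1", 8333))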
Re: [tor-talk] Tor as ecommerce platform
On Sat, Aug 11, 2012 at 11:51 PM, Greg Norcie wrote: > Some crazy new correlation attack might be possible... but using it as > evidence in court would be quite difficult. Wrong mental model. You're assuming a "lawful" attacker. This is just fundamentally incompatible with any definition of attacker that I care about. A real attacker doesn't follow rules that can be bent or broken. In this case you're assuming a particular threat set—prosecution by law enforcement in a place where the rule of law is largely effective and at least somewhat just. While that may apply to people trafficking banned goods in the US, those aren't the only users of tools where these attacks could be applied. If the analytic tools reliably identify their targets, that may be all that's required for someone to go out and kill them. The threat against people promoting disapproved-of political positions or religions, or people disclosing evidence of unlawful and unethical acts by powerful parties, can be expected to be more like the latter than the former. Even so, fancy correlation isn't used as evidence for a conviction. It's used to identify the actual parties; then regular focused evidence gathering and investigation does the rest. In the US, potentially it gets used to generate probable cause for a search, as the bar there is so low as to be almost non-existent and there is no before-the-fact adversarial process to challenge it. Even absent such techniques, it's trivial to manufacture ample probable cause against anyone, but doing so doesn't scale as an investigative tool unless the targeting has been highly focused first. Perhaps most importantly: a child sex trafficking ring doesn't need to convince a court of law that a new customer is certainly law enforcement before deciding not to do business with them, and I very much want the people doing socially important enforcement work to have good tools and operating procedures so that they can enjoy the full investigative benefits of privacy technology. The fact that evidence rules may leave non-cheaters working for the social good weaker is actually good motivation for developing protection against techniques which are mostly useful to attackers who don't care about following those rules. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Tor as ecommerce platform
On Sat, Aug 11, 2012 at 1:54 PM, Mike Perry wrote: > But from the paper, it sounds like the BTC flow to Silk Road itself is > quite large and might be measurable or at least can be approximated from > the website itself... [snip] Unless I misunderstood the paper, their measurements appear to be based on watching listings go up and down, which only provides an upper bound on the public activity. > The problem is that even with mixes and batching, bitcoin provides a > Global Passive Adversary for free, which can be used to map and measure > total BTC flow through the network to various sinks (eigenvectors + > eigenflow). Based on the established dogma that still rules the Tor > threat model, "BTC cannot win!!!1" for this reason. When Bitcoin is correctly used the sources and sinks are one-time-use pseudonymous locations, and the standard operational practice for private— much less "I'm a target for wealthy adversaries"— usage is to run Bitcoin over Tor. The most obvious vulnerable points are on the goods and inexplicable-income ends— as with cash. With poor use the activity could be very vulnerable to correlation via compressed sensing techniques. I and the other developers have found it to be surprisingly hard to convince Bitcoin users how non-private their activity can be, even when pointing them to public tracking sites. Regardless, I still expect the high-profile trouble-making users to eventually succumb to fairly boring police work rather than fancy technical analysis, as usual. > At least, not when > you're a substantial and atypical chunk of the BTC flow versus norm. This is the part I really responded to correct. In the last 4 hours the Bitcoin network processed 291,326 BTC in transactions— about 3.3 million USD at the current trading prices. In _four hours_. And this doesn't include the significant amount of off-network BTC changing hands inside exchanges and bank-like services, though it may well be double counting coin that effectively moved multiple times. (Which can't be measured, because it's not always the same coins moving even if it's the same 'value' moving, or the opposite.) As long as the parties are trusted not to double-spend against their counterparties (bad dealing which can be trivially proven, ensuring that a cheater's reputation is destroyed) it's perfectly possible to perform unbounded amounts of party-to-party transactions totally invisibly to the network too, or to form join transactions which concurrently settle multiple parties in a single act, and other weirdness which makes even estimating the true activity level difficult. Bitcoin transactions are just a few hundred bytes, and there often is no need to make them public in a hurry. I can think of little else of value which could be made more immune to timing analysis, if people cared to do so. I already think these estimates of underground black-market volumes are exaggerated, but it's impossible to know for sure. But the data simply does not suggest that this is a substantial chunk of the activity. Like Tor, Bitcoin suffers from a fair amount of people eager to play up the most controversial uses: some do so to attack it, some because it resonates with their juvenile desire to 'stick it to the man', but most importantly: it's a lot more exciting to present it by emphasizing those things, regardless of how (in-)significant they are or how much many of the users and developers wish they'd go away. Whatever the reasons, skepticism is healthy all around. > Like I said, it will be very interesting to watch.
> It's almost like some > aliens came down from space and double-dog-dared the ballsiest, > craziest, most aggro humans on the planet to try to solve timing > correlation attacks and then called them all pussies, threw the bitcoin > source code at their feet, and then flew off. You know, because they > needed that shit to interact with our violent monkey society at a safe > enough distance and everybody else on this planet had given up. The bad > Sci Fi just writes itself. ;) If you had any doubts before: Welcome to the future. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
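As a quick sanity check on the volume figure quoted above (one line of arithmetic; the ~$11.3/BTC rate is simply the mid-2012 price implied by the quoted totals):

    # 291,326 BTC over four hours at roughly $11.3/BTC:
    291326 * 11.3  # ~3.29 million USD, matching the "about 3.3 million" figure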
Re: [tor-talk] Tor as ecommerce platform
On Fri, Aug 10, 2012 at 10:11 PM, Ted Smith wrote: > The obvious problem with this (((this, right here, is the productive > contribution to discussion this email has: it points out the problem > with your proposed methodologies))) is that it presumes that these top > 50 .onion domains comprise the majority of .onion traffic through your > node. I suspect this is not the case. > > If I'm right, and most of the .onion traffic through any given node is > over the "long tail", it won't be possible to get anything useful > without an automated classifier. It's odd that this thread started with a discussion of some sketchy research which worked by running an automated spider moving enormous amounts of traffic. So the very thing that inspired the conversation ruins the proposed methodology. Tsk tsk. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] hidden services 2.0 brainstorming
On Wed, Jul 11, 2012 at 2:30 PM, Rejo Zenger wrote: > Hi, > >> - You get transparent, free end to end encryption. No flawed root CA system. > > Just curious, maybe I am overlooking something: how would this be better than > a self-signed and self-generated certificate (apart from the user not being > nagged with a warning)? It depends on how you got the name of the site you're visiting. Consider:

(1) You get the name from a trusted source over a secure channel.
- Onion has complete MITM protection.
- Self-signed can be owned via MITM by an active network attacker near you.
- CA is also secure, if the CA is good.

(2) You get the name from a non-trusted source or over an insecure channel.
- Onion buys you nothing over self-signed.
- Self-signed is still completely insecure against active attack.
- CA model provides little security, even if the CA is good! (e.g. knowing that you've connected to "gaypal" with certainty isn't helpful if it was really "paypal" that you wanted but didn't know the right name)

So in (1) onion beats self-signed, and in (2) even a CA is not secure. The (2) case is kinda helpless. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
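Part of why onion names win in case (1) is that the name is self-authenticating: it is derived from the service's own key, so there is nothing for a third party to forge. A sketch of the derivation as I understand the current spec (the input is the service's DER-encoded RSA public key):

    # .onion name = base32( first 80 bits of SHA-1( DER-encoded public key ) )
    import base64, hashlib

    def onion_from_der_pubkey(der: bytes) -> str:
        digest = hashlib.sha1(der).digest()[:10]  # truncate to 80 bits
        return base64.b32encode(digest).decode().lower() + ".onion"

A client that got the name over a secure channel can recompute it from whatever key the service presents; an attacker in the middle cannot substitute a different key without changing the name.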
Re: [tor-talk] HTTPS to hidden service unecessary?
On Mon, Jul 9, 2012 at 7:41 PM, proper wrote: > HS + SSL makes sense: I was under the impression that browsers had generally stronger cookie and cross-domain policies for SSL sessions, but maybe I'm imagining things. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] hidden service on same location as public service
On Mon, Jul 9, 2012 at 5:23 PM, wrote: > Exit enclaves no longer work - > https://trac.torproject.org/projects/tor/wiki/doc/ExitEnclave Bummer; they still work on old nodes (or at least I just tested and it works for me). I liked them for unloading exits and narrowing the exposure to non-targeted / non-"malicious" (censorware) exit misbehavior, and for increasing performance (which they did considerably when I tested them years ago). ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] hidden service on same location as public service
On Mon, Jul 9, 2012 at 5:00 PM, Juenca R wrote: > ok good that was actually my other question, why run exit enclave if you run > a hidden service. > i guess you answered my question. they service different purpose. Right. Enclaves work for people using the global domain names; onion addresses do not. I would always run an enclave for such a service even if all it did was detect Tor use and punt people to the onion URL. > are there no security-related concerns of running both ways? > (actually three ways; regular i-net, hidden service & exit enclave, all on > same server for same site content) > only problem is docs make it sound like you have to be more careful setting > up for exit enclave > actually docs say this about exit enclave "A great idea but not such a great > implementation" Exit enclaves have a number of limitations. For example, they're matched by IP only, but if the user uses your DNS name they'll make their first request out some other exit (which could MITM-redirect them) before switching to the enclave. They also add a hop compared to regular exiting (easily made up for by being able to avoid congested exits)... but fewer hops than hidden services. The only concern I'd see is that you may have some problems sorting out which users came via the enclave vs the onion, so you wouldn't know which absolute URLs to use internally. Though if you gave people who showed up via the enclave onion URLs for further links, that wouldn't be the end of the world. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
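A minimal sketch of the "detect Tor use and punt people to the onion URL" idea mentioned above (Python; it assumes a published list of exit IPs, one per line, such as the bulk exit list the Tor Project's check service provides, and the onion name is a placeholder):

    # Redirect visitors arriving from Tor exits to the onion mirror.
    import urllib.request

    EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"  # one IP/line
    ONION_MIRROR = "http://examplexyzabc234.onion/"  # hypothetical address

    def load_exit_ips():
        with urllib.request.urlopen(EXIT_LIST_URL) as resp:
            return {line.strip() for line in resp.read().decode().splitlines()}

    EXITS = load_exit_ips()  # in real use, refresh this periodically

    def maybe_redirect(remote_ip, path):
        if remote_ip in EXITS:
            return "302 Found", {"Location": ONION_MIRROR + path.lstrip("/")}
        return None  # not a Tor exit; serve the page normally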
Re: [tor-talk] hidden service on same location as public service
On Sun, Jul 8, 2012 at 7:25 PM, wrote: > wrote: >> i'm wonder if it makes any sense to allow users to access a public web server >> access normal at same time as hidden service on same machine? > > Yes. > - saves exit bandwidth > - will continue to work even if all exits are shut down > - exit policy/ports do not matter > - more diversity It's also useful to run as an exit enclave for these purposes. You configure yourself as an exit, but only to your own public IP address. Then Tor clients will switch to exiting through your node when connecting to you, even when they use your non-onion address. There are advantages and disadvantages of exit enclaves vs onion hosts, but I don't see a reason not to do both on a site which is accessible both on the public internet and as an onion hidden service. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
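The enclave setup described above is just a deliberately narrow exit policy in torrc; something along these lines (the address is a placeholder, and the ports would match whatever the site actually serves):

    # torrc sketch: act as an "exit", but only to this host's own public IP
    ExitPolicy accept 192.0.2.10:80
    ExitPolicy accept 192.0.2.10:443
    ExitPolicy reject *:*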
Re: [tor-talk] Custom Hidden Service Name?
On Sun, Jul 8, 2012 at 5:08 PM, Juenca R wrote: > Hallo, > > I see some hidden services name like "name47ghg7i.onion" and I wanna have my > own onion address with "name" in front. how can I do this? sorry I looked in > list archives and docs. https://github.com/katmagic/Shallot ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
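In spirit, what tools like Shallot do is just brute force: keep generating keys until the derived name happens to start with your string. A slow but illustrative sketch (Python with the pyca/cryptography package; real tools are far faster because they mutate an existing key instead of regenerating from scratch):

    # Generate RSA keys until the onion name starts with the wanted prefix.
    import base64, hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    def vanity_onion(prefix: str):
        while True:
            key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
            der = key.public_key().public_bytes(
                serialization.Encoding.DER, serialization.PublicFormat.PKCS1)
            name = base64.b32encode(
                hashlib.sha1(der).digest()[:10]).decode().lower()
            if name.startswith(prefix):
                return name + ".onion", key

Expected work grows as 32^len(prefix), so a 4-character prefix is quick while 8 characters takes serious compute.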
Re: [tor-talk] blocked exit node IP because of spam
On Sun, Jul 1, 2012 at 11:48 PM, grarpamp wrote: > Do NOT penalize those who need multiple random unlinked accounts > by blocking ip's, making up nym systems, etc. Penalize the accounts > that act up. They are the bad ones, not the former. It's this kind of thinking that will result in the web continuing to be largely read-only for Tor users. People running services that block Tor aren't blocking Tor because they Hate Freedom™, or because they can't help but stay up at night trying to come up with ways of screwing people over. Blocking Tor isn't trivial, especially to do well... and many of the people who have been involved with blocking Tor at major sites are themselves Tor supporters and bridge/relay operators, and only block Tor when it is clear that they must. They block write access from Tor because when an abusive user is blocked their inevitable recourse to evade the block is Tor (if not their first choice). After the umpteenth occurrence of whatever antisocial jerkwad assaulting the site via Tor, it simply has to go. Arguing that a problem doesn't exist is unconvincing to people who are dealing with it; arguing that blocking Tor is ineffective or involves unacceptable tradeoffs is unpersuasive to people who have made the changes and measured the results. One of the great forces which makes online communities viable and not all trivially destroyed by a few byzantine troublemakers is that the cost of excluding people is low, but when Tor makes the cost of evading the exclusion nearly zero— the balance is upset. Even captchas are a pretty weak tool: commercial services will solve them for pennies each, and targeted trouble makers aren't deterred by them at all. Perhaps most importantly— this has been the ongoing approach used by the Tor community, and it is demonstrably ineffective: write access via Tor is frequently inhibited. And yes, sure, there are cases where nym use doesn't solve things. But there are a great many where it does. > I would actually donate much more to Tor/EFF project if I could > earmark it for a formal emissary to talk with some of the sites > I've seen implementing bad policy. And hopefully report back to me > with the positive results ... The Tor project absolutely has done this in the past. Though as far as I can tell it has not had much success except in areas where the Tor prohibitions are sloppy (blocking read access, blocking relays instead of just the relevant exits). > Exactly! And when I can't use these sites in perfectly good, > responsible, creative and nice ways... because they have implemented > crap blocking policies... it pisses me the fuck off. > > Anonymous != evil. > That is what we need to be teaching. You're making a grave error in characterizing the people who've made different calls than you have as foolish or insensitive. I'm sure it's true in some cases, but even the well informed frequently make the dispassionate, considered, and rational decision to block write access from Tor. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] blocked exit node IP because of spam
On Sun, Jul 1, 2012 at 3:32 PM, Sam Whited wrote: > Tor is designed to keep people anonymous; this works for both the good > guys, and the bad. This isn't something the Tor Project needs to fix There are things the Tor project and surrounding community could do to help here. For example, if I could anonymously donate $10 to a charity and in return receive a persistent nym which I could use to get around those kinds of blocks... I'd be hesitant to misbehave and get my nym blocked. (And forums should feel good about whatever small residual number of spammers do buy donation nyms, because even though they spam, their need to keep buying nyms supports the charities.) But no practical software infrastructure exists for this sort of thing today. And until it does, any education/advocacy will not go too far because it doesn't offer much in terms of real alternatives. "It's not really so bad." "Yes it is, or we wouldn't have bothered putting in the blocking in the first place" "er.." ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
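The cryptographic core of such a donation-nym service already exists in textbook form: Chaum-style blind signatures, which would let the charity sign a nym without ever seeing it or linking it to the donation. A toy sketch (Python; deliberately tiny insecure parameters and no padding, purely an illustration, not a design):

    import hashlib, secrets

    # Toy RSA keypair for the charity (real use needs >=2048-bit keys + padding).
    p, q = 1000003, 1000033
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    # Donor: blind the hash of a freshly generated nym identifier.
    nym = b"my-new-pseudonym-pubkey"
    m = int.from_bytes(hashlib.sha256(nym).digest(), "big") % n
    r = secrets.randbelow(n - 2) + 2
    blinded = (m * pow(r, e, n)) % n  # sent to the charity with the $10

    # Charity: signs blindly -- it sees only `blinded`, never the nym itself.
    blind_sig = pow(blinded, d, n)

    # Donor: unblind, yielding an ordinary signature over the nym.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m  # any forum can check this without the charity

The charity cannot connect the final (nym, signature) pair back to any particular donation, which is exactly the unlinkability a donation-nym scheme needs.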
Re: [tor-talk] Anonymous Publishing Is Dead.
On Sat, Jun 30, 2012 at 4:15 PM, Anonymous Person wrote: > I know it is dead, because I have tried to do it, and I can assure you it is > dead. I had a similar experience. When I decided to publish a large collection (30 GB) of previously paywalled (but public domain) JSTOR documents[1] I initially planned to do so anonymously— simply to mitigate the risk of harassment via the courts. Ultimately, after more consideration, I decided to publish with my name attached and I think it made more of an impact because I did so (even though quite a few journalists reported it as though it were a pseudonym)— though if I didn't have even the prospect that I could publish anonymously, I can't say for sure that I would have started down that road at all. I pursued anonymous publication for some days prior to deciding not to publish anonymously, and I encountered many of the same issues that Anonymous Person named above; at every juncture I hit roadblocks— though in my case I already had bitcoins, I couldn't find anyone to take them in exchange for actually anonymous hosting, especially without access to freenode. If I'd wanted to emit a few bytes of text, fine— but a large amount of data, no. It's also the case that non-text documents can trivially break your anonymity— overtly in the case of things like PDF or EXIF metadata, or more subtly through noise/defect fingerprints in images. I think I can fairly count myself among the most technically sophisticated parties, and yet even I'm not confident that I could successfully publish anything but simple text anonymously. The related problems span even further than just the anonymity part of it. Even once I'd decided to be non-anonymous I needed hosting that wouldn't just take the material down (for weeks, if not forever) at the first bogus DMCA claim (or even in advance of a claim because the publication was 'edgy'). I ended up using the pirate bay— which turned out pretty well, though there were some issues where discussion of my release was silently suppressed on sites such as Facebook because they were hiding messages with links to the pirate bay, and it was blocked on some corporate networks that utilized commercial filtering. So I think that the problems for anonymous publication on the Internet are actually a subset of a greater problem: there is little independence and autonomy in access to publishing online. You can't _effectively_ publish online without the help of other people, and they're not very interested in helping anonymous people, presumably because the ratio of trouble to profit isn't good enough. About the only solutions I can see are: (1) Provide stronger abuse-resistant nym services so that things like freenode don't have to block anonymous parties, thus facilitating person-to-person interactions. (2) Improve the security and usability of things like Freenet and hidden services, so that they are usable for publication directly and provide strong anonymity. I'm disappointed to see some of the naysaying in this thread. It really is hard to publish anything more than short text messages anonymously, at least if you care about the anonymity not being broken and you want to reach a fairly large audience. [1] https://thepiratebay.se/torrent/6554331/ ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] [Bitcoin-development] Tor hidden service support
On Wed, Jun 27, 2012 at 2:33 AM, Fabio Pietrosanti (naif) wrote: > Is bitcoin software going to incorporate tor binaries within the > application standard application There are no plans to do this currently. Maybe it makes sense, but I'm somewhat doubtful about that. > automatically create a Tor Hidden > Service on behalf of end-user? This would be nice— there would need to be some way (preferably over the SOCKS port?) for Bitcoin to request that a hidden service be created and to discover the address of it. Unlike TorChat, we don't need access to the keys. It would need to have some way of returning the same address if called multiple times for the same destination service, so that the hidden service address doesn't change with every restart. Although there would be some tricky security issues to work out with this functionality— e.g. what happens when some rogue Java code you accidentally run starts creating hidden service backdoors to your machine? I thought this was already a proposed Tor feature, but I'm not finding it right now. > Regarding the addressing, why not use directly the .onion address? It does use the onion addresses— they just need to be mapped into an IPv6 address in order to be carried by the existing Bitcoin p2p protocol. The mapping is a bijection, and externally to Bitcoin it's identical to using the onion addresses. E.g. the command-line parameters deal with .onion addresses, the logs record .onion addresses, and .onion addresses are passed into Tor. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
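Concretely, the mapping embeds the 80 bits of an onion name into a reserved private IPv6 range, OnionCat style. A sketch of both directions (the fd87:d87e:eb43::/48 prefix is my recollection of the one used; check the source before relying on it):

    # Map .onion <-> IPv6 (16 base32 chars = 80 bits fit after a 48-bit prefix).
    import base64, ipaddress

    PREFIX = bytes.fromhex("fd87d87eeb43")  # fd87:d87e:eb43::/48

    def onion_to_ipv6(onion: str) -> ipaddress.IPv6Address:
        label = onion[:-len(".onion")].upper()  # b32decode wants uppercase
        return ipaddress.IPv6Address(PREFIX + base64.b32decode(label))

    def ipv6_to_onion(addr: ipaddress.IPv6Address) -> str:
        return base64.b32encode(addr.packed[6:]).decode().lower() + ".onion"

Since the map is a bijection, round-tripping any onion address through the two functions returns it unchanged.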
Re: [tor-talk] Forbes article: Tor and Bitcoin
On Thu, Jun 21, 2012 at 11:51 PM, grarpamp wrote: > http://www.forbes.com/sites/jonmatonis/2012/06/19/torwallet-sparks-trust-without-jurisdiction-debate/ A word to the wise: Perhaps this is an earnest effort, but it's impossible to tell. From appearances it is indistinguishable from a scam which will accrue a large amount of third-party-owned bitcoin and either vanish or get "hacked". Promotion on the Forbes.com site shouldn't be taken to signify any evidence of reputability. I've seen first hand that they do not do much research for this sort of thing, and articles there have previously plugged services operated by people known to have stolen from others. Anonymity can be an important tool for social good, but it can also be misused; people should take great caution in handing over control of valuable information to parties that operate under the veil of anonymity. Many people have been robbed under similar circumstances. The open source Bitcoin client software runs excellently over Tor. If you want to use Bitcoin anonymously it's a good combination, and you don't need services like this website. The next major release of the Bitcoin software will feature much better support for inbound connections via hidden services and automatic hidden service peer discovery, making it work even better. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Data storage in cached-descriptors
On Wed, May 30, 2012 at 9:07 AM, Fabio Pietrosanti (naif) wrote: > So basically "on top of Tor software and Tor Infrastructure" it would be > possible to build other kind of networks, given that they participate to > the Tor network itself. And the directory authorities could freely attack such usage at will. They're trusted to document Tor nodes— but why would you trust them to publish other data? ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Tor's critique of Ultrasurf: A reply from the Ultrasurf developers
On Wed, Apr 18, 2012 at 5:54 AM, Tichodroma wrote: > Hi, > might be of interest: > http://ultrasurf.us/Ultrasurf-response-to-Tor-definitive-review.html This is of more interest than their 'response' itself: http://b.averysmallbird.com/entries/the-need-for-community-participation-and-clear-disclosure-processes-in-the-case-of-ultrasurf ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Choosing a name for a .onion
On Thu, Mar 29, 2012 at 6:47 PM, Adrian Crenshaw wrote: > Hi all, > I was under the impression that the .onion names for Tor Hidden Services > were pseudo-random based on the public key. How was someone able to choose > one/choose some character in one? As an example: > http://silkroadvb5piz3r.onion (hope it is not against policy to post that > link, only example I know. ) How did they choose the first 8 characters? Using a brute force search tool like http://gitorious.org/shallot/shallot/ I'd advise against it— while I don't have a study to back me up, I expect 'readable' names like that discourage good security practices— that they cause people to use addresses that look like yours (spread by someone else, perhaps) without verifying the source— and when people do compare, they are probably more likely to just compare the readable parts. Sure, the computation is a bit of a barrier— but it's easier for the attacker (who may generate fake onions for many sites at once) than it is for the defender. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
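Rough numbers behind that attacker/defender asymmetry, assuming people only ever eyeball the first 8 readable characters of a name:

    # Expected keys to match one 8-char base32 prefix vs. any of 100 victims'.
    one_target = 32 ** 8              # ~1.1e12 key generations for the defender
    many_targets = one_target // 100  # multi-target attacker: a ~100x discount

The defender pays the full cost for one name; an attacker testing every candidate key against a whole list of victim prefixes amortizes the same work across all of them.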
Re: [tor-talk] Tor security on EC2
On Sat, Feb 4, 2012 at 8:09 PM, Marco Gruß wrote: > with https://cloud.torproject.org/ actively promoting it, > I have been thinking about Tor vs. EC2 for a while. I'm unqualified to say anything about the specific questions wrt VM system security... but I thought it might be worthwhile to offer a bit of caution related to risk saliency. Whatever risks you decide exist in EC2 here probably also exist in many other services (certainly ones that are similar to EC2, but probably also in ones that look less like it). Arguably they exist in all cases where the operators don't have physical control over the machines. If these risks are discussed as risks of EC2, rather than more general risks of virtualization or of systems owned by third parties, then people may avoid EC2 in favour of alternatives which are less secure in practice. If I were a hostile force which was able to compromise some hosting providers but not EC2, raising public concerns about the security of EC2 specifically would be a smart tactic on my part. :) Food for thought. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] On the arm race with chinese
On Mon, Jan 9, 2012 at 6:39 PM, Fabio Pietrosanti (naif) wrote: > the funny things is that they are among us. > Most probably the guy that wrote the Chinese Tor protocol probe is > subscribed to that mailing list. > And now he feel observed. He's welcome to send patches to evade its effects too. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Hoax?
On Sun, Jan 8, 2012 at 12:59 PM, wrote: > I guess it's just a matter of weeks or a few months before the bomb blows. Perhaps this list should be moderated to at least filter out the crackpots/disinformationists that are hardly even trying? :-/ This sort of trash isn't worth the time it takes to delete. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Deterministic builds?
On Thu, Jan 5, 2012 at 6:15 AM, Jacob Appelbaum wrote: [snip] > If anyone has thoughts on the matter, we'd love to hear how Tor as a > project should tackle verifiable builds of the various software we ship. This isn't a challenge which is unique to Tor, though the different dependencies and targets may make for some different details. Bitcoin is using gitian for this purpose: https://gitian.org/ Though it's still very early on in its attempts to solve the binary builds problem, it does at least manage independently reproducible Linux binaries. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
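The social core of the gitian approach is simple: several people build the same tagged source in an identical VM environment, publish hashes of the results, and a release ships only if the results agree. The comparison step itself is trivial (a sketch; the manifest file names are hypothetical):

    # Compare result hashes from independent builders.
    import hashlib, sys

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    results = {path: digest(path) for path in sys.argv[1:]}
    if len(set(results.values())) == 1:
        print("all builders agree:", next(iter(results.values())))
    else:
        print("MISMATCH -- do not ship:", results)

All the actual difficulty lives upstream of this check, in making the toolchain produce byte-identical output in the first place.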
Re: [tor-talk] Is Taking Checksum of Packet Payloads a Vulnerability?
On Sat, Dec 17, 2011 at 11:49 AM, Daniel Cohen wrote: > Is this a problem with Tor's architecture? If so, has this issue > already been addressed? You're misunderstanding the normal role of entry nodes. Normally if Alice is using Tor then she is running it herself. If she is running it herself, the traffic is encrypted between her and the subsequent nodes. I don't just mean the end-to-end HTTPS— Tor itself encrypts the traffic so that the traffic leaving her node can only be read by the next hop. Effectively, for this purpose, the traffic 'enters' the Tor network inside Alice's computer. The packets observable there are not identifiable as the ones leaving the network later. If Alice was in fact not running Tor herself, then the 'entry' node could completely compromise her privacy without checksums or exit sniffing or anything like that, which is why Tor is not used that way. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
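The layering is the important part: the client wraps the payload once per hop, and each relay strips exactly one layer, so the bytes on the wire differ at every hop and no payload checksum survives the path. A sketch of just that idea (Python; real Tor uses fixed-size cells and AES-CTR inside TLS, not Fernet, so this is only the shape of the construction):

    # Onion layering: what the entry sees shares no bytes with what exits.
    from cryptography.fernet import Fernet

    hops = [Fernet(Fernet.generate_key()) for _ in range(3)]  # guard, middle, exit
    cell = b"GET / HTTP/1.1"

    for hop in reversed(hops):  # client wraps one layer per hop, innermost last
        cell = hop.encrypt(cell)

    for hop in hops:            # each relay strips exactly one layer in turn
        cell = hop.decrypt(cell)

    assert cell == b"GET / HTTP/1.1"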
Re: [tor-talk] How important is it that the MyFamily option be set correctly?
On Mon, Dec 5, 2011 at 7:36 PM, Pascal wrote: > Note that it does not hurt a server to have itself listed in MyFamily. The > easiest way to maintain this line is to make a list of all your servers and > paste that line verbatim on all of your servers. But it's N^2 work if you add servers one at a time, which is annoying and failure-prone. It would be nicer if the family option took a secret string for each specified family that was hashed (e.g. via PBKDF2) and then used as a private key. Then the node ID is signed using that key (e.g. with ECDSA) and the signature is published in the directory. Nodes could then validate the signatures and treat all nodes with the same public key as the same family. Because the security of this isn't terribly important, a fairly small field could be used. This would make directories bigger for small families but smaller for big ones. It would avoid the constant update work and make it less likely that well-meaning people would misconfigure. Sadly, doing something like this w/ RSA would be very bloating. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
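A sketch of the proposed scheme (Python with the pyca/cryptography package; the salt, iteration count, curve choice, and placeholder fingerprint are all illustrative assumptions, not a spec):

    # One shared family passphrase replaces the O(N^2) MyFamily lists.
    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

    def family_key(secret: bytes) -> ec.EllipticCurvePrivateKey:
        raw = hashlib.pbkdf2_hmac("sha256", secret, b"tor-family", 100_000)
        scalar = int.from_bytes(raw, "big") % (P256_ORDER - 1) + 1
        return ec.derive_private_key(scalar, ec.SECP256R1())

    key = family_key(b"our family passphrase")  # identical on every server
    node_id = b"0000000000000000000000000000000000000000"  # placeholder fingerprint
    sig = key.sign(node_id, ec.ECDSA(hashes.SHA256()))  # published in descriptor
    key.public_key().verify(sig, node_id, ec.ECDSA(hashes.SHA256()))

Every server derives the same keypair from the passphrase, so adding a node means configuring one string on one machine rather than touching N torrcs.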
Re: [tor-talk] Youtube?
On Mon, Nov 7, 2011 at 8:05 AM, Andre Risling wrote: > Is there a way to use Tor and watch Youtube? > > Is there a way to download a Youtube video even though Adobe flash isn't > installed? Go to http://www.youtube.com/html5 and enable HTML5. Most videos should then work in a recent Firefox without Flash. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Don't use Google as default search in Tor Browser?
On Fri, Nov 4, 2011 at 10:54 AM, Christian Siefkes wrote: > How should using Google as search engine compromise your anonymity? Either > you're anonymous, then you're anonymous on Google too. Or you aren't > anonymous, then avoiding Google won't help you. Anonymity is not an either-or thing. If it were, Tor would provide little value indeed, because being completely anonymous while actually doing anything important is almost impossibly difficult— everything you do leaks information. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Rumors of Tor's compromise
On Wed, Oct 26, 2011 at 4:29 PM, Julian Yon wrote: > If you're not using a > pseudonym and paying by cash in sealed envelopes through a postal proxy, > wearing disposable gloves in a clean room to avoid forensic evidence, > then you could be traced. Whether this is likely depends on who your > adversary is. Or I could make your software start spewing an inexplicable and novel error message— and then I look to see who googles for that error message. Operational security is hard, and also not something tor can fix. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Legal or not on monitoring traffic at a Tor exit?
On Sun, Oct 23, 2011 at 8:42 PM, Xinwen Fu wrote: > I'm a bit curious about the legal issue on monitoring traffic at a Tor exit? > Is monitoring Tor traffic at an exit legal? Since the traffic passes "my" > computer, seems of course I can monitor it or even change it. When people > set up a Tor exit, is there any policy from Tor governing the behavior of > the operators? Is there any legal liability? This is in the FAQ: https://www.torproject.org/eff/tor-legal-faq.html Don't confuse capability with legality. You should expect that the same laws which make it unlawful for your ISP to do these things also make it unlawful for exit operators. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Dutch police break into webservers over hidden services
On Fri, Sep 9, 2011 at 6:14 AM, Gozu-san wrote: > Alternatively, one could run Tor on VMs that can only access the > internet via OpenVPN-based "anonymity services". OpenVPN clients can be OpenVPN-based "anonymity services" ~= snake oil. If you're running a hidden service you've already got a perfectly good network anonymity service running. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Tor spying
On Wed, Sep 7, 2011 at 10:21 PM, Indie Intel wrote: > Apparently people are spying on Tor users by setting up their own exit nodes > and sniffing traffic?! For some reason the moral standards people abide by online are unlike the ones they'd apply in other contexts. I'm doubtful Moxie Marlinspike would go around jiggling the doorknobs of his neighbors or hold their mail up in front of candles (or at least not without fear of having a really bad weekend in a police office as a result). But online… people do. Oh well, not much we can do about that. It's unfortunate and unlawful for people to monitor or modify exit node traffic. You should not do so. At the same time, _all_ internet users should do what they can to protect themselves. These attacks aren't just limited to Tor: regular ISPs perform them too, and if we can't stop it there we certainly can't stop it for Tor. >``research'' is more common than not. Wikileaks, Jacob Appelbaum, It's worth pointing out that Wikileaks and Jacob have refuted and rejected the claims that (at least as far as they could be aware) Wikileaks documents came from sniffing Tor exits. Of course, it's impossible for anyone to prove they haven't been, and though it's possible to do so, no one has proven they did. At best it's unfounded rumor; at worst it's an active smear. I find it somewhat ironic that you complain about the ethics of obviously well-intentioned security researchers while simultaneously spreading a reputation-destroying rumour. At least we learned something useful from the sslsniff research that might educate us about building practical secure systems. What did we learn from your post? ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] reddit.com wants EFF to disable HTTPS???
On Sun, Aug 7, 2011 at 2:20 AM, Victor Garin wrote: > Can you also point out where exactly (which URL) there is a bug when > the current ruleset is used? The bug is that it's probably overloading their site, and/or pushing traffic onto very expensive specialized hosting. > Removing/Disabling the whole site (when it is working) goes against > all the principles that EFF stands for. Unless it doesn't work it > should not be removed. I think this position is silly. If HTTPS Everywhere says no to reddit's request, the site will just make it not work. Then users will be even _worse_ off, since at least people can manually go to pay.reddit.com while reddit gets their HTTPS upgrade done. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Iran cracks down on web dissident technology
On Tue, Mar 22, 2011 at 11:23 AM, Joe Btfsplk wrote: > Why would any govt create something their enemies can easily use against > them, then continue funding it once they know it helps the enemy, if a govt > has absolutely no control over it? It's that simple. It would seem a very > bad idea. Stop looking at it from a conspiracy standpoint & consider it as > a common sense question. I hesitated in responding because it's just so easy to run off an infinite series of explanations. While any particular reason might not actually be valid, there are enough plausible ones that your argument of inconceivability cannot be supported. E.g. because governments are not monolithic entities, because people don't have perfect foresight, because the benefit to your interests can outweigh the benefit against your interests, and communications technology arguably disproportionately benefits larger groups. Interests outweighed: Funding something like Tor may be the most cost-effective way to achieve a particular end. In particular, a US government only anonymity network would likely not be very useful ("I don't know who this is, but it's a fed"). Regardless of it helping the enemy too, it can still be a net win to support. Not monolithic entities: If you have an organizational unit charged with accomplishing X they will work to accomplish X. Sometimes they may work so hard at it that they stop another unit from accomplishing Y, even if Y was more important to the overall mission. This happens frequently in all kinds of large organizations. No perfect foresight: It's not always obvious to everyone that some move may turn net negative in the future. E.g. the US supporting the Taliban. (http://en.wikipedia.org/wiki/Taliban#United_States) Larger groups: If just you and I want to communicate with secrecy we can do so without something like Tor— we can send coded messages hidden in innocuous usenet posts or Wikipedia articles. The value of a network is related more to the square of its communicating members. If you're the bigger party it can help you more than it helps your smaller enemies. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
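The "square of its communicating members" claim is just the count of possible pairs:

    # Metcalfe-style count of communicating pairs: n * (n - 1) / 2
    pairs = lambda n: n * (n - 1) // 2
    pairs(10), pairs(1000)  # (45, 499500): 100x the members, ~10000x the links

So when the bigger party joins a shared network, the links it gains among its own members can dwarf whatever its smaller enemies gain.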
Re: [tor-talk] Making TOR exit-node IP address configurable
On Wed, Mar 9, 2011 at 5:29 PM, Fabio Pietrosanti (naif) wrote: > Yes but that's more complex, with iptables you can redirect TCP ports, > but from your TOR node not all traffic going for example to port 80 is > http, but a lot of it it's TOR. > > If you redirect it to a transparent proxy you'll break intra-tor > communications, and so you can't just make an easy redirect with iptables. > > Still, don't judge good intentions. > It's not censorship but a chance to attract more TOR exit node > maintainer by simplifying the costs and risks in running a TOR exit node. > And that's still an experiment where to look at, it may be useful for a > lot of persons looking to run a less risky exit-node . :-) Tor currently has no facility for those users who are happy to have random third parties screw with their traffic to opt into it, or for those who would want to avoid it to opt out. This means that anything you do to the traffic will have random inexplicable effects on Tor users. Even if such a facility existed, its use would likely reduce the anonymity provided by ... partitioning the userbase (is there an echo in here?) The Tor system does have a facility for dealing with this— flagging the trouble nodes so that no one will use the exit at all. If you are lucky this is all that will be done to your node(s). If you are unlucky, Tor users who have been harmed by your tampering with their traffic may begin legal action against you, and/or people harmed by traffic exiting your node may argue that your traffic tampering has deprived you of any applicable legal protections as a neutral service provider... ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Making TOR exit-node IP address configurable
On Wed, Mar 9, 2011 at 1:23 PM, Fabio Pietrosanti (naif) wrote: > Hi all, > i've been thinking and playing a lot about the various possible risk > mitigation scenarios for TOR exit node maintainer. > > Now i need to be able to pass all web traffic trough a transparent proxy > in order to implement some kind of filters to prevent specific > web-attacks, web-bruteforce, etc, etc [snip] If you start inspecting and screwing with third-party traffic you will be bad-exited. Save yourself and the Tor users some trouble, and don't provide exit services if you can't do so without engaging in screwy traffic manipulation. ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
Re: [tor-talk] Is "gatereloaded" a Bad Exit?
On Tue, Feb 22, 2011 at 3:49 PM, thecarp wrote: > It is even possible that someone might run tor in lieu of encrypted > services, I know I went and made sure that the whole trick of getting > end-to-end encryption by having a node ON the target hosts worked for me. For that you need an exit policy to yourself, not to the internet. > I have to wonder, how is that so much worst than the situation anywhere else? I suggest an alternative question: How much better is it having more nodes choose not to exclude 443, so that Tor will have more 443 capacity, providing faster 443 service and fewer reasons for people to use unsecured HTTP? How does that compare to the potential harm of throwing out a near-zero number of nodes with highly suspect policies (which are probably either misconfigured _or_ sniffing) and which were not previously taking considerable exit traffic in any case? Certainly there are other bad nodes out there, but that doesn't really make it any better. It's a small benefit in each direction. "An incentive for more people to offer 443" vs "a small amount of additional probably tainted capacity". Sensible people might go either way. What sensible people won't do is participate in an epic argument about it (and I apologize for my participation)… Of course, until you factor in the information we received later, which is that a researcher has apparently been using a technique to discover "passively" eavesdropping nodes, and the node in question here came up. Sort of mooting the whole discussion until the research is published. [snip] > I am thinking, what if badexits became more like a DNS RBL there > could be multiple sources of truth that people could choose to subscribe > to. Maybe, for some reason, I feel the need to avoid exits in some area > (like china), it would allow me to subscribe to the list that tries to > keep chineese exits banned. There is already support for geographic targeting in the Tor software, fwiw. > Maybe someone could make a little side cash (bitcoins?) doing node > contact verification and publishing a badexits list based on failed > contact info. Shit...maybe implement a "good exits" for them. Just some > thoughts. No reason that this needs to be overly centralized. It's been a long thread so I can understand why you've missed it— but there is an _enormous_ smoking-gun reason why this should be somewhat centralized. Consider: What is more anonymous: two anonymity networks each with one user, or one anonymity network with two users? The first has no anonymity at all; the latter has a little. This pattern plays out with larger numbers. If you can split up the users of an anonymity network you make it less anonymous. This is called a partitioning attack. If you have user-selected exit subsets then you are partitioning the network and reducing its anonymity. It's especially bad if you know in advance who is in which subset, e.g. "I told everyone except bob this exit was bad, so if someone is using it it's probably bob", but it can be a bad attack even blind, e.g. "Mystery person X uses exits 1,2,3 but never 4,5,6, and thec...@gmail.com also uses the same mixture. I bet mystery person X is the same persona as thecarp". Of course, users will sometimes do things which distinguish themselves, but the software ought not _encourage_ anonymity-weakening behavior like this, especially when the implications are subtle and the practical effects are not especially well understood.
It would be very sad if someone thought "I need to be extra secure, so I'll turn all these knobs" and, by the resulting unique mix of exits used, ended up uniquely identifying their client. So ideally the default behavior should be as broadly acceptable as possible, and then people who need different behavior should be able to get it through hopefully minimal changes which result in the least anonymity loss. (And hopefully not without understanding the increased risk that they're taking.) ___ tor-talk mailing list tor-talk@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
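The anonymity cost of partitioning is easy to put a number on with the usual entropy measure (a toy calculation; real analyses weight users by behavior rather than assuming uniform sets):

    # Anonymity as log2 of the size of the set you hide in.
    from math import log2

    users = 1000
    print(log2(users))       # ~9.97 bits: everyone shares one exit-selection policy
    print(log2(users / 10))  # ~6.64 bits: users split across 10 distinguishable
                             # subsets are each much easier to single out

Turning knobs that put you in a smaller, recognizable subset spends those bits, which is exactly the "unique mix of exits" problem described above.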