Re: Sampled Traffic Analysis by Internet-Exchange-Level Adversaries
Paul Syverson wrote:
> On Wed, May 30, 2007 at 02:46:20AM -0700, Mike Perry wrote:
>> Thus spake Paul Syverson ([EMAIL PROTECTED]):
>>
>> ...

I don't understand a single bit of the mathematics in this paper, although one symbol looks like an integral sign. Damn... why are all academic papers so much alike? :)

Cheers
Re: Sampled Traffic Analysis by Internet-Exchange-Level Adversaries
On Wed, May 30, 2007 at 02:46:20AM -0700, Mike Perry wrote:
> Thus spake Paul Syverson ([EMAIL PROTECTED]):
>
>> Anyway, the main reason I'm writing is that my objection was not just
>> that the GPA was too strong but that it was too weak. Thinking you
>> could have an adversary powerful enough to monitor all the links
>> necessary to watch your whole large network but not able to do any
>> active traffic shaping at all anywhere seems obviously nuts. This is
>> one reason why padding on an open low-latency (lossless) network is
>> problematic: an adversary with any active capability at all can induce
>> a timing channel easily.
>
> Actually, I'm going to disagree slightly because I don't feel like
> sleeping yet :). It would take far less resources to passively tap the
> traffic, filter out say Tor IPs, and do analysis on just that data
> offline. Trying to actively do that filter in-path PLUS arbitrarily
> delay (i.e. queue in memory) that traffic in real time, all without
> significantly affecting pass-through traffic, seems like it would be a
> lot more expensive.

If the traffic patterns can be stored and analyzed offline rather than in real time, it just makes my point stronger. Assume someone with the ability to do truly global monitoring, watching every connection from every client everywhere in the world, through every Tor node everywhere in the world, to every server everywhere in the world. (Note that I was effectively assuming the filtering you mentioned. I don't care if the adversary watches non-Tor traffic; I assume they have already made that separation. As you note, it is trivial to recognize traffic going to/from/between Tor IP addresses.) What I am saying is that it is nuts to assume that someone could have monitors in all of these places but can do nothing active at all, not even something as trivial as killing a targeted circuit and watching to see whether a suspected circuit dies elsewhere. It doesn't even have to be targeted.
The adversary can simply induce arbitrary timing channels in various places, or kill circuits, or whatever, and watch for those patterns elsewhere (in the stored data, if this is done offline).

> Also, not to mention there is a limited number of bits that can be
> reliably encoded in this manner, and the perturbations of padding that
> shares the same TLS connection will lower this effectiveness. The
> adversary needs enough bits to get through to be able to track all the
> parties it is interested in. If padding is in place, it will have to
> spend considerable effort on redundancy to make sure that the
> timestamp remains present in the exit stream, which again means more
> queueing and more expense.

Lasse and I saw how incredibly easy it was to find patterns with very limited resources. George and Steven showed how you could induce patterns gross enough to monitor them even by interference (albeit on a much smaller and generally lower-bandwidth network).

> Of course, it also means more expense on the part of the anonymity
> network in wasted bandwidth. If padding slows down the network to the
> point where users start to leave, other, more dangerous effects take
> over.

I'm not comparing a global passive adversary with a global active one and claiming that global active is more realistic or practical. I'm saying that it is a mistake to posit a truly global adversary (not just a really big adversary watching, e.g., eighty percent of all the communication we would ever be talking about) that cannot do even the tiniest local thing actively. Nonetheless, that is the adversary from much of the literature.

aloha,
Paul
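The circuit-killing attack described above amounts to a simple correlation over stored timing data: kill a circuit at known times at one vantage point, then check whether a suspected circuit dies shortly afterward at another. A minimal sketch, with entirely made-up traces and a hypothetical `overlap_score` helper (not from any of the papers discussed):

```python
def overlap_score(kill_times, death_times, window=1.0):
    """Fraction of induced circuit kills that are followed by an
    observed circuit death within `window` seconds elsewhere."""
    hits = 0
    for k in kill_times:
        if any(0 <= d - k <= window for d in death_times):
            hits += 1
    return hits / len(kill_times)

# Hypothetical traces: times (seconds) at which the adversary killed
# circuits, versus circuit deaths seen in stored data elsewhere.
kills = [10.0, 42.0, 77.0, 130.0]
observed = [10.3, 42.1, 77.4, 130.2, 200.0]   # correlated with the kills
unrelated = [5.0, 60.0, 90.0]                 # background noise

print(overlap_score(kills, observed))   # 1.0
print(overlap_score(kills, unrelated))  # 0.0
```

Even this crude score separates a targeted circuit from background traffic, which is the point: the active capability needed is tiny compared to the passive monitoring already assumed.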
Re: Length of new onion addresses
Hi Michael,

> Length is not nearly as important as bookmarkability. You mentioned
> that you are going to be changing stuff every day. That worries me.

My bad, but no need to worry; this is just a misunderstanding. What I should have written is that a service's onion address (what clients bookmark or type into their browsers) stays the same all the time. What changes are the descriptor identifiers, which are created from the service id and the secret cookie. This allows descriptors to be stored on changing nodes all the time, which is a novel security feature made possible by incorporating the secret cookie: it prevents anyone from tracking a service's activity or usage pattern. I only mentioned it to stress that the attack of generating a key pair with the same id as an honest service would be limited to one day. Such an attack becomes more likely the fewer bits the service id has. But the changing descriptor ids have no impact on usage by hidden service providers or clients.

--Karsten
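The daily-changing descriptor identifier described above can be sketched as a hash over the stable service id, the secret cookie, and the current date. This is a loose illustration under assumed primitives (SHA-1, base32), not the exact construction from proposal 114:

```python
import base64
import datetime
import hashlib

def descriptor_id(service_id: bytes, secret_cookie: bytes,
                  day: datetime.date) -> str:
    """Derive a descriptor identifier that changes every day but is
    stable within a day. Hypothetical sketch; see proposal 114 for
    the real construction."""
    digest = hashlib.sha1(
        service_id + secret_cookie + day.isoformat().encode()
    ).digest()
    return base64.b32encode(digest).decode().lower()

# Placeholder values: an 80-bit service id and a 128-bit secret cookie.
sid = b"\x01" * 10
cookie = b"\x02" * 16
print(descriptor_id(sid, cookie, datetime.date(2007, 5, 30)))
```

Because the cookie goes into the hash, a directory node (or anyone else without the cookie) cannot predict tomorrow's descriptor id, which is what prevents tracking a service's activity over time.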
Re: Length of new onion addresses
Length is not nearly as important as bookmarkability. You mentioned that you are going to be changing stuff every day. That worries me.

> his service. Though, the effect of this is limited, because descriptor
> ids are automatically changed every day. My idea was to use 32 bits
> for the service id.

Your other ideas:

Breaking things into parts:

> For backward compatibility reasons, those 200 bits could also be
> distributed by using 80 bits for the service id and 120 bits for the
> secret key. Then, people could start using the new descriptor by
> simply adding a dot and a secret cookie to their current (unchanged)
> onion address. This would look like this:
> http://6sxoyfb3h2nvok2d.6sxoyfb3h2nvok2d6sxoyfb3.onion/

I like this much, much more.

> Maybe we shouldn't even extend the onion addresses at all, but
> allocate the 80 bits in another way, e.g. 24 bits for the service id
> and 56 bits for the secret cookie? Then we should use another virtual
> top level domain to distinguish current and new descriptors, resulting
> in something like the following:
> http://6sxoyfb3h2nvok2d.hidden/

I think this is the best. Use the current system for sites using the old method (central servers mapping .onion addresses to introduction points), and onion.secretkey.hidden for stuff using the new, non-central servers.

> What do you guys prefer? How do you exchange onion addresses?
> Publishing them on non-hidden web pages, pasting them into IRC chats,
> writing them on business cards, memorizing and telling them, ...? I
> think it's important to find a balance between security and usability
> here.

Business cards? Hadn't thought of that. Yeah, that would give a bonus to short.
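The backward-compatible variant quoted above just appends the cookie as an extra dotted label to the existing address. A small sketch with a hypothetical `dotted_onion` helper, assuming the cookie is base32-encoded like the rest of the address:

```python
import base64

def dotted_onion(onion_address: str, cookie: bytes) -> str:
    """Append a base32-encoded secret cookie as an extra label to an
    unchanged .onion address (the 80 + 120 bit split discussed in the
    thread). Illustrative only; encoding details are assumptions."""
    label = base64.b32encode(cookie).decode().lower().rstrip("=")
    host = onion_address[: -len(".onion")]
    return f"{host}.{label}.onion"

cookie = bytes(15)  # 120 bits of (all-zero placeholder) cookie
print(dotted_onion("6sxoyfb3h2nvok2d.onion", cookie))
# 120 bits -> 24 base32 characters in the new label
```

The appeal is that the first label stays exactly the current onion address, so existing bookmarks remain meaningful and only the cookie part is new.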
Length of new onion addresses
(posting to or-talk and or-dev, because it concerns both usability and development)

Hi,

At the moment I am designing the new ASCII-based format for hidden service descriptors, including new security features like encryption of introduction points and the ability to be distributed among onion routers unpredictably for non-clients. This incorporates a secret cookie that needs to be passed between the hidden service provider and his clients, in addition to the service id, which is the current onion address. You can read about all the details in proposal #114 in the svn repository.

As the new descriptor might replace the current descriptor some day, and the format of onion addresses would affect all hidden service users, I would like to discuss the decision on the onion address format in public, rather than make a decision on my own and be confronted with incomprehension when it might be introduced.

The current onion addresses consist of 80 bits and (as you all know) look like this (address of the hidden wiki):

http://6sxoyfb3h2nvok2d.onion/

The new onion addresses would consist of two parts: (1) the service id and (2) the secret cookie.

(1) In contrast to the current format, the service id is not used to identify the service (a bad name then, I know), but to generate an unpredictable descriptor id determining where to find the service descriptor. If an adversary can create her own key pair with a fingerprint equal to the service id, she can prevent the actual hidden service from announcing his service. The effect of this is limited, though, because descriptor ids are automatically changed every day. My idea was to use 32 bits for the service id.

(2) The secret cookie is the key for encrypting and decrypting the introduction points and for calculating the current descriptor id. Whoever finds out the secret cookie could observe hidden service activity and attack introduction points, both of which would otherwise not be possible.

My plan was to use a 128 bit key as secret cookie. In total, new onion addresses would be 160 bits long. The question is now whether an onion address of that size is still manageable for human beings. (Is the current size manageable after all?) For illustration purposes, the new addresses would look like this:

http://6sxoyfb3h2nvok2d6sxoyfb3h2nvok2d.onion/

Or are my assumptions concerning the length of the service id still too incautious? Would 200 bits (72 bits for the service id and 128 bits for the secret cookie), resulting in the following onion address, be better?

http://6sxoyfb3h2nvok2d6sxoyfb3h2nvok2d6sxoyfb3.onion/

For backward compatibility reasons, those 200 bits could also be distributed by using 80 bits for the service id and 120 bits for the secret key. Then, people could start using the new descriptor by simply adding a dot and a secret cookie to their current (unchanged) onion address. This would look like this:

http://6sxoyfb3h2nvok2d.6sxoyfb3h2nvok2d6sxoyfb3.onion/

To the (probably upcoming) question of why one needs a secret cookie at all, or whether it could be made optional in the long run: The plan is to distribute the storage of descriptors, primarily for scalability reasons. But this raises new security issues, because anyone running a stable onion router could become responsible for storing a descriptor, so we simply need new security mechanisms. Otherwise, security would be made worse by the distribution; with the secret cookie, security even gets better than before.

But perhaps we should rather aim for usability than for security and use only 120 bit long onion addresses, e.g. by using 32 bits for the service id and 88 bits for the cookie, resulting in the following onion address?

http://6sxoyfb3h2nvok2d6sxoyfb3.onion/

Maybe we shouldn't even extend the onion addresses at all, but allocate the 80 bits in another way, e.g. 24 bits for the service id and 56 bits for the secret cookie? Then we should use another virtual top level domain to distinguish current and new descriptors, resulting in something like the following:

http://6sxoyfb3h2nvok2d.hidden/

What do you guys prefer? How do you exchange onion addresses? Publishing them on non-hidden web pages, pasting them into IRC chats, writing them on business cards, memorizing and telling them, ...? I think it's important to find a balance between security and usability here.

The question is: Does size matter? :)

Any comments are welcome! Thanks!

--Karsten
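The bit allocations proposed above translate directly into address lengths, since onion addresses use base32 (5 bits per character). A quick sanity check over the variants discussed in the thread:

```python
import math

def base32_len(bits: int) -> int:
    """Number of base32 characters needed to encode `bits` of data
    (5 bits per character, as in .onion addresses)."""
    return math.ceil(bits / 5)

# (service id bits, secret cookie bits) for each proposal in the thread.
allocations = [(80, 0), (32, 128), (72, 128), (80, 120), (32, 88), (24, 56)]
for sid_bits, cookie_bits in allocations:
    total = sid_bits + cookie_bits
    print(f"{sid_bits:>3} + {cookie_bits:>3} bits -> "
          f"{base32_len(total)} characters")
```

This reproduces the lengths of the sample addresses in the mail: the current 80-bit address is 16 characters, the 160-bit proposal is 32, the 200-bit one is 40, and the 120-bit compromise is 24.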
Re: question about A/B communication with dir servers for hidden services
Hi,

> Are the "streams" from Bob and Alice to put & get the descriptor of a
> hidden service always established over Tor circuits

Yes, they are.

> or sometimes direct streams from the OPs to the Tor directory server?

No, never.

> In other words: Is it assured that the directory server doesn't know
> that "Bob" has established a hidden service and "Alice" has asked
> about it?

Correct, the directory server never learns the IP addresses of the service provider or its clients.

--Karsten