Re: Relay flooding, confirmation, HS's, default relay, web of trust
On 2010-12-06 09:18, John Case wrote:
> On Mon, 6 Dec 2010, grarpamp wrote:
>> [...] Maybe there would also be benefit in a web of trust amongst
>> nodes not unlike a keysigning party. As with social networking,
>> people vouch for each other in various ways and strengths based on
>> how they feel that person meets them. I don't see any reason why
>> node operators [descriptors] could not keysign and have that web
>> encoded into the descriptors, directories, DHT, etc.
>
> I proposed early in the previous thread that not only should a web of
> trust be considered, but that this was indeed a classic case of a web
> of trust ... I didn't see any comment on this from the Big Names on
> the list, though...

The Web of Trust (WoT) concept provides marginal security benefits, and then only in a very narrow set of circumstances that are unlikely to hold true for the larger community of Tor node operators.

Starting with the second point: the WoT concept presumes that trust between its members precedes the codification of that trust into attestations attached to digital certificates. In other words, the WoT might provide (but likely will not) security benefits to a group of users that have pre-existing social relations and trust; for example, members of a human rights group that have personally known each other, or at least the bulk of each other, for years. The WoT cannot provide security benefits to a group of users with no pre-existing social trust relationship, such as the set of Tor node operators. The thousands of Tor node operators, only a tiny percentage of which have an existing social relationship, have no inherent trust amongst each other. And how could they? Absent an existing real-life WoT, there is no digital WoT to codify.

Even within a group that has strong existing trust and a social graph in real life, the digital codification of a WoT offers security benefits only at the extreme margins. This fact is most easily explained by example:

o Fire up your preferred OpenPGP software. (If you don't have OpenPGP software, then your understanding of how a WoT works is likely different from what a WoT actually does.)

o Eliminate all public keys for users with whom you do not intend to communicate. (No communication security system can provide security benefits to communications that will never take place.)

o List the public keys that show as valid. (Meaning they are signed by one or more keys that you trust to some degree.)

o Eliminate all the public keys that are signed by your key. (Those keys are not authenticated by the WoT; they were authenticated by you directly.)

o Eliminate all the public keys that are signed by keys that you chose to trust because they are the equivalent of CA root keys. This includes Debian distribution signing keys, the keys of any commercial CA, and the signing keys of auto-responder key servers such as the PGP Global Directory. (Signatures performed by such keys do not employ the WoT.)

o Look at the small number of public keys remaining. The keys are likely from deep inside your social circle. Now eliminate all the public keys that you could trivially authenticate directly, such as by asking key holders who are well known to you to provide you with their key's fingerprint at work, at the next security conference, or the next time you meet at the pub. (The WoT may have authenticated those keys, but the WoT was not necessary to do so, since you could have trivially authenticated those few keys yourself.)

o Lastly, count the remaining public keys. The number will likely be zero (no real-life benefit to the WoT) or close to zero (benefit only at the extreme margins).

In summary, the WoT is not a suitable solution to increasing the security of the Tor network.

--Lucky Green

*** To unsubscribe, send an e-mail to majord...@torproject.org with
"unsubscribe or-talk" in the body. http://archives.seul.org/or/talk/
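The elimination exercise above can be modeled as simple set filtering over a toy keyring. All names and flags below are hypothetical illustrations for the argument, not output from real OpenPGP software:

```python
# Toy model of the WoT elimination exercise: each entry is a public key
# with flags mirroring the steps in the post. All data is hypothetical.
keyring = [
    {"owner": "alice", "will_communicate": True,  "valid": True,
     "signed_by_me": True,  "signed_by_root_like_key": False, "trivial_to_verify": True},
    {"owner": "bob",   "will_communicate": True,  "valid": True,
     "signed_by_me": False, "signed_by_root_like_key": True,  "trivial_to_verify": False},
    {"owner": "carol", "will_communicate": False, "valid": True,
     "signed_by_me": False, "signed_by_root_like_key": False, "trivial_to_verify": False},
    {"owner": "dave",  "will_communicate": True,  "valid": True,
     "signed_by_me": False, "signed_by_root_like_key": False, "trivial_to_verify": True},
]

remaining = [k for k in keyring if k["will_communicate"]]              # keep only intended peers
remaining = [k for k in remaining if k["valid"]]                       # keep only keys showing valid
remaining = [k for k in remaining if not k["signed_by_me"]]            # drop keys I signed directly
remaining = [k for k in remaining if not k["signed_by_root_like_key"]] # drop CA-root-style signatures
remaining = [k for k in remaining if not k["trivial_to_verify"]]       # drop keys I could check in person

# Keys authenticated *only* by the WoT:
print(len(remaining))  # → 0
```

With this (made-up) keyring, every key is eliminated before the final step, matching the post's claim that the count is typically zero or near zero.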
The Case for Banning Reduced Hop Count Implementations
Folks,

I have followed various discussions lately about the creation of reduced hop Tor clients that implement fewer than the three hops called for by Tor's design. Such clients represent an attack on Tor as a whole. Indeed, defenses against reduced hop clients leveraging the Tor network should be built into Tor's design to defend against this attack.

Today and for the foreseeable future, Tor's network latency sits near the maximum latency that Tor users are willing to accept. As Tor gets faster, it attracts more users and more traffic, which in turn increases latency. As the Tor network increases in latency, it loses users for whom the latency becomes unacceptably high. Latency in turn relates to the number of hops: the more hops, the higher the latency. Which, not coincidentally, is why some with lower anonymity requirements may prefer fewer hops.

Here is the catch: as traffic from those with lower anonymity and hop requirements increases, it drives the latency of three hop connections above the latency acceptable to those seeking higher anonymity. The end state, if lower than three hop implementations are permitted to use the Tor network, is that Tor's network performance will be acceptable only to users of lower hop clients. This fact alone drives a need to block reduced hop clients from the network.

But it gets worse. Many of those that would be satisfied with fewer hops engage in comparatively low risk behavior (which is why they are satisfied with lower anonymity), such as downloading large files of questionable origin. The protocols commonly used for such downloads can accept higher latency than the interactive protocols needed by the part of the user population seeking higher anonymity levels, pushing the latency of three hop clients farther out of the usability envelope.
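The relation between hop count and latency can be made concrete with a back-of-the-envelope model. The per-hop figure below is an assumption for illustration only, not a measured Tor value:

```python
# Toy latency model: circuit latency grows roughly linearly with hop
# count, so reduced-hop clients see proportionally lower latency.
# PER_HOP_MS is an assumed illustrative figure, not measured from Tor.
PER_HOP_MS = 150

def circuit_latency_ms(hops: int) -> int:
    """Approximate added latency of a circuit with `hops` relays."""
    return hops * PER_HOP_MS

for hops in (1, 2, 3):
    print(f"{hops} hop(s): ~{circuit_latency_ms(hops)} ms")
```

Under this linear model, a one hop client enjoys roughly a third of the three hop latency, which is exactly the incentive the post argues would crowd out full-anonymity users.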
Though the above is more than sufficient cause to block reduced hop clients from corrupting the Tor network, it deserves mention that single hop clients in particular remove the protection that Tor's design has until now afforded to exit node operators. If only three hop clients can use the Tor network, the Tor exit node operator can be confident that capture of an exit hop's connection log will fail to provide the attacker with useful tracking information. This discourages both legal and illegal attacks on Tor exit hops and thus increases the overall number and capacity of Tor exits. Removing this protection will lead to an increase in attacks on exit hops, which in turn will lead to decreased exit capacity, further negatively impacting Tor network latency.

In summary, reduced hop clients are deleterious to Tor as a whole, and to users needing the level of anonymity that Tor was designed to provide in particular. Users with lower anonymity needs should be guided towards the many other systems available today that provide lower anonymity than Tor. Most importantly, Tor should implement a (potentially blinded) hop verification that ensures that lower hop count clients cannot abuse the Tor network.

--Lucky Green
Re: Problems running Tor on Vista x64
[EMAIL PROTECTED] wrote:
> On Mon, Nov 10, 2008 at 09:51:00AM +0100, [EMAIL PROTECTED] wrote
> 0.7K bytes in 16 lines about:
> : Nov 10 09:34:42.445 [err] Error from libevent: evsignal_init:
> : socketpair: No error
>
> It reads like libevent doesn't like something in the wow32 subsystem
> inside 64-bit vista. Do you get a drwatson crash dump?

The venerable Dr. Watson chose to enter well-deserved retirement with the release of Vista. The good doctor's successor is WinDbg. Both 64-bit and 32-bit versions can be found at:

http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx

Users of Windows XP, 2000, and even NT4 are equally encouraged to let the good doctor rest by installing the tools found at the above URL.

Enjoy,
--Lucky
Reduced Tor Traffic [was: Re: peculiar server...]
Roger Dingledine wrote:
> On Tue, Sep 09, 2008 at 05:15:15AM -0500, Scott Bennett wrote:
>> [...] That brings us back to something I've already posted on
>> OR-TALK, namely, the apparent slowdown in tor traffic that has
>> reduced the traffic through my tor server by at least 30% and,
>> judging from the reduced peaks shown for a lot of the high-volume
>> servers listed on the torstatus page, the tor network at large.
>
> We're working on plans to start gathering more methodical data about
> how the network has run and is running, with the goal of being able
> to answer questions like this more usefully.

I am very much looking forward to more diagnostic instrumentation in the Tor network. I am seeing a 30% difference in the traffic through basically identical servers that are neither bandwidth nor CPU limited and have identical uptimes. Something about the path selection appears to favor one server over the other.

Also interesting to me is the overall reduced amount of traffic that I have been seeing with my middleman nodes over the last few months. The most likely explanation is that the overall Tor network capacity is exit node bound and that middleman capacity has grown disproportionately over time. Still, it sure would be nice to be able to perform rigorous analysis on the network.

--Lucky
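Tor clients weight relay selection by bandwidth, so even small differences in how two nodes are weighted compound across many path choices. A minimal sketch of bandwidth-weighted selection follows; the relay names and bandwidths are hypothetical, and real Tor applies consensus weights and many additional constraints:

```python
import random

# Simplified sketch of bandwidth-weighted relay selection. This only
# illustrates that selection probability tracks advertised bandwidth;
# it is not Tor's actual path selection algorithm.
relays = {"relayA": 5000, "relayB": 5000, "relayC": 10000}  # KB/s, hypothetical

def pick_relay(rng: random.Random) -> str:
    """Pick one relay with probability proportional to its bandwidth."""
    names = list(relays)
    return rng.choices(names, weights=[relays[n] for n in names], k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
counts = {name: 0 for name in relays}
for _ in range(10_000):
    counts[pick_relay(rng)] += 1

print(counts)  # relayC, with twice the bandwidth, is picked about twice as often
```

In such a model, a node whose advertised or measured bandwidth is underestimated will systematically receive less traffic than an otherwise identical peer, which is one plausible mechanism for the 30% gap described above.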
Re: Your system clock just jumped on Debian+VMware ESX
Marco Bonetti wrote:
> On Thu, February 28, 2008 06:14, Lucky Green wrote:
>> NTP: ntp is installed on the guest. ntpq -p shows a solid lock.
>
> remove ntp from the guest, it causes troubles. also, search the
> vmware kb for clock issues; the most common fixes are removing ntp
> services from the guest, installing tools on the guest, and selecting
> clock synchronization (with the host). another common pitfall is the
> bitness of host and guest: keep 32bit hosts with 32bit guests and the
> same with 64bit; mixing them could raise clock problems. A long time
> ago I had the very same problem with a 64bit ubuntu host running
> vmware server and a 32bit debian guest.

Thank you all for your good advice! I tried several potential fixes. Unfortunately, none of them worked. What I tried so far:

1) Removed ntp from the guest.
2) Enabled time sync from the guest to the host using vmware-guest --cmd vmx.set_options synctime 0 1 (which, unlike editing the .vmx file by hand, is permanent).
3) Verified that the ESX host has the correct time with ntp enabled and is shown as being in the correct time zone.
4) Verified that the hardware clocks on both host and guest are set correctly.
5) Scoured the VMware forums for advice. I see evidence of drifts, but not of jumps.

I continue to see Tor report errors of jumps in the system time of 4397-4399 seconds. Strangely, this jump in time does not appear to be reflected in the time stamp that Tor assigns to the error message.

What I have not yet tried: I have not tried setting clocksource=pit and similar grub modifications suggested in the VMware forums. I don't see how such changes could help in this case. We are not talking about a slow drift; we are talking about (supposed) system clock jumps of over an hour only minutes apart.

Suggestions for next steps would be much appreciated.

Thanks,
--Lucky
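For reference, the clocksource change discussed above is a kernel boot parameter set in GRUB legacy's menu.lst (as used on etch). The entry below is illustrative only; the kernel version matches the guest described in this thread, but the device names are placeholders:

```
# /boot/grub/menu.lst -- illustrative entry only; device paths are placeholders
title  Debian GNU/Linux, kernel 2.6.22-4-amd64 (clocksource=pit)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.22-4-amd64 root=/dev/sda1 ro clocksource=pit
initrd /boot/initrd.img-2.6.22-4-amd64
```

As the post notes, this addresses gradual drift from an unstable clocksource; whether it could have any effect on hour-scale jumps is doubtful.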
Your system clock just jumped on Debian+VMware ESX
I am seeing the following errors in the Tor log:

[...]
Feb 28 04:54:43.008 [notice] Your system clock just jumped 4398 seconds backward; assuming established circuits no longer work.
Feb 28 04:54:46.020 [notice] Your system clock just jumped 4399 seconds forward; assuming established circuits no longer work.
[...]

Tor version: Tor v0.2.0.20-rc (r13715), standard Debian package
Guest OS: Debian etch, kernel 2.6.22-4-amd64
Host OS: VMware ESX 3.5, VMware Tools installed
NTP: ntp is installed on the guest. ntpq -p shows a solid lock.

Details: I recognize that there have been long-standing issues with system time on VMware Workstation, though I don't believe this is the case on ESX, certainly not with VMware Tools installed. I ran another Tor test server on this very same ESX host (though in a different VM) for a couple of weeks earlier this year without issues.

Does anybody here have a suggestion how to determine the root cause?

Thanks,
--Lucky Green