[tor-dev] Support for clients using shutdown(SHUT_WR)
Hello!

I'm in the process of re-writing Shadow's traffic generation tool called tgen [0], which now depends on shutdown() for two communicating tgen nodes to inform each other that they have no more to send (i.e., to perform graceful connection shutdowns). The shutdown() functionality is useful now that tgen uses Markov models to generate more realistic Tor traffic [1].

Currently, it appears that when the tor client receives a FIN on the AP connection (because the tgen client calls shutdown(SHUT_WR)), the tor client sends a RELAY_END cell to the exit relay to instruct it to close() the exit TCP connection to the server. The behavior I expected was more like what is described in tor-spec Section 6.3 (search for "RELAY_FIN") [2], where the tor client sends a RELAY_FIN cell to instruct the exit to perform a shutdown(SHUT_WR) on the exit TCP connection to the server.

Is there any plan to support shutdown(SHUT_WR) using RELAY_FIN cells now that Tor is itself using shutdown()? (I didn't see any tickets about it after a brief search.)

Thanks!

Peace, love, and positivity,
Rob

[0] https://github.com/shadow/tgen
[1] https://tmodel-ccs2018.github.io/
[2] https://gitweb.torproject.org/torspec.git/tree/tor-spec.txt

___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
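The half-close semantics tgen relies on can be sketched with plain sockets (a generic illustration, not tgen's actual code): after shutdown(SHUT_WR) the caller sends a FIN and can write no more, but it can still read the peer's remaining data.

```python
import socket

# A connected pair standing in for the two communicating tgen nodes.
client, server = socket.socketpair()

client.sendall(b"request")
client.shutdown(socket.SHUT_WR)   # send FIN; our read side stays open

assert server.recv(1024) == b"request"
assert server.recv(1024) == b""   # EOF: the client has finished sending
server.sendall(b"response")       # the server can still write back
server.close()

print(client.recv(1024))          # b'response'
```

This is the behavior the mail expects from a RELAY_FIN cell; a RELAY_END that triggers close() instead tears down the read direction too.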
Re: [tor-dev] Connections failed to default obfs4 bridges
> On Mar 28, 2018, at 12:23 PM, David Fifield <da...@bamsoftware.com> wrote:
>
> On Wed, Mar 28, 2018 at 10:57:13AM -0400, Rob Jansen wrote:
>> In a recent connectivity test to the default obfs4 bridges [0], we found
>> that we are unable to connect to 10 or so of them (from open networks, i.e.,
>> no local filtering).
>>
>> Is this a feature, like some of them only respond to users in certain parts
>> of the world? Or is this a bug, like the default list of bridges refers to
>> old bridges that are no longer available? Or am I misunderstanding
>> functionality here?
>
> Do you mean 10 distinct IP addresses, or 10 ports on a few IP addresses?
> Not all the IP addresses in the list are distinct.

Turns out this was 10 ports on the same IP address. And we did the measurements back in December, so they are already a bit dated.

> Even while Lynn Tsai, Qi Zhong, and I were closely monitoring default
> bridge reachability, a lot of the default bridges were often offline,
> because of reboots, iptables problems, etc. See for example the "Orbot
> bridges" strip of Figure 5.2 here; the gray and red areas that precede
> blocking are where the bridge was simply offline:
> https://www.bamsoftware.com/papers/thesis/fifield-thesis.pdf#page=43
>
> We have a lot of past measurements of default bridges. The rows with
> site="eecs-login" are from the U.S.
> https://www.bamsoftware.com/proxy-probe/ (download the repo, not
> probe.csv.gz, which isn't as recent)

Ahh, this is great, thanks for sending!

Best,
Rob

signature.asc Description: Message signed with OpenPGP
[tor-dev] Connections failed to default obfs4 bridges
Hi,

In a recent connectivity test to the default obfs4 bridges [0], we found that we are unable to connect to 10 or so of them (from open networks, i.e., no local filtering).

Is this a feature, e.g., some of them only respond to users in certain parts of the world? Or is this a bug, e.g., the default list refers to old bridges that are no longer available? Or am I misunderstanding the functionality here?

Thanks!
Rob

[0] https://gitweb.torproject.org/builders/tor-browser-bundle.git/tree/Bundle-Data/PTConfigs/bridge_prefs.js
Re: [tor-dev] The case for Tor-over-QUIC
Thanks for the detailed write-up, Mike!

Theoretically, moving to QUIC seems great; it solves a lot of problems and has enough advantages that we could just run with it. I'll reiterate some of my primary concerns from Rome:

- I think it would be a mistake to push forward with such a significant change to Tor's transport layer without significant testing and experimentation. We have tools that allow us to do full-network-sized experiments (Shadow), and we have interested researchers who want to help (including me).

- However, I am much less optimistic than you that it will just work and instantly improve performance. You mention that Google has done lots of tests, but my guess is they test in environments that look like the Internet - i.e., fast core and slow edges. Do we know how it would perform when the path contains 6 additional edges sandwiching 3 potentially low-bandwidth Tor relays? Tor is a significantly different environment than the Internet; for example, an end-to-end congestion signal in Tor will take orders of magnitude longer to reach the client than in traditional networks.

- Because of the above, I'm not sure that an end-to-end design is the right way to go. As I mentioned, we have simulators and researchers, so we should be able to learn more and make a more informed decision before committing to a design that will be difficult to change later.

- We should be sure to pay close attention to how this will affect emerging networks and applications, e.g., mobile devices and onion services.

- The DoS attacks will change form, but I don't think they will disappear. I think it would be wise to understand how DoS might change, which is much easier once we have a design to analyze. Your summary helps with that.

I think it would be worth including a research effort to investigate these issues in any proposal that gets written.

Cheers,
Rob

> On Mar 23, 2018, at 7:18 PM, Mike Perry wrote:
>
> In Rome, I held a session about network protocol upgrades. My intent was
> to cover the switch to two guards, conflux, datagram transports, and
> QUIC. We ended up touching only briefly on everything but QUIC, but we
> went into enough depth on QUIC itself that it was a worthwhile and very
> productive session.
>
> Our notes are here:
> https://trac.torproject.org/projects/tor/wiki/org/meetings/2018Rome/Notes/FutureTorNetworkProtocolUpgrades
>
> I wanted to give a bit of background and some historical perspective
> about datagram transports for Tor, as well as explain those notes in
> long form, to get everybody on the same page about this idea. With
> the forthcoming IETF standard ratification of QUIC along with several
> solid reference implementations (canonical list:
> https://github.com/quicwg/base-drafts/wiki/Implementations), I believe
> we are close to the point where we can finally put together a plan (and
> a funding proposal) to undertake this work.
>
> Consider this mail a pre-proposal to temperature check and solicit early
> feedback about a Tor-over-QUIC deployment, before we invest the effort
> to deep dive into the framing, protocol, and implementation details that
> will be necessary for a full proposal.
Re: [tor-dev] Is there strictly a one-to-one BW scanner to BW auth relationship?
> On Mar 25, 2018, at 12:13 PM, Sebastian Hahn <hahn@web.de> wrote:
>
>> On 24. Mar 2018, at 13:50, Rob Jansen <rob.g.jan...@nrl.navy.mil> wrote:
>>> I think moria1 runs its own, and Faravahar runs its own. I've lost track
>>> of the others, but I'd guess that bastet also runs its own, and that
>>> maatuska pulls numbers from a bwauth that tjr runs.
>>>
>>> https://consensus-health.torproject.org/#bwauthstatus
>>
>> Hmm. I wish we were more transparent about who is running scanners and which
>> bwauths consume results from which scanners. Something to keep in mind for
>> those of us working on next-gen replacement scanners.
>
> It is at the discretion of the bwauth operator to ensure that
> they're using a trusted source for their data. To me, that
> means anything other than running the code myself is utterly
> unacceptable; other operators might make other choices. I
> think it makes sense to say that the operator of a given bw
> auth is *responsible* for whatever they're voting on, whether
> they run the bwauth themselves or not.

I totally agree! Though I do think that the decision of which data sources are used could be made public - not as a means to call into question or criticize the choice of data source, but as a means to understand how the system works. Eventually (in an ideal world where the scanners report their status), the community could help monitor the health of the scanners. If this makes the job of a bwauth more difficult (we should design it so it doesn't), that should certainly be considered as well.

Best,
Rob
Re: [tor-dev] Is there strictly a one-to-one BW scanner to BW auth relationship?
> On Mar 23, 2018, at 8:21 PM, Roger Dingledine wrote:
>
> I believe there are no scanners that supply answers to multiple directory
> authorities.

Great! IIRC, at one point in the distant past this was not the case.

> You could in theory check whether this is true in practice by seeing if
> any dir auths vote the same numbers.

Did that; nobody is voting the same numbers. So presumably that means all bwauths are using independent numbers.

I was concerned because bastet and moria1 both stopped voting anything for two of my >2-year-old relays during distinct time intervals yesterday. I mean there were missing votes, rather than votes of no or low bandwidth. This caused a consensus of Unmeasured=1 and BW=20 for several hours. See [0] if you want to see what I mean by missing votes - I noticed that this happens in every consensus I viewed for at least some number of relays. I guess maybe I restarted my relays at just the right time to cause a scanner to go nonlinear or something. Things seem to be back to normal now, though. I'm going to chalk this up as bad error handling in the TorFlow code, because that accusation is easy and generally agreeable :D

> I think moria1 runs its own, and Faravahar runs its own. I've lost track
> of the others, but I'd guess that bastet also runs its own, and that
> maatuska pulls numbers from a bwauth that tjr runs.
>
> https://consensus-health.torproject.org/#bwauthstatus

Hmm. I wish we were more transparent about who is running scanners and which bwauths consume results from which scanners. Something to keep in mind for those of us working on next-gen replacement scanners.

> On Mar 23, 2018, at 10:02 PM, Matthew Finkel wrote:
>
> Not the original question Rob asked, but a year ago there was a session
> and the (reformatted) notes contain:

Thanks Matt! That is useful :)

Best,
Rob

[0] Warning, this page is quite large; it contains an entry for every relay in the consensus: https://consensus-health.torproject.org/consensus-health-2018-03-23-11-00.html#0DA9BD201766EDB19F57F49F1A013A8A5432C008
[tor-dev] Is there strictly a one-to-one BW scanner to BW auth relationship?
Hi,

I understand that the current bandwidth measurement system is far from ideal, but I am wondering how the current system works. Does each bandwidth authority also run a bandwidth scanner? Or is it possible that the results from a bandwidth scanner are supplied to multiple authorities?

Thanks!
Rob
Re: [tor-dev] Proposal Waterfilling
> On Mar 7, 2018, at 12:34 PM, Rob Jansen <rob.g.jan...@nrl.navy.mil> wrote:
>
> Hi Florentin,
>
> I've added some comments below.

(I just found out that Aaron responded to your reply this morning, but I didn't get that email - it probably got stuck somewhere in the NRL email filters. Sorry if I made any points that were already made; they were made independently.)

Best,
Rob
Re: [tor-dev] Proposal Waterfilling
Hi Florentin,

I've added some comments below. Overall, I think a useful discussion for the community to have is whether or not we think Waterfilling is even a good idea in the first place, before you go ahead and do a bunch of work writing and fixing a proposal that may just end up in the pile of old grad student research ideas. (Maybe I'm too late, or maybe you want a proposal out there in any case.)

> On Mar 7, 2018, at 3:28 AM, Florentin Rochet wrote:
>
> Hi Aaron,
>
> Thanks for your comments, you are definitely touching interesting aspects.
> Here are thoughts regarding your objections:
>
> 1) The cost of IPs vs. bandwidth is definitely a function of market offers.
> Your $500/Gbps/month seems quite expensive compared to what can be found on
> OVH (which is hosting a large number of relays): they ask ~3 euros/IP/month,
> including unlimited 100 Mbps traffic. If we assume that wgg = 2/3 and a water
> level at 10 Mbps, this means that, if you want to have 1 Gbps of guard
> bandwidth,
> - the current Tor mechanisms would cost you 3 * 10 * 3/2 = 45 euros/month
> - the waterfilling mechanism would cost you 3 * 100 = 300 euros/month
>
> We do not believe that this is conclusive, as the market changes, and there
> certainly are dozens of other providers.

Have you purchased service from OVH and run relays yourself? Have you talked to anyone who has? I strongly believe that you will not find a provider that legitimately offers you continuous 100 Mbit/s over a long period of time for 3 euros. Providers tend to use "unmetered" and "unlimited" bandwidth as marketing terms, but they don't actually mean what you think unlimited means. What they mean is that you have a 100 Mbit/s network card, and they allow you to burst up to the full 100 Mbit/s. However, they usually have a total bandwidth cap on such service, or become angry and threaten to disconnect your service if you don't cut down your usage (this has happened to me).

It is far more expensive to obtain *continuous*, i.e., *sustained*, bandwidth usage over time. Generally, it's cheaper to buy in bulk. In the US, the cheapest bandwidth service we found (that also allows us to run Tor relays) was one that offers sustained 1 Gbit/s for an average of $500/month (including service fees).

> The same applies for 0-day attacks: if you need to buy them just for
> attacking Tor, then they are expensive. If you are an organization in the
> business of handling 0-day attacks for various other reasons, then the costs
> are very different. And it may be unclear to determine if it is
> easier/cheaper to compromise 1 top relay or 20 mid-level relays.

It's hard to reason about this, since I'm not in the business. However, if you already have a zero-day, why would you want to waste it on a Tor relay? You would risk being discovered accessing the machine of a likely security-conscious relay operator, and you could just run your own relays instead. Running your own relays does have some cost, but it is far easier to manage and more reliable, since you don't have to worry about being discovered or losing access because the software is patched.

> And we are not sure that the picture is so clear about botnets either: bots
> that can become guards need to have high availability (in order to pass the
> guard stability requirements), and such high availability bots are also
> likely to have a bandwidth that is higher than the water level (abandoned
> machines in university networks, ...). As a result, waterfilling would
> increase the number of high availability bots that are needed, which is
> likely to be hard.

I think it's much more likely that bots are running on my parents' Windows machines than on high-bandwidth university machines. Sure, there might be some machines with outdated OSes out there on university networks, but they are also monitored pretty heavily for suspicious activity by the university IT folks, who regularly check in with the machine owners when anything suspicious occurs on the network.

> 2) Waterfilling makes it necessary for an adversary to run a larger number of
> relays. Apart from the costs of service providers, this large number of
> relays needs to be managed in an apparently independent way, otherwise they
> would become suspicious to community members, like nusenu who is doing a
> great job spotting all anomalies. It seems plausible that running 100 relays
> in such a way that they look independent is at least as difficult as doing
> that with 10 relays.

But not much more difficult, and not difficult enough that an intern could not whip up a managed deployment in a few weeks. There are various tools out there that can automate software installation and configuration. Ansible, Chef, and Puppet are popular ones, but here is a longer list: https://en.wikipedia.org/wiki/Comparison_of_open-source_configuration_management_software

I would be
Re: [tor-dev] Should we disable the collection of some stats published in extra-infos?
> On Mar 28, 2016, at 5:04 AM, Karsten Loesing <kars...@torproject.org> wrote:
>
> On 25/03/16 16:24, Rob Jansen wrote:
>>
>>> On Feb 11, 2016, at 2:51 PM, Rob Jansen <rob.g.jan...@nrl.navy.mil> wrote:
>>>
>>>> These statistics are being collected for years now, and it
>>>> might take another year or so for relays to upgrade to stop
>>>> collecting them. So what's another month.
>>>
>>> Agreed.
>>
>> Hi Karsten,
>>
>> Could you please summarize your current plans regarding stopping
>> or replacing the collection method for sensitive statistics, based
>> on discussions in Valencia? I'm particularly interested in the
>> plans for the client IP and the exit stats that are categorized by
>> exit port.
>
> Hi Rob,
>
> Are you asking for a summary of our discussion in Valencia or for the
> current state of things?
>
> Here's a quick update on the two stats you're mentioning:
>
> - I'm working on replacing client IP stats by implementing stats on
> directory requests by transport and IP version (#8786). I got a bit
> distracted by the fact that we're currently not counting IPv6
> directory requests at all (#18460), but once that's solved, I'll
> resume working on the other ticket.
>
> - I asked a metrics team volunteer to look at exit stats and the
> possible benefits from gathering them and then forgot about that task.
> I'll look at these stats myself this week.
>
> I also added an item to the metrics roadmap, so that we can revisit
> this discussion at metrics team meetings:
> https://trac.torproject.org/projects/tor/wiki/org/teams/MetricsTeam#RoadmapOctober2015toOctober2016

Ahh, great! This is what I was looking for; thank you Karsten :)

-Rob
Re: [tor-dev] Should we disable the collection of some stats published in extra-infos?
> On Feb 11, 2016, at 2:51 PM, Rob Jansen <rob.g.jan...@nrl.navy.mil> wrote:
>
>> These statistics are being collected for years now, and it might take
>> another year or so for relays to upgrade to stop collecting them. So
>> what's another month.
>
> Agreed.

Hi Karsten,

Could you please summarize your current plans regarding stopping or replacing the collection method for sensitive statistics, based on discussions in Valencia? I'm particularly interested in the plans for the client IP stats and the exit stats that are categorized by exit port.

Thanks,
Rob
Re: [tor-dev] Should we disable the collection of some stats published in extra-infos?
> On Jan 19, 2016, at 3:45 AM, Karsten Loesing <kars...@torproject.org> wrote:
>
> On 15/01/16 23:00, Rob Jansen wrote:
>> Hello,
>
> Hi Rob,
>
> I'm moving this discussion from metrics-team@ to tor-dev@, because I
> think it's relevant for little-t-tor devs who are not subscribed to
> metrics-team@. Hope you don't mind.

No problem.

>> Should Tor still be collecting these things? Should Tor disable the
>> collection of these statistics until we have a more
>> privacy-preserving way to collect and aggregate them?
>>
>> The good news is that privacy-preserving techniques exist that can
>> reduce information leakage. I'm developing a tool based on the
>> secret-sharing variant of PrivEx [3] to collect some of these types
>> of statistics while providing privacy guarantees. We are currently
>> using it to collect only those stats that are useful for producing
>> Tor traffic models. A great advantage of this tool is that the
>> various counters that we store during the collection phase get
>> noise added and are randomized during initialization; only the
>> aggregates are ever known and revealed by the aggregation server,
>> limiting the information that is lost if a relay is compromised.
>> This is a large improvement over the current collection method,
>> which only adds noise before publication and reveals statistics on
>> a per-relay basis.
>
> Suggestion: How about we evaluate these statistics published by relays
> in the past years to see if there are other benefits or risks we
> didn't think of, and then we decide whether to leave them in, modify
> them, or take them out?

Sounds great, though I'm not sure how this evaluation will happen.

> The reason is that I'd want to avoid removing this code only to
> realize shortly after that we overlooked a good reason for keeping it.

The problem is that it is unlikely that anyone will speak up until *after* we remove them, so it may be difficult to realize all use cases until they have already been removed. At least for me, it's not just a matter of thinking hard enough about it. That said, I think that for some of these stats, the risk is such that it is hard to imagine collecting them the way Tor does currently.

> These statistics are being collected for years now, and it might take
> another year or so for relays to upgrade to stop collecting them. So
> what's another month.

Agreed.

To be clear, I am not suggesting that we simply remove everything and never look back. I'm actually suggesting using secure aggregation to *replace* the current method for counting and aggregating. Maybe the secure counting/aggregation happens occasionally, or maybe continuously. The details there still need to be worked out (working on it). I would suggest that we wait until those details are in fact worked out and we discuss a transition plan before removing the old collection methods, but I think that some stats carry enough risk that it may not be worth waiting. Maybe we can remove the riskiest stats (IP addresses, exit ports, exit bytes) and wait to remove the others until I have more details about a replacement.

> Thanks for (re-)starting this discussion!

Cheers,
Rob
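The counting idea quoted above can be sketched as additive secret sharing (a toy illustration of the general technique, not the actual PrivEx protocol or the tool mentioned here, and omitting the differential-privacy noise): each relay splits its counter into random shares for the aggregation servers, so no single share reveals the true count, while the shares still sum to the aggregate.

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def make_shares(value, n):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Hypothetical per-relay counters, blinded at initialization; a
# compromised relay leaks only random-looking shares, never the count.
relay_counts = [17, 42, 5]
all_shares = [make_shares(c, 3) for c in relay_counts]

# Each aggregation server sums the shares it holds; only the final
# combination of all server sums equals the true aggregate.
server_sums = [sum(col) % P for col in zip(*all_shares)]
aggregate = sum(server_sums) % P
print(aggregate)  # 64
```

In the real scheme each counter additionally starts from a noise value, so even the exact aggregate is never revealed.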
[tor-dev] New Shadow versions now available
Hello,

Shadow versions 1.11.0 and 1.11.1 are now released!

https://github.com/shadow/shadow/releases/tag/v1.11.0
https://github.com/shadow/shadow/releases/tag/v1.11.1

These versions represent a huge engineering effort to provide transparent support for plug-ins that use pthreads and that make blocking I/O or system calls. We are using a fork of GNU portable threads (http://www.gnu.org/software/pth/) to enable this thread and blocking support. Shadow now literally just calls `tor_main()` and handles the rest internally.

More details can be found in the release notes here:
http://mailman.cs.umn.edu/archives/shadow-support/2016-January/000475.html

All the best,
Rob
[tor-dev] The Shadow Simulator will soon use cooperative threading to better support plug-ins
Hello Shadow! Hello Tor!

For the last couple of weeks, I have been overhauling the way Shadow interacts with plug-ins, and I want to make sure you (and others developing/using Shadow plug-ins) are aware of the upcoming changes. I am enhancing Shadow to support the following two capabilities:

1. Run plug-ins that make blocking system calls
2. Run plug-ins that create and use pthreads

To achieve this, I am using a user-space cooperative threading library called pth: http://www.gnu.org/software/pth/

Pth works by storing and swapping stack states using makecontext and swapcontext (see man pages), so that we can intelligently switch back to Shadow's execution stack when the plug-in makes blocking libc function calls, and switch back to the plug-in stack only when its file descriptors are ready for I/O or its timers have expired. This same technique allows us to run would-be plug-in pthreads as application-layer pth threads instead, which will help us avoid expensive kernel context switches. This will be especially useful/efficient when running thousands of nodes simultaneously, which would normally consume thousands of pthreads that are not Shadow-aware. I have modified the pth library to be reentrant and thread-safe using caller-supplied states and thread-specific data, which I needed because Shadow itself runs with multiple simulator worker threads. Cool, huh!?

So what does this mean? With this fancy new approach, we must still build each plug-in using LLVM/Clang in order to make use of Shadow's state-swapping trick, but we may no longer need to do other customization, particularly the customizations historically needed for programs that block or use threads. Also, with these changes, Shadow's API has been removed: Shadow now just calls the main function symbol in each plug-in directly, and the plug-in need not know anything about Shadow.

I'm excited about these changes, and hope they will improve support for new plug-ins while also reducing maintenance in the long term. So far I have Shadow's built-in traffic generator running correctly with the new approach, and am now working on Tor. None of this code is public yet, but if you are interested in an example of how this would work or want some early access to the code, drop me a message off-list.

Cheers!
Rob

--
Rob Jansen, Ph.D.
Computer Scientist
Center for High Assurance Computer Systems
U.S. Naval Research Laboratory
rob.g.jan...@nrl.navy.mil
robgjansen.com
Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services
I like the term "onion service" to refer to a service that requires the use of Tor to access. For onion services whose operators do not want their identities discovered, I suggest renaming "hidden service" to "anonymous onion service". For onion services that do not need anonymity (location is known, but Tor must be used to access), I think "known onion service" is a great name ("identified onion service" also makes sense, but I prefer "known" to "identified").

These suggestions are added to the wiki page: https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorR/Terminology

Cheers,
Rob
Re: [tor-dev] Performance and Security Improvements for Tor: A Survey
Hi Ian,

Thanks for the link, and for working on the survey - this was long overdue. I especially enjoy the mind map (Figure 5), which gives a quick view of all of the work over the years. The community has been busy!

On the incentives front, I believe the survey is missing a few papers:

- Proof-of-Work as Anonymous Micropayment: Rewarding a Tor Relay. FC 2015 Short Paper, http://fc15.ifca.ai/preproceedings/paper_71.pdf
- Paying the Guard: an Entry-Guard-based Payment System for Tor. FC 2015 Short Paper, http://fc15.ifca.ai/preproceedings/paper_112.pdf
- From Onions to Shallots: Rewarding Tor Relays with TEARS. HotPETs 2014, http://www.robgjansen.com/publications/tears-hotpets2014.pdf
- Payment for Anonymous Routing. PETS 2008, http://cs.gmu.edu/~astavrou/research/Par_PET_2008.pdf

While the TEARS paper only appeared at HotPETs (so far), I feel it should be included because TorCoin is cited and TEARS is more viable than the TorCoin approach (IMHO) - the reasons for this are explained in the Tor incentives blog post: https://blog.torproject.org/blog/tor-incentives-research-roundup-goldstar-par-braids-lira-tears-and-torcoin

Also, all of the above, as well as LIRA, are missing from the Incentives node of the mind map in Figure 5. I realize that this isn't necessarily an incentives survey, but most incentive schemes affect performance and some schemes were included, so it may make sense to include them all. Also, it looks like there is some whitespace below the Throttling node, so they may fit fairly easily.

Finally, there is no section on Tor simulators/emulators!? I was surprised by this, as that is definitely an area of research that has greatly helped explore performance questions. It would be great to include a section on it so that researchers reading this survey and looking to work on performance know which tools they can use to get started. Shadow, ExperimenTor, SNEAC, and Chutney are the main tools that immediately come to mind.

Hope this is useful!

All the best,
Rob

> On Mar 16, 2015, at 12:38 PM, Ian Goldberg i...@cs.uwaterloo.ca wrote:
>
> Oh, please *do* comment. We can easily (and definitely plan to) update
> the ePrint tech report, incorporating the feedback we get from all of
> you, and giving credit in the acknowledgements. (Do let us know how
> you'd like to be credited.) Once we're happy with the result, we'll
> submit a condensed (due to page limits) version to a journal.
>
> - Ian
[tor-dev] Shadow v1.10.1 is released
Hi all, In case anyone missed it, I released a new version of Shadow last week. Download links and the release notes are available on the release page here: https://github.com/shadow/shadow/releases/tag/v1.10.1 I've also copied them below in markdown for your convenience. All the best, Rob

I just tagged `Shadow v1.10.1`, and new links are available to [download the release][dlpage]. This is a major release, and includes:

+ the Tor plug-in is now split into [its own github repository][shadow-plugin-tor] for better modularity
+ a new Bitcoin plug-in that can run the Bitcoin software in Shadow, in [its own github repository][shadow-plugin-bitcoin]
+ a new traffic generator plug-in that can model client behavior using actions in a dependency graph format (replaces the old filetransfer plugin, which moved to [shadow-plugin-extras][shadow-plugin-extras])
+ an updated [Shadow wiki][shadow-wiki], a refreshed [Shadow website][shadowweb], a new [Tor wiki][shadow-plugin-tor-wiki], and a new [Bitcoin wiki][shadow-plugin-bitcoin-wiki]
+ several enhancements to support running Bitcoin, including support for the `timerfd` timer interface, `AF_LOCAL` local Unix sockets, and more accurate handling of socket and file descriptors
+ new scripts for parsing stats from Shadow log files and visualizing them
+ other core improvements and [bugfixes][r110issues]

Happy simulating!
[dlpage]: https://github.com/shadow/shadow/releases/tag/v1.10.1
[shadowweb]: https://shadow.github.io
[shadow-wiki]: https://github.com/shadow/shadow/wiki
[shadow-plugin-tor]: https://github.com/shadow/shadow-plugin-tor
[shadow-plugin-tor-wiki]: https://github.com/shadow/shadow-plugin-tor/wiki
[shadow-plugin-bitcoin]: https://github.com/shadow/shadow-plugin-bitcoin
[shadow-plugin-bitcoin-wiki]: https://github.com/shadow/shadow-plugin-bitcoin/wiki
[shadow-plugin-extras]: https://github.com/shadow/shadow-plugin-extras
[r110issues]: https://github.com/shadow/shadow/issues?q=milestone%3Arelease-1.10.0+is%3Aclosed

___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Tor Attack Implementations (Master's Thesis: Tor Mixes)
On Feb 16, 2015, at 5:43 AM, Florian Rüchel florian.ruechel@inexplicity.de wrote: It would also help me a lot if you can direct me to papers or articles that have shown specific attacks that are known to work on the current network. You might want to look into the Sniper Attack as an example of how to evaluate attacks on Tor safely using Shadow: http://www.robgjansen.com/publications/sniper-ndss2014.pdf For those wanting to follow the Shadow thread on this topic, it starts here: http://mailman.cs.umn.edu/archives/shadow-support/2015-February/000312.html Best regards, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] KIST somewhat implemented
Thanks for getting this started, Nick! I’ve added some comments and changes to the trac ticket #12890. All the best, Rob On Dec 20, 2014, at 12:51 PM, Nick Mathewson ni...@torproject.org wrote: Hi! So, Rob came up with this clever way to use help from an OS kernel to improve Tor's performance: http://www.robgjansen.com/publications/kist-sec2014.pdf I've got a sketchy implementation together now. See my comments on the end of ticket 12890. I have a branch which I believe implements something like the KIST algorithm for Linux as I discussed with Rob. I made a list of not-yet-implemented issues. Rob, could you please let me know which of the issues I listed need to be fixed before this is reasonably testable? cheers, -- Nick ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Using Shadow to increase agility of Sponsor R deliverables
On Nov 20, 2014, at 4:59 PM, David Goulet dgou...@ev0ke.net wrote: On 20 Nov (14:45:12), Rob Jansen wrote: Are there other HS performance improvements that we think may be ready by January? On my part, I have a chutney network with an HS and clients that fetch data on it. I'm currently working on instrumenting the HS subsystem so we can gather performance data and analyze it for meaningful pointers on where the contention points are, confirm expected behaviors, etc. I'll soon begin updating the following ticket with more information on the work I'm doing. (I'm in Boston right now collaborating with Nick for the week, so things are a bit slower on this front until Monday.) https://trac.torproject.org/projects/tor/ticket/13792 This could also be used with Shadow, I presume. Since the deadline is near, I chose chutney for simplicity reasons here. Chutney is the right tool for tracing CPU resource problems. Shadow is the right tool when trying to gather realistic network-level performance statistics, and testing code at scale. Also, Shadow potentially runs faster than real time if you are only using a handful of nodes. If you are not using Shadow because it is too complex, then please, please let me help with that. I'll have a talk with Nick tomorrow on how we can possibly have this instrumentation upstream (either logs, controller events, and/or tracing). That would be great! Making it easy to gather data, even if only in TestingTorNetwork mode, will pay dividends. Things are going forward; we still have some work ahead to gather the HS performance baseline and start trying to improve it. I'm fairly confident that the performance statistics in a private network will give us good insight into the current situation. Feel free to propose anything that could be useful to make this thing more efficient/faster/useful :). I totally agree that a private network is the right approach.
A small network will be useful to isolate some performance issues, but I think we also need to make sure we test at a larger scale with the addition of realistic background traffic, etc, so that we understand the performance benefits in a more realistic environment. Shadow allows us to do this and have stats across the entire network on the order of hours. I have the resources to run at least 6000 relays and 3 clients in a private ShadowTor deployment, and I hope that having results on this scale will impress our funder in January. Perhaps after you finish your traces in chutney and work out some of the code bottlenecks, I can run some more realistic network experiments in Shadow. (Separate branches for each improvement would help here.) Would this actually be helpful? Or do we think that by the time we get to the Shadow step we would have already learned everything we need to know? -Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Feedback on obfuscating hidden-service statistics
On Nov 21, 2014, at 9:39 AM, George Kadianakis desnac...@riseup.net wrote: I also believe that some of these extra stats (e.g. How many failures/anomalies do we observe?) should first be done on a privnet instead of the real network. That can give us some preliminary results, and then we can consider doing them on the real network. Maybe we can also have some privnet stats by January. I think this is a great idea. While we might not learn (at first) the absolute number of failures that occur in the *real* network, we will at least be able to say things about the *fraction* of requests that fail. That can be collected in a large ShadowTor network. In fact, I would advocate implementing the collection of all of the stats that Aaron has requested; for those stats that still need some security analysis before people are convinced, we can enable those in TestingTorNetwork mode for now. If they are blessed, collecting them in “normal” mode becomes easier. In fact, might we also consider doing this for even more of the statistics from the etherpad (the ones that make sense for privnets)? I suppose at some point there will exist an implementation bottleneck, but being able to say as much as possible in January - even if it is only from privnet stats - is a win. We can then hope to be able to say more about the real network by the following deadline. -Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Using Shadow to increase agility of Sponsor R deliverables
On Nov 21, 2014, at 10:40 AM, David Goulet dgou...@ev0ke.net wrote: Please see https://trac.torproject.org/projects/tor/ticket/13802 about the instrumentation part. We'll definitely have to talk more on the integration of Shadow and a userspace tracer, but from what I got from Nick, it sounds totally doable without too much trouble. If we want the tracer to also work inside of Shadow, then the biggest potential problem I can think of right now is thread safety. Shadow uses several worker threads, each of which is assigned to run hundreds to thousands of Tor nodes. If Tor is using LTTng as a dynamic library and it is not thread-safe, we will run into issues. One way to avoid those issues could be to statically link LTTng to Tor. However, even this could go bad if LTTng uses global state, because that would mean that those hundreds of Tor nodes assigned to a Shadow worker thread would be sharing that state. Probably not what we want. To get around the global state issue, Shadow would have to compile LTTng specially, using the same LLVM pass to hoist out the global variables as we use for Tor. That may get messy. So it really depends on how robust LTTng is, and as I have no experience with it, I can only speculate. But if you let me know when you have some minimal instrumentation ready, I can test in Shadow early enough that we could adjust if needed. -Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Using Shadow to increase agility of Sponsor R deliverables
On Nov 21, 2014, at 1:06 PM, David Goulet dgou...@ev0ke.net wrote: On 21 Nov (12:59:43), Rob Jansen wrote: On Nov 21, 2014, at 10:40 AM, David Goulet dgou...@ev0ke.net wrote: Please see https://trac.torproject.org/projects/tor/ticket/13802 about the instrumentation part. We'll definitely have to talk more on the integration of Shadow and a userspace tracer, but from what I got from Nick, it sounds totally doable without too much trouble. If we want the tracer to also work inside of Shadow, then the biggest potential problem I can think of right now is thread safety. Shadow uses several worker threads, each of which is assigned to run hundreds to thousands of Tor nodes. If Tor is using LTTng as a dynamic library and it is not thread-safe, we will run into issues. One way to avoid those issues could be to statically link LTTng to Tor. However, even this could go bad if LTTng uses global state, because that would mean that those hundreds of Tor nodes assigned to a Shadow worker thread would be sharing that state. Probably not what we want. To get around the global state issue, Shadow would have to compile LTTng specially, using the same LLVM pass to hoist out the global variables as we use for Tor. That may get messy. LTTng is an in-process library and spawns a thread to handle all the tracing and interaction with the main tracing registry of LTTng (that manages the buffers, clients, consumers, streaming, etc.). Nick told me that Shadow moves the clock forward, so as long as you hijack clock_gettime for monotonic time, we'll be fine :). Great! Shadow does interpose clock_gettime (among other time functions). So it really depends on how robust LTTng is, and as I have no experience with it, I can only speculate. But if you let me know when you have some minimal instrumentation ready, I can test in Shadow early enough that we could adjust if needed. The LTTng userspace tracer is thread-safe, no issue with that :). That's a relief!
I already have a couple of tracepoints in the HS client subsystem as we speak, I'm currently adding more to do some very basic measurements on the timings of each client HS cell (in rend_process_relay_cell()). Once I have something that you can try, I'll send you a link to the branch with the instrumentation and you can see if you can make it happen with shadow :). OK, great! -Rob Cheers! David -Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
[tor-dev] Using Shadow to increase agility of Sponsor R deliverables
Hi David, Roger, I think it would be great if we could show off some HS performance improvements at the next Sponsor R PI meeting in January, both as a way to show that we are making progress on the performance front and also to help demonstrate our agility and how quickly we can get results on new designs. I'm here to advocate the use of Shadow to help in this regard. So far, the list of deliverables for which we could possibly show improvements is quite small: https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorR#Tickets From the list of completed tickets, the only one that stands out to me as providing a performance boost is #13211 (Allow optimistic data on connections to hidden services): https://trac.torproject.org/projects/tor/ticket/13211 This seems like a somewhat small change; despite that, do we think this may be something worth simulating to verify it works as expected and understand the extent to which it reduces time to first byte for HS downloads? Are there other HS performance improvements that we think may be ready by January? Best, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] help with minimal Tor build for Shadow
Let me try again more concisely: What's the most minimal (fastest) way to generate 'orconfig.h' and 'or_sha1.i' from a clean git clone of Tor master? Is there a make target for this? -Rob On Jul 11, 2014, at 5:17 PM, Rob Jansen rob.g.jan...@nrl.navy.mil wrote: Hi, Shadow uses CMake and a custom build process to build the Tor source files into a shadow-plugin (using Clang/LLVM and a custom pass). However, because my CMake build file is not smart enough to know how to turn orconfig.h.in into orconfig.h, I still have to run Tor through autotools first (and then build again with Clang). I'm hoping to eliminate at least parts of the Tor configure/make build process. Shadow for sure needs orconfig.h to exist, and I presume ./autogen.sh and ./configure is the best way to generate that. Does anyone know of a faster approach? (I'd like to avoid adding the rules to my CMakeLists.txt file, as it will be hard to keep up with Tor's upstream changes to its Makefile.in.) However, still missing after the configure is at least or_sha1.i, and possibly other files. Presumably these are generated when running make (after configure). My custom Clang build fails without them. Is there a special make target in Tor that generates these "helper" files without building the entire Tor binary? If not, should/can there be? All the best, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
[tor-dev] help with minimal Tor build for Shadow
Hi, Shadow uses CMake and a custom build process to build the Tor source files into a shadow-plugin (using Clang/LLVM and a custom pass). However, because my CMake build file is not smart enough to know how to turn orconfig.h.in into orconfig.h, I still have to run Tor through autotools first (and then build again with Clang). I’m hoping to eliminate at least parts of the Tor configure/make build process. Shadow for sure needs orconfig.h to exist, and I presume ./autogen.sh and ./configure is the best way to generate that. Does anyone know of a faster approach? (I’d like to avoid adding the rules to my CMakeLists.txt file, as it will be hard to keep up with Tor’s upstream changes to its Makefile.in.) However, still missing after the configure is at least or_sha1.i, and possibly other files. Presumably these are generated when running make (after configure). My custom Clang build fails without them. Is there a special make target in Tor that generates these “helper” files without building the entire Tor binary? If not, should/can there be? All the best, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Statistics on fraction of connections used uni-/bidirectionally
On Dec 21, 2013, at 4:13 AM, Karsten Loesing wrote: On 12/18/13 2:03 PM, Rob Jansen wrote: On Dec 18, 2013, at 4:51 AM, Karsten Loesing wrote: I also aggregated observations similar to Torperf measurements, by plotting only the median and interquartile range. Here's the result: https://people.torproject.org/~karsten/volatile/connbidirect-2013-09-19-2013-12-18.png The old graph containing the same data is still there: https://metrics.torproject.org/performance.html?graph=connbidirect&start=2013-09-19&end=2013-12-18#connbidirect Do you like the new graph? Do you have further ideas for improving it? I do like the new graph; it's much cleaner than the old one. But I like the mostly reading/writing parts of the old one too. Maybe we can create two more graphs like the new one (one for mostly reading and one for mostly writing). Ah okay, then let's put the unidirectional parts back into the graph. I made another graph with all three parts (both reading and writing, mostly writing, and mostly reading) displayed with medians and interquartile ranges on the same y axis. I find it easier to compare the three parts in this graph than in three separate graphs with possibly different y-axis scales. https://people.torproject.org/~karsten/volatile/connbidirect-2-2013-09-19-2013-12-18.png How's this one compared to the other two? Awesome! This is even better than having 3 separate graphs. I think this achieves the best balance between summarizing the data and showcasing the data that is available. I also think a stacked percentage area graph (e.g. http://www.highcharts.com/demo/area-stacked-percent) could work here, as a way to get all the data on the same chart. I'm not really sure how that would work with our data. We could only display medians, not interquartile ranges. And our three medians don't even add up to 100%; using means instead of medians might fix this, though I didn't check. Ah, I see. I assumed they added to 100%.
Do you think this graph would be easier to understand than the one I posted above? Likely not, given the above comment. I'd say ignore this suggestion. This graph is only there to show what kind of data we have. If somebody is really interested in the data, they'll have to download the CSV file and do their own analysis. Here's the specification of the file format: https://metrics.torproject.org/stats.html#connbidirect All the best, Karsten If the main goal is to show the data that exists, I think the old graph does that fine. But I think an important subgoal is also to have graphs that make it clear how the data is useful, not only that it exists. Perhaps keep both/all versions? Agreed, the graph should be useful, not just show that we have the data. Though I'd want to avoid adding a second or third graph and instead pick the most useful one we can come up with here. Thanks for your input! Much appreciated. All the best, Karsten I think your newest graph (the one with the three median+range plots on the same graph) is the best, and would be happy if we switched to that one. Best, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
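For readers curious how the median and interquartile-range summaries discussed in this thread are computed, here is a small sketch over made-up daily fractions (illustrative only; the real analysis would work from the published CSV file described above):

```python
# Sketch: summarize a series of observations by median and interquartile
# range, the presentation style used in the graphs discussed above.
import statistics

def median_iqr(samples):
    """Return (q1, median, q3) for a list of observations."""
    q1, med, q3 = statistics.quantiles(samples, n=4)
    return q1, med, q3

if __name__ == "__main__":
    # Hypothetical daily fractions of bidirectional connections:
    fractions = [0.10, 0.12, 0.11, 0.15, 0.13, 0.14, 0.12, 0.16]
    q1, med, q3 = median_iqr(fractions)
    print(q1 <= med <= q3)  # True: the quartiles are ordered
```

Plotting the median line with the q1-q3 band per day reproduces the style of the connbidirect graphs.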
Re: [tor-dev] Simulating a slow connection
Date: Thu, 26 Jan 2012 16:39:34 + From: Steven Murdoch steven.murd...@cl.cam.ac.uk To: tor-dev@lists.torproject.org Subject: Re: [tor-dev] Simulating a slow connection Hi Adam, On 20 Jan 2012, at 10:55, Adam Katz wrote: Well, I myself didn't have anything specific in mind, but I have some experience with the Linux tc utility as well as with generating realistic background traffic. I was wondering whether I could help on any of the existing projects or help establish a new one. I think Nick's comments summarized the current state of thinking. ExperimenTor and Shadow are the best Tor simulators to use for this project. The big missing pieces are:

- an automated framework for setting up experiments with slow Internet connections with ExperimenTor and Shadow, then collecting and summarizing results
- tools for generating realistic link characteristics (delay and packet dropping), and for collecting data on the link properties found in particular locations

Steven. As Steven stated, this would be very easy to explore with Shadow. The network topology is passed in as an XML file: node properties include bandwidth up/down, and link properties include latency, jitter, and packet loss. I already have some python scripts to generate topologies, and would be happy to share them once you have realistic measurements/values for the slow links you'd like to explore. I'd also be happy to explain more about how Shadow works or the structure of the XML file. Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
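As a rough illustration of the topology generation mentioned above, here is a hedged sketch; the element and attribute names are hypothetical stand-ins, not Shadow's exact XML schema:

```python
# Illustrative generator for a Shadow-style topology XML file; the tag and
# attribute names (topology, node, link, bandwidthdown, ...) are assumptions
# for illustration, not Shadow's actual schema.
import xml.etree.ElementTree as ET

def make_topology(nodes, links):
    """nodes: {name: (bw_down, bw_up)}; links: {(src, dst): (latency_ms, jitter_ms, loss)}."""
    top = ET.Element("topology")
    for name, (bw_down, bw_up) in nodes.items():
        ET.SubElement(top, "node", id=name,
                      bandwidthdown=str(bw_down), bandwidthup=str(bw_up))
    for (src, dst), (latency_ms, jitter_ms, packetloss) in links.items():
        ET.SubElement(top, "link", source=src, destination=dst,
                      latency=str(latency_ms), jitter=str(jitter_ms),
                      packetloss=str(packetloss))
    return ET.tostring(top, encoding="unicode")

if __name__ == "__main__":
    # A slow DSL-like client linked to a well-provisioned relay.
    xml = make_topology(
        nodes={"client": (768, 128), "relay": (10240, 10240)},  # KiB/s, made up
        links={("client", "relay"): (150.0, 20.0, 0.02)},       # ms, ms, loss fraction
    )
    print(xml)
```

Sweeping the latency, jitter, and loss values in the links dict is one way to model the "slow connection" scenarios in this thread.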
Re: [tor-dev] tor-dev Digest, Vol 9, Issue 8
On 10/14/2011 08:00 AM, tor-dev-requ...@lists.torproject.org wrote: On Tue, Sep 27, 2011 at 4:20 PM, Rob Jansen rob.g.jan...@nrl.navy.mil wrote: If you have the need to run Tor experiments, or are just interested in the software, please try it out. We would love any feedback or comments or suggestions if you have them! I've got more, but first off: - why are secondary dependencies not in git and not opt-in? e.g. downloading http://shadow.cs.umn.edu/downloads/shadow-resources.tar.gz - if you run fully offline build systems, this default behavior breaks builds. True, this is a good point. My original reason is that I thought the latency data we collected from PlanetLab was too large to put in git. Since then, my experience with git and its way of handling changes leads me to believe this isn't an issue after all. Also, it is unclear to me that resources should be versioned the same way that source code is. - what about hw acceleration in performance estimates? e.g. openssl dynamic engines in virtual CPU processing. We incorporate CPU delays into Shadow by giving it a CDF distribution of real measurements of application and CPU throughput, and delaying the processing of events based on this distribution. You can model HW accel by skewing the distribution, or better yet creating a new one based on real measurements of an accelerated OpenSSL/Tor. - is there a shadow-dev in addition to shadow-support? :) Not yet. Since you asked, it will be up in a few days :) We are continuously working on improving the simulator, including more efficient use of multiple CPU cores and a command-line interface to help with installing Shadow and some of its dependencies. https://github.com/shadow/shadow-cli also handy. thanks! Thanks for the comments, feel free to share your others. Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
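The CDF-based CPU delay modeling described above can be sketched as inverse-transform sampling from an empirical distribution; the measurement values below are made up for illustration, and this is not Shadow's actual code:

```python
# Sketch: delay event processing by sampling from an empirical CDF of
# measured processing times (inverse-transform sampling).
import bisect
import random

class EmpiricalCdf:
    """Sample from (value, cumulative_fraction) pairs, fractions ascending to 1.0."""
    def __init__(self, points):
        self.values = [v for v, _ in points]
        self.fractions = [f for _, f in points]

    def sample(self, rng):
        u = rng.random()
        i = bisect.bisect_left(self.fractions, u)
        return self.values[min(i, len(self.values) - 1)]

if __name__ == "__main__":
    # Hypothetical measured CPU delays in ms: 50% of operations took <= 1.0 ms,
    # 90% took <= 2.5 ms, all took <= 10.0 ms.
    cdf = EmpiricalCdf([(1.0, 0.5), (2.5, 0.9), (10.0, 1.0)])
    rng = random.Random(42)
    delays = [cdf.sample(rng) for _ in range(1000)]
    print(sorted(set(delays)))  # every sample comes from the measured values
```

Modeling hardware acceleration, as suggested above, would amount to shifting the (value, fraction) points toward smaller delays.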
Re: [tor-dev] Running Real Tor Code Over Simulated Networks
- is there a shadow-dev in addition to shadow-support? :) Not yet. Since you asked, it will be up in a few days :) Join the shadow-dev list at https://wwws.cs.umn.edu/mm-cs/listinfo/shadow-dev or send mail to shadow-...@cs.umn.edu Thanks, Rob ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
[tor-dev] Dynamic Throttling in Tor
Hello, We've been exploring dynamic algorithms for throttling Tor clients. We've come up with three algorithms that are easier to maintain than static throttling (i.e. don't need to be reconfigured often), and produce significant performance benefits for web clients at the expense of bulk clients. Details are available in our technical report: http://www-users.cs.umn.edu/~jansen/papers/throttling-umntr11-019.pdf We are looking for feedback and are wondering if these algorithms look useful to Tor. They have already been implemented, and patches can be prepared if there is interest. Best regards, Rob -- Rob G. Jansen U.S. Naval Research Laboratory rob.g.jan...@nrl.navy.mil 202.767.2389 ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
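The technical report linked above describes the actual dynamic algorithms; purely as an illustration of the kind of rate limiting involved, here is a generic token-bucket throttle sketch (it does not reproduce the adaptive parameter adjustment that distinguishes the report's algorithms from static throttling):

```python
# Sketch of a generic token-bucket rate limiter, the basic mechanism
# underlying bandwidth throttling; rate and burst values are illustrative.
class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # refill rate
        self.capacity = burst_bytes      # maximum token balance
        self.tokens = burst_bytes        # start with a full bucket
        self.last = 0.0                  # time of last refill

    def refill(self, now):
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, now, nbytes):
        """Return True if nbytes may be sent now, consuming tokens."""
        self.refill(now)
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

if __name__ == "__main__":
    tb = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=2000)
    print(tb.try_send(0.0, 2000))  # True: the burst allowance covers it
    print(tb.try_send(0.0, 1))     # False: the bucket is now empty
    print(tb.try_send(1.0, 1000))  # True: one second refilled 1000 tokens
```

A dynamic scheme would adjust rate/burst per client over time, which is where the report's three algorithms differ.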