#29370: Measure mode with arbitrary tgen traffic models
---------------------------------------+------------------------------
 Reporter:  irl                        |           Owner:  metrics-team
     Type:  enhancement                |          Status:  reopened
 Priority:  Low                        |       Milestone:
Component:  Metrics/Onionperf          |         Version:
 Severity:  Normal                     |      Resolution:
 Keywords:  metrics-team-roadmap-2020  |   Actual Points:  0.1
Parent ID:  #33321                     |          Points:  1
 Reviewer:                             |         Sponsor:  Sponsor59
---------------------------------------+------------------------------
Comment (by robgjansen):

 Replying to [comment:10 karsten]:

 > Thinking about different traffic models, what if we wanted to measure
 > something like an `HTTP POST` rather than the `HTTP GET`? I'd assume
 > that we'd have to provide a different TGen ''server'' model file as
 > well, but I don't know for sure.

 I think this only requires changes to the client-side model, i.e., you
 would increase the `sendsize` and reduce the `receivesize` values.

 > If that's still possible with replacing just the TGen client model,
 > there's probably another model that requires a custom TGen server
 > model which we just didn't think of yet.

 I designed TGen so that the server config is minimal: log level, how
 often to print a heartbeat message, and the port it should listen on.
 (See the
 [https://github.com/shadow/tgen/blob/master/doc/TGen-Options.md#start-options
 start options table].) Otherwise it just responds to commands sent from
 the client.

 More complicated models than the ones we have been discussing are
 possible, though, through the use of Markov models. Creating Markov
 models for TGen is even more complicated than creating TGen config
 files, but also really, really powerful. I did my best to
 [https://github.com/shadow/tgen/blob/master/doc/TGen-Markov-Models.md
 document how to create the Markov models], but I'm hoping that we won't
 need them for OnionPerf. (I use them to generate traffic flows in
 Shadow that are based on actual traffic flows that we measured at Tor
 relays.)

 > All in all, it's more than just ''the'' TGen model. We'd have to
 > write a fair amount of code in order to implement a useful ping model
 > in OnionPerf.

 Agreed.

 > The internally generated model also has the advantage that it's
 > easier to use. All it takes to start a measurement is a (potentially
 > quite long) command with several parameters. But it doesn't require a
 > (still potentially long) command plus one or two files. Describing
 > the experiment would then be a matter of listing all software
 > versions and the OnionPerf command used to start measurements.
 >
 > My suggestions are that we: [snip]

 Agreed with all of this too! I think that, as much as we can, we should
 make OnionPerf a self-contained tool that is primarily useful for
 generating and visualizing Tor metrics data. But we could document that
 other models are possible, just unsupported. If some researchers want
 to use OnionPerf to help them answer some research questions, they
 could fairly easily adapt OnionPerf to their specific needs.

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/29370#comment:11>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
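To make the client-side change discussed above concrete, here is a minimal
sketch of a POST-like TGen client model, built with networkx in the same
spirit as OnionPerf's internally generated GraphML models. The vertex and
option names used here (`start`, `stream`, `end`, `peers`, `socksproxy`,
`sendsize`, `recvsize`) are assumptions taken from the linked TGen options
documentation and should be verified against the TGen version in use; the
host names, ports, and sizes are placeholders.

    # Sketch only: a POST-like TGen client model written out as GraphML,
    # mirroring how OnionPerf generates its config files with networkx.
    # Option names (peers, socksproxy, sendsize, recvsize) are assumed
    # from the TGen options docs -- verify against your TGen version.
    import networkx as nx

    def post_like_client_model(server="onionperf-server.example:8080",
                               socks_proxy="localhost:9050"):
        g = nx.DiGraph()
        # The start vertex holds the client setup: which server to talk
        # to and which SOCKS proxy (the local tor) to go through.
        g.add_node("start", peers=server, socksproxy=socks_proxy,
                   loglevel="info", heartbeat="1 minute")
        # A GET-like measurement downloads a lot and uploads little; for
        # a POST-like measurement, swap the two sizes on the client side.
        g.add_node("stream", sendsize="5 mib", recvsize="1 kib")
        g.add_node("end", count="1")
        g.add_edge("start", "stream")
        g.add_edge("stream", "end")
        return g

    if __name__ == "__main__":
        nx.write_graphml(post_like_client_model(),
                         "tgen.client.post.graphml")

Once written, the file would be handed to the tgen client on the command
line like any other TGen config file.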
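And, for comparison, a sketch of the minimal server side described in the
comment: the only knobs are the listening port, the log level, and the
heartbeat interval, which is why the same server model should work
unchanged for GET-like and POST-like clients. Again, the option names
(`serverport`, `loglevel`, `heartbeat`) are assumptions to check against
the linked start options table.

    # Sketch only: the corresponding minimal TGen server model. The
    # server needs nothing beyond a start vertex; it just listens and
    # answers whatever commands the clients send.
    import networkx as nx

    def minimal_server_model(listen_port="8080"):
        g = nx.DiGraph()
        g.add_node("start", serverport=listen_port,
                   loglevel="info", heartbeat="1 minute")
        return g

    if __name__ == "__main__":
        nx.write_graphml(minimal_server_model(), "tgen.server.graphml")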