Re: [tor-dev] iObfs: obfs4proxy on iOS
On Mon, Apr 04, 2016 at 12:04:45AM -0400, Mike Tigas wrote:
> [again, cross-posted to tor-dev and guardian-dev.]
>
> A quick status report on this: it works! Hit a big epiphany, figured out
> how to get `gomobile` to emit the necessary bits, then went wild.
>
> Some example stdout from Onion Browser connecting to Tor via obfs4,
> meek_lite (google), and scramblesuit:
> https://gist.github.com/mtigas/f1b9a3a8befa6f60d517eb2340f3cdd4
>
> There are trivial forks of obfs4[1] and goptlib[2] that simply hard-code
> some options that are normally sent as environment variables because
> obfs4proxy runs in managed mode[3]. (It's the best I have right now
> until I can figure out a better way to communicate between obfs4proxy
> and the iOS bits.) I've tacked a few other quick thoughts at the bottom
> of the iObfs readme[4]. As a quick test I've started building it into
> Onion Browser (iobfs branch[5]), which is what got the output linked above.
>
> [1]: https://github.com/mtigas/obfs4/compare/1df5c8ffe8f4aa2614323698e8008f1ab1fb7a18...mtigas:iObfs-201604-dev
> [2]: https://github.com/mtigas/goptlib/compare/f17a5f239f705d7e39a8bccbebdf9927cc99dbeb...mtigas:iObfs-201604-dev

This is radical. Maybe you don't need the fork of goptlib if you do
os.Setenv on the relevant variables before calling pt.ClientSetup in
obfs4?

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] iObfs: obfs4proxy on iOS
[again, cross-posted to tor-dev and guardian-dev.]

A quick status report on this: it works! Hit a big epiphany, figured out
how to get `gomobile` to emit the necessary bits, then went wild.

Some example stdout from Onion Browser connecting to Tor via obfs4,
meek_lite (google), and scramblesuit:
https://gist.github.com/mtigas/f1b9a3a8befa6f60d517eb2340f3cdd4

There are trivial forks of obfs4[1] and goptlib[2] that simply hard-code
some options that are normally sent as environment variables because
obfs4proxy runs in managed mode[3]. (It's the best I have right now
until I can figure out a better way to communicate between obfs4proxy
and the iOS bits.) I've tacked a few other quick thoughts at the bottom
of the iObfs readme[4]. As a quick test I've started building it into
Onion Browser (iobfs branch[5]), which is what got the output linked
above.

[1]: https://github.com/mtigas/obfs4/compare/1df5c8ffe8f4aa2614323698e8008f1ab1fb7a18...mtigas:iObfs-201604-dev
[2]: https://github.com/mtigas/goptlib/compare/f17a5f239f705d7e39a8bccbebdf9927cc99dbeb...mtigas:iObfs-201604-dev
[3]: https://github.com/mtigas/iObfs/blob/master/notes/obfs4-nonmanaged.md
[4]: https://github.com/mtigas/iObfs/
[5]: https://github.com/OnionBrowser/iOS-OnionBrowser/tree/iobfs

There's quite a bit to clean up and document. We also might want a more
minimal testcase than full-blown (and cruft-filled) Onion Browser?
Though the iObfs repo[4] *does* contain an Xcode project which builds an
"iObfs.app" that can successfully link and execute obfs4proxy as a
thread[6] (as long as the framework has been built with the
`buildobfs4.sh` script). stdout on that app properly shows the transport
"CMETHOD" lines, though that's all that app does.

[6]: https://github.com/mtigas/iObfs/blob/master/iObfs/iObfs/ObfsWrapper.m

This is probably near some "maximum viable bad idea", having the iOS
browser app *and* Tor *and* go-powered obfs4proxy within the same
process. (But of course, there's no easy way to get around the
restriction against subprocesses on iOS.)

It seems to work really well in my limited testing so far. Will continue
working on it in the coming weeks and keep y'all posted.

Best,
Mike Tigas
@mtigas | https://mike.tig.as/ | 0xA993E7156E0E9923
Re: [tor-dev] [::]/8 is marked as private network, why?
On 3/29/16, Tim Wilson-Brown - teor wrote:
> /** Private networks. This list is used in two places, once to expand the
> So I think we should keep [::]/8 in the list of private addresses.
> That said, the list of IPv4 and IPv6 private addresses in tor is incomplete,
> https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
> https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml

I'd only bother with what's in these two lists, primarily the entries
marked "Globally Reachable: False". Otherwise you end up determining and
maintaining your own "bogon"-style lists, which was not really the
original intent of tracking the IETF-provided RFC 1918-style "private"
address space list. Thus I'd remove it.
Re: [tor-dev] Yawning's CFC, web caching, and PETs
On Sun, 03 Apr 2016 19:08:38 +0200 Jeff Burdges wrote:
> Should we try to organize some public chat about web caching at PETs
> or HotPETs this summer?

Might be neat, though I'm not much one for conferences. CFC is just a
proof of concept/tech demo, and all the real cleverness/scary stuff goes
on at the cache end...

> By that I mean, a discussion with anonymity researchers on security
> and anonymity concerns around making tools like Yawning's CFC a
> long-term solution to the CloudFlare problem?
>
> Aside from our not knowing if CloudFlare will become more
> accommodating, a trustworthy web cache would enable more serious
> efforts towards alpha-mixing, either in Tor itself, or with mixnets
> on the side of Tor. And archival tools make the web better in
> numerous ways, like by making it harder to remove anything.

Agreed. IIRC part of Greatfire's anti-GFW circumvention system is
basically cached web content
(https://github.com/greatfire/redirect-when-blocked), so this sort of
approach clearly has potential.

> There are interesting problems in this space like: Big scary
> adversary issues. Archiving TLS sessions along with HTML
> transformations so that subsequent clients can verify the original
> site's certificate. How best to distribute the cache.

Another technical issue is "what to do about certain kinds of dynamic
content, such as page scripts". The safe behavior is "ignore/do not
cache", but then the user experience isn't all that great. 'course
caching random active payload is utterly horrible from a security
standpoint.

There is a liability/legal can of worms lurking here too. In *any*
given jurisdiction, there is content the possession/redistribution of
which will make various people mad (for various definitions of mad).
Eg:

 * Pornography (legal in the civilized world, or that which is illegal
   in the civilized world)
 * Most forms of political speech
 * Most social commentary
 * Blasphemy
 * Lèse-majesté
 * 09F911029D74E35BD84156C5635688C0
 * Sploits
 * Data dumps
 * etc etc etc.

Since I tend to lean towards being a freedom of thought/expression
absolutist (in the sense of the principle behind the ever-eroded legal
right), a well-designed cache system should be capable of holding
anything, no matter how unpopular, while keeping operators protected
from as much fallout as possible. Something that sits on top of
Tahoe-LAFS perhaps... I have a sense that the problem space overlaps.

I'm probably just dreaming, and some random person is probably going to
tell me why I'm wrong to want such properties, or "why not use
$project, ur dumb lol".

Regards,

--
Yawning Angel
[tor-dev] Is it possible to leak huge load of data over onions?
On 4/3/16, Griffin Boyce wrote:
> How do you transmit an elephant? One byte at a time...
>
> But on a serious note, it's possible to transfer 2.6TB over Tor in small
> pieces (such as file by file or via torrent). Given the size, however, I'd
> suspect they mailed hard drives after establishing contact with
> journalists. Even on a fairly fast connection, 2.6TB would take quite a
> while...

That amount of data would take 27 days at 10Mbps. Few would be willing
to sit supervising in a hotseat that long when they can physically mail
3TB for $100 and 8TB for $230. Though they might spend 3 days pushing
100Mbps via shells, etc.

Overlay networks move data reasonably well, and reliability could be
handled by chunking protocols. Available link speeds (and thus path
speeds) are likely to be the limiting factor, ie: 10Mbps limits you to
about 100GiB a day. Though at 1Mbps, DVD torrenting on, say, I2P seems
to be a thing.
Re: [tor-dev] Is it possible to leak huge load of data over onions?
NB: Sorry for breaking the threading. Replying to the right message.

dawuud:
> Alice and Bob can share lots of files and they can do so with their
> Tor onion services. They should be able to exchange files without
> requiring them to be online at the same time. Are you sure you've
> chosen the right model for file sharing?

I haven't chosen any storage model. I'm just wondering about the
technical capability of Tor to act as an _anonymous_ transport for this
data. "Will one be anonymous when they transmit a big amount of data?"
"What are the limits?" "What steps should the source take to be safe?"

> If Alice and Bob share a confidential, authenticated communications
> channel then they can use that to exchange key material and secret
> connection information. That should be enough to bootstrap the
> exchange of large amounts of documents:

The Internet is not confidential. Surely the opposite.

> Anyone who hacks the storage servers she is operating gets to see
> some interesting and useful metadata such as the size of the files
> and what time they are read; not nearly as bad as a total loss in
> confidentiality.

Yes, but there are many more adversaries. Any AS near the endpoints
poses a big threat.

> No that's not necessarily correct; if the drives contain ciphertext
> and the key was not compromised then the situation would not be
> risky.

The source can easily fail by compromising fingerprints, chemical
traces, the serial number of the hard drive (with proprietary
firmware!), place of origin and other 'physical' metadata. It's not
"just ciphertext" in a vacuum.

--
Ivan Markin
Re: [tor-dev] Is it possible to leak huge load of data over onions?
On 4/04/2016 10:31 AM, Griffin Boyce wrote:
> How do you transmit an elephant? One byte at a time...

rsync is a beautiful thing. Have different clients/nodes accessing
separate file paths. If the transfer drops out or is too slow, start up
rsync again.
Re: [tor-dev] Is it possible to leak huge load of data over onions?
I've never seen anything download faster than ten megabits per second
over Tor. Presumably the same limit applies in the other direction if
you're uploading.
Re: [tor-dev] Is it possible to leak huge load of data over onions?
Hi. My general feeling here is that it's more useful for me to tell you
how I think people should share files than it would be for me to answer
your questions; sorry, not sorry.

Alice and Bob can share lots of files and they can do so with their Tor
onion services. They should be able to exchange files without requiring
them to be online at the same time. Are you sure you've chosen the
right model for file sharing?

If you want reliability then you should desire to not have single
points of failure such as a single Tor circuit or a single onion
service; further, the high-availability property might be important for
certain types of file sharing situations.

If Alice and Bob share a confidential, authenticated communications
channel then they can use that to exchange key material and secret
connection information. That should be enough to bootstrap the exchange
of large amounts of documents:

- Alice is clueful about distributed content-addressable ciphertext
  storage so she decides to operate a Tahoe-LAFS storage grid over
  onion services.
- Alice uploads her ciphertext to the tahoe grid.
- Alice sends Bob the secret grid connection information and the
  cryptographic capability to read her files.

In this situation Alice really doesn't care where her storage nodes are
hosted and if the virtual server hosting provider can be depended on to
not get hacked or receive a national security letter. Why does Alice
give zero fucks? Ciphertext. "They" have her ciphertext and it's
useless without a key compromise. Anyone who hacks the storage servers
she is operating gets to see some interesting and useful metadata such
as the size of the files and what time they are read; not nearly as bad
as a total loss in confidentiality.

https://gnunet.org/sites/default/files/lafs.pdf

However, what if Alice decides that Bob is a useless human being and
she should instead publicize the documents herself? She writes her own
badass adversary-resistant distributed ciphertext storage system and
convinces several organizations worldwide to operate storage servers in
various countries and thus several legal jurisdictions. She can now
gleefully upload ciphertext via onion services to the storage servers
and then simply publicize the key material for specific files she
wishes to share with the world or an individual.

She can make this system censorship-resistant by utilizing an erasure
encoding for storing the ciphertext. For instance, Tahoe-LAFS uses
Reed-Solomon encoding such that any K of N shares can be used to
reconstruct the ciphertext of the file. In this case, if an adversary
wanted to censor Alice's ciphertext publication they would have to
DOS-attack N-K+1 servers.

> Recently someone leaked enormous amount of docs (2.6 TiB) to the
> journalists [1]. It's still hard to do such thing even over plain old
> Internet. Highly possible that these docs were transfered on a physical
> hard drive despite doing so is really *risky*.

No, that's not necessarily correct; if the drives contain ciphertext
and the key was not compromised then the situation would not be risky.

> Anyways, in the framework of anonymous whistleblowing, i.e. SecureDrop
> and Tor specifically it's seems to be an interesting case. I'm wondering
> about the following aspects:
>
> o Even if we use exit mode/non-anonymous onions (RSOS)
>   is such leaking reliable? The primary issue here
>   is time of transmission. It's much longer than any
>   time period we have in Tor.
>
> o What is going to happen with the connection after
>   the HS republishes its descriptor? Long after?
>   [This one is probably fine if we are not using
>   IPs, but...]
>
> o Most importantly, is transferring data on >1 TiB
>   scale (or just transferring data for days) safe at
>   all? At least the source should not change their
>   location/RP/circuits. Or need to pack all this stuff
>   into chunks and send them separately. It's not
>   obvious how it can be done properly. So at what
>   point the source should stop the transmission
>   (size/time/etc)/change location or the guard/
>   pick new RP?
>
> --
> [1] http://panamapapers.sueddeutsche.de/articles/56febff0a1bb8d3c3495adf4/
> --
> Happy hacking,
> Ivan Markin
[tor-dev] Is it possible to leak huge load of data over onions?
Recently someone leaked an enormous amount of docs (2.6 TiB) to
journalists [1]. It's still hard to do such a thing even over the plain
old Internet. It's highly possible that these docs were transferred on
a physical hard drive, even though doing so is really *risky*.

Anyways, in the framework of anonymous whistleblowing, i.e. SecureDrop
and Tor specifically, it seems to be an interesting case. I'm wondering
about the following aspects:

 o Even if we use exit mode/non-anonymous onions (RSOS), is such
   leaking reliable? The primary issue here is time of transmission.
   It's much longer than any time period we have in Tor.

 o What is going to happen with the connection after the HS republishes
   its descriptor? Long after? [This one is probably fine if we are not
   using IPs, but...]

 o Most importantly, is transferring data on a >1 TiB scale (or just
   transferring data for days) safe at all? At least the source should
   not change their location/RP/circuits. Or they need to pack all this
   stuff into chunks and send them separately. It's not obvious how it
   can be done properly. So at what point should the source stop the
   transmission (size/time/etc), change location or the guard, or pick
   a new RP?

--
[1] http://panamapapers.sueddeutsche.de/articles/56febff0a1bb8d3c3495adf4/
--
Happy hacking,
Ivan Markin
Re: [tor-dev] Is it possible to leak huge load of data over onions?
How do you transmit an elephant? One byte at a time...

But on a serious note, it's possible to transfer 2.6TB over Tor in
small pieces (such as file by file or via torrent). Given the size,
however, I'd suspect they mailed hard drives after establishing contact
with journalists. Even on a fairly fast connection, 2.6TB would take
quite a while...

~Griffin

On Sun, Apr 03, 2016 at 5:24 PM, Ivan Markin <t...@riseup.net> wrote:
> Recently someone leaked enormous amount of docs (2.6 TiB) to the
> journalists [1]. It's still hard to do such thing even over plain old
> Internet. Highly possible that these docs were transfered on a physical
> hard drive despite doing so is really *risky*.
>
> Anyways, in the framework of anonymous whistleblowing, i.e. SecureDrop
> and Tor specifically it's seems to be an interesting case. I'm wondering
> about the following aspects:
>
> o Even if we use exit mode/non-anonymous onions (RSOS)
>   is such leaking reliable? The primary issue here
>   is time of transmission. It's much longer than any
>   time period we have in Tor.
>
> o What is going to happen with the connection after
>   the HS republishes its descriptor? Long after?
>   [This one is probably fine if we are not using
>   IPs, but...]
>
> o Most importantly, is transferring data on >1 TiB
>   scale (or just transferring data for days) safe at
>   all? At least the source should not change their
>   location/RP/circuits. Or need to pack all this stuff
>   into chunks and send them separately. It's not
>   obvious how it can be done properly. So at what
>   point the source should stop the transmission
>   (size/time/etc)/change location or the guard/
>   pick new RP?
>
> --
> [1] http://panamapapers.sueddeutsche.de/articles/56febff0a1bb8d3c3495adf4/
> --
> Happy hacking,
> Ivan Markin
Re: [tor-dev] A few ideas about improved design/modularity in Tor
Nick Mathewson writes:
> ZeroMQ and its competitors are pretty good, but overkill. They're
> designed to work in a distributed environment with unreliable
> network connections, whereas for this application I'm only thinking
> about splitting a single Tor instance across multiple processes on the
> same host.

ZeroMQ has an "INPROC" transport that works for inter-thread
communication (and it's way faster than the networked ones, even
unix-sockets, at least a few years back when I benchmarked some things
involving ZeroMQ in C++).

--
meejah
Re: [tor-dev] A few ideas about improved design/modularity in Tor
On Mon, Mar 28, 2016 at 6:49 AM, Rob van der Hoeven wrote:
>> 2. Add backend abstractions as needed to minimize module coupling. These
>>    should be abstractions that are friendly to in- and multi-process
>>    implementations. We will need at least:
>>
>>    - publish/subscribe{,/acknowledge}.
>>
>>      (See https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern.
>>      The 'acknowledge' abstraction allows the publisher to wait for
>>      every subscriber to acknowledge receipt. More on this in section 4
>>      below.)
>
> Maybe ZeroMQ can do this. See:
>
> https://en.wikipedia.org/wiki/ZeroMQ
>
> and:
>
> http://zeromq.org/

ZeroMQ and its competitors are pretty good, but overkill. They're
designed to work in a distributed environment with unreliable network
connections, whereas for this application I'm only thinking about
splitting a single Tor instance across multiple processes on the same
host.

> Question: how are these modules you write about implemented? Do you plan
> to make each module a DLL? Will it be possible to only load a DLL if its
> functions are needed? I ask this because I currently have Tor running on
> my router and much of its functionality (hidden services, node, etc.) is
> not needed.

I haven't been working on the problem from that angle, but I would
expect that making the code more modular will make it easier to compile
only the modules required.

best wishes,
--
Nick
[tor-dev] Yawning's CFC, web caching, and PETs
Should we try to organize some public chat about web caching at PETs or
HotPETs this summer?

By that I mean, a discussion with anonymity researchers on security and
anonymity concerns around making tools like Yawning's CFC a long-term
solution to the CloudFlare problem?

Aside from our not knowing if CloudFlare will become more accommodating,
a trustworthy web cache would enable more serious efforts towards
alpha-mixing, either in Tor itself, or with mixnets on the side of Tor.
And archival tools make the web better in numerous ways, like by making
it harder to remove anything.

There are interesting problems in this space like: Big scary adversary
issues. Archiving TLS sessions along with HTML transformations so that
subsequent clients can verify the original site's certificate. How best
to distribute the cache.

Jeff
Re: [tor-dev] Quantum-safe Hybrid handshake for Tor
On 04/03/2016 10:37 AM, Jeff Burdges wrote:
> I should read up on this compression business since I'd no idea they
> were so small. At first blush, these SIDH schemes must communicate
> curve parameters of the curve the isogeny maps to and two curve points
> to help the other party compute the isogeny on their prime's subgroup,
> so maybe 3-4 times the size of a curve point, but the curve is far
> larger than any used with normal ECDH too.

"Key Compression for Isogeny-Based Cryptosystems". Here's just the
abstract: https://eprint.iacr.org/2016/229 and the full paper can be
found here: https://eprint.iacr.org/2016/229.pdf

--
Jesse V
Re: [tor-dev] Request for feedback/victims: cfc-0.0.2
On Sat, 2 Apr 2016 18:14:26 -0400 Ian Goldberg wrote:
> On Sat, Apr 02, 2016 at 07:19:30PM +, Yawning Angel wrote:
> > It's not a request header set by the browser. archive.is is acting
> > like a HTTP proxy and explicitly setting X-F-F.
>
> I wonder what would happen if the browser *also* set X-F-F...?

Unfortunately, it appears that archive.is tramples over X-F-F if it is
already set. Maybe others will have better luck engaging with the
operator(s) of archive.is than I have.

Regards,

--
Yawning Angel
Re: [tor-dev] Quantum-safe Hybrid handshake for Tor
On Sun, 03 Apr 2016 16:37:45 +0200 Jeff Burdges wrote:
> On Sun, 2016-04-03 at 06:52 +, Yawning Angel wrote:
> > Your definition of "reasonably fast" doesn't match mine. The
> > number for SIDH (key exchange, when the thread was going off on a
> > tangent about signatures) is ~200ms.
>
> What code were you running? I think the existing SIDH implementations
> should not be considered optimized. Sage is even used in:
> https://github.com/defeo/ss-isogeny-software
> I've no idea about performance myself, but obviously the curves used
> in SIDH are huge, and the operations are generic over curves. And
> existing signature schemes might be extra slow due to this virtual
> third or fourth party. I know folks like Luca De Feo have ideas for
> optimizing operations that must be generic over curves though.

http://cacr.uwaterloo.ca/techreports/2014/cacr2014-20.pdf

Is "optimized" in that it is C with performance-critical parts in
assembly (Table 3 is presumably the source of the ~200 ms figure from
the Wikipedia article). As I said, I just took the performance figures
at face value.

I'm sure it'll go faster with time, but like you, I'm probably not
going to trust SIDH for a decade or so.

Regards,

--
Yawning Angel
Re: [tor-dev] Quantum-safe Hybrid handshake for Tor
On Sat, 2016-04-02 at 18:48 -0400, Jesse V wrote:
> I just wanted to resurrect this old thread to point out that
> supersingular isogeny key exchange (SIDH) is the isogeny scheme that
> you're referring to. Using a clever compression algorithm, SIDH
> only needs to exchange 3072 bits (384 bytes) at a 128-bit quantum
> security level. This beats SPHINCS by a mile and unlike NTRUEncrypt,
> fits nicely into Tor's current cell size. I don't know about key
> sizes, though. If I recall correctly, SIDH's paper also references
> the "A quantum-safe circuit-extension handshake for Tor" paper that
> led to this proposal.

I should read up on this compression business since I'd no idea they
were so small. At first blush, these SIDH schemes must communicate
curve parameters of the curve the isogeny maps to and two curve points
to help the other party compute the isogeny on their prime's subgroup,
so maybe 3-4 times the size of a curve point, but the curve is far
larger than any used with normal ECDH too.

Warning: The signature schemes based on SIDH work by introducing
another virtual party employing a third prime. And another more recent
scheme needs two additional parties/primes! A priori, this doubles or
triples the key material's size, although it's maybe not so bad in
practice. Also, these signature schemes have some unusual properties.

> Again, I have very little understanding of post-quantum crypto and
> I'm just starting to understand ECC, but after looking over
> https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange
> and skimming the SIDH paper, I'm rather impressed. SIDH doesn't
> seem to be patented, it's reasonably fast, it uses the smallest
> bandwidth, and it offers perfect forward secrecy. It seems to me
> that SIDH actually has more potential for making it into Tor than
> any other post-quantum cryptosystem.

It'll be years before anyone trusts SIDH because it's the youngest. And
Ring-LWE has a much larger community doing optimizations, etc.

I like SIDH myself. I delved into it to see if it offered the blinding
operation needed for the Sphinx mixnet packet format. It seemingly does
not. And maybe no post-quantum system can do so.

All these post-quantum public key systems work by burying the key
exchange inside a computation that usually goes nowhere, fails, etc.*
In SIDH, it's replacing the kernel of the isogeny, which one can move
between curves, with two curve points that let the other party evaluate
your isogeny on their subgroup. As the isogenies themselves form only a
groupoid, algorithms like Shor's observe almost exclusively a failed
product, so the QFT rarely yields anything. As usual, there are deep
mathematical questions here, like: Has one really hidden the kernel by
revealing only the isogeny on a special subgroup? Are there
parameterizations of appropriate isogenies in ways that make the QFT
dangerous again?

As an aside, there are new quantum query attacks on symmetric crypto
like AEZ in: http://arxiv.org/abs/1602.05973
We believe a quantum query attack against symmetric crypto sounds
unrealistic of course: http://www.scottaaronson.com/blog/?p=2673
A quantum query attack is completely realistic against a public key
system though, so one should expect renewed effort to break the
post-quantum systems by inventing new QFT techniques.

On Sun, 2016-04-03 at 06:52 +, Yawning Angel wrote:
> Your definition of "reasonably fast" doesn't match mine. The number
> for SIDH (key exchange, when the thread was going off on a tangent
> about signatures) is ~200ms.

What code were you running? I think the existing SIDH implementations
should not be considered optimized. Sage is even used in:
https://github.com/defeo/ss-isogeny-software
I've no idea about performance myself, but obviously the curves used in
SIDH are huge, and the operations are generic over curves. And existing
signature schemes might be extra slow due to this virtual third or
fourth party. I know folks like Luca De Feo have ideas for optimizing
operations that must be generic over curves though.

Around signatures specifically, there are deterministically stateful
hash-based, or partially hash-based, schemes that might still be
useful: One might for example pre-compute a unique EdDSA key for each
consensus during the next several months and build the Merkle tree of
the hashes of their public keys. Any given consensus entry is
vulnerable to a quantum attack immediately after the key gets used, but
not the whole Merkle tree of EdDSA keys. A signature costs O(log m)
where m is the number of consensuses covered by a single key. It's
maybe harder to attack such a scheme while keeping your quantum
computer secret. **

Jeff

* I'd be dubious that any non-abelian "group-based" scheme would remain
post-quantum indefinitely specifically because they lack this "usually
just fails" property. It's maybe related to the issues with blinding
operations and the difficulties in making si
Re: [tor-dev] Quantum-safe Hybrid handshake for Tor
On 04/03/2016 02:52 AM, Yawning Angel wrote:
> Your definition of "reasonably fast" doesn't match mine. The number
> for SIDH (key exchange, when the thread was going off on a tangent
> about signatures) is ~200ms.
>
> A portable newhope (Ring-LWE) implementation[0] on my laptop can do one
> side of the exchange in ~190 usec. Saving a few cells is not a good
> reason to use a key exchange mechanism that is 1000x slower
> (NTRUEncrypt is also fast enough to be competitive).

I have yet to see any SIDH benchmarks either. I checked the citation but wasn't able to confirm where the ~200ms number came from. Thanks for throwing out specific numbers on Ring-LWE; I wasn't aware that it was so fast.

--
Jesse V

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
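The ~190 usec vs ~200 ms comparison is easy to reproduce on one's own hardware with a throwaway harness like the following; `fake_exchange` is a hypothetical stand-in, since the thread doesn't pin down exactly which newhope or SIDH code was measured — swap in one side of the real exchange to get comparable per-operation numbers:

```python
import hashlib
import time

def bench(fn, iters=1000):
    """Average wall-clock time of fn() over `iters` runs, in microseconds."""
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e6

# Hypothetical stand-in for one side of a key exchange; replace with the
# actual operation under test (e.g. keypair generation + shared-secret step).
def fake_exchange():
    hashlib.sha256(b"\x00" * 1024).digest()

print("%.1f usec per operation" % bench(fake_exchange))
```

Averaging over many iterations matters here: a single timing of a sub-millisecond operation is dominated by timer resolution and cache warm-up.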
Re: [tor-dev] Advice regarding Cloudflare
On Sun, Apr 3, 2016 at 4:04 PM, Yawning Angel wrote:
> Well, I did write an addon that just fetches content from archive.is
> whenever I get a Captcha. Does that count?

That's cool, Yawning. Got a link to that? I'd like to try it.

-V
Re: [tor-dev] Advice regarding Cloudflare
On Sun, 3 Apr 2016 00:37:45 -0700 Ryan Carboni wrote:
> > (as opposed to the people that seem to think that Exits
> > should actively combat abuse by having the capability for
> > censorship).
>
> Well, a large number of exit nodes already have the capability for a
> man-in-the-middle attack. This capability could very well be a default
> option.

There are legal/ethical issues with that sort of thing. In the bright future (more modern versions of HTTP, for example), encryption is going to be the default. An anonymity system that mounts active man-in-the-middle attacks against TLS (or QUIC's encryption) isn't anything I'll be working on.

> > b) In your magic world, how would accessing any site that uses
> > multiple hosts for content to work?
> [snip]
> This might seem patronizing, but you seem genuinely ignorant.

No. I was wondering how a poorly thought out idea is supposed to not negatively impact anonymity, given that bundling multiple endpoints over a single circuit is good for anonymity. It was a genuine technical question.

[snip]
> By any reasonable definition of ethics, one must find a middle
> ground, and essentially, Cloudflare has all the negotiating power,
> unless you plan on personally battering down the doors of Cloudflare.

Well, I did write an addon that just fetches content from archive.is whenever I get a Captcha. Does that count?

> Perhaps a maximum of 63 domain names (forgot Cloudflare only has a
> dozen IPs) per Tor circuit could be done.

You have a definition of "a dozen" that doesn't match one that I'm familiar with (https://archive.is/eSl37). Anyway, it's easy for clients to request multiple circuits. An anonymity system where the Exit possesses linkable client identifiers between circuits/sessions is also a poor anonymity system.

*plonk*

--
Yawning Angel
Re: [tor-dev] Advice regarding Cloudflare
> (as opposed to the people that seem to think that Exits
> should actively combat abuse by having the capability for censorship).

Well, a large number of exit nodes already have the capability for a man-in-the-middle attack. This capability could very well be a default option.

> b) In your magic world, how would accessing any site that uses
> multiple hosts for content to work?

Yes, yes. It is you who is being imposed upon, not Cloudflare, not the businesses that serve content. In my magic world, people produce things for free!

This might seem patronizing, but you seem genuinely ignorant. Cloudflare runs a business, and they get paid for it. That business is to protect email addresses from scraping, and so forth. If they tell their customers that malicious actors can do those things, but only through Tor, because Tor does good work, their customers will take their business elsewhere.

In a libertarian world, people can bar entry to their property from people who seem suspicious. You do not believe that Cloudflare should be allowed to bar entry out of some egocentric concern. By any reasonable definition of ethics, one must find a middle ground, and essentially Cloudflare has all the negotiating power, unless you plan on personally battering down the doors of Cloudflare.

A good step would be to ask Cloudflare for statistics on Tor misuse. Perhaps a maximum of 63 domain names (forgot Cloudflare only has a dozen IPs) per Tor circuit could be done.