Re: manifest files and forwarding
Yes, this was proposed years ago and is in the bug tracker. It's usually called "bundles" and is regarded as a poor man's tunneling system, in that it doesn't actually tunnel but maintains many of the advantages of tunnels.

There are performance problems with it, especially with the current load limiting system. Some of these could be mitigated on darknet, e.g. by allowing more requests to be in flight, etc. A lot of the work around tunnels is reusable here, e.g. deciding which requests to group together.

And as Arne points out, if the bad guy sees a bundle, there's a high chance that the directly connected peer is the originator. But this is true now as well, and right now Mallory not only has the HTL, but also the number of requests.

Note that on darknet we could eventually implement a full-blown tunnel system - if PISCES can be turned into something implementable, which is unclear to me at the moment. But let's not let the "perfect" be the enemy of the good.

On 03/05/17 08:16, Arne Babenhauserheide wrote:
> Stefanie Roos writes:
>
>> Thanks.
>>
>> Sorry, bit late in answering
>
> I documented an easier-to-implement version of your idea as a note in
>
> https://freenet.mantishub.io/view.php?id=3640#c12321
> (0003640: Bundles (unencrypted tunnels))
>
> The difference is that it bundles based on the source peer and mixes in
> local requests (routing by peer only uses information we have at
> routing time).
>
> This is the note I added — does it fit your proposal well enough?
>
> - For each source peer for which we do not decrement HTL18, select a target
>   peer at random to whom we forward all HTL18 requests of the source peer.
> - Mix all our local requests with those of one of the source peers, selected
>   at random.
>
> This approach should defeat attacks based on the distribution of requests for
> chunks from known files in the requests of a given node.
> possible drawbacks:
>
> - Disconnecting during the fetch would restart all the non-finished blocks on
>   another peer. Therefore local requests would have increased latency, and
>   strongly increased jitter in latency (losing the wrong peer during transfer
>   would require restarting all non-finished requests).
>
> - The anonymity set *with complete knowledge* (which might be attainable via
>   timing attacks by selectively DoSing your peers, one by one or grouped) is
>   just about 2-3 against one of the nodes (the number of HTL18 hops). There is
>   one node for which capturing packets from you means that there's a 30-50%
>   probability that you're the originator. However, for all other nodes none of
>   your traffic goes through them — you're merely forwarding. So the actual
>   probability that you're the originator of any captured stream of randomly
>   sorted request keys is only 6-10% (with 5 peers for whom you do not
>   decrement HTL).
>
> - A small subset of these bundles might propagate very far (in a network of
>   1000 nodes, the average result for the longest forwarding should be 10 hops;
>   in a network of 16000, 14 hops; and so on as log2(N)), so peers might have
>   to replace a target if the latency for the bundled requests is very high
>   (this will limit the maximum length of the forwarding). I'm not sure if our
>   existing connection-dropping conditions will fire here (due to timeouts or a
>   too-small success rate). Churn should also limit the length of the tunnels:
>   if the average session uptime of a peer is 2 hours, a 10-hop forwarding
>   should typically be broken within 12 minutes, while a 2-hop forwarding
>   should live for one hour.
>
> Best wishes,
> Arne
> --
> Unpolitisch sein
> heißt politisch sein
> ohne es zu merken
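The figures in the last drawback can be sanity-checked with a quick back-of-the-envelope script. This is a sketch of my own, assuming (as the quoted note does) that a chain breaks as soon as any hop's session ends and that the longest forwarding grows as log2(N):

```python
import math

def chain_lifetime_minutes(mean_uptime_hours: float, hops: int) -> float:
    """Rough expected lifetime of a forwarding chain: with `hops`
    independent nodes each with the given mean session uptime, the
    chain breaks roughly `hops` times as fast as a single node."""
    return mean_uptime_hours * 60 / hops

def longest_forwarding_hops(network_size: int) -> float:
    """The note estimates the longest non-decrementing forwarding
    grows as log2(N) with network size N."""
    return math.log2(network_size)

print(chain_lifetime_minutes(2, 10))    # 10 hops, 2 h uptime -> 12.0 minutes
print(chain_lifetime_minutes(2, 2))     # 2 hops -> 60.0 minutes
print(longest_forwarding_hops(1000))    # ~10 hops
print(longest_forwarding_hops(16000))   # ~14 hops
```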
Re: manifest files and forwarding
Original Message
On May 3, 2017, 3:16 AM, Arne Babenhauserheide <arne_...@web.de> wrote:

> Stefanie Roos writes:
>> Thanks.
>>
>> Sorry, bit late in answering
>
> I documented an easier-to-implement version of your idea as a note in
>
> https://freenet.mantishub.io/view.php?id=3640#c12321
> (0003640: Bundles (unencrypted tunnels))
>
> The difference is that it bundles based on the source peer and mixes in
> local requests (routing by peer only uses information we have at routing
> time).
>
> This is the note I added — does it fit your proposal well enough?
>
> - For each source peer for which we do not decrement HTL18, select a target
>   peer at random to whom we forward all HTL18 requests of the source peer.
> - Mix all our local requests with those of one of the source peers, selected
>   at random.
>
> This approach should defeat attacks based on the distribution of requests
> for chunks from known files in the requests of a given node.

If looking only at request distribution, yes, but there are still request rates to consider. Files that aren't all routed through a single peer have higher request rates, because the node can use all its peers. (We'd also need rate limits to prevent distinguishing a bundle receiver from a non-bundling requester by request rate - which is another performance hit.)

It introduces a new vulnerability as well: the only time an attacker can expect to receive a bundle - requests for all blocks in a large file - is when connected to the source of the request, and that gives all-but-certainty that the connected node is the requester.

I guess this ends up being a numbers game? Currently I think long links are more liable to exceed a uniform share because they cover more of the location space, and those are 30% of the links. [0] This would maintain that problem, but for 50% of files (or whatever it ends up being) one peer has the potential to gain all-but-certain knowledge of what is being fetched. This means attacker certainty of 1/(peer count) for half the files fetched. Is that an improvement over the alternative - 30% of connections under suspicion for all files - which still applies when not bundling?

[0] https://github.com/freenet/fred/blob/build01478/src/freenet/node/OpennetManager.java#L98

> possible drawbacks:
>
> - Disconnecting during the fetch would restart all the non-finished blocks
>   on another peer. Therefore local requests would have increased latency,
>   and strongly increased jitter in latency (losing the wrong peer during
>   transfer would require restarting all non-finished requests).
>
> - The anonymity set *with complete knowledge* (which might be attainable via
>   timing attacks by selectively DoSing your peers, one by one or grouped) is
>   just about 2-3 against one of the nodes (the number of HTL18 hops). There
>   is one node for which capturing packets from you means that there's a
>   30-50% probability that you're the originator. However, for all other
>   nodes none of your traffic goes through them — you're merely forwarding.
>   So the actual probability that you're the originator of any captured
>   stream of randomly sorted request keys is only 6-10% (with 5 peers for
>   whom you do not decrement HTL).
>
> - A small subset of these bundles might propagate very far (in a network of
>   1000 nodes, the average result for the longest forwarding should be 10
>   hops; in a network of 16000, 14 hops; and so on as log2(N)), so peers
>   might have to replace a target if the latency for the bundled requests is
>   very high (this will limit the maximum length of the forwarding). I'm not
>   sure if our existing connection-dropping conditions will fire here (due to
>   timeouts or a too-small success rate). Churn should also limit the length
>   of the tunnels: if the average session uptime of a peer is 2 hours, a
>   10-hop forwarding should typically be broken within 12 minutes, while a
>   2-hop forwarding should live for one hour.
>
> Best wishes,
> Arne
> --
> Unpolitisch sein
> heißt politisch sein
> ohne es zu merken
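The numbers game in the last paragraph can be made concrete with a toy calculation. This is my own framing, assuming the bundle target is chosen uniformly among peers and taking the 30% long-link figure from [0] at face value:

```python
def bundling_exposure(peer_count: int, bundled_fraction: float = 0.5) -> float:
    """With bundling: probability that one given peer gains near-certain
    knowledge of a particular fetch (the bundle must land on that peer,
    for the fraction of files that are bundled)."""
    return bundled_fraction / peer_count

def non_bundling_exposure(long_link_fraction: float = 0.30) -> float:
    """Without bundling: fraction of connections (the long links) that
    can accumulate statistical suspicion on every fetch."""
    return long_link_fraction

# With 10 peers: one peer near-certain for 5% of fetches, versus ~30%
# of links gathering weaker statistical evidence on every fetch.
print(bundling_exposure(10))
print(non_bundling_exposure())
```

The comparison is apples-to-oranges (certainty for few files versus suspicion for all files), which is exactly the open question above.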
Re: manifest files and forwarding
Stefanie Roos writes:
> Thanks.
>
> Sorry, bit late in answering

I documented an easier-to-implement version of your idea as a note in

https://freenet.mantishub.io/view.php?id=3640#c12321
(0003640: Bundles (unencrypted tunnels))

The difference is that it bundles based on the source peer and mixes in local requests (routing by peer only uses information we have at routing time).

This is the note I added — does it fit your proposal well enough?

- For each source peer for which we do not decrement HTL18, select a target
  peer at random to whom we forward all HTL18 requests of the source peer.
- Mix all our local requests with those of one of the source peers, selected
  at random.

This approach should defeat attacks based on the distribution of requests for chunks from known files in the requests of a given node.

possible drawbacks:

- Disconnecting during the fetch would restart all the non-finished blocks on
  another peer. Therefore local requests would have increased latency, and
  strongly increased jitter in latency (losing the wrong peer during transfer
  would require restarting all non-finished requests).

- The anonymity set *with complete knowledge* (which might be attainable via
  timing attacks by selectively DoSing your peers, one by one or grouped) is
  just about 2-3 against one of the nodes (the number of HTL18 hops). There is
  one node for which capturing packets from you means that there's a 30-50%
  probability that you're the originator. However, for all other nodes none of
  your traffic goes through them — you're merely forwarding. So the actual
  probability that you're the originator of any captured stream of randomly
  sorted request keys is only 6-10% (with 5 peers for whom you do not
  decrement HTL).

- A small subset of these bundles might propagate very far (in a network of
  1000 nodes, the average result for the longest forwarding should be 10 hops;
  in a network of 16000, 14 hops; and so on as log2(N)), so peers might have
  to replace a target if the latency for the bundled requests is very high
  (this will limit the maximum length of the forwarding). I'm not sure if our
  existing connection-dropping conditions will fire here (due to timeouts or a
  too-small success rate). Churn should also limit the length of the tunnels:
  if the average session uptime of a peer is 2 hours, a 10-hop forwarding
  should typically be broken within 12 minutes, while a 2-hop forwarding
  should live for one hour.

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein
ohne es zu merken
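The 6-10% figure in the second drawback follows from dividing the single-node probability by the number of non-decrementing peers. A one-line check (my own simplification, treating those peers as equally likely carriers from an outside observer's view):

```python
def overall_originator_probability(p_if_captured: float,
                                   nondecrement_peers: int) -> float:
    """If the one peer actually carrying your traffic would assign
    `p_if_captured` probability to you being the originator, but an
    observer cannot tell which of your `nondecrement_peers`
    non-decrementing links that is, the averaged probability that a
    captured stream originates with you divides by that count."""
    return p_if_captured / nondecrement_peers

print(overall_originator_probability(0.30, 5))  # ~0.06 -> 6%
print(overall_originator_probability(0.50, 5))  # ~0.10 -> 10%
```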
Re: manifest files and forwarding
Hi Stef,

I'm sorry for only answering now (this got stuck in my TODO list).

Stefanie Roos writes:
>> I have a question regarding the anonymity of downloading a file that is
>> split into a reasonably large number of blocks. Are the requests for all
>> blocks forwarded independently using FoF routing (or the hill-climbing
>> algorithm if FoF is disabled)?

As far as I know, yes.

>> If so, doesn't that enable the following attack: Assume an adversary
>> wants to find out if one of her peers is downloading the file. She can
>> obtain the manifest file and thus the CHK keys of all blocks. Someone
>> downloading the file will request all blocks, forwarding the requests to
>> different peers. These will forward the request to their peers. So
>> likely their peers will receive more block requests than non-peers. So,
>> if the adversary wants to find out if she is connected to the requester,
>> shouldn't receiving a high number of requests for the different blocks
>> of the same file be a really good indicator that this peer is the actual
>> requester and not only forwarding?

As far as I understand it, in the case of a uniform network without backoff, this would be true. Though if you have a single slow peer for which you are the only long-distance peer, most of its requests would flow through you. (30% of peers are long-distance, 70% are short-distance — per the forced link length distribution we added a while ago.)

>> Wouldn't it be better to add the possibility of forwarding all block
>> requests along the same link initially? It could be tied to the
>> probabilistic HTL decrease: the initiator/forwarder with HTL=18 of a
>> request uses a random peer. If HTLDecrement==false is set for that
>> connection, all block requests are forwarded to that peer (or rather one
>> request including the manifest file); otherwise all of them are routed
>> individually as it is now (if that is what is happening now). Now, the
>> adversary can use the above attack to tell which peer started routing
>> rather than random forwarding, but that might not be the requester.

Do you mean pooling the non-decrementing HTL18 requests and serving all of those which arrived during a certain timeframe to a random peer in round-robin fashion? And doing the same for our own requests, so there would be a stream of HTL18 requests representing the whole file which is routed randomly and only gets split when the HTL is decremented? It looks like this could work for small files (where all requests fit within a certain timeframe).

Or do you mean choosing a random fixed target for each peer for which we do not decrement HTL18 and then forwarding all HTL18 requests through this static route? And also choosing one peer at random for our own requests? B always forwards HTL18 requests by A to C, and B always sends all its own requests to C? Then C would either decrement HTL18 for B (and start the actual requests) or not decrement HTL18 for B and forward all those requests to D, potentially mixing in its own requests (but that cannot be seen from outside except by controlling both B and D).

This should actually defeat the attack, I think, with only a 20-30% increase in bandwidth consumption (due to the small size of the network we currently only have an average of around 4 hops - at HTL16 requests are already in close routing with distance below 0.001; see success rates at http://127.0.0.1:/stats/?fproxyAdvancedMode=2).

The disadvantage would be that there could be routes which never decrement HTL, so some peers would have all their requests blackholed. But these routes could be detected at the originator (and might actually already be detected: in that case none of the node's own requests would succeed at all).

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein
ohne es zu merken
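The second interpretation (static per-source-peer routes) can be sketched in a few lines. This is my own illustration, not fred code; the class and method names are hypothetical:

```python
import random

class BundlingNode:
    """Sketch of static bundle routing: each node fixes one random
    forwarding target per source peer and sends all of that peer's
    HTL18 requests there; its own requests are mixed into one randomly
    chosen bundle (the `local_target` link)."""

    def __init__(self, peers):
        self.peers = list(peers)
        self.targets = {}                  # source peer -> fixed target
        self.local_target = random.choice(self.peers)

    def route_htl18(self, source):
        """Return the fixed target for a source peer's HTL18 bundle,
        chosen at random (excluding the source itself) on first use."""
        if source not in self.targets:
            candidates = [p for p in self.peers if p != source]
            self.targets[source] = random.choice(candidates)
        return self.targets[source]

node = BundlingNode(["A", "C", "E", "F"])
first = node.route_htl18("A")
# All of A's HTL18 requests take the same static route:
assert all(node.route_htl18("A") == first for _ in range(100))
```

The blackhole concern above corresponds to a chain of such fixed routes in which no node ever decrements; detecting it locally would mean noticing that nothing sent via `local_target` ever succeeds.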
Re: manifest files and forwarding
Thanks.

Sorry, bit late in answering.

>> Original Message
>> Subject: Re: manifest files and forwarding
>> Local Time: April 12, 2017 11:48 PM
>> UTC Time: April 13, 2017 3:48 AM
>> From: st...@asksteved.com
>> To: devl@freenetproject.org
>>
>> Sorry for the top reply; my mobile email client's capabilities are
>> lacking.
>>
>> Yes. This category of attack is a thing. It sounds similar to a
>> (flawed) law enforcement attack described in a paper dated 2013 that
>> leaked a while back:
>> https://www.reddit.com/r/Freenet/comments/4ebw9w/more_information_on_law_enforcements_freenet/
>> The site providing it has since been password protected, but the
>> reaction remains.
>>
>> Do I understand your countermeasure proposal correctly: for each
>> file, probabilistically choose a peer to route all requests for it?
>> Interesting!

Essentially yes, in the same random but deterministic manner in which the HTL is decremented or not. It would likely require one extra hop or less per request, depending on how exactly it is done.

>> It would of course worsen routing, but maybe not too much. IIRC
>> routing is more a function of link length distribution than
>> individual misrouting. I worry that, because a single node cannot
>> accept as many requests, files could alternate between current and
>> worse performance. The temptation would be to hide it behind some
>> higher network security level, but this kind of thing is only useful
>> if it's the default. Do we know whether a peer can evaluate HTL
>> distribution, or the location distribution of blocks known to be in
>> a file, or something similar, to guess whether probabilistic
>> decrement is in use? That would be my concern about tying it to the
>> same decision.
>>
>> Other than that I like this proposal!
>> Original Message
>> On Apr 12, 2017, 12:25 PM, Stefanie Roos <stefanie.r...@uwaterloo.ca> wrote:
>>
>> > Hi,
>> >
>> > we are currently looking into different methods for censorship-resistant
>> > publication in distributed systems using different replication techniques.
>> >
>> > I have a question regarding the anonymity of downloading a file that is
>> > split into a reasonably large number of blocks. Are the requests for all
>> > blocks forwarded independently using FoF routing (or the hill-climbing
>> > algorithm if FoF is disabled)?
>> >
>> > If so, doesn't that enable the following attack: Assume an adversary
>> > wants to find out if one of her peers is downloading the file. She can
>> > obtain the manifest file and thus the CHK keys of all blocks. Someone
>> > downloading the file will request all blocks, forwarding the requests to
>> > different peers. These will forward the request to their peers. So
>> > likely their peers will receive more block requests than non-peers. So,
>> > if the adversary wants to find out if she is connected to the requester,
>> > shouldn't receiving a high number of requests for the different blocks
>> > of the same file be a really good indicator that this peer is the actual
>> > requester and not only forwarding? The math is a bit more complicated,
>> > as the number of files per peer will not be uniform. Nodes have few
>> > peers at a large distance and those have a higher chance of being the
>> > closest peer to a CHK block (or have the closest peer to a key if FoF
>> > routing is enabled). Nevertheless, I think this is clearly a serious
>> > problem, if I understand what is happening correctly.
>> >
>> > Wouldn't it be better to add the possibility of forwarding all block
>> > requests along the same link initially? It could be tied to the
>> > probabilistic HTL decrease: the initiator/forwarder with HTL=18 of a
>> > request uses a random peer. If HTLDecrement==false is set for that
>> > connection, all block requests are forwarded to that peer (or rather
>> > one request including the manifest file); otherwise all of them are
>> > routed individually as it is now (if that is what is happening now).
>> > Now, the adversary can use the above attack to tell which peer started
>> > routing rather than random forwarding, but that might not be the
>> > requester.
>> >
>> > Any thoughts on that?
>> >
>> > Thanks,
>> >
>> > Stef
>>
>> --
>> Stefanie Roos
>> Postdoctoral Fellow
>> CrySP, University of Waterloo
>> https://cs.uwaterloo.ca/~sroos/

--
Stefanie Roos
Postdoctoral Fellow
CrySP, University of Waterloo
https://cs.uwaterloo.ca/~sroos/
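For context on "the same random but deterministic manner": as I understand it, fred fixes the max-HTL decrement decision once per connection, so the choice is random across links but stable on any one link. A sketch of such a stable per-link coin (my own simplification; the secret and derivation here are hypothetical, not fred's actual mechanism) that could also pin a per-file forwarding peer:

```python
import hashlib

def deterministic_choice(connection_secret: bytes, label: str,
                         p_num: int, p_den: int) -> bool:
    """A biased coin (probability p_num/p_den) that always gives the
    same answer for the same connection secret and label: random
    across links, deterministic on any one link."""
    h = hashlib.sha256(connection_secret + label.encode()).digest()
    return int.from_bytes(h[:8], "big") % p_den < p_num

secret = b"example-per-link-secret"   # hypothetical per-connection value
first = deterministic_choice(secret, "decrement-htl18", 1, 2)
# The same link always makes the same decision:
assert all(deterministic_choice(secret, "decrement-htl18", 1, 2) == first
           for _ in range(10))
```

Steve's concern above is whether an observer can detect which way a given link's coin fell, e.g. from the HTL distribution of requests arriving on it.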
Re: manifest files and forwarding
New information on the law enforcement attack is now public record:
https://www.reddit.com/r/Freenet/comments/66f0n3/missouri_law_enforcements_freenet_attack_now/

Original Message
Subject: Re: manifest files and forwarding
Local Time: April 12, 2017 11:48 PM
UTC Time: April 13, 2017 3:48 AM
From: st...@asksteved.com
To: devl@freenetproject.org

Sorry for the top reply; my mobile email client's capabilities are lacking.

Yes. This category of attack is a thing. It sounds similar to a (flawed) law enforcement attack described in a paper dated 2013 that leaked a while back:
https://www.reddit.com/r/Freenet/comments/4ebw9w/more_information_on_law_enforcements_freenet/
The site providing it has since been password protected, but the reaction remains.

Do I understand your countermeasure proposal correctly: for each file, probabilistically choose a peer to route all requests for it? Interesting!

It would of course worsen routing, but maybe not too much. IIRC routing is more a function of link length distribution than individual misrouting. I worry that, because a single node cannot accept as many requests, files could alternate between current and worse performance. The temptation would be to hide it behind some higher network security level, but this kind of thing is only useful if it's the default. Do we know whether a peer can evaluate HTL distribution, or the location distribution of blocks known to be in a file, or something similar, to guess whether probabilistic decrement is in use? That would be my concern about tying it to the same decision.

Other than that I like this proposal!

Original Message
On Apr 12, 2017, 12:25 PM, Stefanie Roos <stefanie.r...@uwaterloo.ca> wrote:

> Hi,
>
> we are currently looking into different methods for censorship-resistant
> publication in distributed systems using different replication techniques.
>
> I have a question regarding the anonymity of downloading a file that is
> split into a reasonably large number of blocks. Are the requests for all
> blocks forwarded independently using FoF routing (or the hill-climbing
> algorithm if FoF is disabled)?
>
> If so, doesn't that enable the following attack: Assume an adversary
> wants to find out if one of her peers is downloading the file. She can
> obtain the manifest file and thus the CHK keys of all blocks. Someone
> downloading the file will request all blocks, forwarding the requests to
> different peers. These will forward the request to their peers. So
> likely their peers will receive more block requests than non-peers. So,
> if the adversary wants to find out if she is connected to the requester,
> shouldn't receiving a high number of requests for the different blocks
> of the same file be a really good indicator that this peer is the actual
> requester and not only forwarding? The math is a bit more complicated,
> as the number of files per peer will not be uniform. Nodes have few
> peers at a large distance and those have a higher chance of being the
> closest peer to a CHK block (or have the closest peer to a key if FoF
> routing is enabled). Nevertheless, I think this is clearly a serious
> problem, if I understand what is happening correctly.
>
> Wouldn't it be better to add the possibility of forwarding all block
> requests along the same link initially? It could be tied to the
> probabilistic HTL decrease: the initiator/forwarder with HTL=18 of a
> request uses a random peer. If HTLDecrement==false is set for that
> connection, all block requests are forwarded to that peer (or rather one
> request including the manifest file); otherwise all of them are routed
> individually as it is now (if that is what is happening now). Now, the
> adversary can use the above attack to tell which peer started routing
> rather than random forwarding, but that might not be the requester.
>
> Any thoughts on that?
>
> Thanks,
>
> Stef
>
> --
> Stefanie Roos
> Postdoctoral Fellow
> CrySP, University of Waterloo
> https://cs.uwaterloo.ca/~sroos/
Re: manifest files and forwarding
Sorry for the top reply; my mobile email client's capabilities are lacking.

Yes. This category of attack is a thing. It sounds similar to a (flawed) law enforcement attack described in a paper dated 2013 that leaked a while back:
https://www.reddit.com/r/Freenet/comments/4ebw9w/more_information_on_law_enforcements_freenet/
The site providing it has since been password protected, but the reaction remains.

Do I understand your countermeasure proposal correctly: for each file, probabilistically choose a peer to route all requests for it? Interesting!

It would of course worsen routing, but maybe not too much. IIRC routing is more a function of link length distribution than individual misrouting. I worry that, because a single node cannot accept as many requests, files could alternate between current and worse performance. The temptation would be to hide it behind some higher network security level, but this kind of thing is only useful if it's the default. Do we know whether a peer can evaluate HTL distribution, or the location distribution of blocks known to be in a file, or something similar, to guess whether probabilistic decrement is in use? That would be my concern about tying it to the same decision.

Other than that I like this proposal!

Original Message
On Apr 12, 2017, 12:25 PM, Stefanie Roos wrote:

> Hi,
>
> we are currently looking into different methods for censorship-resistant
> publication in distributed systems using different replication techniques.
>
> I have a question regarding the anonymity of downloading a file that is
> split into a reasonably large number of blocks. Are the requests for all
> blocks forwarded independently using FoF routing (or the hill-climbing
> algorithm if FoF is disabled)?
>
> If so, doesn't that enable the following attack: Assume an adversary
> wants to find out if one of her peers is downloading the file. She can
> obtain the manifest file and thus the CHK keys of all blocks. Someone
> downloading the file will request all blocks, forwarding the requests to
> different peers. These will forward the request to their peers. So
> likely their peers will receive more block requests than non-peers. So,
> if the adversary wants to find out if she is connected to the requester,
> shouldn't receiving a high number of requests for the different blocks
> of the same file be a really good indicator that this peer is the actual
> requester and not only forwarding? The math is a bit more complicated,
> as the number of files per peer will not be uniform. Nodes have few
> peers at a large distance and those have a higher chance of being the
> closest peer to a CHK block (or have the closest peer to a key if FoF
> routing is enabled). Nevertheless, I think this is clearly a serious
> problem, if I understand what is happening correctly.
>
> Wouldn't it be better to add the possibility of forwarding all block
> requests along the same link initially? It could be tied to the
> probabilistic HTL decrease: the initiator/forwarder with HTL=18 of a
> request uses a random peer. If HTLDecrement==false is set for that
> connection, all block requests are forwarded to that peer (or rather one
> request including the manifest file); otherwise all of them are routed
> individually as it is now (if that is what is happening now). Now, the
> adversary can use the above attack to tell which peer started routing
> rather than random forwarding, but that might not be the requester.
>
> Any thoughts on that?
>
> Thanks,
>
> Stef
>
> --
> Stefanie Roos
> Postdoctoral Fellow
> CrySP, University of Waterloo
> https://cs.uwaterloo.ca/~sroos/
Re: manifest files and forwarding
> Hi,
>
> we are currently looking into different methods for censorship-resistant
> publication in distributed systems using different replication techniques.
>
> I have a question regarding the anonymity of downloading a file that is
> split into a reasonably large number of blocks. Are the requests for all
> blocks forwarded independently using FoF routing (or the hill-climbing
> algorithm if FoF is disabled)?
>
> If so, doesn't that enable the following attack: Assume an adversary
> wants to find out if one of her peers is downloading the file. She can
> obtain the manifest file and thus the CHK keys of all blocks. Someone
> downloading the file will request all blocks, forwarding the requests to
> different peers. These will forward the request to their peers. So
> likely their peers will receive more block requests than non-peers. So,
> if the adversary wants to find out if she is connected to the requester,
> shouldn't receiving a high number of requests for the different blocks
> of the same file be a really good indicator that this peer is the actual
> requester and not only forwarding? The math is a bit more complicated,
> as the number of files per peer will not be uniform. Nodes have few
> peers at a large distance and those have a higher chance of being the
> closest peer to a CHK block (or have the closest peer to a key if FoF
> routing is enabled). Nevertheless, I think this is clearly a serious
> problem, if I understand what is happening correctly.
>
> Wouldn't it be better to add the possibility of forwarding all block
> requests along the same link initially? It could be tied to the
> probabilistic HTL decrease: the initiator/forwarder with HTL=18 of a
> request uses a random peer. If HTLDecrement==false is set for that
> connection, all block requests are forwarded to that peer (or rather one
> request including the manifest file); otherwise all of them are routed
> individually as it is now (if that is what is happening now). Now, the
> adversary can use the above attack to tell which peer started routing
> rather than random forwarding, but that might not be the requester.
>
> Any thoughts on that?
>
> Thanks,
>
> Stef
>
> --
> Stefanie Roos
> Postdoctoral Fellow
> CrySP, University of Waterloo
> https://cs.uwaterloo.ca/~sroos/
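The attack described in this message is easy to see in a toy simulation (my own illustration, with routing simplified to a uniform random peer choice rather than FoF or hill-climbing):

```python
import random

random.seed(1)
BLOCKS = 1000   # blocks in the file (CHKs known from the manifest)
PEERS = 10      # number of peers per node

# Requester: every block request goes out over one of its peers, so a
# directly connected adversary peer sees a large sample of the file's keys.
seen_by_adversary = sum(1 for _ in range(BLOCKS)
                        if random.randrange(PEERS) == 0)

# Mere forwarder: it only relays the subset that was routed to it by the
# requester (here ~1/PEERS of the blocks) and again spreads that subset
# over its own peers, so any one of them sees far fewer of the keys.
relayed = BLOCKS // PEERS
seen_via_forwarder = sum(1 for _ in range(relayed)
                         if random.randrange(PEERS) == 0)

# The adversary peer sees roughly PEERS times more distinct block keys
# from the requester than from a node that is merely forwarding.
print(seen_by_adversary, seen_via_forwarder)
```

The real distribution is skewed by link lengths (as noted in the message), but the order-of-magnitude gap between requester and forwarder is what makes the request count a usable signal.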