Hi Stef,

I’m sorry for only answering now (this got stuck in my TODO list).

Stefanie Roos <[email protected]> writes:

>> I have a question regarding the anonymity of downloading a file that is
>> split into a reasonably large number of blocks. Are the requests for all
>> blocks forwarded independently using FoF routing (or the hill-climbing
>> algorithm if FoF is disabled)?

As far as I know, yes.

>> If so, doesn't that enable the following attack: Assume an adversary
>> wants to find out if one of her peers is downloading the file. She can
>> obtain the manifest file and thus the CHK keys of all blocks. Someone
>> downloading the file will request all blocks, forwarding the requests to
>> different peers. These will forward the requests to their peers. So
>> their peers will likely receive more block requests than non-peers. So,
>> if the adversary wants to find out if she is connected to the requester,
>> shouldn't receiving a high number of requests for the different blocks
>> of the same file be a really good indicator that this peer is the actual
>> requester and not only forwarding?

As far as I understand it, in the case of a uniform network without
backoff, this would be true. If you are the only long-distance peer of a
single slow peer, though, most of its requests would flow through you.
(30% of peers are long-distance, 70% short-distance, as per the forced
link length distribution we added a while ago.)
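
To make the indicator we are talking about concrete, here is a rough
Java sketch of the counting side of the attack, assuming the adversary
has already pulled the block CHKs out of the manifest. All names
(BlockRequestCounter, observeRequest, coverage) are made up for
illustration and are not fred code:

    import java.util.*;

    class BlockRequestCounter {
        private final Set<String> fileBlockKeys;   // CHKs taken from the manifest
        private final Map<String, Set<String>> seenPerPeer = new HashMap<>();

        BlockRequestCounter(Set<String> fileBlockKeys) {
            this.fileBlockKeys = fileBlockKeys;
        }

        // Called for every incoming request the adversary's node sees.
        void observeRequest(String peerId, String requestedKey) {
            if (fileBlockKeys.contains(requestedKey)) {
                seenPerPeer.computeIfAbsent(peerId, p -> new HashSet<>())
                           .add(requestedKey);
            }
        }

        // Fraction of the file's blocks requested via this peer; with
        // independent per-block routing, the link to the real requester
        // should score far higher than links that merely forward a few.
        double coverage(String peerId) {
            Set<String> seen = seenPerPeer.getOrDefault(peerId, Collections.emptySet());
            return (double) seen.size() / fileBlockKeys.size();
        }
    }

Your question is essentially whether coverage() close to 1 on a single
link reliably identifies the requester rather than a forwarder.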

>> Wouldn't it be better to add the possibility of forwarding all block
>> requests along the same link initially? It could be tied to the
>> probabilistic HTL decrease: the initiator/forwarder of a request with HTL=18
>> uses a random peer. If HTLDecrement==false is set for that connection, all
>> block requests are forwarded to that peer (or rather one request
>> including the manifest file), otherwise all of them are routed
>> individually as it is now (if that is what is happening now). Now, the
>> adversary can use the above attack to tell which peer started routing
>> rather than random forwarding but that might not be the requester.

Do you mean pooling the non-decrementing HTL18 requests and serving all
of those that arrived during a certain timeframe to a random peer in
round-robin fashion? And doing the same for our own requests, so there
would be a stream of HTL18 requests representing the whole file which is
routed randomly and only gets split when HTL is decremented?

It looks like this could work for small files (where all requests fit
within a certain timeframe).
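
If it is this first interpretation, here is a minimal Java sketch of the
pooling I have in mind, with a made-up flush window and made-up names
(Htl18RequestPool, forwardBatch), not fred API. I pick one random peer
per window here rather than strict round-robin:

    import java.util.*;
    import java.util.concurrent.*;

    class Htl18RequestPool {
        private final List<String> pool = new ArrayList<>();  // keys still at HTL 18
        private final List<String> peers;                     // connected peer ids
        private final Random random = new Random();

        Htl18RequestPool(List<String> peers, long windowMillis) {
            this.peers = peers;
            // Flush once per window, so every request collected during that
            // timeframe leaves towards the same randomly chosen next hop.
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                    this::flush, windowMillis, windowMillis, TimeUnit.MILLISECONDS);
        }

        synchronized void enqueue(String key) {
            pool.add(key);
        }

        private synchronized void flush() {
            if (pool.isEmpty()) return;
            String nextHop = peers.get(random.nextInt(peers.size()));
            forwardBatch(nextHop, new ArrayList<>(pool));      // hypothetical send
            pool.clear();
        }

        private void forwardBatch(String peer, List<String> keys) {
            System.out.println("forwarding " + keys.size() + " HTL18 requests to " + peer);
        }
    }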

Or do you mean choosing a random fixed target for each peer for which we
do not decrement HTL18 and then forwarding all HTL18 requests through
this static route? And also choosing one peer at random for our own
requests?

So B always forwards HTL18 requests from A to C, and
B always sends all of its own requests to C?

Then C would either decrement the HTL18 requests coming from B (and
start routing the actual requests individually) or not decrement them
and forward all of them to D, potentially mixing in its own requests
(which cannot be seen from outside except by controlling both B and D).
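
A rough sketch of that second reading, assuming a 50/50 decrement
decision per predecessor and one pinned successor per predecessor; the
class and method names are purely illustrative, not fred code:

    import java.util.*;

    class StaticHtl18Router {
        private final List<String> peers;                      // connected peer ids
        private final Map<String, Boolean> decrementFor = new HashMap<>();
        private final Map<String, String> fixedNextHop = new HashMap<>();
        private final Random random = new Random();

        StaticHtl18Router(List<String> peers) {
            this.peers = peers;
        }

        // Returns null if we decrement for this predecessor (i.e. we start
        // routing each block request individually from here), otherwise the
        // pinned next hop that receives the whole HTL18 stream unchanged.
        String routeHtl18(String predecessor) {
            boolean decrement = decrementFor.computeIfAbsent(
                    predecessor, p -> random.nextBoolean());   // assumed 50/50
            if (decrement) return null;
            return fixedNextHop.computeIfAbsent(predecessor, p -> {
                List<String> candidates = new ArrayList<>(peers);
                candidates.remove(predecessor);                // never route back
                return candidates.get(random.nextInt(candidates.size()));
            });
        }

        // Our own requests get one pinned random peer as well, so the whole
        // file leaves us as a single HTL18 stream.
        String routeOwnHtl18() {
            return fixedNextHop.computeIfAbsent("self",
                    s -> peers.get(random.nextInt(peers.size())));
        }
    }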

This should actually defeat the attack, I think, with only a 20-30%
increase in bandwidth consumption (because, due to the small size of the
network, we currently only have an average of around 4 hops; at HTL16
requests are already in close routing with a distance below 0.001, see
the success rates at http://127.0.0.1:8888/stats/?fproxyAdvancedMode=2).
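
As a back-of-the-envelope check on the 20-30% figure: if the bundled
HTL18 prefix costs roughly one extra random hop per block request before
the stream splits and routes normally (the one-hop prefix is my
assumption, not a measurement), the overhead on a ~4-hop average route
comes out at about 25%:

    class OverheadEstimate {
        public static void main(String[] args) {
            double averageHops = 4.0;  // current average route length, from the stats page
            double extraHops = 1.0;    // assumed length of the shared random HTL18 prefix
            System.out.printf("overhead ~ %.0f%%%n", 100.0 * extraHops / averageHops);
            // prints "overhead ~ 25%", which sits inside the 20-30% range
        }
    }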

The disadvantage would be that there could be routes which never
decrement HTL, so some peers would have all their requests
blackholed. But these routes could be detected at the originator (and
might actually already be detected: in that case none of its own
requests would succeed at all).
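
One way the originator could detect such a blackholed route, sketched
with a completely made-up threshold (re-pick the pinned peer after 50 of
our own requests with zero successes):

    import java.util.*;

    class BlackholeDetector {
        private final List<String> peers;       // connected peer ids
        private final Random random = new Random();
        private String pinnedPeer;
        private int sent = 0;
        private int succeeded = 0;

        BlackholeDetector(List<String> peers) {
            this.peers = peers;
            this.pinnedPeer = peers.get(random.nextInt(peers.size()));
        }

        String currentNextHop() {
            return pinnedPeer;
        }

        // Feed in the outcome of each of our own requests sent via the
        // pinned peer; zero successes over a whole sample strongly suggests
        // a route that never decrements HTL and drops everything.
        void recordResult(boolean success) {
            sent++;
            if (success) succeeded++;
            if (sent >= 50 && succeeded == 0) {
                pinnedPeer = peers.get(random.nextInt(peers.size()));
                sent = 0;
                succeeded = 0;
            }
        }
    }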

Best wishes,
Arne
--
To be apolitical
is to be political
without noticing it
