On Tuesday 08 Jan 2013 10:09:57 Arne Babenhauserheide wrote:
> I only partially understood the rest. Especially I did not understand the 
> advantage of #3 over #4. 

#3 might be faster than #4. #4 does a full insert at each stage, and there are 
2 or 3 stages (with 1 or 2 reveals), so it costs approx 36-54 hops: 1/2 to 1/3 
the speed of a clean insert.

#3 uses two tunnels and then a regular insert, so it might be 10*2 + 18 = 38 
hops, i.e. half the speed of a clean insert.

(#1 is very close to the speed of a regular insert)
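
The arithmetic above can be checked in a few lines (a sketch using the figures 
from this thread: tunnel length ~10 hops, clean insert ~18 hops; both numbers 
are the rough estimates quoted here, not measured values):

```python
# Rough hop-cost comparison of the schemes discussed above.
# Assumed figures (from this thread): tunnel ~10 hops, clean insert ~18 hops.
TUNNEL_HOPS = 10
CLEAN_INSERT_HOPS = 18

# Scheme #4: a full insert at each of 2-3 stages (1-2 reveals).
scheme4 = [stages * CLEAN_INSERT_HOPS for stages in (2, 3)]

# Scheme #3: two tunnels plus one regular insert.
scheme3 = 2 * TUNNEL_HOPS + CLEAN_INSERT_HOPS

print(scheme4)                       # [36, 54] hops, 1/2 to 1/3 clean speed
print(scheme3)                       # 38 hops
print(CLEAN_INSERT_HOPS / scheme3)   # ~0.47, i.e. about half clean speed
```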

However, these figures assume the hop numbers are reasonable; a diameter of 10 
hops may be too high, and inserting to 18 hops is probably too high too. The 
number of nodes to insert to should remain a bit bigger than the diameter 
AFAICS. 

We can tune these numbers with various kinds of probes. E.g. we can tune 
inserts with test inserts at varying HTLs, or with "data probes" (test-only 
keys that can tell us, on request, how many nodes they are stored on).

And of course on opennet we could use direct connections for tunnels - but I 
have serious doubts that Sybil can ever be defeated on opennet.

> For example I don’t see the problem with the UI in 
> #4: “veiled upload in progress” … “lifting the veil”. Or easier: “hidden 
> upload in progress”…“revealing”.
> 
> That would turn our current 2 stages into 3 stages:
> 
> - compressing/splitting/encoding
> - uploading
> - revealing
> 
> That would even make it easier to explain, why you only get the keys when the 
> insert has finished :)

Right, the UI for #4 is fine. #3 is designed to be fire-and-forget, so its UI 
is likely more opaque.
> 
> I cannot really solve prioritizing (not sufficient knowledge), but at least I 
> can ask questions:
> 
> - What’s the attack we want to stop right now? 

There are several important attacks; defending against some of them trades off 
against usability. In order of feasibility (of both implementing the attack and 
beating it):

1. MAST on top blocks and chat posts. This is unavoidable.
2. MAST on reinserts of known content. It would be extremely helpful for 
filesharing if we could do reinserts safely. Even if they are selective, there 
will still be a significant number of predictable blocks inserted. IMHO we want 
to avoid complex client layer workarounds such as healing keys if possible.
3. One connection to every node (especially on opennet), global surveillance. 
(Correlation attack on every node)
4A. Controlling most nodes and thus controlling the keyspace, probably 
compromising most tunnels, etc.
4B. Controlling all connections to each of a large group of nodes, slowly 
moving across the network eliminating targets.

IMHO we have credible solutions to #1-#3. #4 is harder, and may be intractable 
on opennet, if the attacker has moderate resources.

MAST is the critical one because it can be done without any significant 
resources. #3 requires some computing power, although IMHO it is quite feasible 
for many plausible attackers. #4 requires more.
> 
> - What’s the easiest way to fix it (in terms of necessary development time) 
> which does not lead us into a dead end?

Random routing at high HTL on inserts (easy to implement; monitoring the 
performance impact is probably the blocker):
- Reduces the number of samples the attacker gets for inserts; should give the 
same performance as the current code (because of the current 
no-cache-at-high-HTL rule).
- Can be applied to all inserts as there is no real performance cost.
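
A minimal sketch of that routing change (hypothetical names; `HTL_RANDOM_THRESHOLD` 
is an assumed parameter, not an existing constant, and the real routing code is 
considerably more involved):

```python
import random

# Assumed threshold: above it we route randomly, mirroring the existing
# no-cache-at-high-HTL rule; below it we route greedily as now.
HTL_RANDOM_THRESHOLD = 16

def circ_distance(a, b):
    """Circular keyspace distance on [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def next_peer(peer_locations, target_loc, htl):
    if htl > HTL_RANDOM_THRESHOLD:
        # High HTL: random route, so early hops leak no location info.
        return random.choice(peer_locations)
    # Low HTL: normal greedy routing towards the target location.
    return min(peer_locations, key=lambda p: circ_distance(p, target_loc))
```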

Now, to beat MAST, we only need to tunnel the predictable blocks: Usually this 
is the top block, or chat posts. It's always an SSK if we assume no same-key 
reinserts.

The easiest way to do that is rendezvous tunnels.

The best way is probably full-network source-routed real-time tunnels. But this 
is only for a single key, so it avoids a lot of the client-layer complexity 
associated with longer-lived tunnels. And of course it's slow - but that's not 
a problem for a single block after a large insert.

To protect inserts of known content, we need to tunnel *every block*. Ways to 
do this, in order:
- #1: Rendezvous tunnels. Nearly as cheap as inserts now. If we tweak HTL at 
the same time we might even get the same performance. Mediocre security, but 
still a great improvement on the present.
- #2: Real-time tunnels. Lots of them.
- #3: Non-real-time tunnels. More reliable per tunnel due to backups; fewer 
tunnels are needed, but each tunnel gives the attacker more opportunities. UI 
issues and feedback/completion complexity. Can't be used for "realtime". True 
fire-and-forget is possible on darknet; on opennet, the size of tunnels may 
have to be limited. Probably significantly safer than #2.
- #4: Preinsert/reveal. May use a single preinsert per key (which would avoid 
some of the complexity associated with tunnel reuse), or not.

Whichever we implement, we need to deal with the fact that we are sending many 
blocks down each tunnel, and probably using many tunnels to get acceptable 
performance. This involves both problematic performance/security/robustness 
tuning and some client layer complexity.

To beat attack #3 (one connection to every node on the network) we have pretty 
much the same requirements. Except that #1 is probably too weak given we are 
using multiple tunnels.

To protect requests, we have to use #1 or #2; a variant on #3 is conceivable 
for bulk requests, but expensive.

I'm not sure exactly how to quantify any of these strategies. I saw a paper 
years ago arguing that for tunneling on any structured DHT, the probability of 
compromising a route can never be better than p ~ c/n, where c is the number 
of compromised nodes and n the total number of nodes. #1 is ~ c/n; the others 
*ought* to be ~ c^2/n^2, but tunnel setup might reduce that to c/n. Also, we 
move many actual hops within a single source-routed move; does that give the 
attacker any more information? The constant might be fairly large, and of 
course if we have to use many tunnels it gets larger.
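
To get a rough feel for those bounds, a toy sketch (the node counts are purely 
illustrative, the constants are ignored, and tunnels are assumed to fail 
independently):

```python
# Toy comparison of the compromise-probability models mentioned above:
# p ~ c/n for scheme #1 vs p ~ (c/n)^2 for the two-stage schemes,
# ignoring the (possibly large) constant factors.
def p_single(c, n):
    return c / n

def p_two_stage(c, n):
    return (c / n) ** 2

def p_any_of(p, tunnels):
    """Probability that at least one of `tunnels` independent tunnels is
    compromised - this is how using many tunnels inflates the risk."""
    return 1 - (1 - p) ** tunnels

c, n = 100, 10_000   # illustrative: 1% of nodes compromised
print(p_single(c, n))                    # 0.01
print(p_two_stage(c, n))                 # 0.0001
print(p_any_of(p_two_stage(c, n), 50))   # ~0.005 across 50 tunnels
```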
> 
> Best wishes,
> Arne


_______________________________________________
Devl mailing list
[email protected]
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
