Matthew Toseland wrote:
> Umm, please read the presentation on 0.7. Specializations are simply
> fixed numbers in 0.7.  The problem with probabilistic caching according
> to specialization is that we need to deal with both very small networks
> and very large networks.  How do we sort this out?

It's quite simple: on smaller networks, the node's specialisation will 
be wider. You use the mean and standard deviation of the key 
distribution in the current store. If the standard deviation is large, 
you make it more likely to cache keys that are further away.
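
As a rough sketch of what I mean (the class and method names, the 
Gaussian weighting, and the treatment of key locations as plain linear 
doubles are all my own illustration - the real keyspace is circular, 
which this ignores):

import java.util.List;
import java.util.Random;

// Illustrative only: decide whether to cache a key based on how far it
// is from the store's current specialisation, summarised by the mean
// and standard deviation of the key locations already in the store.
public class SpecialisationCache {
    private final Random random = new Random();

    // Mean of the key locations currently in the store.
    static double mean(List<Double> storeKeys) {
        double sum = 0.0;
        for (double k : storeKeys) sum += k;
        return sum / storeKeys.size();
    }

    // Standard deviation of the key locations currently in the store.
    static double stdDev(List<Double> storeKeys, double mean) {
        double sumSq = 0.0;
        for (double k : storeKeys) sumSq += (k - mean) * (k - mean);
        return Math.sqrt(sumSq / storeKeys.size());
    }

    // Cache with probability exp(-d^2 / (2 sigma^2)): on a small network
    // the store is spread out, sigma is large, and distant keys still get
    // cached; on a large network sigma shrinks and caching narrows.
    boolean shouldCache(double keyLocation, List<Double> storeKeys) {
        if (storeKeys.isEmpty()) return true; // empty store: cache anything
        double mu = mean(storeKeys);
        double sigma = Math.max(stdDev(storeKeys, mu), 1e-6);
        double d = Math.abs(keyLocation - mu); // linear distance for brevity
        double p = Math.exp(-(d * d) / (2 * sigma * sigma));
        return random.nextDouble() < p;
    }
}

The exact shape of the curve doesn't matter much; the point is that the 
width of the caching window falls straight out of the store statistics, 
so small and large networks are handled by the same rule.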

> Also we need to have
> an element of LRU, because otherwise nothing will ever be dropped.  So we
> have LRU for deletion, and something to decide whether or not to cache
> in the first place.

LRU might well be perfectly OK for deletion if specialisation is 
strongly encouraged elsewhere. I would have thought, however, that a 
compromise between LRU and distance from the core specialisation would 
keep content available for longer, without any obvious drawback.

The way I see it, LRU alone would only work if specialisation is very 
strongly exhibited (which means that specialisation needs to be encouraged).
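
To illustrate the compromise I have in mind (everything here - the class 
names, the idle-time units, the weighting constant - is an arbitrary 
sketch, not a proposal for concrete values): score each block by how 
long it has sat unrequested plus a penalty for distance from the 
specialisation, and drop the worst-scoring block when the store is full.

import java.util.List;

// Illustrative only: eviction that blends LRU with distance from the
// node's specialisation, instead of pure LRU.
class StoredBlock {
    double keyLocation;    // key location, assumed to be a double in [0, 1)
    long lastAccessMillis; // timestamp of the last request for this block
}

class CompromiseEviction {
    // How strongly distance from the specialisation counts against a
    // block, relative to one hour of idleness. Purely a tuning knob.
    static final double DISTANCE_WEIGHT = 24.0;

    // Higher score = better candidate for deletion.
    static double evictionScore(StoredBlock b, double specialisationMean,
                                long nowMillis) {
        double idleHours = (nowMillis - b.lastAccessMillis) / 3600000.0;
        double distance = Math.abs(b.keyLocation - specialisationMean);
        return idleHours + DISTANCE_WEIGHT * distance;
    }

    // Pick the block to drop when the store is full.
    static StoredBlock pickVictim(List<StoredBlock> store,
                                  double specialisationMean) {
        long now = System.currentTimeMillis();
        StoredBlock victim = null;
        double worst = -1.0;
        for (StoredBlock b : store) {
            double score = evictionScore(b, specialisationMean, now);
            if (score > worst) {
                worst = score;
                victim = b;
            }
        }
        return victim;
    }
}

With DISTANCE_WEIGHT at zero this degenerates to plain LRU, so the two 
policies can be traded off with a single parameter.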

> The problem with local requests is quite simply the Register attack. In
> other words, if we cache everything, and the store isn't full, and maybe
> even if it is, then once they have seized your store they can see what
> you have been browsing.

So the argument is that your store will not contain what you have been 
browsing UNLESS somebody else requested it through your node? What is 
the point? If caching is done by specialisation, then your node will 
contain a specialised subset of the keyspace. It would be equally 
unprovable (even statistically) that you requested the data in your 
node, because even in the case of large splitfiles, your node would only 
cache the same blocks it would have cached had they been relayed for 
another node.

That holds as long as you don't conspicuously have precisely 2/3 of a 
FEC splitfile in your store, which you wouldn't if the node is 
specialised, unless the network is so small that your node's 
specialisation standard deviation is _huge_.
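
To put a rough number on that (the block count is just an example 
figure, and treating each block as an independent caching decision is an 
idealisation):

// Illustrative only: if each block of an n-block FEC splitfile is
// cached independently with probability p (because only keys near the
// specialisation are kept), the chance of holding the 2/3 needed to
// reconstruct it is a binomial tail that collapses very quickly.
public class SplitfileOdds {
    // P(X >= k) for X ~ Binomial(n, p), terms summed via log space.
    static double binomialTail(int n, int k, double p) {
        double total = 0.0;
        for (int i = k; i <= n; i++) {
            double logTerm = logChoose(n, i)
                    + i * Math.log(p) + (n - i) * Math.log(1 - p);
            total += Math.exp(logTerm);
        }
        return total;
    }

    // log of the binomial coefficient C(n, k).
    static double logChoose(int n, int k) {
        double result = 0.0;
        for (int i = 1; i <= k; i++) {
            result += Math.log(n - k + i) - Math.log(i);
        }
        return result;
    }

    public static void main(String[] args) {
        int blocks = 384;              // example size only
        int needed = (2 * blocks) / 3; // the 2/3 threshold mentioned above
        for (double p : new double[]{0.1, 0.3, 0.5}) {
            System.out.printf("p=%.1f  P(>= 2/3 of blocks cached) = %.3g%n",
                    p, binomialTail(blocks, needed, p));
        }
    }
}

Even with a per-block caching probability of 0.5 the tail is negligible 
at this size, so a specialised store simply doesn't accumulate a 
reconstructable splitfile by accident.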

> On Wed, Nov 30, 2005 at 02:51:59PM +0000, Gordan Bobic wrote:
> 
>>How about simply encouraging specialisation, and caching only according to that?
>>
>>On insertion, every node (0-th or n-th hop, all the same) caches the 
>>content, depending on how near its specialisation the key is. Each node 
>>should have only one specialisation, IMO. If that is insufficient, the 
>>one specialisation should broaden and shift, but there should be only 
>>one peak in the key space concentration.
>>
>>Each node should always be aware of its current specialisation. As 
>>keys get cached and dropped, this will change, but the node must be 
>>aware of it. The further the key is from the node's specialisation, the 
>>less likely it should be to get cached.
>>
>>If each node specialises, the routing should improve, and so will the 
>>deniability. The policy of never caching local requests seems crazy. 
>>They should be treated the same as any other requests WRT caching.
>>
>>Everything else seems to be an unnecessary complication with benefits 
>>that are at best academic.
>>
>>Node specialisation should arise by design, not by coincidence.
>>
>>Just MHO.
>>
>>Gordan
>>
>>Matthew Toseland wrote:
>>
>>Two possible caching policies for 0.7:
>>1. Cache everything, including locally requested files.
>>PRO: Attacker cannot distinguish your local requests from your passed-on
>>requests.
>>CON: He can however probe your datastore (either remotely or if it is
>>seized). (the Register attack)
>>BETTER FOR: Opennet.
>>2. Don't cache locally requested files at all. (Best with client-cache).
>>PRO: Attacker gains no information on your local requests from your store.
>>PRO: Useful option for debugging, even if not on in production.
>>CON: If neighbours then request the file, and don't find it, they know
>>for sure it's local.
>>BETTER FOR: Darknet. But depends on how much you trust your peers.
>>
>>Interesting tradeoff. Unacceptable really.
>>
>>We all know that the long term solution is to implement premix routing,
>>but that is definitely not going to happen in 0.7.0.
>>
>>So here are some possibilities:
>>
>>1. For the first say 3 hops, the data is routed as normal, but is not
>>cached. This is determined by a flag on the request, which is randomly
>>turned off with a probability of 33%.
>>PRO: Provides some plausible deniability even on darknet.
>>CON: Doesn't work at all on really small darknets, so will need to be
>>turned off manually on such.
>>
>>2. Permanent, random routed tunnels for the first few hops. So, requests
>>initially go down the node's current tunnel. This is routed through a
>>few, randomly chosen (on each hop, so no premix), nodes. The tunnel is
>>changed infrequently. A node may have several tunnels, for performance,
>>but it will generally reduce your anonymity to send correlated requests
>>down different tunnels.
>>PRO: More plausible deniability; some level of defence against
>>correlation attacks even. But anon set is still relatively small.
>>CON: Number of tunnels performance/anonymity tradeoff.
>>CON: A few extra hops.
>>CON: Sometimes will get bad tunnels.
>>
>>Anyway, this seems best to me.