On Monday 19 January 2009 17:34, Robert Hailey wrote:
> 
> On Jan 17, 2009, at 9:19 AM, Matthew Toseland wrote:
> 
> > On Friday 16 January 2009 23:11, Robert Hailey wrote:
> >>
> >> On Jan 16, 2009, at 4:31 PM, Matthew Toseland wrote:
> >> Surely the added latency for 3 round trips at the high/request
> >> priority would not be bad, and they wouldn't be full round trips
> >> anyway because as the paths converge on the network they would be
> >> coalesced and not repeated.
> >
> > They could be, but if they all run in parallel, coalescing is a
> > problem (think loops). But I don't think this is going to do good
> > things for routing. Medium term we will have our peers' bloom filters
> > and will hopefully just fetch from one hop.
> 
> There would have to be logic to handle pre-requests coming from a
> different node for the same request-id, but this almost exactly mirrors
> the chkhandler and recently-failed-table. If all the requests have the
> same request-id, the handling of parallel requests is very
> straightforward. To me the more interesting question is 'who-do-we-ask'
> (the three closest nodes? two close, one far?).

No. We do not coalesce requests any more, because of loops.
> 
> With only the current transfer mechanisms, getting the data in one hop  
> does not help latency (it is transferred just like the other requests  
> from bunches of other nodes). It may be over the slowest link. At best  
> it reduces latency to the average chk transfer time, no?

Yes, it helps both throughput and latency: avoiding multiple hops reduces
latency and increases global throughput, and in many cases the higher success
rate probably avoids some retries as well.
> 
> >> One simple workaround to re-add the given security property you
> >> mention would be to translate some number of pre-requests into actual
> >> requests (perhaps if the latency is low enough, or just a percentage).
> >> Although, I'm not sure I totally understand the attack you mention,
> >> because a pre-request coming from a node would only indicate that the
> >> data 'could' be fetched through that node...
> >
> > I don't see why that would help. If we know that a node has the data,
> > or is close to the data (which is closely related), we can still kill
> > it. Granted this is most powerful on opennet, where with path folding
> > we will often have the opportunity to connect to the original data
> > source...
> 
> You're right, it does not help nearly as much as the present system;
> but (if for some reason we wanted to) we could even have an
> extremely-low-priority queue which would request all pre-requests we
> have ever received (one at a time). I only meant to demonstrate that
> the mechanism can be preserved, at the cost of throughput (instead of
> latency) if we desire.
> 
> >> One extra consideration is the time it takes to get the low-latency
> >> request back (as opposed to just the latency value), for datastore'd
> >> requests. A security delay would have to be added there too, and would
> >> only negligibly affect the overall latency because it would only be
> >> seen at the end of the chain (at the datastore).
> >
> > No. We have considered this. Any delays we add, while they may add
> > some uncertainty for a single request, will be known by the attacker,
> > and timing attacks are still viable statistically speaking. Really,
> > deniability on what is in your datastore doesn't work - it's a myth,
> > timing attacks are just too easy.
> 
> I disagree (but have not participated in, nor read, those discussions);
> the purpose would be to simulate even one link that the attacker does
> not know about. Unless the attacker is powerful enough to prove no such
> connection exists, surely intelligently placed delays are worthwhile?

The attacker knows about such mechanisms. While you may be able to introduce
some deniability for a single request (and even that will sometimes break,
since there must be a random element), in aggregate it becomes obvious that an
artificial delay is the only delay involved.
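
To illustrate the statistical point, here's a toy simulation (not Fred code;
every constant and distribution below is invented purely for illustration).
Single samples from a delayed local hit and a genuine one-hop fetch can
overlap, but an aggregate statistic as crude as the minimum over many probes
separates them cleanly:

import java.util.Random;

// Toy model: does a random artificial delay hide a local datastore hit?
public class TimingAttackDemo {
    static final Random RNG = new Random(42);

    // Local hit: ~5ms processing plus a uniform artificial delay of 0-100ms.
    static double localHitMillis() {
        return 5 + RNG.nextDouble() * 100;
    }

    // Genuine remote fetch: ~5ms processing, ~80ms hop, plus transfer jitter.
    static double remoteFetchMillis() {
        return 5 + 80 + Math.abs(RNG.nextGaussian()) * 40;
    }

    public static void main(String[] args) {
        double localMin = Double.MAX_VALUE, remoteMin = Double.MAX_VALUE;
        for (int i = 0; i < 10000; i++) {
            localMin = Math.min(localMin, localHitMillis());
            remoteMin = Math.min(remoteMin, remoteFetchMillis());
        }
        // Over many probes the minimum betrays the local store: the artificial
        // delay's floor is ~5ms, while the real hop can never go below ~85ms.
        System.out.printf("min local=%.1fms  min remote=%.1fms%n",
                localMin, remoteMin);
    }
}

This is the sense in which a single request may be deniable but the aggregate
is not: the attacker just probes until the floor of the distribution shows.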
> 
> > Hence it is essential that we separate the contents of your datastore
> > from your actual requests (which we will do in 0.9). However, the
> > property that finding where data is necessarily propagates it is very
> > useful from the point of view of resisting censorship. Which does open
> > the question of whether bloom filters are a good idea ...
> 
> That's a different topic... and to me routing by bloom filters looks a
> lot like NG-routing.

No. NG-routing was based on the idea of modelling, for each node, the
probability of success and the likely transfer time as a function of the key
being requested. One problem with it was that, like classic Freenet,
specialisation was expected to emerge rather than being designed in. This
means it takes a long time for a network to settle... in fact, what happened
was that the network tended to route load rather than keys, with routing
dominated by finding non-overloaded nodes.
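
For what it's worth, the core of NG-routing was roughly the following per-peer
estimate (a from-memory sketch with invented names, not the actual Fred code):
route to whichever peer minimises the expected time to retrieve the key,
combining the learned success probability with the learned transfer and
failure times.

// Hedged sketch of the NG-routing idea described above; names are invented.
final class NgrSketch {
    interface PeerEstimator {
        // All three are learned per peer from request history, keyed on
        // where the requested key falls in the keyspace.
        double pSuccess(double keyLocation);
        double expectedTransferMillis(double keyLocation);
        double expectedFailureMillis(double keyLocation);
    }

    // Expected time if we route to this peer: the success branch's transfer
    // time, plus the failure branch's time and a retry penalty, weighted by
    // the estimated probability of each outcome.
    static double estimatedCost(PeerEstimator peer, double keyLocation,
                                double retryPenaltyMillis) {
        double p = peer.pSuccess(keyLocation);
        return p * peer.expectedTransferMillis(keyLocation)
             + (1 - p) * (peer.expectedFailureMillis(keyLocation)
                          + retryPenaltyMillis);
    }
}

The failure mode described above falls out of this: when load dominates the
estimates, the "best" peer is simply the least overloaded one, regardless of
the key.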
> 
> -
> 
> Back to the best-of-both-worlds (BOBW)... I think that we already  
> agree that a priority transfer system is needed. What is your plan for  
> how to accept realtime requests?

I believe I have explained this? Requests will have a flag. Realtime requests
will have a higher message priority, especially for transfers. We will have
two separate bandwidth liability limiters (these determine how many requests
to accept in parallel, by assuming they will all succeed and estimating how
long they will take to transfer): one for bulk requests, with a long target
time (probably more than the current 90 seconds), and one for realtime
requests, with a much shorter target time. Realtime requests will also have a
short transfer timeout, although we may keep turtling, as often other nodes
want the data. Hence we accept a small number of realtime requests, and they
should complete quickly; we accept a large number of bulk requests, and they
can take longer.
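
To make that concrete, here is a minimal sketch of the two limiters (invented
names and numbers, not actual Fred code): each limiter accepts a request only
if, pessimistically assuming everything already in flight succeeds, all of it
can still be transferred within that class's target time at our share of the
output bandwidth.

// Hedged sketch of the two bandwidth liability limiters described above.
// All names and constants are invented for illustration.
final class BandwidthLiabilityLimiter {
    private final long targetMillis;     // how long transfers may take
    private final double bytesPerMilli;  // our share of output bandwidth
    private long committedBytes = 0;     // pessimistic: assume all succeed

    BandwidthLiabilityLimiter(long targetMillis, double bytesPerMilli) {
        this.targetMillis = targetMillis;
        this.bytesPerMilli = bytesPerMilli;
    }

    // Accept only if everything already committed, plus this request, could
    // still be transferred within the target time.
    synchronized boolean tryAccept(long expectedBytes) {
        long limit = (long) (targetMillis * bytesPerMilli);
        if (committedBytes + expectedBytes > limit) return false;
        committedBytes += expectedBytes;
        return true;
    }

    synchronized void completed(long bytes) { committedBytes -= bytes; }
}

final class RequestAdmission {
    static final long CHK_SIZE = 32 * 1024; // one CHK data block

    // Bulk: long target time, so many requests in flight at once.
    final BandwidthLiabilityLimiter bulk =
        new BandwidthLiabilityLimiter(120_000, 10.0);
    // Realtime: short target time, so only a few, but they finish fast.
    final BandwidthLiabilityLimiter realtime =
        new BandwidthLiabilityLimiter(5_000, 10.0);

    boolean accept(boolean realtimeFlag) {
        return (realtimeFlag ? realtime : bulk).tryAccept(CHK_SIZE);
    }
}

The short target time is what caps the realtime class at a handful of requests
in flight, while the long target time lets the bulk class run many in parallel
over the same link.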

What have I left out?