> > And clearly you can't cache just a byte range. The point of using byte
> > range requests rather than splitting files is that the files are atomic.
> > Thus parts of files won't fall out separately from the whole file.
>
> Yes, but in order to have caching work, you have to move the entire file,
> not just a byte range, as you suggest below.

I'm suggesting that you move the entire file, not just the byte range.
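
Concretely, the range-first transfer quoted below would look something
like this. This is only a rough Java sketch; RangeFirstTransfer,
'source', 'cache' and the stream plumbing are names I'm making up for
illustration, not Freenet's actual interfaces:

import java.io.*;

class RangeFirstTransfer {
    // 'source' stands in for wherever the full file comes from and
    // 'cache' for the local store; both names are placeholders.
    void transfer(RandomAccessFile source, RandomAccessFile cache,
                  long start, long len, OutputStream client)
            throws IOException {
        long total = source.length();
        // 1. The requested range first, so the requester gets early data.
        copy(source, cache, start, len, client);
        // 2. Then the head and the tail, cache-only, so the node never
        //    ends up holding a bare byte range.
        copy(source, cache, 0, start, null);
        copy(source, cache, start + len, total - (start + len), null);
    }

    private void copy(RandomAccessFile source, RandomAccessFile cache,
                      long off, long len, OutputStream client)
            throws IOException {
        byte[] buf = new byte[8192];
        source.seek(off);
        cache.seek(off);
        for (long rem = len; rem > 0; ) {
            int n = source.read(buf, 0, (int) Math.min(buf.length, rem));
            if (n < 0) break;
            cache.write(buf, 0, n);           // whole file lands in cache
            if (client != null) client.write(buf, 0, n);
            rem -= n;
        }
    }
}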

> > The solution, it would seem, would be to cache the whole file whenever a
> > byte range is requested. This is tricky since what you basically need to
> > do is transfer the requested byte range first and then transfer the rest
> > of the file.
> That's not going to happen.  Doing this allows people to get early access
> to a byte range in a file and then terminate a request to prevent the file
> from spreading.

Yeah, the whole point is to get early access to a byte range. Why is this
bad? Because you can keep requesting byte ranges until you have the whole
file without ever finishing a request, thus probing the network without
changing its state. Why not just have nodes continue transferring after
an aborted request? Because then an attacker can fill the network with
useless traffic with little effort. So the solution, it would seem, would
be to have nodes probabilistically continue aborted requests.
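
Roughly like this, in a made-up Java sketch (not real Freenet code;
CONTINUE_PROBABILITY is a knob I'm inventing here):

import java.util.Random;

class AbortPolicy {
    private static final double CONTINUE_PROBABILITY = 0.5; // tunable
    private final Random rng = new Random();

    // Called when the requester drops the connection mid-transfer.
    // With some probability the node finishes the transfer anyway, so
    // an aborted range request sometimes spreads the file after all,
    // and probing is no longer free of side effects.
    boolean shouldContinueAfterAbort() {
        return rng.nextDouble() < CONTINUE_PROBABILITY;
    }
}

Set the probability to 1 and you're back to the flooding attack; set it
to 0 and probing is free. Anywhere in between, both attacks cost the
attacker something.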

> This is unnecessary complexity.  If the problem that you're trying to avoid
> is files dropping out of Freenet, then stop dancing around it and deal
> with ordinary split files, and fix the real problem.

I'm not dancing around the problem. Split files inherently suck. They suck
because the parts can fall out independently, when what we want is for
them all to fall out at the same time. They also suck because they
compromise deniability. No one seems to have any ideas about how to solve
either of these problems.
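
To put a number on the first problem: if each part independently
survives some period with probability p, a file split into n parts
survives with probability p^n. A throwaway Java illustration (the 0.95
per-part figure is just an assumption):

class SplitFileSurvival {
    public static void main(String[] args) {
        double p = 0.95; // assumed per-part survival rate
        for (int n : new int[] {1, 10, 50, 100}) {
            System.out.printf("parts=%3d  whole-file survival=%.4f%n",
                              n, Math.pow(p, n));
        }
    }
}

At a hundred parts the file is almost certain to lose something, even
though every individual part looks safe on its own.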
