On Sun, May 04, 2003 at 11:29:27PM -0400, Edward J. Huff wrote:
> On Sat, 2003-05-03 at 19:35, Toad wrote:
> > I am not sure that we could safely allow a "get file closest to
> > this key" command... it might allow some kinds of probing/mapping
> > that might lead to attacks we would want to avoid. On the other
> > hand, it is an interesting idea.
>
> Thanks for taking the time to read the idea. I think we might
> not need that "get closest file" command. Instead, the node just
> picks some files it happens to have available when it needs to
> entangle the data.
That's even worse. We do not want to be able to connect the file to the
origin node in any way. We would need a "fetch random file in this size
range" command. We could implement this, but it would then be possible
to ask a given node at HTL 0 for a random file from its datastore... it
might forward the request, but we could tell from timing. I think this
is going to be very hard to do safely, although eventually we need to
deal with the timing problem for other requests anyway. We could exempt
it from HTL, just have a per-hop request termination probability, and
send the file back through a mixmaster chain... after we implement
premix routing...

> ---
>
> I'm thinking of alternatives. Maybe an independent system of
> nodes which do almost nothing but exchange fixed-length files
> in an attempt to keep them sorted in order by the hash key.
> Nodes try to replicate the data by requesting random keys from
> other nodes, checking that the resulting file matches the hash,
> mangling the files (XORing them together, encrypting them, etc.),
> and re-inserting the results. Nodes also accumulate entropy and
> insert the resulting random files, together with the claim that
> the files are random. Or they take a file which claims to be
> random, encrypt it with a really random key, forget the key,
> insert the result, and certify it as encrypted with a forgotten
> random key. Finally, somehow the nodes distribute information
> about how files were mangled.

CHK URIs include decryption keys. We would be using a different random
decryption key, because we do not know the original decryption key when
we get the file.

> This activity goes on continually in the background, so that
> the "real" work can go on unseen. Nodes accept fixed-length
> files from external sources, and satisfy requests for files
> with a specified hash, or for certified random files close to
> a specified hash. These activities need some clever schemes
> to anonymize the requests in the presence of Trojan nodes.
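The XOR-entanglement idea described above can be sketched in a few lines. This is a minimal illustration only, not anything from the Freenet codebase: `BLOCK_SIZE` and the helper names are assumptions, and a real deployment would use an all-or-nothing transform rather than bare XOR.

```python
import hashlib
import secrets

BLOCK_SIZE = 32 * 1024  # hypothetical fixed file length


def xor_mangle(a: bytes, b: bytes) -> bytes:
    """Entangle two fixed-length files by XOR. The result reveals
    neither input on its own, but the result plus either input
    reconstructs the other."""
    assert len(a) == len(b) == BLOCK_SIZE
    return bytes(x ^ y for x, y in zip(a, b))


def content_key(data: bytes) -> str:
    """Identify a file by the hash of its contents (CHK-style)."""
    return hashlib.sha256(data).hexdigest()


# Two fixed-length files the node happens to have available:
f1 = secrets.token_bytes(BLOCK_SIZE)
f2 = secrets.token_bytes(BLOCK_SIZE)

# Mangle them and re-insert the result under its own content key.
mangled = xor_mangle(f1, f2)
mangled_key = content_key(mangled)

# Reconstruction: a request for a missing file can be satisfied from
# the mangled copy plus the other input, without disclosing which
# files were used.
assert xor_mangle(mangled, f2) == f1
assert xor_mangle(mangled, f1) == f2
```

The "distribute information about how files were mangled" step would amount to publishing, for `mangled_key`, the content keys of `f1` and `f2`.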
This seems excessive; there is plenty of content already...

> Files would be deleted at random. Nodes would accept
> requests to delete any given key, but would only do so at a
> limited rate. Files which are frequently requested would
> be more likely to be replicated by mangling. Requests for
> missing files would be satisfied by reconstructing the
> file if possible, without disclosing the files used for
> the reconstruction.
>
> I suspect that this limited feature set would be very useful
> for the anonymous distribution of large files, given an
> appropriate application layer. It should also be fairly
> easy to implement efficiently. The nodes would attempt
> to find out about _all_ other nodes. Every node would
> give its median key on request. When inserting a file, the
> originating node would send it directly to the node which
> advertises the closest median. Reputations would be
> maintained based on the ability to produce files which
> match the specified hash. When a file is originally
> created, the originating node calculates several
> challenge-response values and saves them. These are checked
> later, and the results contribute to the reputation of the
> destination node.

Surely it wouldn't scale?

> I can't predict what would happen in court, but the fact
> that these nodes never deliver anything but apparently
> random bits ought to count for something. And so should the
> fact that they will delete any key on request (although this
> doesn't necessarily prevent subsequent delivery of the
> corresponding file).
>
> --
> Ed Huff

_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
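The closest-median routing and challenge-response reputation scheme quoted above can be sketched as follows. This is a toy in-memory model under stated assumptions: SHA-256 hex digests as keys, a flat list of known nodes, and illustrative names (`Node`, `insert`, `make_challenge`) that come from this sketch, not from any real implementation.

```python
import hashlib
import secrets


class Node:
    """Toy in-memory node; structure is illustrative only."""

    def __init__(self, name: str):
        self.name = name
        self.store = {}  # hex content key -> file bytes
        self.reputation = 0

    def median_key(self) -> str:
        """The key this node advertises: the median of its sorted store.
        An empty node advertises the midpoint of the key space."""
        keys = sorted(self.store)
        return keys[len(keys) // 2] if keys else "8" + "0" * 63


def insert(nodes, data: bytes):
    """Send a file directly to the node advertising the closest median."""
    key = hashlib.sha256(data).hexdigest()
    target = min(nodes,
                 key=lambda n: abs(int(n.median_key(), 16) - int(key, 16)))
    target.store[key] = data
    return key, target


def make_challenge(data: bytes):
    """The originator precomputes a challenge-response pair; only the
    nonce and expected digest need to be kept, not the file itself."""
    nonce = secrets.token_bytes(16)
    return nonce, hashlib.sha256(nonce + data).hexdigest()


def respond(node: Node, key: str, nonce: bytes):
    """A node proves it still holds the file matching `key`: it cannot
    answer correctly without the actual bytes."""
    data = node.store.get(key)
    return hashlib.sha256(nonce + data).hexdigest() if data else None


nodes = [Node("a"), Node("b"), Node("c")]
payload = secrets.token_bytes(1024)
nonce, expected = make_challenge(payload)
key, holder = insert(nodes, payload)

# Later: check the challenge and adjust the holder's reputation.
if respond(holder, key, nonce) == expected:
    holder.reputation += 1
```

The scaling worry raised above is visible here: `insert` assumes the originator knows every node's advertised median, which is the "find out about _all_ other nodes" requirement that a large network could not meet.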
