On Dec 20, 2007 9:01 PM, Matthew Toseland <toad at amphibian.dyndns.org> wrote:
> On Friday 21 December 2007 01:08, cbreak wrote:
> > Matthew Toseland wrote:
> > >
> > > That is how it already works. There is nothing wrong with reusing
> > > previously inserted files; the best way to do it is probably to reinsert
> > > only the top part of the metadata, inside the container. (We *don't* do
> > > that.) Referring to files via the previous edition is dodgy IMHO, as it
> > > requires that the previous edition not fall out, so jSite's compromise of
> > > manually inserting big files and then feeding in their CHKs is probably
> > > pretty close, if inconvenient. Always inserting every file as a CHK, as
> > > pyFreenet does IIRC, is bad, because it avoids opportunities for
> > > containerising, which can save a lot of space.
> >
> > Downloading a large file, such as a Linux ISO, can fail. The usual thing
> > to do then is to request a reinsert from the original inserter, and then
> > continue the download. If every insert is randomly encrypted with a
> > new key, this will not work. The new file will consist of completely new
> > blocks, and the file would have to be re-downloaded, even if only one
> > block is missing.
>
> Yep, and it sucks for ULPRs too. But we make a major class of attacks *much*
> harder (against inserts) if we make it impossible for an attacker to
> correlate keys before we announce the key. I was hoping somebody would
> propose some practical middle-ground solution ...
>
> _______________________________________________
> Devl mailing list
> Devl at freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
>
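The failure mode cbreak describes comes down to key determinism. A rough sketch of content-hash keying, in Python, shows why CHK-style inserts are repeatable (the XOR "encryption" here is a stand-in for illustration only, not Freenet's actual cipher or block format):

```python
import hashlib

def chk_insert(plaintext: bytes) -> tuple[bytes, bytes]:
    # The encryption key is derived from the plaintext itself,
    # so the same data always produces the same ciphertext and key.
    crypto_key = hashlib.sha256(plaintext).digest()
    # Toy keystream cipher -- purely illustrative, NOT Freenet's scheme.
    ciphertext = bytes(b ^ crypto_key[i % len(crypto_key)]
                       for i, b in enumerate(plaintext))
    # The routing key is a hash of the encrypted data.
    routing_key = hashlib.sha256(ciphertext).digest()
    return routing_key, ciphertext

# Two independent inserts of the same file yield identical keys and blocks,
# so a reinsert can heal a download that is missing a few blocks.
k1, c1 = chk_insert(b"example linux iso data")
k2, c2 = chk_insert(b"example linux iso data")
assert k1 == k2 and c1 == c2
```

A randomly keyed insert would replace `crypto_key` with fresh random bytes each time, giving entirely new blocks on every insert, which is exactly why the reinsert-and-resume workflow breaks.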
Can't we just insert the upper levels of metadata last and call it good for now?

--
I may disagree with what you have to say, but I shall defend, to the death, your right to say it. - Voltaire
Those who would give up Liberty, to purchase temporary Safety, deserve neither Liberty nor Safety. - Ben Franklin
