Matthew Toseland wrote:

>>incremental verification
>>
>
>Not sure what you mean.
>

It should be possible for intermediate nodes to verify each block of the file before forwarding it. That's possible if you insert every block under a different key, but that requires a huge number of inserts and requests, and it also exposes you to the predecessor attack. Hash trees allow you to retrieve an entire file with a single request, verify it incrementally, and request a range if the connection breaks.
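To make the incremental-verification point concrete, here is a minimal hash-tree (Merkle-tree) sketch. This is my own illustration, not Freenet's actual wire format: a node that knows only the root hash can check each block as it arrives, using just the sibling hashes on the block's path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a binary hash tree over the leaf hashes."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to check leaf `index` against the root."""
    proof, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                  # sibling differs only in the low bit
        proof.append((index % 2, level[sibling]))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_block(block, proof, root):
    """An intermediate node can check one block without seeing the rest."""
    digest = h(block)
    for is_right, sibling in proof:
        digest = h(sibling + digest) if is_right else h(digest + sibling)
    return digest == root

blocks = [b"aaaa", b"bbbb", b"cccc", b"dddd", b"eeee"]
leaves = [h(b) for b in blocks]
root = merkle_root(leaves)
assert all(verify_block(blocks[i], merkle_proof(leaves, i), root)
           for i in range(len(blocks)))
```

The proof for a block is logarithmic in the file size, which is why a forwarding node only needs to keep minimal state.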
>>plausible deniability
>>
>
>I.e. they are encrypted using a key which is kept in the URI.
>

Exactly, and the key isn't included in the request, so intermediate nodes can verify the file but they can't read it.

>Hrrrm. Well there are two ways to do this:
>1. The CHK is the CHK of a list of sub-CHKs to fetch and reassemble
>(traditional Freenet way).
>2. The CHK can be resolved to a list of sub-CHKs. These are then fetched
>and if, when combined, they produce the right data, then we have
>success; if not, and the sub-blocks verify, we have to discredit the
>manifest somehow.
>
>The latter is what you are proposing, right?

I don't think so... what I'm proposing is that the file should be hashed and encrypted with its hash, as with a CHK, but then a hash tree of the encrypted file is created, and the root hash of the hash tree becomes the file's identifier. This allows nodes that don't know the decryption key to retrieve and verify the file, one block at a time, while keeping minimal state. There's no need to discredit manifests, as far as I can see.

>You might want some redundancy. We use onion FEC codes (which are based
>on Vandermonde and ultimately are perfectly space efficient Reed-Solomon
>codes).

Publishers and readers can use FEC, but I don't think the intermediate nodes need to know about it.

>Also you need to include the hash of the name (after encrypting it with
>the symmetric key).

Yup, sorry about that.

>And do you need to include the data needed for
>decryption in the actual request? In the hyperlink, sure, but don't tell
>the nodes/servers, or you lose plausible deniability.

Right, the decryption data should be left out of the request, but the verification data must be included.

>Right. These are manifests and ZIP file manifests.
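The proposed scheme (hash the file, encrypt it with its own hash, then build a hash tree over the ciphertext) might be sketched as follows. The XOR keystream here is a toy stand-in for a real symmetric cipher, and the block size is arbitrary; only the overall structure is what the paragraph above describes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream from SHA-256; stands in for a real cipher."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = h(key + (i // 32).to_bytes(8, "big"))
        out.extend(a ^ b for a, b in zip(data[i:i + 32], pad))
    return bytes(out)

def tree_root(leaves):
    """Root of a binary hash tree (last node duplicated on odd levels)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def publish(plaintext: bytes, block_size: int = 32):
    decryption_key = h(plaintext)        # goes in the hyperlink, not the request
    ciphertext = xor_keystream(decryption_key, plaintext)
    blocks = [ciphertext[i:i + block_size]
              for i in range(0, len(ciphertext), block_size)]
    identifier = tree_root([h(b) for b in blocks])   # what nodes see and verify
    return identifier, decryption_key, blocks

identifier, key, blocks = publish(b"some document body, padded as needed....")
# Nodes holding only `identifier` can verify every block's place in the tree;
# only a reader holding `key` can recover the plaintext:
recovered = xor_keystream(key, b"".join(blocks))
assert recovered == b"some document body, padded as needed...."
```

Because the tree is built over the ciphertext, verification never requires the decryption key, which is what preserves plausible deniability for the forwarding nodes.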
>Which means, in the
>first case, a big block of metadata that maps names to CHKs, and in the
>latter case, a ZIP file with a load of files in it and some metadata
>indicating content types (which are vital in an anonymous system IMHO).

I hadn't thought about content types - I suppose the network representation of a file should start with its content type.

>>From a single entry point, the entire web can be browsed without
>>needing to contact any specific server. This could be an advantage for
>>anonymity, because it prevents long-term intersection attacks.
>>
>
>How so?

The publisher doesn't have to be online for the document to be available, unlike a Tor rendezvous point, for example. Obviously Freenet has the same advantage.

Cheers,
Michael
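One way the "representation starts with its content type" idea could look on the wire is a length-tagged prefix. The field sizes and encoding here are my own guesses for illustration, not anything specified in the thread:

```python
import struct

def encode_file(content_type: str, body: bytes) -> bytes:
    """Prefix the body with a length-tagged content type."""
    ct = content_type.encode("utf-8")
    return struct.pack(">H", len(ct)) + ct + body

def decode_file(blob: bytes):
    """Recover (content_type, body) from the prefixed representation."""
    (ct_len,) = struct.unpack(">H", blob[:2])
    return blob[2:2 + ct_len].decode("utf-8"), blob[2 + ct_len:]

blob = encode_file("text/html", b"<html>hi</html>")
assert decode_file(blob) == ("text/html", b"<html>hi</html>")
```

Carrying the type inside the (encrypted, verified) representation means a reader can render the file safely without trusting any out-of-band metadata.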
