Hi.

Is there such a thing as a chunk size in the way Freenet deals with storing 
and transferring data?

By chunk size, I mean the minimal unit of data that is worked with. For 
example, for disk access, this would be the filesystem block (allocation 
unit) size.

I am trying to determine the optimal size to split data into. The size I 
am looking for is the one where (file size) == (block size), so that if a 
block gets lost, the whole file (not just a part of it) is gone.
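
To make that concrete, here is roughly what I have in mind, in Python. The 
32 KiB BLOCK_SIZE below is a made-up placeholder; the real figure is 
exactly what I am asking about:

    BLOCK_SIZE = 32 * 1024  # placeholder, NOT Freenet's actual unit

    def split_into_block_sized_files(data: bytes) -> list[bytes]:
        """Cut the data so that each piece occupies exactly one block
        (the last piece is padded up to the block size), meaning that
        losing a block loses exactly one whole file and nothing else."""
        pieces = [data[i:i + BLOCK_SIZE]
                  for i in range(0, len(data), BLOCK_SIZE)]
        if pieces:
            pieces[-1] = pieces[-1].ljust(BLOCK_SIZE, b"\0")
        return pieces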

The reason for this is that I am trying to design a database application 
that uses Freenet as the storage medium (yes, I know about FreeSQL, and it 
doesn't do what I want in the way I want it done). Files going missing are 
an obvious problem that needs to be tackled. I'd like to know the block 
size so that I can implement redundancy padding in the data, exploiting 
the overhead the block size produces whenever a single item of data is 
smaller than the block that contains it.
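
Roughly, the padding idea looks like this (again with a placeholder block 
size, and the crudest possible redundancy, repeated copies of the item, 
just to show where the slack space would be used):

    BLOCK_SIZE = 32 * 1024  # placeholder, NOT Freenet's actual unit

    def pad_block_with_redundancy(item: bytes) -> bytes:
        """Fill the space the block wastes anyway with redundant data.
        Here that is just repeated copies of the item; a real scheme
        could store parity for other items instead."""
        if not item or len(item) > BLOCK_SIZE:
            raise ValueError("item must be non-empty and fit in one block")
        block = bytearray(item)
        while len(block) + len(item) <= BLOCK_SIZE:
            block.extend(item)                    # spare copy in the slack
        block.extend(b"\0" * (BLOCK_SIZE - len(block)))  # zero-fill the rest
        return bytes(block)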

This could be optimized away at run time so that it makes no impact on 
execution speed (e.g. by skipping downloads of blocks that we can 
reconstruct from already downloaded segments).
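
In other words, something along these lines; have_blocks, 
reconstruct_from() and download() are hypothetical pieces of my own 
application here, not Freenet calls:

    def fetch_item(key, have_blocks, reconstruct_from, download):
        """Skip the network round trip whenever the item can already be
        rebuilt from redundant copies sitting in blocks we have fetched."""
        if key in have_blocks:
            return reconstruct_from(have_blocks[key])
        return download(key)    # otherwise fetch the block as usual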

Thanks.

Gordan

