On Wed, Dec 07, 2005 at 10:44:32PM +0200, Jusa Saari wrote:
> On Tue, 06 Dec 2005 16:54:13 +0000, Matthew Toseland wrote:
> 
> > There is no real reason why we can't allow the client access to the block
> > URIs, if he sets a sufficiently high feedback reporting level.
> 
> The more you expose the node internals, the more the tools are going to
> rely on them staying the same, and the more likely they are going to break
> when those internals change. The more often everything breaks, the more
> likely people are to abandon Freenet.

Okay, so we need a non-key identifier. This could simply be the hash of the
URI.
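For example (a minimal sketch; SHA-256 and all the names here are purely
illustrative, not an API proposal):

    import java.security.MessageDigest;

    // Illustrative only: derive a stable, opaque identifier for a block
    // by hashing its URI, so tools never depend on the raw key format.
    public class BlockIdentifier {
        public static String idFor(String uri) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(uri.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest)
                hex.append(String.format("%02x", b & 0xff));
            return hex.toString();
        }
    }

Tools would track blocks by this opaque identifier, which stays stable
even if the underlying key format changes.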
> 
> It's bad practice to let tools know things like block size. You just know
> that someone is going to make a tool that handles individual blocks and
> allocates memory to them without checking for their size, and then starts
> crashing when the blocksize changes. And you just know that that
> particular badly coded tool will be the most popular one at the time of
> any such change, and consequently cause a huge backlash...

We cannot change the block size easily. We can change the segment size
very easily, though. In any case, if tools need these values, they would
be reported by FCP rather than hardcoded.
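For illustration, such a report could be a simple FCP message along these
lines (the message name and fields are invented for this example, not the
actual protocol):

    ExpectedSegmentParameters
    Identifier=myRequest-1
    BlockSize=32768
    DataBlocksPerSegment=128
    CheckBlocksPerSegment=64
    EndMessage

A tool that reads these values at runtime keeps working when the segment
size changes, instead of crashing on a hardcoded assumption.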
> 
> > Hence, when you do an insert, it will tell you when it starts inserting
> > each block, what that block's key is. Likewise on a request, if it fails,
> > it could tell you which segment failed, and which keys from that segment
> > were successfully fetched (the rest failed). Because of the way FEC works,
> > it will then be sufficient to reinsert any of the remaining keys in that
> > segment; in practice we would reinsert all of the remaining keys in the
> > segment for reliability reasons.
> > 
> > So:
> > Fetch knoppix_4.0.1.dvd.iso
> > Segment size = 128 data blocks, 64 check blocks. Segments 1-27,29-451
> > succeeded.
> > Segment 28 failed:
> > 192 blocks in segment.
> > 128 data blocks in segment; need 128 blocks. Fetched 117 blocks. <list of
> > 117 fetched blocks> Need 11 of the remainder: <list of 75 failed blocks>
> > 
> > The easiest way to implement the insert end would be simply for the client
> > to reinsert the entire segment. This would require no node support apart
> > from the above information; the client could simply chop that part of the
> > data, and insert it as a splitfile. A more complex option would be to
> > reinsert specified blocks only from a splitfile. I do not regard that as a
> > priority. But providing enough information to do the above is easy enough.
> 
> How large are the segments going to be? Hmm... with 32 kB blocks and
> 192 blocks per segment, that makes 6 megabytes. Not optimal, but
> acceptable. I still think that it should just say "failed from byte
> (start of failed segment) to (end of failed segment)", to hide the
> details of segmentation and blocks.

4MB of data + 2MB of check blocks. Maybe a bit bigger. Why report the
entire segment? I thought you wanted to minimize the number of blocks
that need reinserting?
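To make the arithmetic concrete (a sketch in Java, using the numbers from
the example above; 32 kB blocks assumed):

    int blockSize   = 32 * 1024;    // bytes per block
    int dataBlocks  = 128;          // 4MB of data per segment
    int checkBlocks = 64;           // 2MB of check blocks per segment
    int fetched     = 117;          // blocks successfully fetched

    int missing = (dataBlocks + checkBlocks) - fetched;  // 75 blocks
    int needed  = dataBlocks - fetched;                  // 11 blocks

    // With an erasure code, any 128 of the 192 blocks reconstruct the
    // segment, so in principle only 11 more blocks are required;
    // reinserting all 75 just adds redundancy.

Reporting only a byte range would force the client to reinsert the whole
6MB span even when 11 blocks would do.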
> 
> > I still don't agree with you on the need for insert on demand btw; a
> > working freenet would have massive resources, and work far better for
> > less-popular files than most P2Ps, due to having more places to download
> > them from. The problem is that all production freenets so far have had
> > serious load-versus-routing problems. Hopefully 0.7's algorithms will do
> > better, since they a) are based on tried and tested solutions, and b)
> > propagate overload back to the source (the client node).
> 
> Even if Freenet had limitless resources at its disposal, I, a single node
> operator, do not. Suppose that I want to make every public domain movie
> available from Freenet in case it gets censored (Night of the Living Dead,
> for example, could easily be)? Given that a good quality movie file is at
> least 700 MB, and given that there would be hundreds of such files, and
> given that most of them would be accessed very rarely, does it make more
> sense to upload them all or to simply upload a list of files and upload
> the files themselves when someone actually wants them?

It makes sense to upload all of them, and to offer reinsertion on demand
as a bonus by giving out your freemail address. In the long term, there
will be better ways to do insert on demand.
> 
> Also, please note that when Freenet grows and gains more resources, it
> also gets more requests, which means that most popular content (porn,
> likely) will get copied to more nodes to cope with the increased load, and
> consequently least popular content will still be pushed out.

No. If freenet is bigger, it will have more resources, not fewer. Yes, it
will have more demand. But there is no reason to think that semi-popular
content will survive *less well* than it does now. Of course, it may be
that freenet becomes more mainstream, in which case the overall
distribution of requests over different kinds of content would change...
(more celebrity porn, less dolphin porn :) ).
> 
> Freenet is an anonymizing cache system, not a permanent storage system;
> and consequently, making it as easy as possible to implement a permanent
> storage system (FTP over Freenet, in practice) on top of it is important.
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.