Ian Clarke wrote:
> Rather than simply moving data straight to the top of the datastore
> when it is re-requested, it is moved up the queue an amount inversely
> proportional to its size. More accurately, if the datastore is of size
> n (and here I refer to the portion of the datastore containing actual
> data, not just references), a piece of data of size p should be moved
> up by nq/p, where q is the size of the smallest piece of data. If
> doing this results in the data being moved up beyond the top of the
> queue, then it is placed at the top of the queue. This means that the
> smallest piece of data (unless I have fscked up the math) will always
> be moved straight to the top of the datastore, but other data will
> have to work harder in proportion to its size (so a piece of data
> twice the size of the smallest piece of data will have to get 2 hits
> to get to the top of the queue if it starts at the bottom).
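The quoted rule can be sketched in a few lines; this is a minimal illustration only, with the store modelled as byte offsets from the bottom of the queue, and the store size n, function names, and file sizes all chosen for demonstration (none of them come from the actual Freenet code):

```python
def promote(x, n, p, q):
    """One re-request: move a file of size p, currently at byte-offset x
    from the bottom of the queue, up by n*q/p, capped at the top (x = n)."""
    return min(n, x + n * q / p)

def hits_to_top(n, p, q):
    """Re-requests needed to climb from the bottom (x = 0) to the top
    (x = n); algebraically this works out to ceil(p/q)."""
    hits, x = 0, 0.0
    while x < n:
        x = promote(x, n, p, q)
        hits += 1
    return hits

KB, MB = 1024, 1024 * 1024
n = 100 * MB  # illustrative store size

print(hits_to_top(n, 1 * MB, 100 * KB))    # smallest file 100 KB -> 11 hits
print(hits_to_top(n, 1 * MB, 1 * KB))      # smallest file 1 KB   -> 1024 hits
print(hits_to_top(n, 100 * KB, 100 * KB))  # smallest file itself -> 1 hit
```

Note that the store size n cancels out: the number of hits needed depends only on the ratio p/q, which is why the scheme is so sensitive to the size of the smallest file in the store.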
The math looks good to me. One addition: newly inserted files should be placed at the bottom of the store and immediately moved up by the same formula, i.e. credited with a single hit (since exactly one person wants the data so far).

However, I fear this is vulnerable to inserts of very small files. Now that I think of it, so is the scheme above. If the smallest file in the store is 100 KB and I have inserted a 1 MB file, it needs about 10 requests to reach the top of the store. But if somebody then inserts a 1 KB file, my file suddenly needs 1024 requests to move the same distance! This would sharply decrease the effective worth of my file.

Alternate proposal: change the formula to nq/(2p), where q is now the _average size_ of all files in this node. This means a file of average size would need two hits to move all the way up, but I think this approach is less vulnerable to the problem above.

Comments?

Philipp
_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
