Ian,

Good stuff.
I can see some potential problems, though (which may have been
addressed already, or I may not be understanding things correctly):

1. Presumably we want old versions of the documents to expire. If
   document 1 keeps getting requested, then the network will think
   it's popular, and it'll never expire.

2. Freenet doesn't guarantee that you'll be able to find a document,
   so the algorithm you describe might sometimes fail, especially if
   some old revisions have expired and others haven't.

I quite like the timestamp option, because it's guaranteed to work.
All nodes could be programmed to clear out anything that has expired.
The biggest drawback I can see is that it requires foresight. And
it's not much use if you want to keep your traffic to a minimum to
reduce the chance that someone will discover that you are the
anonymous poster.

Steve

On Tue, 15 Aug 2000, Ian Clarke wrote:
> One of the more useful features of subspaces is that they can be used
> as a form of document updating mechanism (in the absence of a real
> document updating mechanism, which we probably won't see until 0.5).
>
> There are two approaches I can see to this:
>
> Consider creating a subspace and inserting your document into it,
> calling it "1". Let's say that whenever you wish to update the
> document, you insert the updated version and call it "2".
>
> Now, the obvious way to find the latest version of the document is to
> first request "1", then "2", and so on, until you attempt to request
> a non-existent document. You can then assume that the last document
> you requested was the latest incarnation. This, however, is rather
> inefficient.
>
> A more efficient solution is to start with the last known revision of
> the document; if this is your first access, start with 1. Test
> whether this document exists; if not, try the next one until you find
> one that does. Take this to be your starting point and call it S.
> Next you try requesting S*2, then S*4, then S*8, doubling each time,
> until your request fails, say at point P. You now know that the
> latest version of the document is between S and P. You try requesting
> the document at (P+S)/2; if this exists, your new S becomes (P+S)/2,
> if not, your new P becomes (P+S)/2. Through this mechanism you can
> rapidly "home in" on the correct document. So, for example, for a
> document with 20 revisions, the following documents would be
> requested in this order:
>
> 1, 2, 4, 8, 16, 32, 24, 20, 22, 21
>
> OK, 10 requests sounds like a lot; however, recall that future
> requests for this document will know that there are at least 20
> revisions, and this will speed things up dramatically. This algorithm
> also scales excellently (O(log n) requests).
>
> Another approach is to employ a timestamp, and insert a document
> periodically (perhaps daily for a Freenet equivalent of
> http://freenetproject.net/, maybe hourly for a news site). You could
> then be sure of getting a version that is at most a day, or an hour,
> old. The disadvantage is that the publisher of the document must be
> in a position to re-insert the document regularly, although a robust
> client could use a mechanism similar to that described above to find
> the most recent document should the current document not be inserted
> in time. This means that if such a site is shut down, users can at
> least obtain the most recent version of the site.
>
> Just some thoughts,
>
> Ian.
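For what it's worth, here is a minimal sketch of the doubling-then-bisection
search described above, in Java. The DocumentStore interface and its
exists() method are stand-ins for "request revision N from the subspace and
see whether the fetch succeeds"; they are assumptions for illustration, not
a real Freenet client API.

    public class RevisionSearch {

        /** Stand-in for "request revision n and see if it comes back";
         *  an assumption for this sketch, not a Freenet interface. */
        interface DocumentStore {
            boolean exists(int revision);
        }

        /** Returns the highest revision that exists, starting the scan
         *  from the last revision we already know about (1 on a first
         *  access). Assumes some revision at or after the starting
         *  point exists, as in the description above. */
        static int latestRevision(DocumentStore store, int lastKnown) {
            // Walk forward until we hit a revision that still exists
            // (earlier ones may have expired).
            int s = Math.max(lastKnown, 1);
            while (!store.exists(s)) {
                s++;
            }
            // Double until a request fails; that failure bounds the search.
            int p = s * 2;
            while (store.exists(p)) {
                s = p;
                p *= 2;
            }
            // Bisect between the last success s and the first failure p.
            while (p - s > 1) {
                int mid = (s + p) / 2;
                if (store.exists(mid)) {
                    s = mid;
                } else {
                    p = mid;
                }
            }
            return s;
        }
    }

For a document with 20 revisions and no prior knowledge, this issues exactly
the sequence 1, 2, 4, 8, 16, 32, 24, 20, 22, 21 from the example. Note that,
per point 2 above, the sketch takes exists() at its word: a single failed
fetch of a revision that really does exist will bound the bisection too
early and give a stale answer.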
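And a small sketch of the timestamp approach: publisher and readers agree on
a date-based key, so a reader asks for today's edition and walks back a few
days if it isn't there. The "site/yyyy-MM-dd" key format and the seven-day
fallback are assumptions for illustration, not an established Freenet
convention.

    import java.text.SimpleDateFormat;
    import java.util.Calendar;

    public class DatedKey {
        private static final SimpleDateFormat DAY =
                new SimpleDateFormat("yyyy-MM-dd");

        /** Key for the edition published daysAgo days before now. */
        static String keyFor(String siteName, int daysAgo) {
            Calendar cal = Calendar.getInstance();
            cal.add(Calendar.DAY_OF_MONTH, -daysAgo);
            return siteName + "/" + DAY.format(cal.getTime());
        }

        public static void main(String[] args) {
            // A reader tries today's key first, then yesterday's, and so
            // on, in case the publisher missed an insert.
            for (int back = 0; back < 7; back++) {
                System.out.println("try: " + keyFor("freenetproject", back));
            }
        }
    }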
