> I can refute at least this part. The problem is you're thinking that the
> last node that the follow-through request reaches is the one that winds up
> serving the data. This isn't true. On the way towards that eventual
> endpoint, the versions are checked. It's the nearest node with what is
> determined to be the latest version that actually returns the data.
This is the same hole as exists in Oskar's argument: just how is a particular
piece of data determined to be the latest version without actually doing a
full follow-through search?

> It still worries me to use broadcast of any sort. There is too much risk
> of encountering cycles in the network, causing the explosions to trigger
> mini-explosions, at least until the HTL runs out.

This won't happen, since a node will drop an update for data which has
already been updated. Updates will also only be propagated by nodes which
are locally caching the data, further preventing any form of "virus".
Lastly, HTLs will act as a final (but probably unnecessary) safety
mechanism.

> The problem is, your proposal has the updater decide what should be
> updated (not directly, but via the explosion). This might leave some
> people out. The deep-request method lets the requesters decide what
> should be updated... which is how it should be... after all, they are the
> ones interested in the recent copy.

This doesn't make sense. How will the requester do anything if it can't
find the updated data, or if (even more likely) the one, or small number,
of nodes with the updated data have been killed by the /. effect of all
these follow-through requests?

> Sure, but the issue is when the messages are propagated... all at once,
> making a nice little network spike, or over the lifetime of the data,
> which is a nice even increase.

What is wrong with a network spike? The spike will be distributed over the
entire Internet, meaning that it will be distributed in space, if not in
time.

> > But this "make sure..." process will result in a /. effect on popular
> > data, as I point out above.
>
> Or not, as I pointed out.

Or yes, as I pointed out.

> I hate to make sweeping judgements like this, but expiration is bad. Very
> bad. Attackably bad at the worst, or just annoying for content creators
> at the best. I happen to like Theodore's idea about constraining the
> possibility of a follow-through. The more automatic things are, the more
> likely people will use Freenet, and the less likely it is that people can
> screw it up.

Perhaps. I am not very wedded to the idea of expiration, although I am not
convinced it is bad either, so long as the author of the data knows what
they are doing when they set the expiration flag.

Ian.

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
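P.S. The three safeguards argued for above (drop updates already seen, forward only from nodes that locally cache the data, HTL as a final backstop) can be sketched roughly as follows. This is a minimal illustration in Python; the `Node` and `Update` classes and all names here are my own hypothetical constructions, not Freenet's actual implementation.

```python
class Update:
    """A hypothetical update message for a piece of keyed, versioned data."""
    def __init__(self, key, version, htl):
        self.key = key          # identifier of the data being updated
        self.version = version  # monotonically increasing version number
        self.htl = htl          # hops-to-live: final safety limit

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # key -> latest version this node locally caches
        self.peers = []   # neighbouring nodes (may contain cycles)

    def receive_update(self, update):
        seen = self.cache.get(update.key)
        # Safeguard 1: drop updates for data already at (or past) this
        # version, so cycles cannot re-trigger the "explosion".
        if seen is not None and seen >= update.version:
            return
        # Safeguard 2: only nodes locally caching the data propagate it,
        # preventing the update from spreading like a "virus".
        if seen is None:
            return
        self.cache[update.key] = update.version
        # Safeguard 3: HTL acts as a final (probably unnecessary) backstop.
        if update.htl <= 1:
            return
        for peer in self.peers:
            peer.receive_update(
                Update(update.key, update.version, update.htl - 1))
```

Even with a cycle in the peer graph (a -> b -> c -> a), safeguard 1 stops the flood after one pass, and a node that does not cache the key neither updates nor forwards.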
