The more wandering node is darknet-only, and you know it as Zothar70. The calmer node is a hybrid darknet/opennet node. That said, neither node appears to have a favorite location it returns to. OTOH, if I look at only the 48-hour graph rather than the week graph, the darknet-only node appears to favor a particular location more than the hybrid node does.
In any case, I've got the data in an RRD for each node, if it's useful. Perhaps if we had a "location of the datastore" metric formed by an average or something, I could export that via FCP and track it in the RRDs as well.

Matthew Toseland wrote:
> On Monday 10 December 2007 20:37, David Sowder wrote:
>> I have location data in an RRD for each of my two currently-up nodes,
>> as obtained via FCP using pyFreenet's utility for such. My graphs from
>> the last week have had the location all over the place for one node,
>> while the other node is calmer, but still not appearing to truly
>> favor any particular spot on the location circle.
>
> So the second node is wandering about a lot?
>
> This is probably an artefact of poor topology...
>
>> Matthew Toseland wrote:
>>> On Monday 10 December 2007 18:52, Robert Hailey wrote:
>>>> On Dec 10, 2007, at 11:49 AM, Matthew Toseland wrote:
>>>>>> In the present network, it probably would; but in theory I think
>>>>>> that the patch is correct (or some variant thereto).
>>>>>
>>>>> Nothing would ever be dropped from the network, because when it's
>>>>> considered for dropping, it would get reinserted to 20 other nodes!
>>>>
>>>> I am not recommending that this patch be applied... yet. Every point
>>>> that you have raised against it is perfectly valid. In the present
>>>> network, because the nodes drift locations so much, this patch (even
>>>> if perfectly tuned; maybe re-insert with HTL=1) would cause data
>>>> blocks to "chase" the nodes around the network, resulting in massive
>>>> increases in network traffic, as you said. *IF* it helped access to
>>>> data, it would only be because the renewed data passed through the
>>>> node caches (which would probably be overflowed with old insert
>>>> data).
>>>>
>>>> My suggestion at present is to:
>>>> (1) stabilize node locations enough that datastores come alive, or
>>>
>>> Dependent on topology (which we can control), node uptimes (which we
>>> can't control), ...
>>>
>>>> (2) bias/soft-anchor towards what is in the datastore (or perhaps
>>>> what has most recently been put in the datastore?).
>>>
>>> Will not happen without major simulations.
>>>
>>>> I agree that either of these would require simulations.
>>>
>>> Right. And in the latter case, simulating it would be slow, as we'd
>>> have to maintain fairly large virtual datastores in the simulation.
>>>
>>>> #1 would be a statistical solution (network drift < datastore
>>>> utility-threshold) and may be presently attainable with tuning,
>>>> whereas #2 would be more pragmatic (and tend to specialize nodes
>>>> further). #1 may already be the case if the network were large
>>>> enough, but an algorithmically correct Freenet should support a
>>>> network of any size (as the math scales very well).
>>>
>>> IMHO the next step forward is simply to log location changes and
>>> display them either on the location page or on a subpage, or as CSV
>>> data (or perhaps through SNMP) so it can be graphed externally. Maybe
>>> for the node's peers as well as itself. Are you interested in doing
>>> some data collection code? Let's discover whether there actually is a
>>> problem with location drift before we try to solve it...
>>>
>>> Another thing you could do would be to implement a datastore
>>> histogram generator. We had one in 0.5.
>>>
>>>> As an example of the general problem (although it seems to have
>>>> helped get a routable network): even in theory, a node randomizing
>>>> its location totally obsoletes its datastore.
>>>
>>> Usually the node will swap back to where it should be within a fairly
>>> short period. However, again, we need some hard data from the real
>>> network before we even implement simulations.
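P.S. One wrinkle with the "location of the datastore" average I suggested above: node and key locations live on a circle [0, 1), so a plain arithmetic mean gives nonsense near the wrap point (0.01 and 0.99 should average to ~0.0, not 0.5). A circular mean handles that. Here's a minimal sketch of what I mean; the function name is mine, and a real version would feed on key locations pulled from the node over FCP:

```python
import math

def datastore_mean_location(locations):
    """Circular mean of key locations on the [0, 1) keyspace ring.

    Each location is mapped to a point on the unit circle; the vector
    sum is averaged and converted back to a location, which handles
    the 0.0/1.0 wrap-around correctly.
    """
    sin_sum = sum(math.sin(2 * math.pi * loc) for loc in locations)
    cos_sum = sum(math.cos(2 * math.pi * loc) for loc in locations)
    mean_angle = math.atan2(sin_sum, cos_sum)   # in (-pi, pi]
    return (mean_angle / (2 * math.pi)) % 1.0   # back to [0, 1)

# Locations clustered around the 0.0/1.0 wrap point:
m = datastore_mean_location([0.98, 0.99, 0.01, 0.02])
print(min(m, 1.0 - m))  # circular distance from 0.0: effectively zero
```

That single number would drop straight into an RRD alongside the node-location samples I'm already collecting.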

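P.P.S. On the datastore histogram generator you mention (like the one in 0.5): the core of it could be as simple as the sketch below. The bin count and the ASCII rendering are just my assumptions; a real version would read the key locations out of the node's actual store rather than take a list:

```python
from collections import Counter

def location_histogram(locations, bins=16):
    """Count key locations into equal-width bins over [0, 1)."""
    counts = Counter(min(int(loc * bins), bins - 1) for loc in locations)
    return [counts.get(b, 0) for b in range(bins)]

def render(histogram, width=40):
    """Crude ASCII rendering: one row per bin, bars scaled to the peak."""
    peak = max(histogram) or 1
    bins = len(histogram)
    for b, count in enumerate(histogram):
        bar = '#' * (count * width // peak)
        print(f"{b / bins:.3f}+ {bar} ({count})")

render(location_histogram([0.02, 0.03, 0.05, 0.51, 0.52, 0.97], bins=8))
```

Comparing that histogram's peak against the node's own location over time would show directly how badly location drift is invalidating the store.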