Here are mine. Mind you, I just updated to 991 from a months-old version, and
I'm running it with Kolivas's idleprio patches on a machine that gets
heavy CPU load from time to time, so I don't know how representative these
stats are.
* Cached keys: 14,065 (439 MiB)
* Stored keys: 16,626 (519 MiB)
* Overall size: 30,691/30,688 (959 MiB/959 MiB)
* Cache hits: 6 / 70 (8%)
* Store hits: 0 / 65 (0%)
* Avg. access rate: 0/s
On Sat, 07 Oct 2006 00:01:21 +0100, toad wrote:
> 1. THE STORE IS *LESS* EFFECTIVE THAN THE CACHE!
> ------------------------------------------------
>
> Please could people post their store statistics? Cache hits, store hits,
> cached keys, stored keys.
>
> So far:
> [23:11] <nextgens> # Cached keys: 6,389 (199 MiB)
> [23:11] <nextgens> # Stored keys: 24,550 (767 MiB)
> [23:09] <nextgens> # Cache hits: 217 / 12,738 (1%)
> [23:09] <nextgens> # Store hits: 14 / 10,818 (0%)
>
> (Cached hits / cached keys) / (Stored hits / stored keys) = 59.56
>
> [23:12] <cyberdo> # Cached keys: 17,930 (560 MiB)
> [23:12] <cyberdo> # Stored keys: 24,895 (777 MiB)
> [23:14] <cyberdo> # Cache hits: 178 / 3,767 (4%)
> [23:14] <cyberdo> # Store hits: 11 / 2,970 (0%)
>
> (Cached hits / cached keys) / (Stored hits / stored keys) = 22.47
>
> [23:14] <sandos> # Cached keys: 45,148 (1.37 GiB)
> [23:14] <sandos> # Stored keys: 16,238 (507 MiB)
> [23:11] <sandos> # Cache hits: 41 / 861 (4%)
> [23:11] <sandos> # Store hits: 5 / 677 (0%)
>
> (Cached hits / cached keys) / (Stored hits / stored keys) = 2.95
>
> Thus, in practice, the cache is far more efficient than the store.
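To make the arithmetic explicit, here's a tiny standalone Java snippet that
reproduces the three ratios above. The only inputs are the numbers quoted;
it doesn't query a node.

    // (cache hits / cached keys) / (store hits / stored keys) for the three
    // nodes quoted above.
    class StoreVsCache {
        static double hitsPerKey(long hits, long keys) {
            return (double) hits / keys;
        }
        public static void main(String[] args) {
            // nextgens
            System.out.println(hitsPerKey(217, 6389) / hitsPerKey(14, 24550));  // ~59.6
            // cyberdo
            System.out.println(hitsPerKey(178, 17930) / hitsPerKey(11, 24895)); // ~22.5
            // sandos
            System.out.println(hitsPerKey(41, 45148) / hitsPerKey(5, 16238));   // ~2.9
        }
    }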
>
> The cache caches every key fetched or inserted through this node.
>
> The store stores only keys inserted, and of those, only those for which
> there is no closer node to the key amongst our peers.
Surely you mean: for which there is no closer node to the key amongst our
peers WHICH WE ACTUALLY SENT THE KEY TO (it could have been down at the
moment of the insert, or backed off, or maybe the insert's HTL expired, or
something), right?
It doesn't help us much that there's a better node for storing this key if
that node never received it.
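To make sure we're talking about the same policy, here's a rough sketch in
Java of the two variants: the one described above and the corrected one.
This is NOT the real Freenet code; the names are made up and the wraparound
distance function is just my assumption about how closeness on the keyspace
circle is measured.

    // Rough sketch only -- not the actual node code.
    class StorePolicySketch {

        // Locations live on a circle [0,1), so distance wraps around.
        static double distance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        // Policy as described above: store the inserted key only if none of
        // our peers is closer to it than we are.
        static boolean storeIfNoCloserPeer(double myLoc, double keyLoc,
                                           double[] peerLocs) {
            for (double p : peerLocs)
                if (distance(p, keyLoc) < distance(myLoc, keyLoc))
                    return false;
            return true;
        }

        // Corrected policy: only count peers that actually received the
        // insert, so a closer peer that was down, backed off, or never
        // reached (HTL ran out) doesn't stop us from storing.
        static boolean storeIfNoCloserPeerThatGotIt(double myLoc, double keyLoc,
                                                    double[] peersThatGotInsert) {
            return storeIfNoCloserPeer(myLoc, keyLoc, peersThatGotInsert);
        }
    }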
>
>
> The cache being more effective than the store (and note that the above is
> for CHKs only) implies either:
> 1. Routing is broken.
> 2. There is more location churn than the store can cope with.
> 3. There is more data churn than the store can cope with.
>
>
> 2. SUSPICIONS OF EXCESSIVE LOCATION CHURN
> -----------------------------------------
>
> ljn1981 said that his node would often do a swap and then reverse it.
> However, several people say their location is more or less what it was. It
> is necessary to make a log of a node's location changes over time...
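If it helps, here's a trivial polling logger anyone could run to collect
that. readCurrentLocation() is a placeholder I made up; wire it up to
however you can query your node's location (FCP, scraping the stats page, or
a hook inside the node itself).

    // Polls the node's location once a minute and appends changes to a file.
    import java.io.FileWriter;

    class LocationChurnLogger {

        static double readCurrentLocation() {
            return 0.5; // placeholder: replace with a real query against your node
        }

        public static void main(String[] args) throws Exception {
            double last = Double.NaN;
            FileWriter out = new FileWriter("location.log", true);
            while (true) {
                double loc = readCurrentLocation();
                if (loc != last) { // also true on the first pass, since last is NaN
                    out.write(System.currentTimeMillis() + " " + loc + "\n");
                    out.flush();
                    last = loc;
                }
                Thread.sleep(60 * 1000); // once a minute
            }
        }
    }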
>
>
> 3. PROBE REQUESTS NOT WORKING
> -----------------------------
>
> "Probe requests" are a new class of requests which simply take a location,
> and try to find the next location - the lowest location greater than the
> one they started with. Here's a recent trace (these can be triggered by
> telnetting to 2323 and typing PROBEALL:, then watching wrapper.log):
>
> LOCATION 1: 0.00917056526893234
> LOCATION 2: 0.009450590423585203
> LOCATION 3: 0.009507800765948482
> LOCATION 4: 0.03378227720218496
> [ delays ]
> LOCATION 5: 0.033884263580090224
> [ delays ]
> LOCATION 6: 0.03557139211207139
> LOCATION 7: 0.04136594238104219
> LOCATION 8: 0.06804731119243879
> LOCATION 9: 0.06938071503433951
> LOCATION 10: 0.11468659860500963
> [ big delays ]
> LOCATION 11: 0.11498938134581993
> LOCATION 12: 0.11800179518614218
> LOCATION 13: 0.1180104005154885
> LOCATION 14: 0.11907112718505641
> LOCATION 15: 0.3332896508938398
> [ biggish delays ]
> LOCATION 16: 0.6963082287578662
> LOCATION 17: 0.7003642648424434
> LOCATION 18: 0.7516363167204175
> LOCATION 19: 0.7840227104081505
> LOCATION 20: 0.8238921670991454
> LOCATION 21: 0.8551853934902863
> LOCATION 22: 0.8636946791670825
> LOCATION 23: 0.8755575572906827
> LOCATION 24: 0.883042607673485
> LOCATION 25: 0.8910451777595195
> LOCATION 26: 0.8930966991557874
> LOCATION 27: 0.8939968594038799
> LOCATION 28: 0.8940798222254085
> LOCATION 29: 0.8941104802690825
> LOCATION 30: 0.9103443172876444
> LOCATION 31: 0.9103717579924239
> LOCATION 32: 0.9107237145701387
> LOCATION 33: 0.9108357699627044
> LOCATION 34: 0.9130496893125409
> LOCATION 35: 0.9153056056305631
> [ delays ]
> LOCATION 36: 0.9180229911856111
> LOCATION 37: 0.9184676396364483
> LOCATION 38: 0.9198162081803294
> LOCATION 39: 0.9232383399833453
> [ big delays ]
> LOCATION 40: 0.9232484869765467
> LOCATION 41: 0.9398827726484242
> LOCATION 42: 0.9420672052844097
> LOCATION 43: 0.9442367949642505
> LOCATION 44: 0.9521296958111133
> [ big delays ]
> LOCATION 45: 0.9521866483104723
> LOCATION 46: 0.9562645053030697
> LOCATION 47: 0.9715290823566148
> LOCATION 48: 0.9722492845296398
> LOCATION 49: 0.974283274258849
> [ big delays ... ]
>
> Clearly there are more than around 50 nodes on Freenet at any given time,
> and the above includes some really big jumps, as well as some really small
> ones. This may be a problem with probe requests, but it is
> suspicious...
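To put a rough number on those jumps: 49 locations came back, so an even
spread would give a mean gap of about 1/49 = 0.02, yet the single jump from
0.333 to 0.696 covers more than a third of the keyspace. A quick check over
the trace above (plain Java, the numbers are copied verbatim from the trace):

    class ProbeGapCheck {
        public static void main(String[] args) {
            // The 49 locations from the PROBEALL trace above, in order.
            double[] locs = {
                0.00917056526893234, 0.009450590423585203, 0.009507800765948482,
                0.03378227720218496, 0.033884263580090224, 0.03557139211207139,
                0.04136594238104219, 0.06804731119243879, 0.06938071503433951,
                0.11468659860500963, 0.11498938134581993, 0.11800179518614218,
                0.1180104005154885, 0.11907112718505641, 0.3332896508938398,
                0.6963082287578662, 0.7003642648424434, 0.7516363167204175,
                0.7840227104081505, 0.8238921670991454, 0.8551853934902863,
                0.8636946791670825, 0.8755575572906827, 0.883042607673485,
                0.8910451777595195, 0.8930966991557874, 0.8939968594038799,
                0.8940798222254085, 0.8941104802690825, 0.9103443172876444,
                0.9103717579924239, 0.9107237145701387, 0.9108357699627044,
                0.9130496893125409, 0.9153056056305631, 0.9180229911856111,
                0.9184676396364483, 0.9198162081803294, 0.9232383399833453,
                0.9232484869765467, 0.9398827726484242, 0.9420672052844097,
                0.9442367949642505, 0.9521296958111133, 0.9521866483104723,
                0.9562645053030697, 0.9715290823566148, 0.9722492845296398,
                0.974283274258849
            };
            double maxGap = 0;
            for (int i = 1; i < locs.length; i++)
                maxGap = Math.max(maxGap, locs[i] - locs[i - 1]);
            System.out.println("locations seen:            " + locs.length);
            System.out.println("largest gap:               " + maxGap);
            System.out.println("mean gap if evenly spread: " + 1.0 / locs.length);
        }
    }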