6226 and 6229 both feel better...  Thanks for your work the past few
days.

For today's wild and crazy ideas.... we'll concentrate on fine-tuning
requests...  I'll order them from most important to least important...

maxRequestsPerInterval=900
requestDelayCutoff=500

The second one is occasionally necessary: during node startup, for
example, we can overload the CPU and lag a bit, the routing time is
sensitive to that overloading, and reducing the incoming queries helps
contain the damage.  The nice thing about this is that routing time is
a fast-moving measurement, so the second the load eases, we go back to
the grind full bore.  The first one gets me up to 99% acceptance rates
for inbound requests, instead of 60%.  I don't see anything bad from
the higher incoming rate.  This is on a 128M, 400MHz system.
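
To be clear about the behaviour I mean, here is a rough sketch under my
own assumptions -- the class and method names are made up, not fred's
actual code:

public class RequestThrottle {

    private final int maxRequestsPerInterval;  // e.g. 900
    private final long requestDelayCutoffMs;   // e.g. 500

    private int requestsThisInterval = 0;

    public RequestThrottle(int maxRequestsPerInterval, long requestDelayCutoffMs) {
        this.maxRequestsPerInterval = maxRequestsPerInterval;
        this.requestDelayCutoffMs = requestDelayCutoffMs;
    }

    // Accept an incoming request unless we've hit this interval's quota
    // or routing time shows the node is lagging (e.g. during startup).
    public synchronized boolean acceptRequest(long currentRoutingTimeMs) {
        if (requestsThisInterval >= maxRequestsPerInterval)
            return false;   // quota for this interval is used up
        if (currentRoutingTimeMs > requestDelayCutoffMs)
            return false;   // node is overloaded; shed load until routing time recovers
        requestsThisInterval++;
        return true;
    }

    // Called once per interval by a timer to start counting again.
    public synchronized void resetInterval() {
        requestsThisInterval = 0;
    }
}

Because routing time recovers quickly once the load drops, a cutoff like
this only rejects requests during the worst of the lag.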

On another note, incomingHopsSinceReset looks like this:

Date       Mean    Std. Dev.
9/20/03    1.270   0.955
9/22/03    1.041   0.716
9/24/03    1.040   0.949
9/27/03    1.272   1.070
10/4/03    1.397   0.963
10/5/03    1.596   0.977
10/6/03    1.446   1.016
10/7/03    1.364   0.869
10/8/03    1.382   1.063
10/10/03   1.098   0.685
10/11/03   1.518   1.046

Surely 1.04-1.6 is fairly low, given how far some of the data has to
travel and given that inserts skip the datastores and so on...  Would
a number like 4 compromise too much?  If so, could we try to retune so
that a higher number works?  4 would reduce load on Freenet by around
3x (roughly 4 divided by the current ~1.3 average), which might make it
slightly more responsive.  If not 4, how about 2 or 3?

announcementFirstDelay=120000

announcementFirstDelay seems like it could be shortened some.  I don't
mind seeing the load, and I'd say it is rather expected.  This is
kind of a minor issue.

Memory on my system is kind of tight.  I tuned it down on the command
line to around 73M after watching the node thrash to death a while ago
and never recover; it felt like GC thrashing.  5028 felt nicer on
memory than 6229 does.  I ran 6229 out of memory after about 6 hours
of uptime.  I'm attempting to fix that by reducing the number of node
connections to 400, which I'm hoping is just enough to not matter.
Anyway, Freenet deadlocks on out of memory; I'd rather it caught the
error and just killed itself.  I am also wondering whether space used
on trailers takes up a lot of RAM.  Before tuning back down to the
default number of threads, I had it up at 200, and the connection
manager in 6226 showed well past (what was it?) 50 MB waiting to be
transferred out, but I wasn't using much of my upstream, the CPU
wasn't loaded, and requests weren't being rejected.  6229 might not
suffer from the same problem; that, or maybe other folks were being
mean to me...
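
To be concrete about the "kill itself" part, something at the top level
along these lines is the shape of what I mean -- a hypothetical sketch,
not a patch against the real node; runNode() is just a stand-in for the
actual startup:

public class NodeMain {

    public static void main(String[] args) {
        try {
            runNode(args);   // stand-in for the real node startup/main loop
        } catch (OutOfMemoryError e) {
            // Don't try to limp along on a starved heap: report and halt
            // so a wrapper script can restart a clean JVM, instead of
            // leaving a deadlocked node behind.
            System.err.println("Out of memory, halting: " + e);
            Runtime.getRuntime().halt(1);
        }
    }

    private static void runNode(String[] args) {
        // placeholder for the actual node code
    }
}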

If there is a way in Java to keep non-GC memory off to the side for
large, rarely accessed data buffers, maybe that could be used for
storing large datasets.  If you can mmap data (non-GCed), that'd be
fine too.  Failing that, if you can notice when memory is getting low
and throttle down the incoming requests (or whatever else can reduce
or clear memory usage), that'd be great.
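
The mmap idea could look something like this with java.nio -- just a
sketch under my own assumptions; the file name and the 50 MB size are
invented for the example:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBufferExample {

    public static void main(String[] args) throws IOException {
        File f = new File("trailer-buffer.tmp");      // invented file name
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        FileChannel channel = raf.getChannel();

        // 50 MB backed by the OS page cache, not the GC'd Java heap.
        MappedByteBuffer buf =
            channel.map(FileChannel.MapMode.READ_WRITE, 0, 50L * 1024 * 1024);

        buf.put(0, (byte) 42);                        // use it like any ByteBuffer
        System.out.println("byte 0 = " + buf.get(0));

        channel.close();
        raf.close();
    }
}

For the low-memory fallback, polling Runtime.getRuntime().freeMemory()
against maxMemory() every second or so and rejecting new requests above
some threshold would probably be enough.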

Could we drop the upper limit of 0.5 on resetting the data source down
to 0.3?  The current code is kind of slow to react to loading and to
edge it back down, and I've often seen it swinging wildly about.  A way
more controversial change would be to drop the bottom end from 0.02 to
0.01.  I don't suspect you'd be willing to do that, but...  I'm
thinking this is the figure that would edge up incomingHopsSinceReset,
and that is my motivation.  If others are resetting it so often, then
dropping this to 0.01 to see how it behaves shouldn't be too bad.
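
Just to illustrate what I mean by the bounds -- hypothetical code, the
names are made up; only the 0.5/0.3 and 0.02/0.01 figures come from the
discussion above:

import java.util.Random;

public class DataSourceResetPolicy {

    // current bounds (proposed: 0.3 and 0.01)
    static final double MAX_RESET_PROBABILITY = 0.5;
    static final double MIN_RESET_PROBABILITY = 0.02;

    // Clamp whatever the load estimator produces into the allowed range.
    static double clampResetProbability(double estimated) {
        if (estimated > MAX_RESET_PROBABILITY) return MAX_RESET_PROBABILITY;
        if (estimated < MIN_RESET_PROBABILITY) return MIN_RESET_PROBABILITY;
        return estimated;
    }

    // Decide whether to reset the DataSource on a passing request.
    static boolean shouldResetDataSource(double estimated, Random rng) {
        return rng.nextDouble() < clampResetProbability(estimated);
    }
}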


That's all for tonight.