In message <20021104124718.GA652 at sporty.spiceworld>, Oskar Sandberg 
<oskar at freenetproject.org> writes
>
>My node is doing over ten thousand qph now without breaking a sweat, so
>I'm fairly certain the problem is indeed fixed (though this is
>partially due to the fact that almost none are successful). 609 is a
>little better still, since I had reversed a conditional in 608 and it was
>still accepting bad announcements.
>
>"maximumThreads" is the correct name for the setting. Please do not use
>0.6 as the overload threshold - with 80 threads that leaves only 12
>for the node to process requests and new connections, which is far too
>little. The default values of 0.85 and 0.9 are good - changing those was
>a typical example of going after the symptoms rather than the problem.


Can I just ask for an explanation of this?  On build 608 (and every
build since 525), with maximumThreads set to -120 (or 120, or -60, or
60), my node is currently showing pooled threads = 250, and on previous
form this will keep climbing until the OS refuses to create any more.
Is this supposed to happen?
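
For what it's worth, this is roughly how I have been reading Oskar's
description of the doLoadBalance thresholds - purely my own sketch, not
Fred's actual code; the class and field names below are invented, and I
am assuming overloadHigh is the load fraction at which new requests stop
being accepted and overloadLow the fraction below which they are
accepted again:

// Illustrative only: my reading of the thresholds, not Fred's code.
public class OverloadSketch {

    private final int maximumThreads;   // e.g. 80
    private final double overloadLow;   // assumed default 0.85
    private final double overloadHigh;  // assumed default 0.9
    private boolean rejecting = false;

    public OverloadSketch(int maximumThreads,
                          double overloadLow, double overloadHigh) {
        this.maximumThreads = maximumThreads;
        this.overloadLow = overloadLow;
        this.overloadHigh = overloadHigh;
    }

    // Decide whether a new incoming request should be accepted, with
    // hysteresis: stop accepting above overloadHigh * maximumThreads,
    // resume only once load drops below overloadLow * maximumThreads.
    public synchronized boolean acceptRequest(int runningThreads) {
        double load = (double) runningThreads / maximumThreads;
        if (rejecting) {
            if (load < overloadLow) {
                rejecting = false;
            }
        } else if (load >= overloadHigh) {
            rejecting = true;
        }
        return !rejecting;
    }

    public static void main(String[] args) {
        OverloadSketch node = new OverloadSketch(80, 0.85, 0.9);
        System.out.println(node.acceptRequest(60)); // true  (0.75 load)
        System.out.println(node.acceptRequest(75)); // false (above 0.9 * 80 = 72)
        System.out.println(node.acceptRequest(70)); // false (still above 0.85 * 80 = 68)
        System.out.println(node.acceptRequest(60)); // true  (back below the low mark)
    }
}

If that model is anywhere near right, the pooled thread count should
never get far past maximumThreads, which is why the 250 above puzzles me.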

>
>I'm fairly confident that the new code is working as it should, and
>hopefully Matthew can do a new release with it sooner rather than later.
>
>On Mon, Nov 04, 2002 at 07:20:30AM -0500, Ed Tomlinson wrote:
>
>> From my end it looks good too.  Version 608 is the _first_ version since
>> 525 that does not peg the CPU at 100%.  It still seems to use more than
>> 525 but not to the extent that it stops doing useful work - the node is
>> processing about 800 requests per hour (about 8400 jobs).  I am using the
>> following in freenet.conf:
>>
>> maximumThreads=-80
>> maxThreads=-80
>> doLoadBalance=yes
>> overloadHigh=0.65
>> overloadLow=0.60
>>
>> I have seen both maximumThreads and maxThreads suggested on the list.
>> Which is correct?
>>
>> TIA
>> Ed Tomlinson
>>
>>
>>
>

-- 
Roger Hayter

_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
