>> Setup:
>> 1. Firewalled v6032 node (not transient, but not able to receive
>> incoming connections)

>If it cannot receive incoming connections it should be set transient.

True, it was an accident which I discovered halfway through the profiling.
This might be a good time to start a discussion about wrongly configured
nodes (I assume that mine isn't the only one). What about making nodes help
each other detect misconfigurations?

Situation:
Node A, running on a machine with address IPA, is configured to use IPA:X.
The machine sits behind a firewall with address IPB.
Way for A to detect this bad situation:
Node A contacts node B. Node B notices that the request actually originates
from IPB and tells node A that its advertised address is wrong.
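
A minimal sketch of what B's side of that check could look like (hypothetical
names, not existing fred code; checkPeerAddress and sendConfigWarning are made
up purely for illustration):

    // Node B side: compare the address A advertises with the address the
    // connection actually came from, and warn A over that same connection.
    import java.net.InetAddress;
    import java.net.Socket;

    class AddressMismatchCheck {

        static void checkPeerAddress(Socket incoming, InetAddress advertised) {
            InetAddress observed = incoming.getInetAddress();
            if (!observed.equals(advertised)) {
                sendConfigWarning(incoming,
                    "You advertise " + advertised.getHostAddress()
                    + " but your traffic arrives from "
                    + observed.getHostAddress());
            }
        }

        static void sendConfigWarning(Socket to, String message) {
            // placeholder: in practice this would be a normal node-to-node
            // message sent back over the connection A opened
        }
    }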

Situation:
Node A, running on a machine with address IPA, is configured to use IPB:X.
The machine sits behind a firewall with address IPB which doesn't
port-forward IPB:X to IPA:X.

Way for A to detect this bad situation:
Node A contacts node B. Node B sees that A advertises a correct (external) IP,
but when it tries to open a connection back to IPB:X the attempt fails, so it
tells node A that it is wrongly configured, using the already established
connection.

These checks could be repeated periodically to catch configurations that break later.
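
And a sketch of the second check, the one B could also repeat on a timer
(again with made-up names; the point is only the call-back connection attempt):

    // Node B side: try to connect back to the address A advertises (IPB:X).
    // If it fails, B reports that over the connection A already holds open,
    // so A gets the warning even though it is unreachable from outside.
    import java.net.InetSocketAddress;
    import java.net.Socket;

    class ReachabilityCheck {

        static boolean canConnectBack(InetSocketAddress advertised, int timeoutMs) {
            try (Socket probe = new Socket()) {
                probe.connect(advertised, timeoutMs);
                return true;                 // port is reachable/forwarded
            } catch (java.io.IOException e) {
                return false;                // firewall drops or rejects it
            }
        }
    }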


>> 2. Source modified to not kill off any surplus threads
>> (QThreadFactory.run)
>
>Isn't this what maximumThreads=-800 does?

Thank you.
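
(For anyone reproducing this: assuming the negative value really does mean
what the reply above suggests, the whole source modification reduces to one
line in freenet.conf:)

    # assumption based on the reply above: a negative maximumThreads keeps
    # surplus threads alive instead of letting QThreadFactory.run() reap them
    maximumThreads=-800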


>> 3. Node configured with maximumThreads=800
>> 4. 5 lightweight webspiders, each running 10
>> concurrent requests against fred
>
>That's a lot of threads :)

Indeed. I was first planning on doing the profiling with 4 browsers, each
using a maximum of 100 connections to fred. That did not work as expected,
though: no matter what I did, only 20 or so LocalInterface connections showed
up in fred. Hence the spiders.
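
For reference, each spider is essentially just this (a rough sketch; the
gateway address 127.0.0.1:8888 and the key list are assumptions, not taken
from the actual test setup):

    // One "lightweight webspider": 10 worker threads, each fetching keys
    // through the node's local HTTP gateway.
    import java.io.InputStream;
    import java.net.URL;

    class MiniSpider {

        static final String FPROXY = "http://127.0.0.1:8888/"; // assumed gateway
        static final String[] KEYS = { /* keys to fetch */ };

        public static void main(String[] args) throws Exception {
            int concurrent = 10;                  // 10 requests per spider
            Thread[] workers = new Thread[concurrent];
            for (int i = 0; i < concurrent; i++) {
                final int offset = i;
                workers[i] = new Thread(() -> {
                    for (int k = offset; k < KEYS.length; k += concurrent) {
                        try (InputStream in = new URL(FPROXY + KEYS[k]).openStream()) {
                            byte[] buf = new byte[4096];
                            while (in.read(buf) != -1) { /* discard the data */ }
                        } catch (Exception e) {
                            // count as a failure / DNF and move on
                        }
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) t.join();
        }
    }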


>> At all times during all of the tests kernel time (disk I/O?!)
>> was significant: about 25% when the profiler wasn't attached, about 15%
>> when it was.

>Interesting. Should be significantly reduced by recent commit to the
>unstable branch.
>

Will try it out as soon as possible.


>> While actual CPU load was around 100%, fred's load indicator said
>> 15%
>
>Interesting that routingTime didn't spike.

Hmm... is routingTime involved at all when only local requests are handled
(and there are no incoming external requests)?


>> While one spider managed to retrieve 254 files, another one
>> running during the same time period managed to retrieve 238, a third one
>> managed 239, and so on. This might tie in to the issues with
>> disappearing NIMs and jpgs people have reported
>
>They will rapidly converge over time.

Mmmm... they should do that, but it would seem that it didn't happen.
Remember, I ran 25 requests for each item during my profiling, and the numbers
I mentioned above were encountered during 'run 4'.


>> problems getting thelist.html using those settings
>> During all of the runs a couple of the threads were occupied at
>> freenet.Message: QueryRejected. Who were they rejecting? Me? Current
>
>Set the node to transient, change the port and the node identity, and
>see if you can reproduce them.

Is it possible that one or two externally originated queries slipped in
through my outbound connections?


>> Ideas for the future:
>> Try to eliminate a bottleneck or two and try again
>
>Good idea, get on with it.

You have already taken care of the first one :)


>> Run with a spider that can handle more concurrent requests
>
>50 reqs is quite a lot already...

50 should be enough, but I don't think that my 5*10 scheme is optimal. Since
70 of the images weren't retrievable, there is a good chance that many of
those requests are stuck waiting for a DNF from someone on the other side of
the globe. Switching to a 1*50 (or possibly 5*50) scheme would decrease the
impact of those DNF waits.
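
A rough sketch of the 1*50 variant, assuming the same gateway address and key
list as before: one shared pool of 50 workers pulling from a single queue, so
a request stuck waiting for a DNF ties up one worker out of 50 instead of one
out of 10.

    // Shared-pool spider: 50 workers, one common queue of keys.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    class SharedPoolSpider {

        public static void main(String[] args) throws Exception {
            final String fproxy = "http://127.0.0.1:8888/";  // assumed gateway
            String[] keys = { /* keys to fetch */ };

            ExecutorService pool = Executors.newFixedThreadPool(50);
            for (final String key : keys) {
                pool.submit(() -> {
                    try (java.io.InputStream in =
                             new java.net.URL(fproxy + key).openStream()) {
                        byte[] buf = new byte[4096];
                        while (in.read(buf) != -1) { /* discard */ }
                    } catch (Exception e) {
                        // DNF or other failure; the worker just takes the next key
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }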


/N
