I think routing is trying some nodes we don't have connections open to.
Routing.freeConn and OpenConnectionManager.needsConnection read like
they return the opposite of what their names suggest.
I haven't run it though.
Also wouldn't it be better just to run scheduleConnectionOpener in the
Ian Clarke wrote:
Ok, the new stable build seems to be working quite well. Are other
people experiencing the same thing?
I don't think you can judge it yet. Only a fraction of users have updated.
I'm running a new, always-transient node on stable and it's routing
other requests. I thought a
[EMAIL PROTECTED] wrote:
One thing that always bothered me about NGRouting is that we only
anticipate having to retry once.
Originally I wanted to use a Riemann sum to determine the overall time, but Ian
pointed out that we don't want to retry infinitely.
It occurred to me that NGrouting estimates
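The retry-once assumption can be made concrete with a small sketch. All names and numbers below are illustrative, not from the Freenet source: if each attempt succeeds with probability p in tSuccess seconds and fails in tFailure seconds, capping the estimate at one retry gives a lower figure than the closed form that unlimited retries would sum to.

```java
// Hypothetical illustration (not Freenet code): expected request time
// when each attempt succeeds with probability p in tSuccess seconds
// and fails in tFailure seconds.
public class RetryEstimate {
    // One retry only, as the retry-once assumption implies:
    // first attempt, plus (on failure) one more attempt.
    static double oneRetry(double p, double tSuccess, double tFailure) {
        return p * tSuccess
             + (1 - p) * (tFailure + p * tSuccess + (1 - p) * tFailure);
    }

    // Unlimited retries: the geometric series collapses to a closed form,
    // E = tSuccess + ((1 - p) / p) * tFailure.
    static double unlimitedRetries(double p, double tSuccess, double tFailure) {
        return tSuccess + ((1 - p) / p) * tFailure;
    }

    public static void main(String[] args) {
        System.out.println(oneRetry(0.5, 2.0, 1.0));          // 2.25
        System.out.println(unlimitedRetries(0.5, 2.0, 1.0));  // 3.0
    }
}
```

With p = 0.5 the one-retry figure (2.25 s) undercounts the true expected time (3.0 s), which is the kind of gap the summed estimate was meant to close.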
Ian Clarke wrote:
Salah Coronya wrote:
Well, so far routing doesn't seem to have improved; most requests are
failing (51051 requests attempted, 409 succeeded; 514 inserts
attempted, 7 succeeded). About 14000 qph here.
That's disappointing - anyone else seen any change, positive or negative,
Not seen this suggested so I'll post what I'm thinking.
How about replacing the current pDNFs with a new stat that, when data is
found, promotes that node and demotes all others by some fraction, for
the given key area? This would eliminate all the il/legitimate DNF problems.
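A minimal sketch of the promote/demote stat proposed above (class, method, and constant names are all hypothetical, not from the Freenet source):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-key-area scores where a successful fetch
// promotes the node that served the data and demotes all others
// by a fixed fraction. Illustrative only.
public class KeyAreaStats {
    private final Map<String, Double> score = new HashMap<>();
    private static final double DEMOTE_FRACTION = 0.95; // assumed decay factor

    // Called when data is found via nodeRef for this key area.
    void dataFound(String nodeRef) {
        // Demote everyone first...
        for (Map.Entry<String, Double> e : score.entrySet())
            e.setValue(e.getValue() * DEMOTE_FRACTION);
        // ...then promote the node that actually returned the data.
        score.merge(nodeRef, 1.0, Double::sum);
    }

    double scoreOf(String nodeRef) {
        return score.getOrDefault(nodeRef, 0.0);
    }
}
```

Because a DNF simply fails to promote anyone rather than recording an explicit failure, the legitimate-vs-illegitimate DNF distinction never has to be made.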
I've deleted the
I tried changing my estimate, removing SearchFailed, and saw a lot more RNFs.
Toad
They don't work (maybe they could though)
What's wrong with them? I know tSearchFailed averages the results from
several errors, but what could you do with them if they were identified
individually?
and exponential back off
Pruning the routing table first deletes nodes with Consecutive Failures,
which is fine. (I can see that sometimes I wouldn't want this,
but that's another issue.)
What I see now is that a lot of the nodes with high Connection Attempts
to Successful Connections ratios are the ones with the highest
Are most of the nodes on dev-network overloaded or is this a client side
bug giving me mostly high search died probability?
I have 11 nodes with open connections: two with ~0.5 SDP, the others close to 1.
I have got a reasonable number of Successful Transfers compared with stable.
J wrote:
Quoting Toad [EMAIL PROTECTED]:
We have a mechanism called probability of legitimate DNF, which should
compensate for most of this noise.
Most (if not all) the time, probability of legitimate DNF is the same as
probability of DNF on my node.
I would think that's the effect the
I'm proposing a server side test for NGR.
Someone with a connected, well-running node (if there is one) runs a
modified source that accepts connections but rejects all queries,
logging the time and node version. Then it shouldn't be too difficult to
extract the relative request rate between
Attached is a diff against 6233.
It fixes a bug in RSL, adds WSL diagnostics, and removes code that tries
to do the job of select.
diff -uwr freenet-unstable-latest/src/freenet/node/Main.java
Myfreenet-unstable-latest/src/freenet/node/Main.java
--- freenet-unstable-latest/src/freenet/node/Main.java
Todd Walton wrote:
On Sat, 11 Oct 2003, Ian Clarke wrote:
For the stable merge of the current unstable code, which will likely be
5029, we should consider the benefits of increasing lastGoodBuild to
5029. Some say that 5028 is next to useless; however, some of the best
nodes in routing tables
Should the code be changed so that highly overloaded nodes send DNFs when
the keys are in the Failure Table?
--
How about a cooling-off period after receiving QueryRejected, before
sending anything else to that node?
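A cooling-off period of this kind is often paired with exponential back-off. A minimal sketch, assuming a per-node tracker (class and field names are hypothetical, not the actual Freenet implementation):

```java
// Hypothetical per-node cooling-off tracker: each consecutive
// QueryRejected doubles the quiet period, capped at a maximum;
// any success resets it. Illustrative only.
public class Backoff {
    private static final long BASE_MS = 1000;          // assumed initial delay
    private static final long MAX_MS = 10 * 60 * 1000; // assumed 10-minute cap
    private int consecutiveRejections = 0;
    private long coolOffUntil = 0;

    // Record a QueryRejected received at time 'now' (ms).
    void onQueryRejected(long now) {
        consecutiveRejections++;
        long delay = Math.min(MAX_MS,
                BASE_MS << Math.min(consecutiveRejections - 1, 20));
        coolOffUntil = now + delay;
    }

    // Any successful exchange clears the back-off state.
    void onSuccess() {
        consecutiveRejections = 0;
        coolOffUntil = 0;
    }

    // May we route anything to this node at time 'now'?
    boolean canSend(long now) {
        return now >= coolOffUntil;
    }
}
```

The cap matters: without it, a long run of rejections from a node that was merely busy would effectively blacklist it forever.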
--
Attached is a changed version of
Attached is a change (base 6215) to prefer doing inserts to non-wild nodes.
p.zip
Description: Zip compressed data
___
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
Toad wrote:
On Mon, Oct 06, 2003 at 11:21:07AM +0100, Jonathan Howard wrote:
Attached is a change (base 6215) to prefer doing inserts to non-wild nodes.
What is the point? Is NGRouting really THAT broken that it can't tell
when a node is crap? In which case, FIX THE PROBLEM, don't just add yet
I managed to log it for YoYo.
Can someone take a look? I'm thinking it's from a file system error.
My full requests were for the front page (then canceled), then for YoYo//.
The log shows the YoYo activelink getting the metadata and succeeding up to
detecting that I had stopped the browser.
I'm
Unknown mime type application/octet-stream
I have noticed a rise in spurious occurrences of this warning in recent
unstable builds. I've always had it, just not as much of late. I don't
think you can infer anything from it; it just makes it more important to fix.
It could be caused by anything
Recall that in NGR we always estimate how long it will take to get the
data if the message is (initially) routed to this node. It is
important that we know what the estimate *means*. I don't think that is
the case with the equation you give above.
I see you don't work with sub-symbolic AI.
G Granum wrote:
I have started to receive pages/data that start OK but turn into
'garbage', e.g. The Freedom Engines 'intermediate page':
It is of course also apparent to me that, while I shall never take the
position of freesite censor, even though some would argue I do indeed
possess
Ian Clarke wrote:
Jonathan Howard wrote:
The current StandardNodeEstimator.estimate() is trying to calculate
the average time a node will take for any outcome.
Shouldn't it be returning the time if it succeeds + punishment for
when it fails?
It is (or should be). It returns pSuccess
The current StandardNodeEstimator.estimate() is trying to calculate the
average time a node will take for any outcome.
Shouldn't it be returning the time if it succeeds + punishment for when
it fails?
I'm suggesting changing
174: estimate += pSuccess * tSuccess;
to
estimate += tSuccess;
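For comparison, here is a toy side-by-side of the two forms being discussed. The term names and numbers are illustrative; this is not the actual StandardNodeEstimator code:

```java
// Toy comparison of the two estimate forms: the current line weights
// the success time by pSuccess, the proposal charges the full success
// time. tFailurePenalty stands in for the failure terms. Illustrative only.
public class EstimateForms {
    // Current form: estimate += pSuccess * tSuccess (plus failure terms).
    static double current(double pSuccess, double tSuccess, double tFailurePenalty) {
        return pSuccess * tSuccess + (1 - pSuccess) * tFailurePenalty;
    }

    // Proposed form: estimate += tSuccess (plus the same failure terms).
    static double proposed(double pSuccess, double tSuccess, double tFailurePenalty) {
        return tSuccess + (1 - pSuccess) * tFailurePenalty;
    }

    public static void main(String[] args) {
        // With pSuccess = 0.1, the current form shrinks an unreliable
        // node's success time to a tenth of its real value.
        System.out.println(current(0.1, 10.0, 30.0));   // 28.0
        System.out.println(proposed(0.1, 10.0, 30.0));  // 37.0
    }
}
```

The disagreement is whether pSuccess * tSuccess is a legitimate expectation term or an accidental discount for unreliable nodes.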
Toad wrote:
On Thu, Sep 25, 2003 at 12:17:15PM +0100, Jonathan Howard wrote:
What is the reason for decreasing hopsToLive when a request is Pending?
Probably to prevent requests from retrying infinitely - what was the
context?
I see it's stopping that now. I spotted it by seeing each successive node
getting
Here is what I see happening with idle connections;
The write is succeeding in WSL.
The channel is registered with RSL but nothing gets received.
The attempt gets stopped after ~18s and the next route tried.
Is there any reason for not sending an acknowledgment or overload message?
(other than being slow or
I have just read this:
"a selector can have only 63 channels registered"
I don't know if it's still valid.
Could a Freenet node reach this limit?
Source;
http://www.javaworld.com/javaworld/jw-09-2001/jw-0907-merlin_p.html
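If the limit still applies, the usual workaround is to spread registrations over a pool of Selectors rather than one. A minimal sketch, assuming the 63-channel cap and using hypothetical names (not RSL/WSL code):

```java
import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

// Hypothetical workaround sketch: if one Selector can only hold 63
// channels (as reported for some JDK 1.4 Windows builds), register
// each new channel on the first Selector with spare capacity,
// opening a new Selector when all are full. Illustrative only.
public class SelectorPool {
    private static final int MAX_PER_SELECTOR = 63; // assumed platform cap
    private final List<Selector> selectors = new ArrayList<>();

    // Register 'ch' (already non-blocking) for 'ops'; returns the
    // Selector it landed on so a caller can select() on it.
    Selector selectorFor(SelectableChannel ch, int ops) throws IOException {
        for (Selector s : selectors) {
            if (s.keys().size() < MAX_PER_SELECTOR) {
                ch.register(s, ops);
                return s;
            }
        }
        Selector s = Selector.open();
        selectors.add(s);
        ch.register(s, ops);
        return s;
    }

    int poolSize() {
        return selectors.size();
    }
}
```

The cost is that each Selector then needs its own select() loop (or thread), which is exactly the kind of complexity RSL/WSL would have to absorb.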
I'm new to looking into the Freenet code and haven't got the full
picture of the workflow, but I know it isn't pleasant.
The good news is I'm running with Windows ME MaxConnections set to 256
and Freenet's max connections up to 128 without any additional bugs.
The squeamish should close their eyes