On Wed, Dec 29, 2004 at 10:25:22AM -0800, [EMAIL PROTECTED] wrote:
> 
> My node often overloads with too many transfers.  You will see that I 
> have 467 active transfers transmitting.

Woah! What's your outgoing bandwidth? This could perhaps be an
accounting bug... Were you using the node at the time e.g. for Frost, or
an insert? It's possible that transfers are leaked during inserts...
> 
> 
> **Connections [Switch to peers mode]**
> 
> Wed Dec 29 10:03:30 PST 2004
> [More details]
> Connections open (Inbound/Outbound/Limit)     70 (12/58/100)
> Transfers active (Transmitting/Receiving)     467 (462/5)
> Data waiting to be transferred        47 KiB
> Total amount of data transferred      2,002 MiB
> 
> 
> **Environment**
> Architecture and Operating System     
>       
> Architecture  i386
> Available processors  1
> Operating System      Linux
> OS Version    2.6.5-7.111.5-default
>       Java Virtual Machine    
> JVM Vendor    Sun Microsystems Inc.
> JVM Name      Java HotSpot(TM) Server VM
> JVM Version   1.5.0-b64
>       Memory Allocation       
> Maximum memory the JVM will allocate  259,264 KiB
> Memory currently allocated by the JVM 259,264 KiB
> Memory in use 221,159,704 Bytes

That's a lot of memory... which suggests they may be real transfers.

> Estimated memory used by logger       None
> Unused allocated memory       44,326,632 Bytes
>       Data Store      
> Maximum size  117,187,500 KiB
> Used space    24,037,700 KiB
> Free space    93,149,800 KiB
> Percent used  20
> Total keys    91753
> Space used by temp files      966,116 KiB
> Maximum space for temp files  40,000,001,192 Bytes
> 
> 
> 
> **General Information**
> Version Information   
>       
> Node Version  0.5
> Protocol Version      STABLE-1.51
> Build Number  5100
> CVS Revision  1.90.2.50.2.128
>       Uptime  
>        1 day 3 hours 57 minutes       
>       Load    
> Current routingTime   0ms
> Current messageSendTimeRequest        0ms
> Pooled threads running jobs   14 (7%)
> Pooled threads which are idle 15
> Current upstream bandwidth usage      12783 bytes/second (104%)
> Reason for QueryRejecting requests:   Estimated load (130%) > overloadHigh (125%)
>       It's normal for the node to sometimes reject connections or requests 
> for a limited period. If you're seeing rejections continuously the node 
> is overloaded or something is wrong (i.e. a bug).
> Current estimated load for QueryReject purposes       130%
> Current estimated load for rate limiting      133.3% [QueryRejecting all 
> incoming requests!]
> Reason for load:      Load due to thread limit = 7%
> Load due to routingTime = 10% = 100ms / 1000ms <= overloadLow (100%)
> Load due to messageSendTimeRequest = 20% = 100ms / 500ms <= overloadLow 
> (100%)
> Load due to output bandwidth limiting = 129.8% because 
> outputBytes(765706) > limit (589824.009 ) = outLimitCutoff (0.8) * 
> outputBandwidthLimit (12288) * 60

This also suggests they may be real transfers. But it ought to settle
down eventually, unless there is an external stress causing this - for
example, unusual local load patterns.
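For what it's worth, the 129.8% figure above is just the ratio of bytes actually
sent in the last minute to 80% of the configured per-minute output limit. A
minimal sketch of that arithmetic (variable names are mine, not necessarily what
the node uses internally):

    // Sketch of the "load due to output bandwidth limiting" figure quoted above.
    // Names and structure are illustrative only.
    public class BandwidthLoadSketch {
        public static void main(String[] args) {
            double outputBandwidthLimit = 12288;  // configured limit, bytes/second
            double outLimitCutoff = 0.8;          // soft cutoff fraction
            double outputBytes = 765706;          // bytes actually sent in the last minute
            double limit = outLimitCutoff * outputBandwidthLimit * 60; // ~589824 bytes/minute
            double load = outputBytes / limit;    // ~1.298, reported as 129.8%
            System.out.printf("bandwidth load = %.1f%%%n", load * 100);
        }
    }

So anything over 100% there just means the node pushed more than 0.8 * 12288
bytes/sec averaged over the last minute, which is why it is QueryRejecting.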

> Load due to CPU usage = 133.3% = 100% / 0.75

Hey, somebody actually uses doCPULoad! :)
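For reference, that figure is just measured CPU usage divided by a 0.75 target:
100% / 0.75 = 133.3% here, and 77% / 0.75 = 102.7% in the snapshot further down,
so with doCPULoad enabled anything above 75% CPU counts as overload.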

> Load due to expected inbound transfers: 1.8% because: 4727.252702849095 
> req/hr * 0.002778508224000003 (pTransfer) * 268259.0 bytes = 3523504 
> bytes/hr expected from current requests, but maxInputBytes/minute = 
> 2949120 (output limit assumed smaller than input capacity) * 60 * 1.1 = 
> 194641920 bytes/hr target
> Load due to expected outbound transfers: 0.1% because: 69.17398158133034 
> req/hr * 0.0010(999 0s, 1 1s, 1000 total) (pTransfer) * 268259.0 bytes = 
> 18556 bytes/hr expected from current requests, but maxOutputBytes/minute 
> = 516095 * 60 = 30965759 bytes/hr target
> Estimated external pSearchFailed (based only on QueryRejections due to 
> load):        0.729
> Current estimated requests per hour:  4417.545876413522
> Current global quota (requests per hour):     2235.1086113701913
> Current global quota limit from bandwidth (requests per hour): 
> 164903.3210442147
> Highest seen bytes downloaded in one minute:  1595458
> Current outgoing request rate 4727.252702849095
> Current probability of a request succeeding by routing        0.7%
> Current probability of an inbound request causing a transfer outwards 0.1%
> Current target (best case single node) probability of a request 
> succeeding    3.7%
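In case the expected-transfer lines above look cryptic: the node appears to
multiply its current outgoing request rate by the estimated probability that a
request turns into a transfer (pTransfer) and by an average transfer size, then
compares the result to a bytes-per-hour target derived from the bandwidth
limits. A rough sketch of that arithmetic, using the numbers quoted above
(names are mine, for illustration only):

    // Reproduces the "expected inbound/outbound transfers" load arithmetic shown above.
    // Method and variable names are illustrative, not the node's actual code.
    public class TransferLoadSketch {
        static double transferLoad(double requestsPerHour, double pTransfer,
                                   double bytesPerTransfer, double targetBytesPerHour) {
            double expectedBytesPerHour = requestsPerHour * pTransfer * bytesPerTransfer;
            return expectedBytesPerHour / targetBytesPerHour;
        }
        public static void main(String[] args) {
            // Inbound target: maxInputBytes/minute * 60 * 1.1
            double inbound = transferLoad(4727.25, 0.0027785, 268259.0, 2949120.0 * 60 * 1.1);
            // Outbound target: maxOutputBytes/minute * 60
            double outbound = transferLoad(69.17, 0.0010, 268259.0, 516095.0 * 60);
            System.out.printf("inbound ~ %.1f%%, outbound ~ %.1f%%%n",
                              inbound * 100, outbound * 100);
        }
    }

Both come out tiny (about 1.8% and 0.1%) because pTransfer is so low; it's the
raw output bandwidth figure, not expected transfers, that's driving the 130% load.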
> 
> 
> 
> **Node Status Info**
> Uptime:   1 day,   4 hours,   0 minutes
> Current routingTime: 0ms.
> Pooled threads running jobs: 11     (5.5%)   [QueryRejecting all 
> incoming requests!]
> Pooled threads which are idle: 18
> It's normal for the node to sometimes reject connections or requests for 
> a limited period. If you're seeing rejections continuously the node is 
> overloaded or something is wrong (i.e. a bug). Current estimated load 
> for rate limiting: 127.4%.
> Load due to thread limit = 5.5%
> Load due to routingTime = 10% = 100ms / 1000ms <= overloadLow (100%)
> Load due to messageSendTimeRequest = 20% = 100ms / 500ms <= overloadLow 
> (100%)
> Load due to output bandwidth limiting = 127.4% because 
> outputBytes(751185) > limit (589824.009 ) = outLimitCutoff (0.8) * 
> outputBandwidthLimit (12288) * 60
> Load due to CPU usage = 102.7% = 77% / 0.75
> Load due to expected inbound transfers: 1.8% because: 4727.252702849095 
> req/hr * 0.002778508224000003 (pTransfer) * 268290.0 bytes = 3523911 
> bytes/hr expected from current requests, but maxInputBytes/minute = 
> 2949120 (output limit assumed smaller than input capacity) * 60 * 1.1 = 
> 194641920 bytes/hr target
> Load due to expected outbound transfers: 0.1% because: 63.31118607638463 
> req/hr * 0.0010(999 0s, 1 1s, 1000 total) (pTransfer) * 268290.0 bytes = 
> 16985 bytes/hr expected from current requests, but maxOutputBytes/minute 
> = 516095 * 60 = 30965759 bytes/hr target
> Current estimated load for QueryRejecting: 127.6%.
> 
> 
> **Routing Table**
> Number of known routing nodes 194
> Number of node references     188
> Number of newbie nodes        23
> Number of uncontactable nodes 6
> Contacted and attempted to contact node references    188
> Contacted node references     74

Not bad.

> Contacted newbie node references      12
> Connections with Successful Transfers 65
> Backed off nodes      3

Definitely not bad. Is the above behaviour consistent over the long
term? Does the number of transfers increase forever?

> Connection Attempts   1741
> Successful Connections        142
> Lowest max estimated search time      0ms
> Lowest max estimated DNF time 0ms
> Lowest global search time estimate    38526.0ms
> Highest global search time estimate   89478.0ms

Not too bad... a bit higher than my node's.

> Lowest global transfer rate estimate  550 bytes/second
> Highest global transfer rate estimate 1,119 bytes/second

That's a bit low... which is interesting; I'd expect upload to be slow, not
download, although the whole node might be slow.

> Lowest one hop probability of DNF     0.979
> Highest one hop probability of DNF    0.99

Tolerable... it just reflects the poor state of the network.

> Lowest one hop probability of transfer failure        0.029
> Highest one hop probability of transfer failure       0.055

Also tolerable.

> Single hop probability of QueryRejected       0.196
> Single hop average time for QueryRejected     4742.183748517353
> Single hop probability of early timeout       0.691

Rather high, probably because the whole node has been slowed down... my
node has 0.1.

> Single hop average time for early timeout     101483.56305479878
> Single hop probability of search timeout      0.068

Curious that it's so low.

> Single hop average time for search timeout    155086.73389271458
> Single hop overall probability of DNF given no timeout        0.988
> Single hop overall probability of transfer failure given transfer     0.05
> Probability of transfer given incoming request        0.0010
> Total number of requests that didn't QR       648856
> Total number of requests that timed out before a QR or Accepted     63125
> Implementation        freenet.node.rt.NGRoutingTable
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
