Re: [freenet-support] Active transfers too high

2005-01-05 Thread Toad
On Wed, Dec 29, 2004 at 10:25:22AM -0800, [EMAIL PROTECTED] wrote:
 
 My node often overloads with too many transfers.  You will see that I 
 have 467 active transfers transmitting.

Woah! What's your outgoing bandwidth? This could perhaps be an
accounting bug... Were you using the node at the time e.g. for Frost, or
an insert? It's possible that transfers are leaked during inserts...
 
 
 **Connections**
 
 Wed Dec 29 10:03:30 PST 2004
 Connections open (Inbound/Outbound/Limit) 70 (12/58/100)
 Transfers active (Transmitting/Receiving) 467 (462/5)
 Data waiting to be transferred  47 KiB
 Total amount of data transferred  2,002 MiB
 
 
 **Environment**
 Architecture and Operating System 
   
 Architecture  i386
 Available processors  1
 Operating System  Linux
 OS Version  2.6.5-7.111.5-default
   Java Virtual Machine
 JVM Vendor  Sun Microsystems Inc.
 JVM Name  Java HotSpot(TM) Server VM
 JVM Version   1.5.0-b64
   Memory Allocation   
 Maximum memory the JVM will allocate  259,264 KiB
 Memory currently allocated by the JVM 259,264 KiB
 Memory in use 221,159,704 Bytes

That's a lot of memory... that suggests they may be real transfers.

 Estimated memory used by logger   None
 Unused allocated memory   44,326,632 Bytes
   Data Store  
 Maximum size  117,187,500 KiB
 Used space  24,037,700 KiB
 Free space  93,149,800 KiB
 Percent used  20%
 Total keys  91,753
 Space used by temp files  966,116 KiB
 Maximum space for temp files  40,000,001,192 Bytes
 
 
 
 **General Information**
 Version Information   
   
 Node Version  0.5
 Protocol Version  STABLE-1.51
 Build Number  5100
 CVS Revision  1.90.2.50.2.128
   Uptime  
1 day 3 hours 57 minutes   
   Load
 Current routingTime   0ms
 Current messageSendTimeRequest0ms
 Pooled threads running jobs   14 (7%)
 Pooled threads which are idle 15
 Current upstream bandwidth usage  12783 bytes/second (104%)
 Reason for QueryRejecting requests:   Estimated load (130%) > overloadHigh 
 (125%)
   It's normal for the node to sometimes reject connections or requests 
 for a limited period. If you're seeing rejections continuously the node 
 is overloaded or something is wrong (i.e. a bug).
 Current estimated load for QueryReject purposes   130%
 Current estimated load for rate limiting  133.3% [QueryRejecting all 
 incoming requests!]
 Reason for load:  Load due to thread limit = 7%
 Load due to routingTime = 10% = 100ms / 1000ms = overloadLow (100%)
 Load due to messageSendTimeRequest = 20% = 100ms / 500ms = overloadLow 
 (100%)
 Load due to output bandwidth limiting = 129.8% because 
 outputBytes (765706) > limit (589824.0) = outLimitCutoff (0.8) * 
 outputBandwidthLimit (12288) * 60

This also suggests they may be real transfers. But it ought to settle
down eventually, unless there is an external stress causing this - for
example, unusual local load patterns.
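
For anyone wondering where the 129.8% comes from: it's just bytes sent
in the last minute divided by the throttle cutoff, and your upstream is
indeed running at 12783 bytes/sec against a 12288 limit (the 104%
above). A rough Java sketch of the arithmetic (variable names are mine,
not the node's actual code):

    public class BandwidthLoadSketch {
        public static void main(String[] args) {
            double outputBandwidthLimit = 12288; // configured limit, bytes/sec
            double outLimitCutoff = 0.8;         // throttling starts at 80% of limit
            double outputBytes = 765706;         // bytes sent in the last minute

            // Cutoff per minute: 0.8 * 12288 bytes/sec * 60 sec = 589824 bytes
            double limit = outLimitCutoff * outputBandwidthLimit * 60;

            // 765706 / 589824 = 1.298, reported as 129.8%
            System.out.printf("bandwidth load = %.1f%%%n", 100 * outputBytes / limit);
        }
    }

So the node is about 30% past the throttle cutoff: high, but not a
runaway.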

 Load due to CPU usage = 133.3% = 100% / 0.75

Hey, somebody actually uses doCPULoad! :)
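
The 133.3% is just measured CPU usage over the 0.75 target, and note it
matches the overall rate-limiting figure above. In both snapshots in
this mail the rate-limiting load equals the largest single term, so it
looks like the node takes the maximum (an inference from this output,
not a claim about the code). Minimal sketch:

    public class CpuLoadSketch {
        public static void main(String[] args) {
            double cpuUsage = 1.00;   // 100% measured CPU usage
            double cpuTarget = 0.75;  // doCPULoad target fraction
            // 1.00 / 0.75 = 1.333, reported as 133.3%
            System.out.printf("CPU load = %.1f%%%n", 100 * cpuUsage / cpuTarget);
        }
    }

With a single i386 CPU pegged at 100%, that term will dominate, which
again fits 467 real transfers.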

 Load due to expected inbound transfers: 1.8% because: 4727.252702849095 
 req/hr * 0.00277850822403 (pTransfer) * 268259.0 bytes = 3523504 
 bytes/hr expected from current requests, but maxInputBytes/minute = 
 2949120 (output limit assumed smaller than input capacity) * 60 * 1.1 = 
 194641920 bytes/hr target
 Load due to expected outbound transfers: 0.1% because: 69.17398158133034 
 req/hr * 0.0010 (999 0s, 1 1s, 1000 total) (pTransfer) * 268259.0 bytes = 
 18556 bytes/hr expected from current requests, but maxOutputBytes/minute 
 = 516095 * 60 = 30965759 bytes/hr target
 Estimated external pSearchFailed (based only on QueryRejections due to 
 load): 0.729
 Current estimated requests per hour:  4417.545876413522
 Current global quota (requests per hour): 2235.1086113701913
 Current global quota limit from bandwidth (requests per hour): 
 164903.3210442147
 Highest seen bytes downloaded in one minute:  1595458
 Current outgoing request rate 4727.252702849095
 Current probability of a request succeeding by routing  0.7%
 Current probability of an inbound request causing a transfer outwards  0.1%
 Current target (best case single node) probability of a request 
 succeeding  3.7%
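
The expected-transfer terms a few lines up are worth decoding too:
request rate, times the probability a request actually causes a
transfer, times average transfer size, compared against an hourly
target derived from the bandwidth limits. A sketch of the inbound case
using the exact figures printed above (names are illustrative):

    public class TransferLoadSketch {
        public static void main(String[] args) {
            double reqPerHour = 4727.252702849095; // current request rate
            double pTransfer = 0.00277850822403;   // P(request causes a transfer)
            double bytesPerTransfer = 268259.0;    // average transfer size

            // 4727.25 req/hr * 0.00278 * 268259 bytes = ~3,523,504 bytes/hr
            double expected = reqPerHour * pTransfer * bytesPerTransfer;

            // maxInputBytes/minute * 60, with the 1.1 slack factor shown above
            double maxInputBytesPerMinute = 2949120;
            double target = maxInputBytesPerMinute * 60 * 1.1; // 194,641,920

            // 3523504 / 194641920 = ~1.8%
            System.out.printf("inbound transfer load = %.1f%%%n",
                    100 * expected / target);
        }
    }

The outbound term (0.1%) is the same shape with the outbound rate and
maxOutputBytes/minute. Both are tiny next to the bandwidth and CPU
terms, so the transfer *count* itself isn't what's pushing the load
over 100%.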
 
 
 
 **Node Status Info**
 Uptime: 1 day, 4 hours, 0 minutes
 Current routingTime: 0ms.
 Pooled threads running jobs: 11 (5.5%)   [QueryRejecting all 
 incoming requests!]
 Pooled threads which are idle: 18
 It's normal for the node to sometimes reject connections or requests for 
 a limited period. If you're seeing rejections continuously the node is 
 overloaded or something is wrong (i.e. a bug). Current estimated load 
 for rate limiting: 127.4%.
 Load due to thread limit = 5.5%
 Load due to routingTime = 10% = 100ms / 1000ms = overloadLow (100%)
 Load due to messageSendTimeRequest = 20% = 100ms / 500ms = overloadLow 
 (100%)
 Load due to output bandwidth limiting = 127.4% because 
 outputBytes (751185) > limit (589824.0) = outLimitCutoff (0.8) * 
 outputBandwidthLimit (12288) * 60
 Load due to CPU usage = 102.7% = 77% / 0.75
 Load due to expected inbound transfers: 1.8% because: 4727.252702849095 
 req/hr * 0.00277850822403 (pTransfer) * 268290.0 bytes = 3523911 
 bytes/hr expected from current requests, but maxInputBytes/minute = 
 2949120 (output limit assumed smaller than input capacity) * 60 * 1.1 = 
 194641920 bytes/hr target
 Load due to expected outbound transfers: 0.1% because: 63.31118607638463 
 req/hr * 0.0010 (999 0s, 1 1s, 1000 total) (pTransfer) * 268290.0 bytes = 
 16985 bytes/hr expected from current requests, but
