[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539813#comment-13539813
 ] 

jaekyung oh edited comment on TS-1006 at 12/27/12 3:20 AM:
---

Your patches look fine, but regretfully they don't look like a fundamental solution.

The memory usage still keeps increasing every moment, and today we couldn't help but restart Traffic Server because memory usage had reached 90% of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 1~2 days to avoid a system crash that might be caused by running out of memory. It's the same situation as before your patches.

Thanks again.

  was (Author: genext):
Your patches look fine, but regretfully they don't look like a fundamental solution.

The memory usage still keeps increasing every moment, and today we couldn't help but restart Traffic Server because memory usage had reached 90% of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 1~2 days to avoid a system crash that might be caused by running out of memory. It's the same situation as before your patches.

Thanks again.
  
 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Bin Chen
 Fix For: 3.3.2

 Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch, 
 0002-Allocator-make-InkFreeList-memory-pool-configurable.patch, 
 0003-Allocator-store-InkChunkInfo-into-Chunk.patch, 
 0004-Allocator-optimize-alignment-size-to-avoid-mmap-fail.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 When we review the memory usage in production, there is something abnormal, i.e. it looks like TS takes much more memory than index data + common system waste. Here is a memory dump result, obtained by setting proxy.config.dump_mem_info_frequency to 1, from a not-so-busy forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size: 64000
 {code}
   allocated  |   in-use   | type size |        free list name
  ------------|------------|-----------|----------------------------------------
    671088640 |   37748736 |   2097152 | memory/ioBufAllocator[14]
   2248146944 | 2135949312 |   1048576 | memory/ioBufAllocator[13]
   1711276032 | 1705508864 |    524288 | memory/ioBufAllocator[12]
   1669332992 | 1667760128 |    262144 | memory/ioBufAllocator[11]
   2214592512 |     221184 |    131072 | memory/ioBufAllocator[10]
   2325741568 | 2323775488 |     65536 | memory/ioBufAllocator[9]
   2091909120 | 2089123840 |     32768 | memory/ioBufAllocator[8]
   1956642816 | 1956478976 |     16384 | memory/ioBufAllocator[7]
   2094530560 | 2094071808 |      8192 | memory/ioBufAllocator[6]
    356515840 |  355540992 |      4096 | memory/ioBufAllocator[5]
      1048576 |      14336 |      2048 | memory/ioBufAllocator[4]
       131072 |          0 |      1024 | memory/ioBufAllocator[3]
        65536 |          0 |       512 | memory/ioBufAllocator[2]
        32768 |          0 |       256 | memory/ioBufAllocator[1]
        16384 |          0 |       128 | memory/ioBufAllocator[0]
            0 |          0 |       576 | memory/ICPRequestCont_allocator
            0 |          0 |       112 | memory/ICPPeerReadContAllocator
            0 |          0 |       432 | memory/PeerReadDataAllocator
            0 |          0 |        32 | memory/MIMEFieldSDKHandle
            0 |          0 |       240 | memory/INKVConnAllocator
            0 |          0 |        96 | memory/INKContAllocator
         4096 |          0 |        32 | memory/apiHookAllocator
            0 |          0 |       288 | memory/FetchSMAllocator
            0 |          0 |        80 | memory/prefetchLockHandlerAllocator
            0 |          0 |       176 | memory/PrefetchBlasterAllocator
            0 |          0 |        80 | memory/prefetchUrlBlaster
            0 |          0 |        96 | memory/blasterUrlList
            0 |          0 |        96 | memory/prefetchUrlEntryAllocator
            0 |          0 |       128 | memory/socksProxyAllocator
            0 |          0 |       144 | memory/ObjectReloadCont
      3258368 |     576016 |       592 | memory/httpClientSessionAllocator
       825344 |     139568 |       208 | memory/httpServerSessionAllocator
     22597632 |    1284848 |      9808 | memory/httpSMAllocator
            0 |          0 |        32 | memory/CacheLookupHttpConfigAllocator
            0 |          0 |      9856 | memory/httpUpdateSMAllocator
            0 |          0 |       128 | memory/RemapPluginsAlloc
            0 |          0 |        48 | memory/CongestRequestParamAllocator
            0 |          0 |       128 | memory/CongestionDBContAllocator
      5767168 |     704512 |      2048 | memory/hdrStrHeap
     18350080 |    1153024 |

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539813#comment-13539813
 ] 

jaekyung oh commented on TS-1006:
-

Your patches look fine, but regretfully they don't look like a fundamental solution.

The memory usage still keeps increasing every moment, and today we couldn't help but restart Traffic Server because memory usage had reached 90% of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 1~2 days to avoid a system crash that might be caused by running out of memory. It's the same situation as before your patches.

Thanks again.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread bettydramit (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539840#comment-13539840
 ] 

bettydramit commented on TS-1006:
-

What would be reasonable values for these settings?


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539842#comment-13539842
 ] 

Yunkai Zhang commented on TS-1006:
--

[~bettydreamit]

Please read this comment: 
https://issues.apache.org/jira/browse/TS-1006?focusedCommentId=13539187&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13539187

If the description is not clear, please tell us.

For example, you can decrease *max_overage* to speed up the reclaiming rate (but it may harm performance if max_overage is too small; you can test it).


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539851#comment-13539851
 ] 

jaekyung oh commented on TS-1006:
-

I don't know what values would be adequate. The values were:

CONFIG proxy.config.allocator.enable_reclaim INT 1
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.20
CONFIG proxy.config.allocator.max_overage INT 100

If I adjust these values as below, is that OK?
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.50
CONFIG proxy.config.allocator.max_overage INT 10
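
Below is a rough, purely illustrative C++ sketch of how knobs like reclaim_factor and max_overage could interact, following Yunkai's comment above. It is NOT the actual InkFreeList code from the attached patches; the struct and function names are made up for illustration. The assumed idea: a smaller max_overage means a free list needs to stay "over" its in-use demand for fewer consecutive checks before memory is given back, so reclaiming kicks in sooner, while reclaim_factor controls how large a fraction of the surplus is released each time.

{code}
// Purely illustrative sketch -- NOT the actual InkFreeList reclaiming code.
// Assumption: "overage" counts consecutive checks in which a free list held
// more memory than was in use; once the count reaches max_overage, a
// reclaim_factor fraction of the surplus is released back to the OS.
#include <cstdio>

struct FreeListState {
  long long allocated;   // bytes currently held by this free list
  long long in_use;      // bytes actually in use by the proxy
  int       overage_count; // consecutive checks with unused surplus
};

// Returns how many bytes this check would try to reclaim.
long long check_reclaim(FreeListState &fl, double reclaim_factor, int max_overage) {
  long long surplus = fl.allocated - fl.in_use;
  if (surplus <= 0) {
    fl.overage_count = 0;                   // no surplus: reset the counter
    return 0;
  }
  if (++fl.overage_count < max_overage)     // not over the threshold long enough yet
    return 0;
  fl.overage_count = 0;
  return static_cast<long long>(surplus * reclaim_factor); // free part of the surplus
}

int main() {
  // Numbers taken from the memory/ioBufAllocator[13] row in the dump above.
  FreeListState fl = {2248146944LL, 2135949312LL, 0};
  for (int i = 1; i <= 10; ++i) {
    // With max_overage = 10 (instead of the default 100) the reclaim fires after
    // 10 checks; reclaim_factor = 0.5 then releases half of the idle surplus.
    long long freed = check_reclaim(fl, 0.5, 10);
    if (freed > 0)
      std::printf("check %d: would reclaim %lld bytes\n", i, freed);
  }
  return 0;
}
{code}

If the real reclaiming works along these lines, the proposed values (reclaim_factor 0.50, max_overage 10) should release idle buffers noticeably sooner than the defaults, at the cost of more frequent deallocation/allocation churn under bursty load.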


[jira] [Updated] (TS-1637) nodes as idle/dead if we have not heard from them in awhile

2012-12-26 Thread Bin Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Chen updated TS-1637:
-

Attachment: TS-1637.patch

Improve cluster start-up when we have more nodes.

 nodes as idle/dead if we have not heard from them in awhile
 ---

 Key: TS-1637
 URL: https://issues.apache.org/jira/browse/TS-1637
 Project: Traffic Server
  Issue Type: Improvement
  Components: Clustering, Management
Affects Versions: 3.2.0
Reporter: Bin Chen
Assignee: Bin Chen
 Attachments: TS-1637.patch


 In ClusterCom::sendReliableMessageReadTillClose(), if we can't connect to the peer node quickly (on timeout this can take up to 30s, the default peer_timeout), then the shared data sent by ClusterCom::sendSharedData won't go out in time. So the peer node will mark us down.
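
For readers less familiar with the cluster code, here is a generic POSIX-style sketch of the pattern the description points at (it is not the attached TS-1637.patch): performing the connect to a peer with a short, bounded timeout instead of blocking for up to the 30s default peer_timeout, so that one slow peer cannot stall the periodic sending of shared cluster data and get this node marked down.

{code}
// Generic illustration only -- not the code from TS-1637.patch.
// Connect to a peer with a bounded timeout so a slow/unreachable node cannot
// block the caller for the full 30s default peer_timeout.
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>

// Returns a connected socket fd, or -1 if the connect did not finish in time.
int connect_with_timeout(const char *ip, int port, int timeout_ms) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    return -1;
  fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);  // non-blocking connect

  sockaddr_in addr = {};
  addr.sin_family = AF_INET;
  addr.sin_port   = htons(port);
  inet_pton(AF_INET, ip, &addr.sin_addr);

  if (connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) == 0)
    return fd;                                   // connected immediately
  if (errno != EINPROGRESS) {
    close(fd);
    return -1;
  }

  fd_set wset;
  FD_ZERO(&wset);
  FD_SET(fd, &wset);
  timeval tv = {timeout_ms / 1000, (timeout_ms % 1000) * 1000};
  if (select(fd + 1, nullptr, &wset, nullptr, &tv) <= 0) {
    close(fd);                                   // timed out: fail fast, try the next peer
    return -1;
  }

  int err = 0;
  socklen_t len = sizeof(err);
  getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
  if (err != 0) {
    close(fd);
    return -1;
  }
  return fd;
}
{code}

With a bound like this, the caller can keep the sendSharedData cadence even when one peer is unreachable, which is what the description says is needed to avoid being marked down.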

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira