[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-08-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728650#comment-13728650
 ] 

ASF subversion and git services commented on TS-1006:
-

Commit 1bb12edf9cd9b0076eaf39b8665909846743ee73 in branch refs/heads/3.3.x from 
[~zwoop]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=1bb12ed ]

Merge branch 'master' into 3.3.x

* master: (295 commits)
  TS-2081 Make the WCCP addr configuration LOCAL
  TS-2093: Check bounds on plugin stat creation.
  TS-2092 Use of uninitialized member in HdrHeap.
  [TS-2052] ET_SSL thread spinning
  TS-2090 Make proxy.config.allocator.enable_reclaim default based on build 
instructions
  TS-1006: adjust some reclaimable-freelist's default configuration
  Updated newish tests to the gitignore
  Fix autoconf checks for mcheck_pedantic()
  Fix configure check for eventfd
  TS-1330: Fix logging core at checkout_write()
  add some standard extensions to ignore list
  TS-1953: remove version check from stable plugins
  TS-1953: Remove check_ts_version() from experimental plugins
  TS-1953: remove check_ts_version() from examples
  Added TS-2086.
  TS-2086 Remove a few more unused configs
  Added TS-1685
  TS-1685 Remove TS_MICRO and fellas
  Added Ts-1255
  TS-1255 Fix the types for all overridable configs. This was actually a real 
bug in the code, in that all float configurations were actually treated as 
integer, rendering them useless
  ...


 memory management, cut down memory waste ?
 --

 Key: TS-1006
 URL: https://issues.apache.org/jira/browse/TS-1006
 Project: Traffic Server
  Issue Type: Improvement
  Components: Core
Affects Versions: 3.1.1
Reporter: Zhao Yongming
Assignee: Yunkai Zhang
 Fix For: 3.3.4

 Attachments: 
 0001-TS-1006-Add-an-enable-reclaimable-freelist-option.patch, 
 0002-TS-1006-Add-a-new-wrapper-ink_atomic_decrement.patch, 
 0003-TS-1006-Introduce-a-reclaimable-InkFreeList.patch, 
 0004-TS-1006-Make-InkFreeList-memory-pool-configurable.patch, 
 Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods


 When we review memory usage in production, something looks abnormal: 
 TS takes much more memory than the index data plus common system 
 overhead. Here is a memory dump, obtained by setting 
 proxy.config.dump_mem_info_frequency
 to 1, from a not-so-busy forwarding system:
 physical memory: 32G
 RAM cache: 22G
 DISK: 6140 GB
 average_object_size 64000
 {code}
    allocated |     in-use | type size | free list name
 -------------|------------|-----------|--------------------------------------
    671088640 |   37748736 |   2097152 | memory/ioBufAllocator[14]
   2248146944 | 2135949312 |   1048576 | memory/ioBufAllocator[13]
   1711276032 | 1705508864 |    524288 | memory/ioBufAllocator[12]
   1669332992 | 1667760128 |    262144 | memory/ioBufAllocator[11]
   2214592512 |     221184 |    131072 | memory/ioBufAllocator[10]
   2325741568 | 2323775488 |     65536 | memory/ioBufAllocator[9]
   2091909120 | 2089123840 |     32768 | memory/ioBufAllocator[8]
   1956642816 | 1956478976 |     16384 | memory/ioBufAllocator[7]
   2094530560 | 2094071808 |      8192 | memory/ioBufAllocator[6]
    356515840 |  355540992 |      4096 | memory/ioBufAllocator[5]
      1048576 |      14336 |      2048 | memory/ioBufAllocator[4]
       131072 |          0 |      1024 | memory/ioBufAllocator[3]
        65536 |          0 |       512 | memory/ioBufAllocator[2]
        32768 |          0 |       256 | memory/ioBufAllocator[1]
        16384 |          0 |       128 | memory/ioBufAllocator[0]
            0 |          0 |       576 | memory/ICPRequestCont_allocator
            0 |          0 |       112 | memory/ICPPeerReadContAllocator
            0 |          0 |       432 | memory/PeerReadDataAllocator
            0 |          0 |        32 | memory/MIMEFieldSDKHandle
            0 |          0 |       240 | memory/INKVConnAllocator
            0 |          0 |        96 | memory/INKContAllocator
         4096 |          0 |        32 | memory/apiHookAllocator
            0 |          0 |       288 | memory/FetchSMAllocator
            0 |          0 |        80 | memory/prefetchLockHandlerAllocator
            0 |          0 |       176 | memory/PrefetchBlasterAllocator
            0 |          0 |        80 | memory/prefetchUrlBlaster
            0 |          0 |        96 | memory/blasterUrlList
            0 |          0 |        96 | memory/prefetchUrlEntryAllocator
            0 |          0 |       128 | memory/socksProxyAllocator
            0 |          0 |       144 | memory/ObjectReloadCont
      3258368 |     576016 |       592 | memory/httpClientSessionAllocator
       825344 |     139568 |       208 | memory/httpServerSessionAllocator
     22597632 |    1284848 |      9808 | memory/httpSMAllocator
            0 |          0 |        32 | memory/CacheLookupHttpConfigAllocator
            0 |          0 |      9856 | memory/httpUpdateSMAllocator
            0 |          0 |       128 | memory/RemapPluginsAlloc
            0 |          0 |        48 | memory/CongestRequestParamAllocator
            0 |          0 |       128 | memory/CongestionDBContAllocator
      5767168 |     704512 |      2048 | memory/hdrStrHeap
  ... (remainder of the dump truncated in the original message)
 {code}
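A dump like the one above is produced by the periodic memory tracker. A minimal sketch of enabling it, assuming the stock records.config syntax used elsewhere in this thread (the value is the dump interval, and the output lands in traffic.out):

{code}
# records.config -- have traffic_server dump freelist usage periodically
CONFIG proxy.config.dump_mem_info_frequency INT 1
{code}

Reload with traffic_line -x to apply the change without a restart.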

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-08-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13727759#comment-13727759
 ] 

ASF subversion and git services commented on TS-1006:
-

Commit cdbfe1d6d6c43241d773a32e3bbdd0ca1d9b in branch refs/heads/master 
from [~yunkai]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=cdbfe1d ]

TS-1006: adjust some reclaimable-freelist's default configuration

1) Adjust enable_reclaim to 1 as the default in the
   records.config.default.in file.

2) Adjust max_overage to 3 in the code, so that it keeps the same value
   as records.config.default.in. Most importantly, this value seems to
   work well in our production environment.

Signed-off-by: Yunkai Zhang <qiushu@taobao.com>
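For reference, the two defaults this commit changes would look roughly as follows in records.config. This is a sketch only; proxy.config.allocator.enable_reclaim appears verbatim in this thread, while the full variable name for max_overage is an assumption based on the allocator settings discussed here:

{code}
CONFIG proxy.config.allocator.enable_reclaim INT 1
# full variable name assumed from the thread's allocator settings
CONFIG proxy.config.allocator.max_overage INT 3
{code}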



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-04-02 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619550#comment-13619550
 ] 

Yunkai Zhang commented on TS-1006:
--

Do you use master for testing? Or does this issue happen on your private
branch, which was backported from master?

It has been working well for us in a production environment for more than
a month.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-04-01 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13618711#comment-13618711
 ] 

jaekyung oh commented on TS-1006:
-

Hi Yunkai Zhang,

I tried again with ./configure --enable-reclaimable-freelist

but the same thing happens...

This is the relevant part of traffic.out from when I executed traffic_line -x:


 360192 | 360192 |672 | memory/netVCAllocator
  0 |  0 |120 | 
memory/udpReadContAllocator
  0 |  0 |176 | 
memory/udpPacketAllocator
  0 |  0 |384 | memory/socksAllocator
  0 |  0 |128 | 
memory/UDPIOEventAllocator
  16256 |  16256 | 64 | memory/ioBlockAllocator
   8064 |   8064 | 48 | memory/ioDataAllocator
  32480 |  32480 |232 | memory/ioAllocator
 109512 | 109512 | 72 | memory/mutexAllocator
 318032 | 318032 | 88 | memory/eventAllocator
 268288 | 268288 |   1024 | memory/ArenaBlock
[Apr  1 18:18:35.010] Manager {0x7f25beffd700} NOTE: User has changed config 
file records.config
[Apr  1 18:18:36.678] Server {0x2b989b06b700} NOTE: cache enabled
[2b989b06b700:01][ink_queue_ext.cc:00577][F]  13.28M t:278 f:274  m:0    avg:0.0    M:4    csbase:256  csize:278  tsize:88     cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00584][-]  13.28M t:278 f:274  m:0    avg:0.0    M:4    csbase:256  csize:278  tsize:88     cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00577][F]  13.32M t:278 f:274  m:0    avg:82.2   M:4    csbase:256  csize:278  tsize:88     cbsize:24576
[2b989b06b700:01][ink_queue_ext.cc:00584][-]  13.32M t:196 f:192  m:0    avg:82.2   M:4    csbase:256  csize:278  tsize:88     cbsize:24576
NOTE: Traffic Server received Sig 11: Segmentation fault
 what happened after logging?
/usr/local/nts/bin/traffic_server - STACK TRACE:
/lib64/libpthread.so.0(+0xf2d0)[0x2b9897f1f2d0]
[0x2b989b274f10]
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR: 
[LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 11: 
Segmentation fault
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR:  (last system error 2: No 
such file or directory)
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR: [Alarms::signalAlarm] 
Server Process was reset
[Apr  1 18:18:39.800] Manager {0x7f25c7ec0720} ERROR:  (last system error 2: No 
such file or directory)
[Apr  1 18:18:40.803] Manager {0x7f25c7ec0720} NOTE: [LocalManager::startProxy] 
Launching ts process
[TrafficServer] using root directory '/usr/local/nts'
[Apr  1 18:18:40.813] Manager {0x7f25c7ec0720} NOTE: 
[LocalManager::pollMgmtProcessServer] New process connecting fd '10'
[Apr  1 18:18:40.813] Manager {0x7f25c7ec0720} NOTE: [Alarms::signalAlarm] 
Server Process born
.




[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-27 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614983#comment-13614983
 ] 

jaekyung oh commented on TS-1006:
-

Yes. With the last (final) patch I always configure with --enable-reclaimable-freelist=yes.

The same error happens on both 3.2.0 and 3.2.4.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-27 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614985#comment-13614985
 ] 

Yunkai Zhang commented on TS-1006:
--

1) The option format seems wrong:

it should be ./configure --enable-reclaimable-freelist, with no =yes.
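Spelled out, the intended build sequence is something like this (a sketch; the make and install steps are the usual autotools defaults, not taken from this thread):

{code}
$ ./configure --enable-reclaimable-freelist   # no =yes suffix
$ make
$ sudo make install
{code}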


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614856#comment-13614856
 ] 

jaekyung oh commented on TS-1006:
-

I've applied the final patch on 3.2.0 and 3.2.4. It seems fine without the
debug options.

But when I turned on CONFIG proxy.config.allocator.debug_filter INT
(0 -> 1), no debug message was printed.

So I turned on CONFIG proxy.config.dump_mem_info_frequency INT (0 -> 1),
and memory dump messages were printed.

But if I repeat the toggle three times (one cycle being: 0 -> 1,
traffic_line -x, then 1 -> 0, traffic_line -x), Traffic Server stops
showing these messages; see the sketch and crash logs below.
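One toggle cycle from the description above, written out as a sketch (assuming the traffic_line -s/-v syntax for setting a variable; editing records.config by hand and reloading works the same way):

{code}
$ traffic_line -s proxy.config.dump_mem_info_frequency -v 1
$ traffic_line -x   # reload; dump messages appear in traffic.out
$ traffic_line -s proxy.config.dump_mem_info_frequency -v 0
$ traffic_line -x   # reload again; ~3 such cycles triggered the crash below
{code}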

[Mar 26 22:16:56.218] Manager {0x7f1d6effd700} NOTE: User has changed config 
file records.config
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE:
 /lib64/libpthread.so.0(+0xf2d0)[0x2b59c7a3a2d0]
 [0x2b59cc49d3e0]
 [Mar 26 22:17:02.986] Manager {0x7f1d77a93720} ERROR: 
[LocalManager::pollMgmtProcessServer] Server Process terminated due to Sig 11: 
Segmentation fault

[Mar 26 22:30:10.508] Manager {0x7f7d57fff700} NOTE: User has changed config 
file records.config
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE:
 /lib64/libpthread.so.0(+0xfd00)[0x2b3998975d00]
 
/usr/local/bin/traffic_server(_Z12init_trackerPKc8RecDataT7RecDataPv+0x31)[0x4eb6f1]
 /usr/local/bin/traffic_server(_Z22RecExecConfigUpdateCbsv+0x79)[0x6a9c99]
 
/usr/local/bin/traffic_server(_ZN18config_update_cont14exec_callbacksEiP5Event+0x26)[0x6ab976]
 /usr/local/bin/traffic_server(_ZN7EThread7executeEv+0xb93)[0x6b58e3]
 /usr/local/bin/traffic_server[0x6b3572]
 /lib64/libpthread.so.0(+0x7f05)[0x2b399896df05]
 /lib64/libc.so.6(clone+0x6d)[0x2b399abbe10d]



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-03-26 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13614868#comment-13614868
 ] 

Yunkai Zhang commented on TS-1006:
--

Did you compile ATS with the --enable-reclaimable-freelist option?



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13572367#comment-13572367
 ] 

ASF subversion and git services commented on TS-1006:
-

Commit 616203849743e32d446f09a9089da29116982947 from Zhao Yongming
<ming@gmail.com>
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=6162038 ]

TS-1006: Remove unused InkFreeList->offset

I simplified the ink_freelist_init API by removing the offset parameter,
but forgot to remove the unused InkFreeList->offset member when rebasing
the code.

As a result, the code was using an uninitialized value, which would be a
bug. Let's clean it up completely.

Signed-off-by: Yunkai Zhang <qiushu@taobao.com>
Signed-off-by: Zhao Yongming <ming@gmail.com>



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-01-30 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567397#comment-13567397
 ] 

James Peach commented on TS-1006:
-

Some thoughts:
- we should start work on merging this, because I don't think it will get 
sufficient review outside of trunk
- unless there is a performance cost, it should always be compiled in but 
default to disabled at runtime
- when merging, break it down into smaller, uncontroversial pieces; for 
example add the configuration options first, then some infrastructure, then 
more functional pieces; try to avoid the unnecessary whitespace and general fix 
changes
- do we really need another linked list?
- it would be great to have some unit tests

:)


Re: [jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-01-30 Thread Yunkai Zhang
On Thu, Jan 31, 2013 at 2:17 PM, James Peach (JIRA) <j...@apache.org> wrote:


 [
 https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567397#comment-13567397]

 James Peach commented on TS-1006:
 -

 Some thoughts:
 - we should start work on merging this, because I don't think it will
 get sufficient review outside of trunk
 - unless there is a performance cost, it should always be compiled in
 but default to disabled at runtime


Agree.


 - when merging, break it down into smaller, uncontroversial pieces;
 for example add the configuration options first, then some infrastructure,
 then more functional pieces; try to avoid the unnecessary whitespace and
 general fix changes


Ok, I can split them into more pieces.


 - do we really need another linked list?


When I developed this patchset, I was not so familiar with the TS library.

Now I have found that there is a List.h file containing a templated
single-link/double-link list library.

I'll try to use the templated list in this patchset, although I prefer
C-style linked lists :D.


 - it would be great to have some unit tests


Agreed, but I'm busy with other things now. Unit tests are not so urgent
for us; maybe I'll write some in the future.

I'll post a new version later, incorporating your advice.

Thank you!



 :)


-- 
Yunkai Zhang
Work at Taobao


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2013-01-16 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555899#comment-13555899
 ] 

Zhao Yongming commented on TS-1006:
---

ram_cache.algorithm: 0, that is CLFUS. Maybe CLFUS works well with our
patches? And 72% is about 11G, which sounds very good to me.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541562#comment-13541562
 ] 

jaekyung oh commented on TS-1006:
-

Happy New Year.

After a week of monitoring, your last patch has shown itself effective.
For the first couple of days memory usage kept increasing, but since then
it has stayed between a minimum of 72% and a maximum of 76%.

Even though I haven't applied the 6th patch, Traffic Server is stable now.
Thank you.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541586#comment-13541586
 ] 

Yunkai Zhang commented on TS-1006:
--

[~genext]

That is the first gift for me in 2013 :D.

Can you post your configuration here? Maybe it will help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?
{code}

Happy New Year.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541595#comment-13541595
 ] 

jaekyung oh commented on TS-1006:
-

Sure:

physical mem of OS: 16G
ram_cache.size: 80 (8G)
reclaim_factor: 0.30
max_overage: 3

ram_cache.algorithm: 0

Thanks.
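Mapped onto records.config, that configuration would look roughly like the following. This is a sketch only: the thread reports short names, so the full variable names for reclaim_factor and max_overage are assumptions, and the ram_cache.size value is spelled out in bytes from the reported "80 (8G)":

{code}
CONFIG proxy.config.cache.ram_cache.size INT 8589934592   # 8G of the 16G box
CONFIG proxy.config.cache.ram_cache.algorithm INT 0       # 0 = CLFUS
# allocator names assumed from the settings discussed in this thread
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.30
CONFIG proxy.config.allocator.max_overage INT 3
{code}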


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-29 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540843#comment-13540843
 ] 

jaekyung oh commented on TS-1006:
-

Got it. I'll discuss ram_cache.size with our ops team. Thank you for your
prompt and easily understandable explanations.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-28 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13540739#comment-13540739
 ] 

jaekyung oh commented on TS-1006:
-

Hi Yunkai Zhang.

I'm afraid there is a small memory leak, because the memory usage tends to 
keep increasing. Traffic Server now uses 8.7 GB; it may reach 9 GB tomorrow.

We set ram_cache.size to 80 and ram_cache_cutoff to 60.
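
(These knobs correspond to the records.config entries 
proxy.config.cache.ram_cache.size and proxy.config.cache.ram_cache_cutoff, 
both byte counts; a hypothetical explicit setting, with example values only, 
would look like:

CONFIG proxy.config.cache.ram_cache.size INT 8589934592
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
)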

Part of yesterday's debug logs:

[2aac3c888700:40][ink_queue.cc:00616][F] 8599.70M t:12861 f:71 m:71 avg:70.9 M:12790 csbase:64 csize:64 tsize:4096 cbsize:266240
[2aac3c888700:40][ink_queue.cc:00623][-] 8599.70M t:12791 f:1 m:71 avg:70.9 M:12790 csbase:64 csize:64 tsize:4096 cbsize:266240
[2aac3c888700:41][ink_queue.cc:00631][M] 8599.70M t:15469 f:0 m:1 avg:6.3 M:15469 csbase:32 csize:32 tsize:8192 cbsize:266240
[2aac3c888700:41][ink_queue.cc:00634][+] 8599.70M t:15501 f:31 m:1 avg:6.3 M:15469 csbase:32 csize:32 tsize:8192 cbsize:266240
[2aac3c888700:01][ink_queue.cc:00631][M] 8599.70M t:117925 f:0 m:12 avg:45.7 M:117925 csbase:256 csize:278 tsize:88 cbsize:24576
[2aac3c888700:01][ink_queue.cc:00634][+] 8599.70M t:118203 f:277 m:12 avg:45.7 M:117925 csbase:256 csize:278 tsize:88 cbsize:24576
[2aac3c888700:40][ink_queue.cc:00631][M] 8599.70M t:12791 f:0 m:1 avg:9.1 M:12791 csbase:64 csize:64 tsize:4096 cbsize:266240
[2aac3c888700:40][ink_queue.cc:00634][+] 8599.70M t:12855 f:63 m:1 avg:9.1 M:12791 csbase:64 csize:64 tsize:4096 cbsize:266240
[2aac3c888700:42][ink_queue.cc:00616][F] 8599.70M t:22873 f:18 m:16 avg:16.1 M:22855 csbase:32 csize:32 tsize:16384 cbsize:528384
[2aac3c888700:42][ink_queue.cc:00623][-] 8599.70M t:22857 f:2 m:16 avg:16.1 M:22855 csbase:32 csize:32 tsize:16384 cbsize:528384
[2aac3c888700:43][ink_queue.cc:00616][F] 8599.70M t:4474 f:37 m:26 avg:31.2 M:4437 csbase:32 csize:31 tsize:32768 cbsize:1019904
[2aac3c888700:43][ink_queue.cc:00623][-] 8599.70M t:4443 f:6 m:26 avg:31.2 M:4437 csbase:32 csize:31 tsize:32768 cbsize:1019904
[2aac3c888700:47][ink_queue.cc:00616][F] 8599.70M t:264 f:2 m:1 avg:1.3 M:262 csbase:32 csize:1 tsize:524288 cbsize:528384
[2aac3c888700:47][ink_queue.cc:00623][-] 8599.19M t:263 f:1 m:1 avg:1.3 M:262 csbase:32 csize:1 tsize:524288 cbsize:528384
[2aac3c888700:46][ink_queue.cc:00631][M] 8599.19M t:733 f:0 m:0 avg:1.1 M:733 csbase:32 csize:3 tsize:262144 cbsize:790528
[2aac3c888700:46][ink_queue.cc:00634][+] 8599.19M t:736 f:2 m:0 avg:1.1 M:733 csbase:32 csize:3 tsize:262144 cbsize:790528
[2aac3c888700:27][ink_queue.cc:00616][F] 8599.19M t:157 f:157 m:152 avg:150.7 M:0 csbase:128 csize:129 tsize:2048 cbsize:266240
[2aac3c888700:27][ink_queue.cc:00623][-] 8598.94M t:7 f:7 m:152 avg:150.7 M:0 csbase:128 csize:129 tsize:2048 cbsize:266240
[2aac3c888700:41][ink_queue.cc:00616][F] 8598.94M t:15501 f:30 m:30 avg:29.8 M:15471 csbase:32 csize:32 tsize:8192 cbsize:266240

The latest debug logs:

[2aac3c989700:44][ink_queue.cc:00616][F] 8710.25M t:1461 f:2 m:1 avg:1.3 M:1459 csbase:32 csize:15 tsize:65536 cbsize:987136
[2aac3c989700:44][ink_queue.cc:00623][-] 8710.25M t:1460 f:1 m:1 avg:1.3 M:1459 csbase:32 csize:15 tsize:65536 cbsize:987136
[2aac3c989700:45][ink_queue.cc:00616][F] 8710.25M t:1536 f:2 m:1 avg:0.8 M:1534 csbase:32 csize:7 tsize:131072 cbsize:921600
[2aac3c989700:45][ink_queue.cc:00623][-] 8710.25M t:1536 f:2 m:1 avg:0.8 M:1534 csbase:32 csize:7 tsize:131072 cbsize:921600
[2aac3c989700:47][ink_queue.cc:00616][F] 8710.25M t:93 f:3 m:2 avg:1.9 M:90 csbase:32 csize:1 tsize:524288 cbsize:528384
[2aac3c989700:47][ink_queue.cc:00623][-] 8709.75M t:92 f:2 m:2 avg:1.9 M:90 csbase:32 csize:1 tsize:524288 cbsize:528384
[2aac3c989700:41][ink_queue.cc:00631][M] 8709.75M t:18766 f:0 m:1 avg:0.3 M:18766 csbase:32 csize:32 tsize:8192 cbsize:266240
[2aac3c989700:41][ink_queue.cc:00634][+] 8710.00M t:18798 f:31 m:1 avg:0.3 M:18766 csbase:32 csize:32 tsize:8192 cbsize:266240
[2aac3c686700:42][ink_queue.cc:00616][F] 8710.00M t:26868 f:32 m:32 avg:30.6 M:26836 csbase:32 csize:32 tsize:16384 cbsize:528384
[2aac3c686700:42][ink_queue.cc:00623][-] 8710.00M t:26838 f:2 m:32 avg:30.6 M:26836 csbase:32 csize:32 tsize:16384 cbsize:528384
[2aac3c686700:01][ink_queue.cc:00631][M] 8710.00M t:141808 f:0 m:2 avg:6.2 M:141808 csbase:256 csize:278 tsize:88 cbsize:24576
[2aac3c686700:01][ink_queue.cc:00634][+] 8710.02M t:142086 f:277 m:2 avg:6.2 M:141808 csbase:256 csize:278 tsize:88 cbsize:24576

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-27 Thread bettydramit (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539988#comment-13539988
 ] 

bettydramit commented on TS-1006:
-

Patch error with 0005-xxx:
+ /usr/bin/patch -s -p1 --fuzz=0
1 out of 7 hunks FAILED -- saving rejects to file lib/ts/ink_queue.cc.rej


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-27 Thread bettydramit (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13540044#comment-13540044
 ] 

bettydramit commented on TS-1006:
-

Hi Yunkai Zhang,
I get the same error. Does the 0005 patch depend on the others (0001-0004)?


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-27 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13540248#comment-13540248
 ] 

jaekyung oh commented on TS-1006:
-

Hi Yunkai Zhang!

It's great!! It's been only one day since I applied your last patch, and it 
looks very promising.

At first, the memory usage kept increasing until it reached 8G.

Since then, Traffic Server has kept memory usage around 8G for 12 hours. 
That's what we want! Thank you so much. Great job!!


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539813#comment-13539813
 ] 

jaekyung oh commented on TS-1006:
-

Your patches look fine, but regretfully they don't look like a fundamental 
solution.

The memory usage still keeps increasing every moment, and today we finally had 
no choice but to restart Traffic Server because memory usage had reached 90% 
of 16G.

In spite of your new patches, it seems we have to restart Traffic Server every 
1~2 days to avoid a system crash caused by running out of memory. It's the 
same situation as before your patches.

Thank you again.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread bettydramit (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539840#comment-13539840
 ] 

bettydramit commented on TS-1006:
-

What would be reasonable settings for these values?


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539842#comment-13539842
 ] 

Yunkai Zhang commented on TS-1006:
--

[~bettydreamit]

Please read this comment: 
https://issues.apache.org/jira/browse/TS-1006?focusedCommentId=13539187page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13539187

If the description is not clear, please tell us.

For example, you can decrease *max_overage* to speed up the reclaiming rate 
(but it may harm performance if max_overage is too small; you can test it).
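
For instance, a slightly more aggressive reclaim setup in records.config might 
look like this (illustrative values only, to be validated under real traffic):

CONFIG proxy.config.allocator.enable_reclaim INT 1
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.30
CONFIG proxy.config.allocator.max_overage INT 20

A smaller max_overage means fewer consecutive over-average observations are 
needed before chunks are returned, so idle memory goes back to the OS sooner, 
at the possible cost of re-allocating chunks more often.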


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-26 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539851#comment-13539851
 ] 

jaekyung oh commented on TS-1006:
-

I don't know what values would be adequate. The current values are:

CONFIG proxy.config.allocator.enable_reclaim INT 1
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.20
CONFIG proxy.config.allocator.max_overage INT 100

If I adjust these values as below, is that OK?

CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.50
CONFIG proxy.config.allocator.max_overage INT 10


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-25 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539466#comment-13539466
 ] 

jaekyung oh commented on TS-1006:
-

I applied your new patches this morning, and there has been no problem yet. It 
passed 4G, where the error happened last time. It looks fine now; I'll let you 
know if anything else happens.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-25 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539473#comment-13539473
 ] 

Yunkai Zhang commented on TS-1006:
--

[~genext]

Ok, thanks for your feedback :D


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-24 Thread jaekyung oh (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539224#comment-13539224
 ] 

jaekyung oh commented on TS-1006:
-

I've tried again in accordance with your guide.

[2ba8ae8cc700:43][ink_queue.cc:00601][-] all:4011MB t:1024 f:8 m:40 avg:40.0 malloc:1016 csize:32 tsize:32768 cbsize:1052672
[2ba8adbb7e20:40][ink_queue.cc:00595][M] all:4029MB t:767 f:87 m:86 avg:84.8 malloc:680 csize:64 tsize:4096 cbsize:266240
[2ba8adbb7e20:40][ink_queue.cc:00601][-] all:4029MB t:703 f:23 m:86 avg:84.8 malloc:680 csize:64 tsize:4096 cbsize:266240
[2ba8ae8cc700:03][ink_queue.cc:00668][F] all:4044MB t:628 f:108 m:108 avg:107.1 malloc:519 csize:70 tsize:232 cbsize:16384
[2ba8ae8cc700:03][ink_queue.cc:00674][-] all:4044MB t:557 f:38 m:108 avg:107.1 malloc:519 csize:70 tsize:232 cbsize:16384
/usr/local/bin/traffic_server - STACK TRACE:
/usr/local/lib/libtsutil.so.3(ink_freelist_new+0x992)[0x2ba8ab4b5072]
/usr/local/bin/traffic_server[0x64a68f]
/usr/local/bin/traffic_server(_ZN7CacheVC10handleReadEiP5Event+0x1e2)[0x64ef52]
/usr/local/bin/traffic_server(_ZN5Cache9open_readEP12ContinuationP7INK_MD5P7HTTPHdrP21CacheLookupHttpConfig13CacheFragTypePci+0x754)[0x67ef54]
/usr/local/bin/traffic_server(_ZN14CacheProcessor9open_readEP12ContinuationP3URLP7HTTPHdrP21CacheLookupHttpConfigl13CacheFragType+0x130)[0x6540f0]
/usr/local/bin/traffic_server(_ZN11HttpCacheSM9open_readEP3URLP7HTTPHdrP21CacheLookupHttpConfigl+0x74)[0x521c14]
/usr/local/bin/traffic_server(_ZN6HttpSM24do_cache_lookup_and_readEv+0xfd)[0x538c1d]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xb20)[0x54ffd0]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0xb92)[0x550042]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x33a)[0x547fda]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x9a)[0x55091a]
/usr/local/bin/traffic_server(TSHttpTxnReenable+0x40d)[0x4bec1d]
/usr/local/libexec/trafficserver/purge.so(+0x1e56)[0x2ba8b99bfe56]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0xfb)[0x54927b]
/usr/local/bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x538)[0x54f9e8]
/usr/local/bin/traffic_server(_ZN6HttpSM32setup_client_read_request_headerEv+0x39f)[0x54793f]
/usr/local/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x25a)[0x547efa]
/usr/local/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x353)[0x5494d3]
/usr/local/bin/traffic_server(_ZN6HttpSM21attach_client_sessionEP17HttpClientSessionP14IOBufferReader+0x688)[0x54a4e8]
/usr/local/bin/traffic_server(_ZN17HttpClientSession16state_keep_aliveEiPv+0xa8)[0x523738]
/usr/local/bin/traffic_server[0x6b6f4a]
/usr/local/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1fe)[0x6aff0e]
/usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8e)[0x6dfd4e]
/usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x4f0)[0x6e06c0]
/usr/local/bin/traffic_server[0x6df9b2]
/lib64/libpthread.so.0(+0x6a3f)[0x2ba8ab6dca3f]
/lib64/libc.so.6(clone+0x6d)[0x2ba8ad91a67d]
FATAL: Failed to mmap 50331648 bytes, Cannot allocate memory 
- here, is this what you want?
/usr/local/bin/traffic_server - STACK TRACE:
/usr/local/lib/libtsutil.so.3(ink_fatal+0x88)[0x2ba8ab4b0c68]
/usr/local/lib/libtsutil.so.3(+0x17aca)[0x2ba8ab4b3aca]
/usr/local/lib/libtsutil.so.3(ink_freelist_new+0x992)[0x2ba8ab4b5072]
/usr/local/bin/traffic_server[0x64a68f]
/usr/local/bin/traffic_server(_ZN7CacheVC10handleReadEiP5Event+0x1e2)[0x64ef52]


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-24 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539229#comment-13539229
 ] 

Yunkai Zhang commented on TS-1006:
--

Ok, thanks.

I'll provide a patch later. It will do some optimization: merge a data struct 
(InkChunkInfo) into the chunk itself to reduce memory fragmentation. I'm not 
sure it will fix this issue, but it's worth trying.
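
A rough sketch of that idea (hypothetical names, not the actual Traffic Server 
types): the chunk's bookkeeping header lives inside the mmap'ed chunk itself, 
so no separate InkChunkInfo allocation is needed:

{code}
#include <cstdint>

// Hypothetical layout: [ChunkHeader][item 0][item 1]...[item N-1], all in
// one mmap'ed block. The metadata costs no extra allocation and lives on
// the same pages as the objects it describes.
struct ChunkHeader {
  ChunkHeader *next;       // link in the freelist's list of chunks
  uint32_t     type_size;  // size of each object carved from this chunk
  uint32_t     items_free; // number of objects currently free in this chunk
};

// Objects start immediately after the header.
inline void *first_item(ChunkHeader *c) {
  return reinterpret_cast<char *>(c) + sizeof(ChunkHeader);
}
{code}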


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-24 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539313#comment-13539313
 ] 

Yunkai Zhang commented on TS-1006:
--

[~genext]

Please try this patch: 0003-Allocator-store-InkChunkInfo-into-Chunk.patch, and 
give us some feedback.

Just apply it on top of the 0001-0002 patches.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-23 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539187#comment-13539187
 ] 

Yunkai Zhang commented on TS-1006:
--

[~genext]

It seems that mmap() failed to allocate memory. Was the OS memory used up at 
that time?

These patches are based on the master branch, so you should apply them 
carefully.

I haven't tested them on openSUSE.

Also, you can enable the debug info by modifying debug_filter's value (did you 
notice it?):
##
#
# Configuration for InkFreeList memory allocator
#
##
  # Dump debug information according to the bit mask of debug_filter; if a bit
  # is set in the mask, debug information for the corresponding action is
  # dumped:
  #  bit 0: reclaim memory in ink_freelist_new
  #  bit 1: reclaim memory in ink_freelist_free
  #  bit 2: fetch memory from thread cache
CONFIG proxy.config.allocator.debug_filter INT 3
  # The value of enable_reclaim should be 0 or 1. All reclaiming strategies can
  # be disabled by setting it to 0.
CONFIG proxy.config.allocator.enable_reclaim INT 1
  # The value of reclaim_factor should be in [0, 1]. The allocator uses it to
  # calculate the average amount of idle memory in an InkFreeList, which
  # determines when to reclaim memory. The larger the value, the faster the
  # reclaiming. This value is effective only when enable_reclaim is 1.
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.20
  # The allocator will reclaim memory only when the reclaim condition has been
  # satisfied max_overage times in a row. This value is effective only when
  # enable_reclaim is 1.
CONFIG proxy.config.allocator.max_overage INT 100
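
As a rough sketch of how these knobs could interact (an assumption based on 
the descriptions above, not the actual ink_queue.cc code):

{code}
// reclaim_factor weights a moving average of demand (the "avg:" field in
// the debug logs), and an idle chunk is only given back to the OS once the
// reclaim condition has held for max_overage consecutive checks.
struct ReclaimState {
  double avg     = 0.0; // smoothed demand (items in use)
  int    overage = 0;   // consecutive checks where excess memory was seen
};

bool should_reclaim(ReclaimState &s, int in_use, int total,
                    int items_per_chunk, double reclaim_factor,
                    int max_overage) {
  // Larger reclaim_factor -> avg follows a drop in demand sooner, which
  // matches the note above: the larger the value, the faster the reclaiming.
  s.avg = s.avg * (1.0 - reclaim_factor) + in_use * reclaim_factor;
  bool excess = (total - s.avg) >= items_per_chunk; // at least one idle chunk
  if (excess) {
    if (++s.overage >= max_overage) {
      s.overage = 0;
      return true;  // sustained excess: unmap an idle chunk
    }
  } else {
    s.overage = 0;  // the condition must hold continuously
  }
  return false;
}
{code}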



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-23 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539212#comment-13539212
 ] 

Yunkai Zhang commented on TS-1006:
--

[~genext]

By reading your log:

[2b59f1d13700:40][ink_queue.cc:00674][-] all:4015MB t:566 f:22 m:85 avg:84.6 malloc:544 csize:64 tsize:4096 cbsize:266240
 The total memory used by the InkFreeList allocator is 4015MB (including the 
 memory used by ram_cache).

FATAL: Failed to mmap 16781312 bytes, Cannot allocate memory
 The patch tries to mmap an aligned block (16781312 is aligned up to 
 2^n * PAGE_SIZE). Supposing your OS's PAGE_SIZE is 4K, the aligned size would 
 be 2^13 * 4K = 2^25 = 32MB.

It seems that your OS couldn't allocate a contiguous block of 32MB at that 
time?

What is the PAGE_SIZE on your OS?
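
A small self-contained sketch of that rounding (my reading of the alignment 
step, not the patch's exact code):

{code}
#include <cstdint>
#include <cstdio>

// Round a request up to the next power-of-two multiple of the page size,
// e.g. 16781312 bytes -> 2^25 = 33554432 (32MB) with a 4K PAGE_SIZE.
static uint64_t align_pow2_pages(uint64_t bytes, uint64_t page_size) {
  uint64_t aligned = page_size;
  while (aligned < bytes)
    aligned <<= 1; // grow by powers of two until the request fits
  return aligned;
}

int main() {
  std::printf("%llu\n",
              (unsigned long long)align_pow2_pages(16781312ULL, 4096ULL));
  // prints 33554432, i.e. the 32MB mmap that failed above
  return 0;
}
{code}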



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-14 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13532466#comment-13532466
 ] 

Leif Hedstrom commented on TS-1006:
---

At a minimum, make the default max_mem be 0, which means never, ever, try to 
use this new reclaim code. Having a configure option would be cool too. I don't 
want a new config with some arbitrary default (what did you expect to set 
max_mem to?), which the user misconfigures or doesn't configure at all, and we 
get bizarre behavior.


Re: [jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-14 Thread Yunkai Zhang
On Saturday, December 15, 2012, Leif Hedstrom (JIRA) wrote:


 [
 https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13532466#comment-13532466]

 Leif Hedstrom commented on TS-1006:
 ---

 At a minimum, make the default max_mem be 0, which means never, ever,
 try to use this new reclaim code. Having a configure option would be cool
 too. I don't want a new config, with some arbitrary default (what did you
 expect to set max_mem to?), which the user misconfigures or doesn't
 configure at all, and we get bizarre behavior.


Making the default max_mem 0 is a good idea.

I'll post the next version soon.




-- 
Yunkai Zhang
Work at Taobao


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-11 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13529082#comment-13529082
 ] 

Leif Hedstrom commented on TS-1006:
---

I haven't had a chance to review the patch (yet), but a few comments / concerns:

1) I understand you want to reclaim the memory, to use it for another (non-ATS) 
process. I guess that's reasonable, but bear in mind that bad things can happen 
here. For example, if you need 10G RAM to handle a certain load and reclaim 4G 
of it for some other process, then if you see that same load again, you might 
not be able to allocate the full 10G (at least not without going into swap).

2) I also see / hear about problems where the claim is that we sort of leak, or 
don't use, the freelist as efficiently as possible. If that's the case, this 
patch seems like a bandaid (or duct tape) solution. I'm not opposing it per se, 
but doing this sort of garbage collection could then hide real, serious 
problems that we should otherwise fix.

3) This becomes more difficult to configure. How do I know what to set the max 
memory to? How do I avoid the case where it constantly garbage collects off the 
freelist, just to cause it to malloc() new objects, and then garbage collect 
again? Such cycles could completely kill performance.
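For what it's worth, the usual way out of that cycle is hysteresis: only
reclaim after usage has stayed above a high watermark for several consecutive
checks, and reclaim down to a lower watermark rather than to the limit itself.
A minimal sketch (all names hypothetical, not taken from the patch):

{code}
/* Hypothetical hysteresis sketch -- not the TS-1006 patch code. */
#include <stddef.h>

typedef struct {
  size_t used;        /* bytes currently held by the freelist   */
  size_t high_mark;   /* start reclaiming above this            */
  size_t low_mark;    /* reclaim down to this, not to high_mark */
  int    overage;     /* consecutive checks spent above the mark */
  int    max_overage; /* required overage before reclaiming     */
} pool_t;

static int should_reclaim(pool_t *p)
{
  if (p->used <= p->high_mark) {
    p->overage = 0;   /* usage dropped: reset the counter */
    return 0;
  }
  /* only reclaim after being over the mark max_overage times, so a
     brief spike does not trigger free()/malloc() churn */
  return ++p->overage >= p->max_overage;
}

static size_t reclaim_target(const pool_t *p)
{
  /* free down to low_mark; the gap low_mark..high_mark absorbs the
     next spike without new allocations */
  return p->used > p->low_mark ? p->used - p->low_mark : 0;
}
{code}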


At a minimum, I'd encourage that we make this behavior optional, and disabled 
by default. I'd be interested to hear from the Taobao team and John about 
these concerns as well.

Thanks!

-- Leif



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-11 Thread John Plevyak (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13529116#comment-13529116
 ] 

John Plevyak commented on TS-1006:
--

I agree.  We should land this initially as a compile-time option in the dev
branch to get wider production time on it before moving it to the default.

The main reason is that it is invasive and complicated, particularly in the
way it will interact with the VM system, and it would be nice to see how it
responds in a variety of environments.
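As a sketch of what such a configure-gated compile-time option could look like
(the macro and function names here are illustrative only, not the actual
patch):

{code}
/* Hypothetical compile-time guard; a configure switch such as
 * --enable-reclaimable-freelist would define the macro. */
#include <stdlib.h>

#ifdef TS_USE_RECLAIMABLE_FREELIST
#define FREELIST_RECLAIM_ENABLED 1
#else
#define FREELIST_RECLAIM_ENABLED 0
#endif

void *
pool_alloc(size_t size)
{
  if (FREELIST_RECLAIM_ENABLED) {
    /* reclaim-aware path: would consult max_mem and the watermarks */
  }
  return malloc(size); /* classic behavior in default builds */
}
{code}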

If it is much better than TCMalloc, then perhaps we should package it up in
a more general form as well.

Was the design based on another allocator/paper?  Any references?

john





Re: [jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-11 Thread Yunkai Zhang
On Wed, Dec 12, 2012 at 12:09 AM, Leif Hedstrom (JIRA) j...@apache.org wrote:


 [
 https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13529082#comment-13529082]

 Leif Hedstrom commented on TS-1006:
 ---

 I haven't had a chance to review the patch (yet), but a few comments /
 concerns:

 1) I understand you want to reclaim the memory, to use it for another
 (non-ATS) process. I guess that's reasonable, but bear in mind that bad
 things can happen here. Such as, if you need 10G RAM to handle certain
 load, reclaim 4G of it to use for some other process, if you now see that
 same load again, you might not be able to allocate that full 10G again (at
 least not without going on swap).


The main purpose of this patch is to cap the total memory size, so that we
can prevent TS from running into swap, which would lead to OOM.

I can disable reclaiming by default when the total memory size is less than
*max_mem* (in fact, we can control the reclaiming speed by setting the
*reclaim_factor* and *max_overage* variables).
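For concreteness, these knobs would sit in records.config roughly like this (a
sketch only; the exact key names and defaults shown here are my assumptions,
not the patch's final names):

{code}
# Sketch of the reclaimable-freelist knobs; key names and defaults
# are assumptions for illustration.
CONFIG proxy.config.allocator.enable_reclaim INT 0
CONFIG proxy.config.allocator.max_mem INT 0
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.30
CONFIG proxy.config.allocator.max_overage INT 10
{code}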



 2) I also see / hear about problems where the claim is that we sort of
 leak, or don't use, the freelist as efficiently as possible. If that's the
 case, this patch seems like a bandaid (or duct tape) solution. I'm not
 opposing it per se, but doing this sort of garbage collection could then
 hide real, serious problems that we should otherwise fix.


I agree with you that we need to fix the real problems.

But there is a serious problem with the original InkFreeList -- it cannot
reclaim itself, so free memory held by one kind of idle class/RAM-cache
allocator cannot be reused by another kind of busy class/RAM-cache allocator.
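A toy model of that limitation (simplified, not the real InkFreeList): each
class keeps its own freed chunks, so memory idling on one class's list is
invisible to every other class:

{code}
/* Toy model -- not the real InkFreeList.  Each class pool hoards its
 * own freed chunks; class A's idle memory can never serve class B. */
#include <stdlib.h>

typedef struct chunk { struct chunk *next; } chunk_t;

typedef struct {
  chunk_t *free_head; /* chunks freed back to THIS class only */
  size_t   type_size; /* must be >= sizeof(chunk_t) */
} class_pool_t;

static void *
pool_new(class_pool_t *p)
{
  if (p->free_head) {            /* reuse from our own free list */
    chunk_t *c = p->free_head;
    p->free_head = c->next;
    return c;
  }
  return malloc(p->type_size);   /* otherwise grow the heap */
}

static void
pool_free(class_pool_t *p, void *item)
{
  chunk_t *c = item;
  c->next = p->free_head;        /* parked here for good: a busy sibling
                                    pool cannot steal it, and it is never
                                    returned to the OS */
  p->free_head = c;
}
{code}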



 3) This becomes more difficult to configure. How do I know what to set the
 max memory to? How do I avoid the case where it constantly garbage collects
 off the freelist, just to cause it to malloc() new objects, and then
 garbage collect again? Such cycles could completely kill performance.


Right, I have observed that when TS exceeds max_mem, it will hurt some
performance (the CPU becomes busier).

But it should only hurt the performance of the requests that push memory
beyond max_mem. The other requests should not be affected.




 At a minimum, I'd encourage that we make this behavior optional, and
 disabled by default. I'd be interested to hear from the Taobao team and
 John about these concerns as well.



I can make it not reclaim by default :)



 Thanks!

 -- Leif



Re: [jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-11 Thread Yunkai Zhang
On Wed, Dec 12, 2012 at 12:53 AM, John Plevyak (JIRA) j...@apache.org wrote:


 [
 https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13529116#comment-13529116]

 John Plevyak commented on TS-1006:
 --

 I agree.  We should land this initially as a compile time option in the dev
 branch to get wider production time on it before moving it to default.

 The main reason is that it is invasive and complicated, particularly in the
 way it will interact with the VM system and it would be nice to see how it
 responds in a variety of environments.

 If it is much better than TCMalloc, then perhaps we should package it up in
 a more general form as well.

 Was the design based on another allocator/paper?  Any references?


The code and design are simple, not so profound :)



 john





[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Igor Galić (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13527859#comment-13527859
 ] 

Igor Galić commented on TS-1006:


[~yunkai] can you explain what exactly you mean by "will not harm the 
performance at most time"?


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13527911#comment-13527911
 ] 

Yunkai Zhang commented on TS-1006:
--

[~ibochkarev]

When the allocator tries to reclaim memory, it will harm performance. By
setting appropriate max_mem/reclaim_factor/max_overage values (they are
configurable variables in the records.config file), we can keep the allocator
in a state where it does not need to reclaim memory.

In fact, two conditions lead to memory reclaiming:

1) Memory used by the allocator exceeds max_mem.
In this condition, it will keep reclaiming memory until the memory size is
less than max_mem. This reclaiming will *only* slow down the requests that
pushed memory beyond max_mem; it will not affect other requests.

2) Idle memory in the allocator exceeds chunk_size for max_overage times (see
the patch's commit log).
When the allocator has that much idle memory, it means TS is not busy, so
reclaiming in this condition will not harm performance.
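Putting the two conditions into pseudocode (a sketch of the idea only; the
names are illustrative, see the patch's commit log for the real logic):

{code}
/* Sketch of the two reclaim triggers described above; illustrative
 * names, not the patch's actual code. */
#include <stddef.h>

typedef struct {
  size_t total_mem;   /* memory currently held by the allocator */
  size_t idle_mem;    /* free (unused) memory on the freelist   */
  size_t max_mem;     /* hard cap (condition 1)                 */
  size_t chunk_size;  /* idle-memory threshold (condition 2)    */
  int    overage;     /* consecutive checks with too much idle  */
  int    max_overage;
} alloc_state_t;

static int
need_reclaim(alloc_state_t *s)
{
  /* 1) over the hard cap: reclaim until back under max_mem; only
   *    the requests that pushed us over max_mem pay the cost. */
  if (s->total_mem > s->max_mem)
    return 1;

  /* 2) idle memory exceeded chunk_size for max_overage checks in a
   *    row: TS is not busy, so reclaiming now is effectively free. */
  if (s->idle_mem > s->chunk_size) {
    if (++s->overage >= s->max_overage)
      return 1;
  } else {
    s->overage = 0;
  }
  return 0;
}
{code}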




[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread John Plevyak (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13528528#comment-13528528
 ] 

John Plevyak commented on TS-1006:
--

Some of the volatile variables are not listed as such (e.g. 
InkThreadCache::status).

Also, what is the purpose of this status field, and how is it updated? It is 
set to 0 in ink_freelist_new via a simple assignment, then tested/assigned via 
a CAS in ink_freelist_free. Some comments or documentation would be nice.

Have you tested this against the default memory allocator and TCMalloc?

This seems to be doing something similar to TCMalloc, and that code has been 
extensively tested.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13528615#comment-13528615
 ] 

Yunkai Zhang commented on TS-1006:
--

[~jplevyak]

InkThreadCache::status is an *inner* status (maybe the variable name is not so 
good; any suggestions?) and need not be listed. It represents the state of the 
allocator: Malloc-ing or Free-ing. I use it as a simple state machine, 
executing refresh_average_info() only when the status changes from Malloc-ing 
to Free-ing. I will add more comments for it in the code.
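To illustrate (a simplified sketch, not the actual patch code): status flips
between the two states, and refresh_average_info() runs only on the Malloc-ing
to Free-ing edge, with a CAS so exactly one thread wins the transition:

{code}
/* Simplified sketch of the status state machine; not the real
 * InkThreadCache code. */
#include <stdint.h>

enum { STATUS_MALLOCING = 0, STATUS_FREEING = 1 };

typedef struct {
  volatile int32_t status; /* the field in question */
} thread_cache_t;

static void
refresh_average_info(thread_cache_t *tc)
{
  (void)tc; /* recompute the freelist demand average here */
}

static void
on_free(thread_cache_t *tc)
{
  /* CAS: only the thread that wins the 0 -> 1 transition runs the
   * refresh; later frees see status already set to FREEING. */
  if (__sync_bool_compare_and_swap(&tc->status, STATUS_MALLOCING,
                                   STATUS_FREEING))
    refresh_average_info(tc);
}

static void
on_malloc(thread_cache_t *tc)
{
  tc->status = STATUS_MALLOCING; /* plain store back to Malloc-ing */
}
{code}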

I compared the original TS allocator against JEMalloc/TCMalloc/LocklessMalloc 
before I optimized the TS InkFreeList allocator; the result is that the 
original TS allocator is faster than JEMalloc/TCMalloc/LocklessMalloc, because 
those are designed for *general purpose* use.

But my patches learn from those excellent general-purpose allocators; they are 
similar in some aspects.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-05 Thread Yunkai Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13511057#comment-13511057
 ] 

Yunkai Zhang commented on TS-1006:
--

Hi guys, I have developed a patchset to solve this issue. With this patchset, 
the total memory used by TS is kept under control, and it will not harm 
performance most of the time.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-11-27 Thread samahee (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13505218#comment-13505218
 ] 

samahee commented on TS-1006:
-

Is it fixed?


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-11-27 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13505249#comment-13505249
 ] 

Zhao Yongming commented on TS-1006:
---

Not yet; we are still working on another solution. We hope to submit it for 
review next week or soon after.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2011-12-22 Thread Zhao Yongming (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13175301#comment-13175301
 ] 

Zhao Yongming commented on TS-1006:
---

The memory waste in TS is mostly because of the RAM cache:
1, it is accounted against memory actually in use.
2, it holds whole blocks of memory back from being freed to the OS, the old 
glibc memory-management issue.
3, the RAM cache uses the cache reader buffer, and that buffer is allocated 
from anywhere in the whole memory address space.

All of this makes TS waste much more memory than the RAM cache is configured 
for; it will eat all your memory in the end.

What we want to do:
1, limit RAM cache memory by allocator
2, bound the RAM LRU list and the freelist
3, split the freelist by size (see the sketch after this list)
4, split RAM cache memory from cache memory

The code will be ready in days.
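For item 3, splitting the freelist by size means mapping each request onto a
power-of-two bucket, in the spirit of the ioBufAllocator[0..14] table above (a
sketch; the real index math in TS may differ):

{code}
/* Sketch: map an allocation size to a power-of-two size class.
 * Bucket 0 = 128 bytes ... bucket 14 = 2097152 bytes (2 MB),
 * matching the ioBufAllocator sizes in the dump above. */
#include <stddef.h>

#define BASE_SIZE   128u
#define NUM_CLASSES 15

static int
size_class(size_t size)
{
  size_t cap = BASE_SIZE;
  int    idx = 0;
  while (cap < size && idx < NUM_CLASSES - 1) {
    cap <<= 1; /* 128, 256, 512, ..., 2097152 */
    idx++;
  }
  return idx;  /* each class owns its own freelist */
}
{code}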


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2011-11-18 Thread Zhao Yongming (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153365#comment-13153365
 ] 

Zhao Yongming commented on TS-1006:
---

We have made some enhancements and the result looks like the following. We are 
still tracking bugs; this is the first try, and we will submit the code after 
it is stable for our usage.

{code}
[Nov 19 11:25:42.351] Server {0x4257a940} WARNING: Document 644CE0FF truncated at 10485040 of 21289520, missing fragment 9DD38ED0
  allocated |     in-use | type size  | free list name
------------|------------|------------|------------------------------------
   67108864 |          0 |    2097152 | memory/cacheBufAllocator[14]
   33554432 |          0 |    1048576 | memory/cacheBufAllocator[13]
   33554432 |    1572864 |     524288 | memory/cacheBufAllocator[12]
   33554432 |    6029312 |     262144 | memory/cacheBufAllocator[11]
   37748736 |   17301504 |     131072 | memory/cacheBufAllocator[10]
   35651584 |   32964608 |      65536 | memory/cacheBufAllocator[9]
   70254592 |   64782336 |      32768 | memory/cacheBufAllocator[8]
   55574528 |   46366720 |      16384 | memory/cacheBufAllocator[7]
   28573696 |   22749184 |       8192 | memory/cacheBufAllocator[6]
   11534336 |        320 |       4096 | memory/cacheBufAllocator[5]
          0 |          0 |       2048 | memory/cacheBufAllocator[4]
          0 |          0 |       1024 | memory/cacheBufAllocator[3]
          0 |          0 |        512 | memory/cacheBufAllocator[2]
          0 |          0 |        256 | memory/cacheBufAllocator[1]
          0 |          0 |        128 | memory/cacheBufAllocator[0]
    1572864 |     997376 |       2048 | memory/hdrStrHeap
    2621440 |    1466368 |       2048 | memory/hdrHeap
     262144 |     203776 |       1024 | memory/ArenaBlock
  allocated |     in-use | type size  | free list name
------------|------------|------------|------------------------------------
          0 |          0 |    2097152 | memory/ioBufAllocator[14]
          0 |          0 |    1048576 | memory/ioBufAllocator[13]
          0 |          0 |     524288 | memory/ioBufAllocator[12]
          0 |          0 |     262144 | memory/ioBufAllocator[11]
          0 |          0 |     131072 | memory/ioBufAllocator[10]
          0 |          0 |      65536 | memory/ioBufAllocator[9]
    7340032 |    6455296 |      32768 | memory/ioBufAllocator[8]
    1048576 |     720896 |      16384 | memory/ioBufAllocator[7]
    1310720 |    1048576 |       8192 | memory/ioBufAllocator[6]
    1572864 |     929792 |       4096 | memory/ioBufAllocator[5]
          0 |          0 |       2048 | memory/ioBufAllocator[4]
          0 |          0 |       1024 | memory/ioBufAllocator[3]
          0 |          0 |        512 | memory/ioBufAllocator[2]
          0 |          0 |        256 | memory/ioBufAllocator[1]
          0 |          0 |        128 | memory/ioBufAllocator[0]
          0 |          0 |        592 | memory/ICPRequestCont_allocator
          0 |          0 |        128 | memory/ICPPeerReadContAllocator
          0 |          0 |        432 | memory/PeerReadDataAllocator
          0 |          0 |         32 | memory/MIMEFieldSDKHandle
          0 |          0 |        256 | memory/INKVConnAllocator
          0 |          0 |        112 | memory/INKContAllocator
          0 |          0 |         32 | memory/apiHookAllocator
          0 |          0 |        304 | memory/FetchSMAllocator
          0 |          0 |         80 | memory/prefetchLockHandlerAllocator
          0 |          0 |        176 | memory/PrefetchBlasterAllocator
          0 |          0 |         80 | memory/prefetchUrlBlaster
          0 |          0 |         96 | memory/blasterUrlList
          0 |          0 |         96 | memory/prefetchUrlEntryAllocator
          0 |          0 |        128 | memory/socksProxyAllocator
          0 |          0 |        160 | memory/ObjectReloadCont
     155648 |     120992 |        608 | memory/httpClientSessionAllocator

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2011-10-29 Thread Conan Wang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13139397#comment-13139397
 ] 

Conan Wang commented on TS-1006:


My reverse proxy box: 3.0.1 on CentOS 5.4, 64-bit

physical memory: 4G
RAM cache: 1.8G
DISK: 407 GB (raw device)
average_object_size: 8000

{code}
  allocated |     in-use | type size  | free list name
------------|------------|------------|------------------------------------
   67108864 |          0 |    2097152 | memory/ioBufAllocator[14]
 1744830464 |   76546048 |    1048576 | memory/ioBufAllocator[13]
   33554432 |    4194304 |     524288 | memory/ioBufAllocator[12]
   33554432 |    6553600 |     262144 | memory/ioBufAllocator[11]
   37748736 |    8912896 |     131072 | memory/ioBufAllocator[10]
   69206016 |   31850496 |      65536 | memory/ioBufAllocator[9]
  529530880 |  365658112 |      32768 | memory/ioBufAllocator[8]
  414187520 |  411942912 |      16384 | memory/ioBufAllocator[7]
  898629632 |  898547712 |       8192 | memory/ioBufAllocator[6]
   83361792 |   83161088 |       4096 | memory/ioBufAllocator[5]
     262144 |          0 |       2048 | memory/ioBufAllocator[4]
     131072 |          0 |       1024 | memory/ioBufAllocator[3]
      65536 |          0 |        512 | memory/ioBufAllocator[2]
      32768 |          0 |        256 | memory/ioBufAllocator[1]
          0 |          0 |        128 | memory/ioBufAllocator[0]
          0 |          0 |        576 | memory/ICPRequestCont_allocator
          0 |          0 |        112 | memory/ICPPeerReadContAllocator
          0 |          0 |        432 | memory/PeerReadDataAllocator
       4096 |        160 |         32 | memory/MIMEFieldSDKHandle
          0 |          0 |        240 | memory/INKVConnAllocator
      12288 |         96 |         96 | memory/INKContAllocator
       4096 |         32 |         32 | memory/apiHookAllocator
          0 |          0 |        288 | memory/FetchSMAllocator
          0 |          0 |         80 | memory/prefetchLockHandlerAllocator
          0 |          0 |        176 | memory/PrefetchBlasterAllocator
          0 |          0 |         80 | memory/prefetchUrlBlaster
          0 |          0 |         96 | memory/blasterUrlList
          0 |          0 |         96 | memory/prefetchUrlEntryAllocator
          0 |          0 |        128 | memory/socksProxyAllocator
          0 |          0 |        144 | memory/ObjectReloadCont
    1060864 |       2960 |        592 | memory/httpClientSessionAllocator
      53248 |        416 |        208 | memory/httpServerSessionAllocator
   17575936 |      19616 |       9808 | memory/httpSMAllocator
          0 |          0 |         32 | memory/CacheLookupHttpConfigAllocator
          0 |          0 |       9856 | memory/httpUpdateSMAllocator
          0 |          0 |        128 | memory/RemapPluginsAlloc
          0 |          0 |         48 | memory/CongestRequestParamAllocator
          0 |          0 |        128 | memory/CongestionDBContAllocator
    7340032 |     264192 |       2048 | memory/hdrStrHeap
    8650752 |     274432 |       2048 | memory/hdrHeap
      26624 |          0 |        208 | memory/httpCacheAltAllocator
          0 |          0 |        112 | memory/OneWayTunnelAllocator
     315392 |       1232 |       1232 | memory/hostDBContAllocator
      68160 |      17040 |      17040 | memory/dnsBufAllocator
     161792 |          0 |       1264 | memory/dnsEntryAllocator
          0 |          0 |         16 | memory/DNSRequestDataAllocator
          0 |          0 |       1072 | memory/SRVAllocator
          0 |          0 |         48 | memory/ClusterVConnectionCache::Entry
          0 |          0 |        560 | memory/cacheContAllocator
          0 |          0 |        112 | memory/inControlAllocator
          0 |          0 |        112 | memory/outControlAllocator
          0 |          0 |         32 | memory/byteBankAllocator
          0 |          0 |        576 | memory/clusterVCAllocator
          0 |