[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541595#comment-13541595 ]

jaekyung oh commented on TS-1006:
---------------------------------

Sure.

physical mem of OS: 16G
ram_cache.size: 80 (8G)
reclaim_factor: 0.30
max_overage: 3

ram_cache.algorithm: 0

Thanks.
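For readers who want to map these knobs onto records.config: a minimal sketch, assuming the proxy.config.allocator.* option names introduced by the reclaim patches on this ticket and the stock ram_cache options. The option names and the literal 8G byte value are assumptions, not quoted from the comment above.

{code}
# records.config sketch; option names are assumptions (see above)
CONFIG proxy.config.cache.ram_cache.size INT 8589934592
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.30
CONFIG proxy.config.allocator.max_overage INT 3
# 0 selects CLFUS, 1 selects LRU
CONFIG proxy.config.cache.ram_cache.algorithm INT 0
{code}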

> memory management, cut down memory waste ?
> ------------------------------------------
>
> Key: TS-1006
> URL: https://issues.apache.org/jira/browse/TS-1006
> Project: Traffic Server
> Issue Type: Improvement
> Components: Core
> Affects Versions: 3.1.1
> Reporter: Zhao Yongming
> Assignee: Bin Chen
> Fix For: 3.3.2
>
> Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch, 
> 0002-Allocator-make-InkFreeList-memory-pool-configurable.patch, 
> 0003-Allocator-store-InkChunkInfo-into-Chunk.patch, 
> 0004-Allocator-optimize-alignment-size-to-avoid-mmap-fail.patch, 
> 0005-Allocator-adjust-reclaiming-strategy-of-InkFreeList.patch, 
> 0006-RamCacheLRU-split-LRU-queue-into-multiple-queues-to-.patch, 
> Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods
>
>
> When we review memory usage in production, something looks abnormal: TS takes much more memory than the index data plus common system overhead. Here are some memory dump results, obtained by setting "proxy.config.dump_mem_info_frequency".
> 1. The one on a not-so-busy forwarding system:
> physical memory: 32G
> RAM cache: 22G
> DISK: 6140 GB
> average_object_size 64000
> {code}
>   allocated |      in-use | type size | free list name
> ------------|-------------|-----------|------------------------------------
>   671088640 |    37748736 |   2097152 | memory/ioBufAllocator[14]
>  2248146944 |  2135949312 |   1048576 | memory/ioBufAllocator[13]
>  1711276032 |  1705508864 |    524288 | memory/ioBufAllocator[12]
>  1669332992 |  1667760128 |    262144 | memory/ioBufAllocator[11]
>  2214592512 |      221184 |    131072 | memory/ioBufAllocator[10]
>  2325741568 |  2323775488 |     65536 | memory/ioBufAllocator[9]
>  2091909120 |  2089123840 |     32768 | memory/ioBufAllocator[8]
>  1956642816 |  1956478976 |     16384 | memory/ioBufAllocator[7]
>  2094530560 |  2094071808 |      8192 | memory/ioBufAllocator[6]
>   356515840 |   355540992 |      4096 | memory/ioBufAllocator[5]
>     1048576 |       14336 |      2048 | memory/ioBufAllocator[4]
>      131072 |           0 |      1024 | memory/ioBufAllocator[3]
>       65536 |           0 |       512 | memory/ioBufAllocator[2]
>       32768 |           0 |       256 | memory/ioBufAllocator[1]
>       16384 |           0 |       128 | memory/ioBufAllocator[0]
>           0 |           0 |       576 | memory/ICPRequestCont_allocator
>           0 |           0 |       112 | memory/ICPPeerReadContAllocator
>           0 |           0 |       432 | memory/PeerReadDataAllocator
>           0 |           0 |        32 | memory/MIMEFieldSDKHandle
>           0 |           0 |       240 | memory/INKVConnAllocator
>           0 |           0 |        96 | memory/INKContAllocator
>        4096 |           0 |        32 | memory/apiHookAllocator
>           0 |           0 |       288 | memory/FetchSMAllocator
>           0 |           0 |        80 | memory/prefetchLockHandlerAllocator
>           0 |           0 |       176 | memory/PrefetchBlasterAllocator
>           0 |           0 |        80 | memory/prefetchUrlBlaster
>           0 |           0 |        96 | memory/blasterUrlList
>           0 |           0 |        96 | memory/prefetchUrlEntryAllocator
>           0 |           0 |       128 | memory/socksProxyAllocator
>           0 |           0 |       144 | memory/ObjectReloadCont
>     3258368 |      576016 |       592 | memory/httpClientSessionAllocator
>      825344 |      139568 |       208 | memory/httpServerSessionAllocator
>    22597632 |     1284848 |      9808 | memory/httpSMAllocator
>           0 |           0 |        32 | memory/CacheLookupHttpConfigAllocator
>           0 |           0 |      9856 | memory/httpUpdateSMAllocator
>           0 |           0 |       128 | …
> {code}
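An illustration of how to read rows like those above (not part of the original comment): the memory parked idle on each freelist is allocated minus in-use, and in this dump nearly all of it sits in the large ioBufAllocator classes, e.g. ioBufAllocator[14] holds 671088640 - 37748736 ≈ 604 MiB idle. A minimal, hypothetical parser for this row format:

{code}
# Hypothetical parser for freelist dump rows; a sketch, not ATS tooling.
import re

ROW = re.compile(r"^\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(\S+)")

def freelist_idle(dump_text):
    """Return [(name, allocated, in_use, idle)] sorted by idle bytes, descending."""
    rows = []
    for line in dump_text.splitlines():
        m = ROW.match(line.lstrip("> "))  # tolerate JIRA quote prefixes
        if not m:
            continue
        allocated, in_use = int(m.group(1)), int(m.group(2))
        rows.append((m.group(4), allocated, in_use, allocated - in_use))
    return sorted(rows, key=lambda r: r[3], reverse=True)

sample = "  671088640 |   37748736 |    2097152 | memory/ioBufAllocator[14]"
for name, alloc, _used, idle in freelist_idle(sample):
    print(f"{name}: {idle / 2**20:.0f} MiB idle of {alloc / 2**20:.0f} MiB allocated")
{code}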

[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541586#comment-13541586 ]

Yunkai Zhang edited comment on TS-1006 at 1/1/13 4:47 AM:
----------------------------------------------------------

[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?

// which algorithm do you use?
ram_cache.algorithm: ?
{code}

Happy New Year.

  was (Author: yunkai):
[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?

// which algorithm do you use?
LRU or CLFUS?
{code}

Happy New Year.
  

[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541586#comment-13541586 ]

Yunkai Zhang edited comment on TS-1006 at 1/1/13 4:39 AM:
----------------------------------------------------------

[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?

// which algorithm do you use?
LRU or CLFUS?
{code}

Happy New Year.

  was (Author: yunkai):
[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?
{code}

Happy New Year.
  

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541586#comment-13541586 ]

Yunkai Zhang commented on TS-1006:
----------------------------------

[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?
{code}

Happy New Year.


[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541586#comment-13541586 ]

Yunkai Zhang edited comment on TS-1006 at 1/1/13 4:37 AM:
----------------------------------------------------------

[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: ?
{code}

Happy New Year.

  was (Author: yunkai):
[~genext]

That is the first gift for me in 2013 :D

Could you post your configuration here? It may help other users:
{code}
// physical mem of OS: 16G ?
ram_cache.size: ?
reclaim_factor: ?
max_overage: /
{code}

Happy New Year.
  

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-31 Thread jaekyung oh (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541562#comment-13541562 ]

jaekyung oh commented on TS-1006:
---------------------------------

Happy New Year.

After a week of monitoring, your last patch looks effective. For the first couple of days memory usage kept increasing, but since then it has stayed between a minimum of 72% and a maximum of 76%.

Even though I haven't applied the 6th patch, Traffic Server is stable now. Thank you.
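For context on how a number like "between 72% and 76%" might be tracked: a minimal Linux-only sketch that polls a process's resident set size from /proc. The PID lookup and polling interval are assumptions; nothing here comes from the ticket.

{code}
# Minimal RSS watcher (Linux /proc); a sketch, not ATS tooling.
import time

def total_ram_kib():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])  # value is in kB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def rss_kib(pid):
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is in kB
    return 0

def watch(pid, interval_s=60):
    total = total_ram_kib()
    while True:
        print(f"traffic_server RSS: {100.0 * rss_kib(pid) / total:.1f}% of RAM")
        time.sleep(interval_s)

# Usage: watch(<pid>), e.g. with the PID from `pidof traffic_server`.
{code}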
