[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread JIRA

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527859#comment-13527859 ]

Igor Galić commented on TS-1006:
--


[~yunkai] can you explain what exactly you mean by "and will not harm the 
performance at most time"?

> memory management, cut down memory waste ?
> --
>
> Key: TS-1006
> URL: https://issues.apache.org/jira/browse/TS-1006
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 3.1.1
>Reporter: Zhao Yongming
>Assignee: Bin Chen
> Fix For: 3.3.2
>
> Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch, 
> 0002-Allocator-make-InkFreeList-memory-pool-configurable.patch, 
> Memory-Usage-After-Introduced-New-Allocator.png, memusage.ods, memusage.ods
>
>
> When we reviewed memory usage in production, something looked abnormal: TS 
> takes much more memory than index data plus common system waste. Here are 
> some memory dump results, obtained by setting 
> "proxy.config.dump_mem_info_frequency".
> 1. The one on a not-so-busy forwarding system:
> physical memory: 32G
> RAM cache: 22G
> DISK: 6140 GB
> average_object_size 64000
> {code}
>  allocated  |   in-use   | type size  |   free list name
> ------------|------------|------------|------------------------------------
>   671088640 |   37748736 |    2097152 | memory/ioBufAllocator[14]
>  2248146944 | 2135949312 |    1048576 | memory/ioBufAllocator[13]
>  1711276032 | 1705508864 |     524288 | memory/ioBufAllocator[12]
>  1669332992 | 1667760128 |     262144 | memory/ioBufAllocator[11]
>  2214592512 |     221184 |     131072 | memory/ioBufAllocator[10]
>  2325741568 | 2323775488 |      65536 | memory/ioBufAllocator[9]
>  2091909120 | 2089123840 |      32768 | memory/ioBufAllocator[8]
>  1956642816 | 1956478976 |      16384 | memory/ioBufAllocator[7]
>  2094530560 | 2094071808 |       8192 | memory/ioBufAllocator[6]
>   356515840 |  355540992 |       4096 | memory/ioBufAllocator[5]
>     1048576 |      14336 |       2048 | memory/ioBufAllocator[4]
>      131072 |          0 |       1024 | memory/ioBufAllocator[3]
>       65536 |          0 |        512 | memory/ioBufAllocator[2]
>       32768 |          0 |        256 | memory/ioBufAllocator[1]
>       16384 |          0 |        128 | memory/ioBufAllocator[0]
>           0 |          0 |        576 | memory/ICPRequestCont_allocator
>           0 |          0 |        112 | memory/ICPPeerReadContAllocator
>           0 |          0 |        432 | memory/PeerReadDataAllocator
>           0 |          0 |         32 | memory/MIMEFieldSDKHandle
>           0 |          0 |        240 | memory/INKVConnAllocator
>           0 |          0 |         96 | memory/INKContAllocator
>        4096 |          0 |         32 | memory/apiHookAllocator
>           0 |          0 |        288 | memory/FetchSMAllocator
>           0 |          0 |         80 | memory/prefetchLockHandlerAllocator
>           0 |          0 |        176 | memory/PrefetchBlasterAllocator
>           0 |          0 |         80 | memory/prefetchUrlBlaster
>           0 |          0 |         96 | memory/blasterUrlList
>           0 |          0 |         96 | memory/prefetchUrlEntryAllocator
>           0 |          0 |        128 | memory/socksProxyAllocator
>           0 |          0 |        144 | memory/ObjectReloadCont
>     3258368 |     576016 |        592 | memory/httpClientSessionAllocator
>      825344 |     139568 |        208 | memory/httpServerSessionAllocator
>    22597632 |    1284848 |       9808 | memory/httpSMAllocator
>           0 |          0 |         32 | memory/CacheLookupHttpConfigAllocator
>           0 |          0 |       9856 | memory/httpUpdateSMAllocator
>           0 |          0 |        128 | memory/RemapPluginsAlloc
>           0 |          0 |         48 | memory/CongestRequestParamAllocator
>           0 |          0 |        128 | memory/CongestionDBContAllocator
>     5767168 |     704512 |       2048 | memory/hdrStrHeap

[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527911#comment-13527911 ]

Yunkai Zhang commented on TS-1006:
--

[~ibochkarev]

When the allocator tries to reclaim memory, it will harm performance. By 
setting appropriate max_mem/reclaim_factor/max_overage values (they are 
configurable variables in the records.config file), we can keep the allocator 
in a state where it does not need to reclaim memory.

In fact, there are two conditions that lead to memory reclaiming:

1) Memory used by the allocator exceeds max_mem.
In this condition, it will keep reclaiming memory until the memory size is 
less than max_mem. This reclaiming will *only* slow down the requests that 
pushed memory beyond max_mem; it will not affect other requests.

2) Idle memory in the allocator exceeds chunk_size for max_overage 
consecutive checks (see the patch's commit log).
When the allocator has that much idle memory, it means TS is not busy, so 
reclaiming in this condition will not harm performance.
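
For illustration, a minimal records.config sketch of the tuning described 
above. The record names below are assumptions based on the knob names in this 
thread (max_mem/reclaim_factor/max_overage); check the patch's commit log for 
the exact records it registers:

{code}
# Hypothetical record names, following the knobs named in this thread;
# the values are examples only, and their semantics are defined in the patch.
CONFIG proxy.config.allocator.max_mem INT 21474836480
CONFIG proxy.config.allocator.reclaim_factor FLOAT 0.300000
CONFIG proxy.config.allocator.max_overage INT 10
{code}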




[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527911#comment-13527911 ]

Yunkai Zhang edited comment on TS-1006 at 12/10/12 12:44 PM:
-

[~ibochkarev]

When the allocator tries to reclaim memory, it will harm performance. By 
setting appropriate max_mem/reclaim_factor/max_overage values (they are 
configurable variables in the records.config file), we can keep the allocator 
in a state where it does not need to reclaim memory most of the time.

In fact, there are two conditions that lead to memory reclaiming:

1) Memory used by the allocator exceeds max_mem.
In this condition, it will keep reclaiming memory until the memory size is 
less than max_mem. This reclaiming will *only* slow down the requests that 
pushed memory beyond max_mem; it will not affect other requests.

2) Idle memory in the allocator exceeds chunk_size for max_overage 
consecutive checks (see the patch's commit log).
When the allocator has that much idle memory, it means TS is not busy, so 
reclaiming in this condition will not harm performance.




[jira] [Resolved] (TS-1307) Enable using client IP family for server connection

2012-12-10 Thread Alan M. Carroll (JIRA)

[ https://issues.apache.org/jira/browse/TS-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan M. Carroll resolved TS-1307.
-

Resolution: Fixed

Commit 043815e7a7a67b79a2ca6fdc3f6d6751e5150411.
TS-1422 is related because it needed this fix as well.

> Enable using client IP family for server connection
> ---
>
> Key: TS-1307
> URL: https://issues.apache.org/jira/browse/TS-1307
> Project: Traffic Server
>  Issue Type: Improvement
>  Components: Configuration, Network
>Affects Versions: 3.3.0
>Reporter: Alan M. Carroll
>Assignee: Alan M. Carroll
>Priority: Trivial
>  Labels: dns, ipv6, net
> Fix For: 3.3.1
>
>
> Enable the IP address family of the client connection to control the IP 
> address family of the server connection. Currently it can only be set 
> globally to always prefer IPv4 or IPv6. This should be changed to preserve 
> those options and add the ability to prefer the same family as the client 
> connection. This should be configurable globally and be able to be 
> overridden by port configuration and the plugin API.
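
A sketch of how this preference might be expressed in records.config. This 
assumes the hostdb ip_resolve record that later shipped for this feature; 
treat the record name and value syntax below as illustrative:

{code}
# Prefer the client's address family first, then fall back to IPv4, then IPv6.
CONFIG proxy.config.hostdb.ip_resolve STRING client;ipv4;ipv6
{code}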

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (TS-1422) TProxy + proxy.config.http.use_client_target_addr can cause site-specific DoS when DNS records are bad/stale or point to unreachable servers

2012-12-10 Thread Alan M. Carroll (JIRA)

[ https://issues.apache.org/jira/browse/TS-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan M. Carroll resolved TS-1422.
-

Resolution: Fixed

Commit 043815e7a7a67b79a2ca6fdc3f6d6751e5150411

> TProxy + proxy.config.http.use_client_target_addr can cause site-specific 
> DoS when DNS records are bad/stale or point to unreachable servers
> -
>
> Key: TS-1422
> URL: https://issues.apache.org/jira/browse/TS-1422
> Project: Traffic Server
>  Issue Type: Bug
>  Components: HTTP
>Affects Versions: 3.2.0
> Environment: Version 3.2 running with TProxy interception and 
> proxy.config.http.use_client_target_addr == 1
>Reporter: B Wyatt
>Assignee: Alan M. Carroll
> Fix For: 3.3.3
>
>
> In the presence of multiple A/AAAA records from DNS, most consumer browsers 
> will choose an alternate record if their currently selected record is 
> unreachable. This allows the browser to successfully mitigate downed servers 
> and stale/erroneous DNS entries.
> However, an intercepting proxy will establish a connection for a given 
> endpoint regardless of the state of the upstream endpoint. As a result, the 
> browser's ability to detect downed origin servers is completely neutralized.
> Enabling proxy.config.http.use_client_target_addr turns this situation into 
> a localized service outage. ATS will skip DNS checks in favor of using the 
> endpoint address that the client was attempting to connect to during 
> interception. If this endpoint is unreachable, ATS will send an error 
> response (50x) to the user's browser. Since the browser assumes this is from 
> the origin server, it makes no attempt to move to the next DNS record. 
> In the event that a DNS record is erroneous or the most-selected record (aka 
> first?) points to a down server, this can deny access to a destination for 
> users behind the transparent proxy, while users that are not intercepted 
> merely see increased latency as their browser cycles through bad DNS entries 
> looking for a good address.
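
For illustration only (this is not ATS code): the client-side mitigation the 
description refers to is essentially a connect loop over every resolved 
address, as in this self-contained POSIX sketch. Transparent interception 
answers the first connect() itself, so the loop below never advances to the 
alternate records:

{code}
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try each A/AAAA record in turn until one accepts a TCP connection.
   Returns a connected socket fd, or -1 if every record was unreachable. */
int
connect_first_reachable(const char *host, const char *port)
{
  struct addrinfo hints, *res, *ai;
  int fd = -1;

  memset(&hints, 0, sizeof(hints));
  hints.ai_family   = AF_UNSPEC;   /* both A and AAAA records */
  hints.ai_socktype = SOCK_STREAM;

  if (getaddrinfo(host, port, &hints, &res) != 0)
    return -1;

  for (ai = res; ai != NULL; ai = ai->ai_next) {
    fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
    if (fd < 0)
      continue;
    if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
      break;                       /* reachable: stop cycling records */
    close(fd);                     /* dead/stale record: try the next one */
    fd = -1;
  }
  freeaddrinfo(res);
  return fd;
}
{code}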



[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread John Plevyak (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528528#comment-13528528 ]

John Plevyak commented on TS-1006:
--

Some of the volatile variables are not listed as such (e.g. 
InkThreadCache::status).

Also, what is the purpose of this status field, and how is it updated? It is 
set to 0 in ink_freelist_new via a simple assignment, then tested/assigned via 
a CAS in ink_freelist_free. Some comments or documentation would be nice.

Have you tested this against the default memory allocator and TCMalloc?

This seems to be doing something similar to TCMalloc, and that code has been 
extensively tested.


[jira] [Commented] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528615#comment-13528615 ]

Yunkai Zhang commented on TS-1006:
--

[~jplevyak]

InkThreadCache::status is an *inner* status (maybe the variable name is not so 
good; any suggestions?) and does not need to be declared volatile. It 
represents the state of the allocator: Malloc-ing or Free-ing. I use it as a 
simple state machine, executing refresh_average_info() only when the status 
changes from Malloc-ing to Free-ing. I will add more comments for it in the 
code.

I compared the original TS allocator against JEMalloc/TCMalloc/LocklessMalloc 
before I optimized the TS InkFreeList allocator. The result is that the 
original TS allocator is faster than JEMalloc/TCMalloc/LocklessMalloc, because 
they are designed to be *general purpose*.

But my patches learn from those excellent general-purpose allocators; they are 
similar in some aspects.


[jira] [Comment Edited] (TS-1006) memory management, cut down memory waste ?

2012-12-10 Thread Yunkai Zhang (JIRA)

[ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528615#comment-13528615 ]

Yunkai Zhang edited comment on TS-1006 at 12/11/12 2:33 AM:
--


[~jplevyak]

InkThreadCache::status is an *inner* status (maybe the variable name is not so 
good; any suggestions?) and does not need to be declared volatile. It 
represents the state of the allocator: Malloc-ing or Free-ing. I use it as a 
simple state machine, executing refresh_average_info() to calculate the 
minimum of free memory only when the status changes from Malloc-ing to 
Free-ing. I will add more comments for it in the code.

I compared the original TS allocator against JEMalloc/TCMalloc/LocklessMalloc 
before I optimized the TS InkFreeList allocator. The result is that the 
original TS allocator is faster than JEMalloc/TCMalloc/LocklessMalloc, because 
they are designed to be *general purpose*.

But my patches learn from those excellent general-purpose allocators; they are 
similar in some aspects.
