On Wed, Dec 12, 2012 at 12:09 AM, Leif Hedstrom (JIRA) <j...@apache.org> wrote:
> [ https://issues.apache.org/jira/browse/TS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529082#comment-13529082 ]
>
> Leif Hedstrom commented on TS-1006:
> -----------------------------------
>
> I haven't had a chance to review the patch (yet), but a few comments /
> concerns:
>
> 1) I understand you want to reclaim the memory, to use it for another
> (non-ATS) process. I guess that's reasonable, but bear in mind that bad
> things can happen here. Such as, if you need 10G RAM to handle a certain
> load and reclaim 4G of it to use for some other process, then when you
> see that same load again you might not be able to allocate the full 10G
> (at least not without going into swap).

The main purpose of this patch is to cap the total memory size, so that we
can prevent TS from running into swap, which would lead to OOM. I can
disable reclaiming by default while the total memory size is less than
*max_mem* (in fact, we can control the reclaiming speed by tuning the
*reclaim_factor* and *max_overage* variables).

> 2) I also see / hear about problems where the claim is that we sort of
> leak, or don't use, the freelist as efficiently as possible. If that's
> the case, this patch seems like a bandaid (or duct tape) solution. I'm
> not opposing it per se, but doing this sort of garbage collection could
> then hide real, serious problems that we should otherwise fix.

I agree with you that we need to fix the real problems. But there is a
serious problem with the original InkFreeList: it cannot reclaim its own
memory, so free memory held by one kind of idle Class/RAM-cache allocator
cannot be reused by another kind of busy Class/RAM-cache allocator.

> 3) This becomes more difficult to configure. How do I know what to set
> the max memory to? How do I avoid the case where it constantly garbage
> collects off the freelist, just to cause it to malloc() new objects, and
> then garbage collect again?
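The collect/malloc cycle can be damped by the *reclaim_factor* knob: only a
fraction of the cached chunks is released per pass, and only while total
allocation is over the cap. A minimal, hypothetical sketch of the idea (the
names follow this thread; this is NOT the actual patch code):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Hypothetical sketch only -- not the TS-1006 patch. chunk_size, max_mem
// and reclaim_factor are illustrative names taken from this discussion.
struct ReclaimableFreeList {
  std::size_t chunk_size;          // size of each fixed-size chunk
  std::size_t max_mem;             // soft cap on bytes obtained from malloc()
  double reclaim_factor;           // fraction of cached chunks freed per pass
  std::size_t allocated = 0;       // bytes currently obtained from malloc()
  std::vector<void *> free_chunks; // chunks cached for reuse

  ReclaimableFreeList(std::size_t cs, std::size_t mm, double rf)
    : chunk_size(cs), max_mem(mm), reclaim_factor(rf) {}

  void *alloc() {
    if (!free_chunks.empty()) { // fast path: reuse a cached chunk
      void *p = free_chunks.back();
      free_chunks.pop_back();
      return p;
    }
    allocated += chunk_size;    // slow path: grow via the system allocator
    return std::malloc(chunk_size);
  }

  void release(void *p) {
    free_chunks.push_back(p);   // cache the chunk for reuse
    if (allocated > max_mem)    // reclaim only while over the cap
      reclaim();
  }

  void reclaim() {
    // Free only a reclaim_factor fraction of the cached chunks per pass,
    // so a brief burst does not drain the freelist just to malloc() again.
    std::size_t n = static_cast<std::size_t>(free_chunks.size() * reclaim_factor);
    for (std::size_t i = 0; i < n; ++i) {
      std::free(free_chunks.back());
      free_chunks.pop_back();
      allocated -= chunk_size;
    }
  }
};
```

With *reclaim_factor* well below 1, a short spike over *max_mem* releases
memory gradually instead of draining the whole freelist at once.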
> Such cycles could completely kill performance.

Right, I have observed that when TS exceeds *max_mem*, it does hurt
performance (the CPU becomes busier). But it should only hurt the
performance of the requests that push memory beyond *max_mem*; the other
requests should not be affected.

> At a minimum, I'd encourage that we make this behavior optional, and
> disabled by default. I'd be interested to hear from the Taobao team and
> John about these concerns as well.

I can make it not reclaim by default :)

> Thanks!
>
> -- Leif
>
> > memory management, cut down memory waste ?
> > ------------------------------------------
> >
> >                 Key: TS-1006
> >                 URL: https://issues.apache.org/jira/browse/TS-1006
> >             Project: Traffic Server
> >          Issue Type: Improvement
> >          Components: Core
> >    Affects Versions: 3.1.1
> >            Reporter: Zhao Yongming
> >            Assignee: Bin Chen
> >             Fix For: 3.3.2
> >
> >         Attachments: 0001-Allocator-optimize-InkFreeList-memory-pool.patch,
> >                      0002-Allocator-make-InkFreeList-memory-pool-configurable.patch,
> >                      Memory-Usage-After-Introduced-New-Allocator.png,
> >                      memusage.ods, memusage.ods
> >
> > when we reviewed the memory usage in production, something looked
> > abnormal, i.e. TS takes much more memory than the index data plus the
> > common system waste. Here are some memory dump results, obtained by
> > setting "proxy.config.dump_mem_info_frequency".
> >
> > 1, the one on a not-so-busy forwarding system:
> > physics memory: 32G
> > RAM cache: 22G
> > DISK: 6140 GB
> > average_object_size 64000
> > {code}
> > allocated | in-use | type size | free list name
> > ----------|--------|-----------|---------------
> > 671088640 | 37748736 | 2097152 | memory/ioBufAllocator[14]
> > 2248146944 | 2135949312 | 1048576 | memory/ioBufAllocator[13]
> > 1711276032 | 1705508864 | 524288 | memory/ioBufAllocator[12]
> > 1669332992 | 1667760128 | 262144 | memory/ioBufAllocator[11]
> > 2214592512 | 2211840000 | 131072 | memory/ioBufAllocator[10]
> > 2325741568 | 2323775488 | 65536 | memory/ioBufAllocator[9]
> > 2091909120 | 2089123840 | 32768 | memory/ioBufAllocator[8]
> > 1956642816 | 1956478976 | 16384 | memory/ioBufAllocator[7]
> > 2094530560 | 2094071808 | 8192 | memory/ioBufAllocator[6]
> > 356515840 | 355540992 | 4096 | memory/ioBufAllocator[5]
> > 1048576 | 14336 | 2048 | memory/ioBufAllocator[4]
> > 131072 | 0 | 1024 | memory/ioBufAllocator[3]
> > 65536 | 0 | 512 | memory/ioBufAllocator[2]
> > 32768 | 0 | 256 | memory/ioBufAllocator[1]
> > 16384 | 0 | 128 | memory/ioBufAllocator[0]
> > 0 | 0 | 576 | memory/ICPRequestCont_allocator
> > 0 | 0 | 112 | memory/ICPPeerReadContAllocator
> > 0 | 0 | 432 | memory/PeerReadDataAllocator
> > 0 | 0 | 32 | memory/MIMEFieldSDKHandle
> > 0 | 0 | 240 | memory/INKVConnAllocator
> > 0 | 0 | 96 | memory/INKContAllocator
> > 4096 | 0 | 32 | memory/apiHookAllocator
> > 0 | 0 | 288 | memory/FetchSMAllocator
> > 0 | 0 | 80 | memory/prefetchLockHandlerAllocator
> > 0 | 0 | 176 | memory/PrefetchBlasterAllocator
> > 0 | 0 | 80 | memory/prefetchUrlBlaster
> > 0 | 0 | 96 | memory/blasterUrlList
> > 0 | 0 | 96 | memory/prefetchUrlEntryAllocator
> > 0 | 0 | 128 | memory/socksProxyAllocator
> > 0 | 0 | 144 | memory/ObjectReloadCont
> > 3258368 | 576016 | 592 | memory/httpClientSessionAllocator
> > 825344 | 139568 | 208 | memory/httpServerSessionAllocator
> > 22597632 | 1284848 | 9808 | memory/httpSMAllocator
> > 0 | 0 | 32 | memory/CacheLookupHttpConfigAllocator
> > 0 | 0 | 9856 | memory/httpUpdateSMAllocator
> > 0 | 0 | 128 | memory/RemapPluginsAlloc
> > 0 | 0 | 48 | memory/CongestRequestParamAllocator
> > 0 | 0 | 128 | memory/CongestionDBContAllocator
> > 5767168 | 704512 | 2048 | memory/hdrStrHeap
> > 18350080 | 1153024 | 2048 | memory/hdrHeap
> > 53248 | 2912 | 208 | memory/httpCacheAltAllocator
> > 0 | 0 | 112 | memory/OneWayTunnelAllocator
> > 157696 | 1232 | 1232 | memory/hostDBContAllocator
> > 102240 | 17040 | 17040 | memory/dnsBufAllocator
> > 323584 | 0 | 1264 | memory/dnsEntryAllocator
> > 0 | 0 | 16 | memory/DNSRequestDataAllocator
> > 0 | 0 | 1072 | memory/SRVAllocator
> > 0 | 0 | 48 | memory/ClusterVConnectionCache::Entry
> > 0 | 0 | 560 | memory/cacheContAllocator
> > 0 | 0 | 112 | memory/inControlAllocator
> > 0 | 0 | 112 | memory/outControlAllocator
> > 0 | 0 | 32 | memory/byteBankAllocator
> > 0 | 0 | 576 | memory/clusterVCAllocator
> > 0 | 0 | 48 | memory/evacuationKey
> > 6144 | 0 | 48 | memory/cacheRemoveCont
> > 270336 | 262560 | 96 | memory/evacuationBlock
> > 4997120 | 3968416 | 976 | memory/cacheVConnection
> > 798720 | 522080 | 160 | memory/openDirEntry
> > 0 | 0 | 64 | memory/RamCacheLRUEntry
> > 56426496 | 56426304 | 96 | memory/RamCacheCLFUSEntry
> > 9584640 | 6168000 | 960 | memory/netVCAllocator
> > 0 | 0 | 128 | memory/udpReadContAllocator
> > 0 | 0 | 128 | memory/udpWorkContinuationAllocator
> > 0 | 0 | 160 | memory/udpPacketAllocator
> > 0 | 0 | 304 | memory/socksAllocator
> > 139264 | 68544 | 1088 | memory/sslNetVCAllocator
> > 0 | 0 | 128 | memory/UDPIOEventAllocator
> > 671744 | 115520 | 64 | memory/ioBlockAllocator
> > 28305408 | 28301520 | 48 | memory/ioDataAllocator
> > 2273280 | 406320 | 240 | memory/ioAllocator
> > 1904640 | 1489920 | 80 | memory/mutexAllocator
> > 1105920 | 188544 | 96 | memory/eventAllocator
> > 2359296 | 129024 | 1024 | memory/ArenaBlock
> > {code}
> > this box will crash every 2 days, so the memory waste may not be that
> > high.
> >
> > 2, our production reverse proxy system:
> > physics memory: 16G
> > RAM cache: 8G
> > DISK: 1516 GB
> > average_object_size 16384
> > and it has been running for a much longer time:
> > {code}
> > allocated | in-use | type size | free list name
> > ----------|--------|-----------|---------------
> > 805306368 | 0 | 2097152 | memory/ioBufAllocator[14]
> > 738197504 | 8388608 | 1048576 | memory/ioBufAllocator[13]
> > 1258291200 | 46661632 | 524288 | memory/ioBufAllocator[12]
> > 1300234240 | 183762944 | 262144 | memory/ioBufAllocator[11]
> > 1170210816 | 466223104 | 131072 | memory/ioBufAllocator[10]
> > 1790967808 | 1223426048 | 65536 | memory/ioBufAllocator[9]
> > 2970615808 | 2601418752 | 32768 | memory/ioBufAllocator[8]
> > 2067791872 | 2044608512 | 16384 | memory/ioBufAllocator[7]
> > 1169424384 | 1169121280 | 8192 | memory/ioBufAllocator[6]
> > 711458816 | 710463488 | 4096 | memory/ioBufAllocator[5]
> > 1572864 | 0 | 2048 | memory/ioBufAllocator[4]
> > 131072 | 0 | 1024 | memory/ioBufAllocator[3]
> > 65536 | 0 | 512 | memory/ioBufAllocator[2]
> > 32768 | 0 | 256 | memory/ioBufAllocator[1]
> > 16384 | 0 | 128 | memory/ioBufAllocator[0]
> > 0 | 0 | 576 | memory/ICPRequestCont_allocator
> > 0 | 0 | 112 | memory/ICPPeerReadContAllocator
> > 0 | 0 | 432 | memory/PeerReadDataAllocator
> > 0 | 0 | 32 | memory/MIMEFieldSDKHandle
> > 0 | 0 | 240 | memory/INKVConnAllocator
> > 0 | 0 | 96 | memory/INKContAllocator
> > 4096 | 0 | 32 | memory/apiHookAllocator
> > 0 | 0 | 288 | memory/FetchSMAllocator
> > 0 | 0 | 80 | memory/prefetchLockHandlerAllocator
> > 0 | 0 | 176 | memory/PrefetchBlasterAllocator
> > 0 | 0 | 80 | memory/prefetchUrlBlaster
> > 0 | 0 | 96 | memory/blasterUrlList
> > 0 | 0 | 96 | memory/prefetchUrlEntryAllocator
> > 0 | 0 | 128 | memory/socksProxyAllocator
> > 0 | 0 | 144 | memory/ObjectReloadCont
> > 1136640 | 125504 | 592 | memory/httpClientSessionAllocator
> > 372736 | 27248 | 208 | memory/httpServerSessionAllocator
> > 11317248 | 39296 | 9824 | memory/httpSMAllocator
> > 0 | 0 | 32 | memory/CacheLookupHttpConfigAllocator
> > 0 | 0 | 9888 | memory/httpUpdateSMAllocator
> > 0 | 0 | 128 | memory/RemapPluginsAlloc
> > 0 | 0 | 512 | memory/HCSMAllocator
> > 0 | 0 | 48 | memory/VCEntryAllocator
> > 0 | 0 | 96 | memory/HCEntryAllocator
> > 0 | 0 | 64 | memory/HCHandlerAllocator
> > 0 | 0 | 48 | memory/CongestRequestParamAllocator
> > 0 | 0 | 128 | memory/CongestionDBContAllocator
> > 6029312 | 643072 | 2048 | memory/hdrStrHeap
> > 7077888 | 657408 | 2048 | memory/hdrHeap
> > 26624 | 208 | 208 | memory/httpCacheAltAllocator
> > 0 | 0 | 112 | memory/OneWayTunnelAllocator
> > 630784 | 1232 | 1232 | memory/hostDBContAllocator
> > 238560 | 17040 | 17040 | memory/dnsBufAllocator
> > 161792 | 0 | 1264 | memory/dnsEntryAllocator
> > 0 | 0 | 16 | memory/DNSRequestDataAllocator
> > 0 | 0 | 1072 | memory/SRVAllocator
> > 0 | 0 | 48 | memory/ClusterVConnectionCache::Entry
> > 0 | 0 | 560 | memory/cacheContAllocator
> > 0 | 0 | 112 | memory/inControlAllocator
> > 0 | 0 | 112 | memory/outControlAllocator
> > 0 | 0 | 32 | memory/byteBankAllocator
> > 0 | 0 | 576 | memory/clusterVCAllocator
> > 0 | 0 | 48 | memory/evacuationKey
> > 6144 | 0 | 48 | memory/cacheRemoveCont
> > 17006592 | 14972928 | 96 | memory/evacuationBlock
> > 1777664 | 759872 | 992 | memory/cacheVConnection
> > 307200 | 111520 | 160 | memory/openDirEntry
> > 0 | 0 | 64 | memory/RamCacheLRUEntry
> > 104275968 | 104274048 | 96 | memory/RamCacheCLFUSEntry
> > 3440640 | 1819200 | 960 | memory/netVCAllocator
> > 0 | 0 | 128 | memory/udpReadContAllocator
> > 0 | 0 | 128 | memory/udpWorkContinuationAllocator
> > 0 | 0 | 160 | memory/udpPacketAllocator
> > 0 | 0 | 304 | memory/socksAllocator
> > 0 | 0 | 1088 | memory/sslNetVCAllocator
> > 0 | 0 | 128 | memory/UDPIOEventAllocator
> > 237568 | 22528 | 64 | memory/ioBlockAllocator
> > 26087424 | 26081904 | 48 | memory/ioDataAllocator
> > 890880 | 84240 | 240 | memory/ioAllocator
> > 1525760 | 1403440 | 80 | memory/mutexAllocator
> > 565248 | 129696 | 96 | memory/eventAllocator
> > 1179648 | 4096 | 1024 | memory/ArenaBlock
> > {code}
> > our team is working on the memory free issue, trying to improve the
> > memory management. and this is a big project; the more input/comments,
> > the better.
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA
> administrators.
> For more information on JIRA, see: http://www.atlassian.com/software/jira

--
Yunkai Zhang
Work at Taobao