High memory usage is not necessarily a bad thing by itself.
Triggering garbage collection earlier or more often frees memory sooner,
but it also means more CPU spent collecting.

I had some success following the tuning advice here in making my memory
usage less spiky:

http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html

Again, fewer spikes != better performance; a smoother heap graph is not
proof that the cluster is actually running faster.
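On the question below about triggering GC earlier: with the CMS collector you can lower the old-generation occupancy threshold at which a concurrent collection starts. A minimal sketch of what that might look like in conf/hadoop-env.sh — the heap sizes and the 60% threshold here are illustrative assumptions, not recommendations; measure on your own cluster:

```shell
# Illustrative namenode GC settings (conf/hadoop-env.sh).
# CMSInitiatingOccupancyFraction=60 asks CMS to start a concurrent
# collection once the old generation is ~60% full, rather than waiting
# until the heap is nearly exhausted. UseCMSInitiatingOccupancyOnly
# stops the JVM from replacing that threshold with its own heuristics.
export HADOOP_NAMENODE_OPTS="-Xms2g -Xmx2g \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=60 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  $HADOOP_NAMENODE_OPTS"
```

Note the trade-off mentioned above still applies: a lower threshold means collections run more often, which costs CPU.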

On Tue, Sep 7, 2010 at 9:25 PM, shangan <shan...@corp.kaixin001.com> wrote:
> how do I change the configuration in order to trigger GC earlier, not only
> when it is close to the memory maximum?
>
>
> 2010-09-08
>
>
>
> shangan
>
>
>
> From: Steve Loughran
> Sent: 2010-09-06 18:16:51
> To: common-user
> Cc:
> Subject: Re: namenode consumes quite a lot of memory with only several hundreds of
> files in it
>
> On 06/09/10 08:27, shangan wrote:
>> my cluster consists of 8 nodes, with the namenode on an independent
>> machine. The following info is what I get from the namenode web UI:
>> 291 files and directories, 1312 blocks = 1603 total. Heap Size is 2.92 GB /
>> 4.34 GB (67%)
>> I'm wondering why the namenode takes so much memory while I only store
>> hundreds of files. I've checked the fsimage and edits files; the sum of
>> both is only 232 KB. As far as I know, a namenode can store the metadata
>> for millions of files in 1 GB of RAM, so why does my cluster consume so
>> much memory? If it goes on like this, I won't be able to store that many
>> files before the memory is eaten up.
>>
> It might just be that there isn't enough memory pressure on your
> pre-allocated heap to trigger a GC yet; have a play with the GC tooling
> and jvisualvm to see what's going on.
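Besides jvisualvm, jstat (which ships with the JDK alongside jps) can sample GC activity from the command line, which is handy on a headless namenode box. A sketch, assuming the NameNode process is visible to jps under that name:

```shell
# Find the NameNode's JVM pid with jps, then sample GC counters
# every 5 seconds with jstat. The -gcutil columns show each heap
# space's occupancy as a percentage plus young/full GC counts and
# times, so you can see how full the old generation gets before a
# collection actually runs.
NN_PID=$(jps | awk '/NameNode/ {print $1}')
jstat -gcutil "$NN_PID" 5000
```

If the old-generation column sits near its maximum for long stretches with few full GCs, that matches the "not enough pressure to collect yet" explanation above.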
>
