Hannes, 
        To get a baseline of behaviour, set DiskAccessMode to standard. You will 
probably want to keep it that way if you want tighter control over memory use 
on the box. 

        Also connect to the box with JConsole and look at the PermGen space 
used; it is not included in the max heap setting. You can also check the 
heap usage there; running inside of 1GB is very tricky. 
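
        For what it's worth, a quick way to attach JConsole remotely is the sketch 
below. It assumes JMX is left at its defaults for your version (e.g. port 8080 on 
0.6.x or 7199 on 0.7+; check cassandra.in.sh / cassandra-env.sh on the node):

    jconsole <host>:8080

The Memory tab then shows the heap pools and the Perm Gen pool separately, so you 
can see how much memory sits outside -Xmx.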

        If you want to keep it inside of 2GB, try setting the heap max to 
1.5GB, use standard IO, disable the caches, and use a low memtable threshold (it 
depends on how many CFs you have, but try 32MB).
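
        Roughly, that combination would look like the sketch below. This assumes a 
0.6-style setup (storage-conf.xml plus bin/cassandra.in.sh); the element and 
attribute names can differ between versions and the CF name is just a placeholder, 
so check the sample config that ships with your release:

    # bin/cassandra.in.sh -- cap the heap at 1.5G
    JVM_OPTS="$JVM_OPTS -Xms1500M -Xmx1500M"

    <!-- storage-conf.xml: standard IO and a small memtable threshold -->
    <DiskAccessMode>standard</DiskAccessMode>
    <MemtableThroughputInMB>32</MemtableThroughputInMB>

    <!-- per ColumnFamily: keep the key and row caches off -->
    <ColumnFamily Name="Example" KeysCached="0" RowsCached="0"/>

Keep in mind that smaller memtables flush more often, so you trade memory headroom 
for more frequent flushes and compaction.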

Hope that helps.
        
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 5 May 2011, at 22:30, Hannes Schmidt wrote:

> This was my first thought, too. We switched to mmap_index_only and
> didn't see any change in behavior. Looking at the smaps file attached
> to my original post, one can see that the mmapped index files take up
> only a minuscule part of RSS.
> 
> On Wed, May 4, 2011 at 11:37 PM, Oleg Anastasyev <olega...@gmail.com> wrote:
>> This is probably because of the mmapped IO access mode, which is enabled by
>> default on 64-bit VMs - RAM is occupied by the data files.
>> If you have such tight memory requirements, you can turn on standard access mode in
>> storage-conf.xml, but don't expect it to be as fast:
>> <!--
>>  ~ Access mode.  mmapped i/o is substantially faster, but only practical on
>>  ~ a 64bit machine (which notably does not include EC2 "small" instances)
>>  ~ or relatively small datasets.  "auto", the safe choice, will enable
>>  ~ mmapping on a 64bit JVM.  Other values are "mmap", "mmap_index_only"
>>  ~ (which may allow you to get part of the benefits of mmap on a 32bit
>>  ~ machine by mmapping only index files) and "standard".
>>  ~ (The buffer size settings that follow only apply to standard,
>>  ~ non-mmapped i/o.)
>>  -->
>>  <DiskAccessMode>standard</DiskAccessMode>
