>
> You might also see some gains from setting in_memory_compaction_limit_in_mb
> to something very low to force Cassandra to use on-disk compaction rather
> than doing it in memory.


Cool, Ben. Thanks, I'll add that to my config as well.
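
For anyone following along, I'm guessing that would look something like the
following in cassandra.yaml (the exact value here is just an illustration,
not a recommendation):

    # keep this small so compaction spills to disk instead of running in memory
    in_memory_compaction_limit_in_mb: 1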

> Glad that helped. Thanks for reporting back!


No problem, Nate! That's the least I can do. All I can hope is that this
thread adds to the overall fund of knowledge for the list.

Cheers,
Tim



On Mon, Feb 23, 2015 at 11:46 AM, Nate McCall <n...@thelastpickle.com>
wrote:

> Glad that helped. Thanks for reporting back!
>
> On Sun, Feb 22, 2015 at 9:12 PM, Tim Dunphy <bluethu...@gmail.com> wrote:
>
>> Nate,
>>
>>  Thank you very much for this advice. After leaving the new Cassandra
>> node running on the 2GB instance for the past couple of days, I can
>> report complete success in getting it stabilized on that instance! Here
>> are the changes I've been able to make:
>>
>>  I think tuning the key cache, concurrent writes, and the other settings
>> I worked on based on that thread from the cassandra list was key to
>> getting Cassandra to work on the new instance.
>>
>> Check out the before and after (before working/ after working):
>>
>> Before in cassandra-env.sh:
>>    MAX_HEAP_SIZE="800M"
>>    HEAP_NEWSIZE="200M"
>>
>> After:
>>     MAX_HEAP_SIZE="512M"
>>     HEAP_NEWSIZE="100M"
>>
>> And before in the cassandra.yaml file:
>>
>>    concurrent_writes: 32
>>    compaction_throughput_mb_per_sec: 16
>>    key_cache_size_in_mb:
>>    key_cache_save_period: 14400
>>    # native_transport_max_threads: 128
>>
>> And after:
>>
>>     concurrent_writes: 2
>>     compaction_throughput_mb_per_sec: 8
>>     key_cache_size_in_mb: 4
>>     key_cache_save_period: 0
>>     native_transport_max_threads: 4
>>
>>
>> That really made the difference. I'm a Puppet user, so these changes are
>> captured in Puppet, and any new 2GB instances I bring up on Digital Ocean
>> should work just the way the first 2GB node does. I was able to make
>> enough sense of your Chef recipe to adapt what you were showing me.
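>>
>> In case it helps anyone else, here is a rough sketch of how the same
>> settings might look on the Puppet side, as hiera-style YAML (the key
>> names are just how I happen to organize my data, not anything official):
>>
>>     cassandra::max_heap_size: '512M'
>>     cassandra::heap_new_size: '100M'
>>     cassandra::settings:
>>       concurrent_writes: 2
>>       compaction_throughput_mb_per_sec: 8
>>       key_cache_size_in_mb: 4
>>       key_cache_save_period: 0
>>       native_transport_max_threads: 4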
>>
>> Thanks again!
>> Tim
>>
>> On Fri, Feb 20, 2015 at 10:31 PM, Tim Dunphy <bluethu...@gmail.com>
>> wrote:
>>
>>>> The most important things to note:
>>>> - don't include JNA (it needs to lock pages larger than what will be
>>>> available)
>>>> - turn down threadpools for transports
>>>> - turn compaction throughput way down
>>>> - make concurrent reads and writes very small
>>>> I have used the above to run a healthy 5-node cluster locally in its own
>>>> private network with a 6th monitoring server for light to moderate local
>>>> testing in 16 GB of laptop RAM. YMMV, but it is possible.
>>>
>>>
>>> Thanks!! That was very helpful. I just tried applying your suggestions
>>> to my cassandra.yaml file, using the info from your chef recipe. As I've
>>> been saying, it typically takes about 5 hours or so for this situation to
>>> shake itself out. I'll provide an update to the list once I have a better
>>> idea of how this is working.
>>>
>>> Thanks again!
>>> Tim
>>>
>>> On Fri, Feb 20, 2015 at 9:37 PM, Nate McCall <n...@thelastpickle.com>
>>> wrote:
>>>
>>>> I frequently test with multi-node vagrant-based clusters locally. The
>>>> following chef attributes should give you an idea of what to turn down in
>>>> cassandra.yaml and cassandra-env.sh to build a decent testing cluster:
>>>>
>>>>           :cassandra => {'cluster_name' => 'VerifyCluster',
>>>>                          'package_name' => 'dsc20',
>>>>                          'version' => '2.0.11',
>>>>                          'release' => '1',
>>>>                          'setup_jna' => false,
>>>>                          'max_heap_size' => '512M',
>>>>                          'heap_new_size' => '100M',
>>>>                          'initial_token' => server['initial_token'],
>>>>                          'seeds' => "192.168.33.10",
>>>>                          'listen_address' => server['ip'],
>>>>                          'broadcast_address' => server['ip'],
>>>>                          'rpc_address' => server['ip'],
>>>>                          'concurrent_reads' => "2",
>>>>                          'concurrent_writes' => "2",
>>>>                          'memtable_flush_queue_size' => "2",
>>>>                          'compaction_throughput_mb_per_sec' => "8",
>>>>                          'key_cache_size_in_mb' => "4",
>>>>                          'key_cache_save_period' => "0",
>>>>                          'native_transport_min_threads' => "2",
>>>>                          'native_transport_max_threads' => "4",
>>>>                          'notify_restart' => true,
>>>>                          'reporter' => {
>>>>                            'riemann' => {
>>>>                              'enable' => true,
>>>>                              'host' => '192.168.33.51'
>>>>                            },
>>>>                            'graphite' => {
>>>>                              'enable' => true,
>>>>                              'host' => '192.168.33.51'
>>>>                            }
>>>>                          }
>>>>                        },
>>>>
>>>> The most important things to note:
>>>> - don't include JNA (it needs to lock pages larger than what will be
>>>> available)
>>>> - turn down threadpools for transports
>>>> - turn compaction throughput way down
>>>> - make concurrent reads and writes very small
>>>>
>>>> I have used the above to run a healthy 5-node cluster locally in its own
>>>> private network with a 6th monitoring server for light to moderate local
>>>> testing in 16 GB of laptop RAM. YMMV, but it is possible.
>>>>
>>>
>>>
>>>
>>> --
>>> GPG me!!
>>>
>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>>
>>>
>>
>>
>> --
>> GPG me!!
>>
>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>
>>
>
>
> --
> -----------------
> Nate McCall
> Austin, TX
> @zznate
>
> Co-Founder & Sr. Technical Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>



-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
