Thank you all for responding to and discussing my question.

I basically agree with all of you, but I think that in Cassandra's case it
comes down to how much data we store relative to how much memory we have.

Following Jack's (and DataStax's) suggestion, I used a 4GB RAM machine
(t2.medium) with 1 billion records (about 100GB in size) and the default
configuration except for LeveledCompactionStrategy, but after the
application program finished inserting, compaction probably kept running,
and once again Cassandra was later killed by the OOM killer.
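
In case it helps anyone reproduce this, commands along these lines
(assuming a standard Linux install with nodetool on the PATH) should show
whether compaction is still running and whether it was the kernel OOM
killer that terminated the process:

  # compactions in progress and pending compaction tasks
  nodetool compactionstats
  # thread pool stats, including the CompactionExecutor pool
  nodetool tpstats
  # rough heap / off-heap usage reported by the node
  nodetool info
  # confirm the kernel OOM killer terminated the java process
  dmesg | grep -i "killed process"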

Since insertion from the application side has finished, the issue is
probably the compaction happening in the background.
Is there any recommended compaction configuration to make Cassandra stable
with a large dataset (more than 100GB) in a fairly low-memory (4GB)
environment?
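
To be concrete, these are the kinds of knobs I am thinking of tuning.
The values below are only an illustrative sketch for a 4GB node, not
something I have verified; they come from cassandra.yaml and
cassandra-env.sh:

  # cassandra.yaml -- throttle background compaction and cap off-heap use
  concurrent_compactors: 1
  compaction_throughput_mb_per_sec: 8    # default is 16
  memtable_heap_space_in_mb: 256         # default is 1/4 of the heap
  memtable_offheap_space_in_mb: 256
  file_cache_size_in_mb: 256             # default is min(512MB, 1/4 heap)

  # cassandra-env.sh -- pin the heap instead of letting it be auto-sized
  MAX_HEAP_SIZE="1G"
  HEAP_NEWSIZE="200M"

The compaction throughput can also be changed at runtime with
"nodetool setcompactionthroughput", so it should be easy to experiment
with while compaction is still running.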

I think the same thing would happen if I tried the experiment with 8GB of
memory and a larger dataset (maybe more than 2 billion records).
(If that is not correct, please explain why.)


Best regards,
Hiro

On Fri, Mar 11, 2016 at 4:19 AM, Robert Coli <rc...@eventbrite.com> wrote:

> On Thu, Mar 10, 2016 at 3:27 AM, Alain RODRIGUEZ <arodr...@gmail.com>
> wrote:
>
>> So, like Jack, I generally really do not recommend it unless you know
>> what you are doing and don't care about facing those issues.
>>
>
> Certainly a spectrum of views here, but everyone (including OP) seems to
> agree with the above. :D
>
> =Rob
>
>
