Re: commitlog content

2018-08-30 Thread Vitaliy Semochkin
Thank you for the excellent response, Alain!

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: commitlog content

2018-08-30 Thread Alain RODRIGUEZ
Hello Vitaliy,

This sounds odd to me (unless we are talking about small sizes, a few MB or
maybe a few GB). The commit log size is limited by default (see below), and
the data directory should grow larger than the commit log in most cases.

According to the documentation
(http://cassandra.apache.org/doc/latest/architecture/storage_engine.html#commitlog):

> commitlog_total_space_in_mb: Total space to use for commit logs on disk.
> If space gets above this value, Cassandra will flush every dirty CF in the
> oldest segment and remove it. So a small total commitlog space will tend to
> cause more flush activity on less-active columnfamilies.
> The default value is the smaller of 8192, and 1/4 of the total space of
> the commitlog volume.
> Default Value: 8192
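The cap described above can be changed in cassandra.yaml. A minimal sketch of
doing that, assuming GNU sed; the local stub file and the 2 GB value are
illustrative assumptions, not recommendations (the real file usually lives at
/etc/cassandra/cassandra.yaml or $CASSANDRA_HOME/conf/cassandra.yaml):

```shell
# Sketch: lower commitlog_total_space_in_mb so Cassandra flushes dirty
# tables and recycles commit log segments sooner.
# We create a local stub file so these commands are runnable anywhere;
# on a real node, point CONF at your actual cassandra.yaml instead.
CONF=cassandra.yaml
printf 'commitlog_total_space_in_mb: 8192\n' > "$CONF"   # stub with the default
# Cap total commit log space at 2 GB (illustrative value):
sed -i 's/^commitlog_total_space_in_mb:.*/commitlog_total_space_in_mb: 2048/' "$CONF"
cat "$CONF"
```

Cassandra reads this setting at startup, so a (rolling) restart is needed for
it to take effect.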


The commit log is supposed to be cleaned on flush, so there are multiple ways
to reduce the disk space used by commit logs:
- Decrease the value of 'commitlog_total_space_in_mb' (probably the best
option: you say what you want, and you get it)
- Use the table option 'memtable_flush_period_in_ms' (the default is 0; pick
whatever value you would like, but it has to be set on every table you want
it to apply to)
- Manually run 'nodetool flush', which should also clean the commit logs
- Reduce the size of the memtables
- Limit the maximum size per table before a flush is triggered with
'memtable_cleanup_threshold'. According to the docs it's not a good idea
though
(http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html#memtable-cleanup-threshold).
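To see where the space actually is, it helps to compare the commit log and
data directories directly. A runnable sketch using stand-in directories (the
real paths are typically /var/lib/cassandra/commitlog and
/var/lib/cassandra/data; the file names below are invented for illustration):

```shell
# Build stand-in directories mimicking the reported situation
# (commit log much larger than data dir):
mkdir -p demo/commitlog demo/data
dd if=/dev/zero of=demo/commitlog/CommitLog-6-1.log bs=1024 count=200 2>/dev/null
dd if=/dev/zero of=demo/data/ks1-t1-Data.db bs=1024 count=10 2>/dev/null
# The same check on a live node would be: du -sh on the two real directories.
cl=$(du -sk demo/commitlog | cut -f1)
data=$(du -sk demo/data | cut -f1)
echo "commitlog=${cl}KB data=${data}KB"
```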

Also, the data in Cassandra is compacted and compressed. Over a short test
period, or if the data is small compared to the available memory and fits
mostly in memory, I can imagine what you describe happening.
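The compression point can be made concrete: the commit log stores the raw
mutation stream, while flushed SSTables are compressed by default, so the
repetitive data that stress tools tend to write shrinks a lot on flush. A
small stand-alone illustration (gzip merely stands in for SSTable
compression, which uses LZ4 by default; the file names are made up):

```shell
# Highly repetitive rows, as synthetic stress tools often generate:
yes "k1,some_value,some_value,some_value" | head -n 10000 > mutations.txt
# Compress a copy, standing in for what SSTable compression does on flush:
gzip -c mutations.txt > mutations.txt.gz
ls -l mutations.txt mutations.txt.gz
```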

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

On Tue, Aug 28, 2018 at 18:24, Vitaliy Semochkin wrote:

> Hello,
>
> I've noticed that after a stress test that does only inserts, the
> commit log content exceeds the data dir by a factor of 20.
> What can be the cause of such behavior?
>
> Running nodetool compact did not change anything.
>
> Regards,
> Vitaliy
>
>


commitlog content

2018-08-28 Thread Vitaliy Semochkin
Hello,

I've noticed that after a stress test that does only inserts, the
commit log content exceeds the data dir by a factor of 20.
What can be the cause of such behavior?

Running nodetool compact did not change anything.

Regards,
Vitaliy
