Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-11 Thread Elliott Sims
A few reasons I can think of offhand why your test setup might not see
problems from large readahead:
- Your sstables are <4MB, or your reads are typically <4MB from the end of the file
- Your queries tend to use the 4MB of data anyway
- Your dataset is small enough that most of it fits in the OS page cache, so it rarely goes to disk
- Load is low enough that the read I/O amplification doesn't hurt performance

Less likely, but still possible, is that a subtle difference in the way 2.1
does reads vs 3.x is affecting it.  The less subtle explanation is that 3.x
has smaller rows, so a smaller readahead is probably optimal, but that would
only reduce your performance benefit and not cause a regression from 2.1->3.x.
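
A quick way to check which of the conditions above applies on a node (a sketch;
the data path and device name are assumptions taken from the df output later in
this thread, so adjust for your layout):

  # readahead on the data device, in 512-byte sectors (8192 = 4 MB)
  sudo blockdev --getra /dev/sdb

  # how many Data.db files are smaller than 4 MB (small SSTables bound how much
  # readahead can actually pull in)
  find /var/lib/cassandra/data -name '*-Data.db' -size -4M | wc -l
  find /var/lib/cassandra/data -name '*-Data.db' | wc -l

  # could most of the dataset fit in the page cache?
  du -sh /var/lib/cassandra/data   # compare with the buff/cache column of free -g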



Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-09 Thread Laxmikant Upadhyay
Thank you so much, Alexander!

Your suspicion was right. It was indeed due to the very high readahead value
(4 MB).

Although we had set the readahead value to 8 KB in our /etc/rc.local, somehow
that setting was not taking effect.
We are keeping the value at 64 KB, as this gives better performance than
8 KB. Now we are able to meet our SLA.

One interesting observation: we also have a setup on Cassandra 2.1.16 where
the readahead value is 4 MB, yet we are not observing any performance dip
there. I am not sure why.
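
For reference, a sketch of one way to apply and persist such a readahead
setting (the device name sdb comes from the df output later in this thread;
64 KB corresponds to 128 sectors of 512 bytes, and a udev rule tends to
survive reboots more reliably than rc.local):

  # apply immediately and verify
  sudo blockdev --setra 128 /dev/sdb
  sudo blockdev --getra /dev/sdb            # should print 128
  cat /sys/block/sdb/queue/read_ahead_kb    # should print 64

  # persist across reboots (illustrative rule file name)
  echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{queue/read_ahead_kb}="64"' | sudo tee /etc/udev/rules.d/99-readahead.rules

As Alexander notes below in this thread, Cassandra only reads this value at
startup, so restart the node after changing it.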



Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-05 Thread Alexander Dejanovski
Don't forget to run "nodetool upgradesstables -a" after you run the ALTER
statement, so that all SSTables get rewritten with the new compression
settings.

Since you have a lot of tables in your cluster, be aware that lowering the
chunk length will increase Cassandra's off-heap memory usage.
You can find more information here:
http://thelastpickle.com/blog/2018/08/08/compression_performance.html

You should also check your readahead settings, as they may be set too high:
sudo blockdev --report
The default is usually 256 (512-byte sectors, i.e. 128 KB), but Cassandra
favors low readahead values to get more IOPS rather than more throughput
(readahead is usually not that useful for Cassandra). A conservative setting
is 64 (you can go down to 8 and see how Cassandra performs).
Do note that changing the readahead setting requires a restart of Cassandra,
as it is only read once by the JVM during startup.
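
To make the two steps concrete, a sketch using the keyspace/table from this
thread (ks.xyz):

  # rewrite existing SSTables so they pick up new compression settings
  # (-a / --include-all-sstables also rewrites SSTables already on the current version)
  nodetool upgradesstables -a ks xyz

  # inspect readahead on all block devices; the RA column is in 512-byte sectors,
  # so 8192 = 4 MB and 128 = 64 KB
  sudo blockdev --report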

Cheers,


Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread CPC
Could you decrease chunk_length_in_kb to 16 or 8 and repeat the test?
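
For example, a sketch of such a change against the table from this thread
(16 KB shown; the value is illustrative, and as Alexander notes above the
SSTables need to be rewritten afterwards with upgradesstables):

  cqlsh -e "ALTER TABLE ks.xyz WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': '16'};"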


Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread wxn...@zjqunshuo.com
How large are your rows? You may be running into the wide-row (large partition) read problem.

-Simon
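
A hedged way to check for that (keyspace/table names taken from the schema in
this thread):

  # partition size percentiles for the table
  nodetool tablehistograms ks xyz

  # max / mean compacted partition sizes
  nodetool tablestats ks.xyz | grep -i partition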


High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread Laxmikant Upadhyay
We have a 3-node Cassandra cluster (3.11.2) in a single DC.

We have written 450 million records to a table using LCS. The write latency
is fine. After the write we perform read and update operations.

When we run read+update operations on 1 million newly inserted records (on
top of the 450 million), read latency and I/O usage stay under control.
However, when we perform read+update on 1 million old records that are part
of the 450 million, we observe high read latency (performance drops by about
4x compared to the first case). We have not observed major GC pauses.

system information:
cpu cores:  24
disk type:  SSD. We are using RAID with the deadline scheduler.
disk space:
df -h :
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb1   1.9T  393G  1.5T    22%  /var/lib/cassandra
memory:
free -g
        total   used   free   shared   buff/cache   available
Mem:       62     30      0        0           32          31
Swap:       8      0      8

==

schema

desc table ks.xyz;

CREATE TABLE ks.xyz (
key text,
column1 text,
value text,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class':
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class':
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
==
Below are some system stats snippets captured while the read operations were running:

iotop -o : Observation: the total disk read goes up to 5.5 G/s

Total DISK READ :   3.86 G/s | Total DISK WRITE :    1252.88 K/s
Actual DISK READ:   3.92 G/s | Actual DISK WRITE:       0.00 B/s
  TID  PRIO  USER DISK READ  DISK WRITE  SWAPIN IO>COMMAND
10715 be/4 cassandr  375.89 M/s   99.79 K/s  0.00 % 29.15 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10714 be/4 cassandr  358.56 M/s  107.18 K/s  0.00 % 27.06 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10712 be/4 cassandr  351.86 M/s  147.83 K/s  0.00 % 25.02 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10718 be/4 cassandr  359.82 M/s  110.87 K/s  0.00 % 24.49 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10711 be/4 cassandr  333.03 M/s  125.66 K/s  0.00 % 23.37 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10716 be/4 cassandr  330.80 M/s  103.48 K/s  0.00 % 23.02 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10717 be/4 cassandr  319.49 M/s  118.27 K/s  0.00 % 22.11 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10713 be/4 cassandr  300.62 M/s  118.27 K/s  0.00 % 21.65 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10719 be/4 cassandr  294.98 M/s   81.31 K/s  0.00 % 21.60 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10720 be/4 cassandr  289.00 M/s   73.92 K/s  0.00 % 21.45 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10742 be/4 cassandr  240.98 M/s   81.31 K/s  0.00 % 17.68 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10743 be/4 cassandr  224.43 M/s   36.96 K/s  0.00 % 17.57 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10744 be/4 cassandr  113.29 M/s   14.78 K/s  0.00 % 10.22 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/
10745 be/4 cassandr   61.63 M/s   33.26 K/s  0.00 %  4.20 % java
-Dorg.xerial.snappy.tempdir=/var/tmp
-Dja~etrics-core-3.1.0.jar:/usr/share/cassandra/lib/

==
iostat -x 5

avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           2.35    0.00     7.09    11.26    0.00   79.30

Device:  rrqm/s  wrqm/s      r/s   w/s       rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sdb      205.60    0.00  5304.60  3.00  3707651.20  2450.40   1398.03     11.37   2.14
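
A rough cross-check of the figures above, assuming the standard iostat column
layout (average read request size = rkB/s divided by r/s):

  awk 'BEGIN { printf "%.0f KB per read\n", 3707651.20 / 5304.60 }'   # ~699 KB

avgrq-sz of 1398.03 sectors (x 512 bytes) gives the same ~699 KB, i.e. each
read request pulls far more than the 64 KB compression chunk, which is
consistent with the large readahead discussed in the replies above.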