Repairs are slow after upgrade to 3.11.3

2018-08-28 Thread Maxim Parkachov
Hi everyone,

a couple of days ago I upgraded Cassandra from 3.11.2 to 3.11.3, and I see
that repair time has practically doubled. Is anyone else seeing the same
regression?

Regards,
Maxim.


Re: Cassandra 3.11 with NFS mounted system

2018-08-28 Thread Vineet G H
Try iSCSI instead, and also experiment with the mount options for the filesystem.
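For reference, the kind of mount tuning Vineet mentions usually happens in /etc/fstab. The entry below is only a sketch of commonly tried starting points for database workloads on NFS, not a tested recommendation; the export and values are placeholders to benchmark against your own workload:

```
# /etc/fstab -- hypothetical NFS export and options; tune rsize/wsize
# and actimeo against your own benchmarks before adopting anything
172.XX.XX.16:/vol/cns_mount_0020_data  /cassandra/data  nfs  rw,hard,noatime,rsize=1048576,wsize=1048576  0  0
```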
On Tue, Aug 28, 2018 at 8:46 AM ZAIDI, ASAD A  wrote:
>
> Hello Folks,
>
>
>
> I have a virtualized environment running on VMware where Cassandra is
> humming on NFS-mounted storage. As the application load increases, they add
> nodes to the data center; however, writes are getting slower, nodes are
> flapping, and the application complains about write performance. I can see
> compaction is falling behind and flush writers are struggling - symptoms
> that point to the storage system, but the system admins say they have
> expensive, top-notch SSDs under the data volume.
>
> I know NFS is not recommended, but getting totally new h/w is not possible
> at this time. I'm wondering if there is a better way to utilize the
> available SAN resources with Cassandra? I'll much appreciate your insight
> here.
>
>
>
> Thanks/Asad
> [nfsiostat and nodetool tpstats output trimmed; see the original message
> later in this digest]

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: SSTable Compression Ratio -1.0

2018-08-28 Thread Vitaliy Semochkin
Thank you ZAIDI, but can you please explain why the reported ratio is negative?
On Tue, Aug 28, 2018 at 8:18 PM ZAIDI, ASAD A  wrote:
>
> The compression ratio is the compressed size divided by the original,
> uncompressed size, so smaller is better; think of it as
> compressed/uncompressed. A value of 1 would mean no change in size after
> compression!
>
>
>
> -Original Message-
> From: Vitaliy Semochkin [mailto:vitaliy...@gmail.com]
> Sent: Tuesday, August 28, 2018 12:03 PM
> To: user@cassandra.apache.org
> Subject: SSTable Compression Ratio -1.0
>
> Hello,
>
> nodetool tablestats my_keyspace
> returns SSTable Compression Ratio -1.0
>
> Can someone explain what -1.0 means?
>
> Regards,
> Vitaliy




commitlog content

2018-08-28 Thread Vitaliy Semochkin
Hello,

I've noticed that after a stress test that does only inserts, the commit log
directory grows to 20 times the size of the data directory.
What can cause this behavior?

Running nodetool compact did not change anything.

Regards,
Vitaliy
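Worth noting when reproducing this: `nodetool compact` rewrites SSTables and does not touch the commit log, whose segments are only recycled once the memtables they cover have been flushed. A diagnostic sketch, assuming default package-install paths (adjust to your layout):

```shell
# Compare on-disk sizes of the commit log and data directories
du -sh /var/lib/cassandra/commitlog /var/lib/cassandra/data

# Force memtables to disk so their commit log segments can be recycled
nodetool flush

# The setting that bounds commit log growth (in 3.x it defaults to the
# smaller of 8192 MB and a quarter of the commit log volume)
grep commitlog_total_space_in_mb /etc/cassandra/cassandra.yaml
```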




RE: SSTable Compression Ratio -1.0

2018-08-28 Thread ZAIDI, ASAD A
The compression ratio is the compressed size divided by the original,
uncompressed size, so smaller is better; think of it as
compressed/uncompressed. A value of 1 would mean no change in size after
compression!
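In code form, the arithmetic Asad describes looks like this. The -1.0 branch mirrors the sentinel seen in the thread when there is nothing to measure (e.g. no compressed SSTables sampled yet); treat that interpretation as an educated guess, not documented behaviour:

```python
def compression_ratio(compressed_bytes: int, uncompressed_bytes: int) -> float:
    """Ratio as reported by `nodetool tablestats`: compressed / uncompressed.

    Smaller is better; 1.0 means compression saved nothing.
    """
    if uncompressed_bytes <= 0:
        # No bytes to measure -- report the -1.0 sentinel from the thread.
        return -1.0
    return compressed_bytes / uncompressed_bytes

print(compression_ratio(500, 1000))   # 0.5 -> data shrank to half its size
print(compression_ratio(0, 0))        # -1.0 -> nothing sampled yet
```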



-Original Message-
From: Vitaliy Semochkin [mailto:vitaliy...@gmail.com] 
Sent: Tuesday, August 28, 2018 12:03 PM
To: user@cassandra.apache.org
Subject: SSTable Compression Ratio -1.0

Hello,

> nodetool tablestats my_keyspace
returns SSTable Compression Ratio -1.0

> Can someone explain what -1.0 means?

Regards,
Vitaliy

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org


SSTable Compression Ratio -1.0

2018-08-28 Thread Vitaliy Semochkin
Hello,

nodetool tablestats my_keyspace
returns SSTable Compression Ratio -1.0

Can someone explain what -1.0 means?

Regards,
Vitaliy




RE: Upgrade from 2.1 to 3.11

2018-08-28 Thread ZAIDI, ASAD A
You may want to check whether, coincidentally, you have expired cells sitting
in the heap. The GC log should be able to tell you, or look for tombstone
warnings in the system.log file. Also verify that your compactions are under
control and running normally. This may not be related to the upgrade at all!
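To make the second check concrete: 3.x logs a warning when a single read touches many tombstones. A small sketch of pulling those counts out of system.log text; the sample line is a shortened, hypothetical version of the real message:

```python
import re

# Hypothetical, shortened log excerpt standing in for real system.log content
sample_log = """\
WARN  ReadCommand Read 500 live rows and 10002 tombstone cells for query SELECT * FROM ks.t
INFO  CompactionTask Compacted 4 sstables
"""

def tombstone_warnings(log_text: str) -> list[int]:
    # Extract the tombstone-cell counts mentioned in warning lines.
    return [int(m.group(1))
            for m in re.finditer(r"(\d+) tombstone cells", log_text)]

print(tombstone_warnings(sample_log))  # [10002]
```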


From: Pradeep Chhetri [mailto:prad...@stashaway.com]
Sent: Tuesday, August 28, 2018 3:32 AM
To: user@cassandra.apache.org
Subject: Re: Upgrade from 2.1 to 3.11

You may want to try upgrading to 3.11.3 instead, which has some memory leak
fixes.

On Tue, Aug 28, 2018 at 9:59 AM, Mun Dega  wrote:
I am surprised that no one else ran into any issues with this version. GC
can't catch up fast enough, and there is constant full GC taking place.

The result? Unresponsive nodes making the entire cluster unusable.

Any insight on this issue from anyone that is using this version would be 
appreciated.

Ma

On Fri, Aug 24, 2018, 04:30 Mohamadreza Rostami
 wrote:
You have a very large heap, so GC takes most of the CPU time. You should cap
the heap at 12 GB and enable the row cache to make your cluster faster.

On Friday, 24 August 2018, Mun Dega  wrote:
120G data
28G heap out of 48 on system
9 node cluster, RF3

On Thu, Aug 23, 2018, 17:19 Mohamadreza Rostami
 wrote:
Hi,
How much data do you have? How much RAM do your servers have? How large is
your heap?
On Thu, Aug 23, 2018 at 10:14 PM Mun Dega  wrote:
Hello,

We recently upgraded from Cassandra 2.1 to 3.11.2 on one cluster. The process
went OK, including upgradesstables, but afterwards we started to experience
high read/write latency, occasional OOMs, and long GC pauses.

For the same cluster on 2.1, we didn't have any issues like this. We also
kept the server specs, heap, and all settings the same post-upgrade.

Has anyone else had similar issues going to 3.11, and what major changes
could cause such a setback in the new version?

Ma Dega
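Mohamadreza's suggestion above translates to roughly the following. This is only a sketch: file locations follow a 3.11 package install, the heap size is his number rather than a general recommendation, and the row cache size is a placeholder to illustrate the knob:

```
# /etc/cassandra/jvm.options -- fixed 12 GB heap
-Xms12G
-Xmx12G

# /etc/cassandra/cassandra.yaml -- row cache is off (0) by default;
# the size here is a placeholder, not a recommendation
row_cache_size_in_mb: 2048
```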



Cassandra 3.11 with NFS mounted system

2018-08-28 Thread ZAIDI, ASAD A
Hello Folks,

I have a virtualized environment running on VMware where Cassandra is humming
on NFS-mounted storage. As the application load increases, they add nodes to
the data center; however, writes are getting slower, nodes are flapping, and
the application complains about write performance. I can see compaction is
falling behind and flush writers are struggling - symptoms that point to the
storage system, but the system admins say they have expensive, top-notch SSDs
under the data volume.

I know NFS is not recommended, but getting totally new h/w is not possible at
this time. I'm wondering if there is a better way to utilize the available
SAN resources with Cassandra? I'll much appreciate your insight here.

Thanks/Asad


Nfsiostat
===
$ nfsiostat /cassandra/data
172.XX.XX.16:/vol/cns_mount_0020_data mounted on /cassandra/data:

              op/s    rpc bklog
            839.33         0.00

read:        ops/s       kB/s    kB/op    retrans   avg RTT (ms)   avg exe (ms)
           236.611  13471.729   56.936  20 (0.0%)          6.125         13.236
write:       ops/s       kB/s    kB/op    retrans   avg RTT (ms)   avg exe (ms)
           302.627  19048.744   62.945   3 (0.0%)         23.154        253.329

nodetool tpstats
==
$ nodetool tpstats
Pool Name                      Active   Pending    Completed   Blocked  All time blocked
ReadStage                           0         0      3474362         0                 0
MiscStage                           0         0            0         0                 0
CompactionExecutor                 12       337        47130         0                 0
MutationStage                     256    238996    313857751         0                 0
MemtableReclaimMemory               0         0        27081         0                 0
PendingRangeCalculator              0         0           37         0                 0
GossipStage                         0         0       173852         0                 0
SecondaryIndexManagement            0         0            0         0                 0
HintsDispatcher                     0         0            0         0                 0
RequestResponseStage                0         0    337132828         0                 0
Native-Transport-Requests           1         0    120206509         0           7073177
ReadRepairStage                     0         0         8934         0                 0
CounterMutationStage                0         0            0         0                 0
MigrationStage                      1         2          102         0                 0
MemtablePostFlush                   1       223        27753         0                 0
PerDiskMemtableFlushWriter_0        2         2        27095         0                 0
ValidationExecutor                  0         0            0         0                 0
Sampler                             0         0            0         0                 0
MemtableFlushWriter                 5       175        27093         0                 0
InternalResponseStage               0         0           46         0                 0
ViewMutationStage                   0         0            0         0                 0
AntiEntropyStage                    0         0            0         0                 0
CacheCleanupExecutor                0         0            0         0                 0

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
HINT944297
MUTATION  15707984
COUNTER_MUTATION 0
BATCH_STORE  0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR   3195


Re: Upgrade from 2.1 to 3.11

2018-08-28 Thread Brian Spindler
Ma, did you try what Mohamadreza suggested? Having such a large heap means a
lot of objects pile up that need a full GC to collect.

On Tue, Aug 28, 2018 at 4:31 AM Pradeep Chhetri 
wrote:

> You may want to try upgrading to 3.11.3 instead, which has some memory
> leak fixes.
>
> On Tue, Aug 28, 2018 at 9:59 AM, Mun Dega  wrote:
>
>> I am surprised that no one else ran into any issues with this version.
>> GC can't catch up fast enough and there is constant Full GC taking place.
>>
>> The result? Unresponsive nodes making the entire cluster unusable.
>>
>> Any insight on this issue from anyone that is using this version would be
>> appreciated.
>>
>> Ma
>>
>> On Fri, Aug 24, 2018, 04:30 Mohamadreza Rostami <
>> mohamadrezarosta...@gmail.com> wrote:
>>
>>> You have a very large heap, so GC takes most of the CPU time. You should
>>> cap the heap at 12 GB and enable the row cache to make your cluster
>>> faster.
>>>
>>> On Friday, 24 August 2018, Mun Dega  wrote:
>>>
 120G data
 28G heap out of 48 on system
 9 node cluster, RF3


 On Thu, Aug 23, 2018, 17:19 Mohamadreza Rostami <
 mohamadrezarosta...@gmail.com> wrote:

> Hi,
> How much data do you have? How much RAM do your servers have? How large is
> your heap?
> On Thu, Aug 23, 2018 at 10:14 PM Mun Dega  wrote:
>
>> Hello,
>>
>> We recently upgraded from Cassandra 2.1 to 3.11.2 on one cluster. The
>> process went OK, including upgradesstables, but afterwards we started to
>> experience high read/write latency, occasional OOMs, and long GC pauses.
>>
>> For the same cluster on 2.1, we didn't have any issues like this. We also
>> kept the server specs, heap, and all settings the same post-upgrade.
>>
>> Has anyone else had similar issues going to 3.11, and what major changes
>> could cause such a setback in the new version?
>>
>> Ma Dega
>>
>
>


Re: Upgrade from 2.1 to 3.11

2018-08-28 Thread Pradeep Chhetri
You may want to try upgrading to 3.11.3 instead, which has some memory leak
fixes.

On Tue, Aug 28, 2018 at 9:59 AM, Mun Dega  wrote:

> I am surprised that no one else ran into any issues with this version.  GC
> can't catch up fast enough and there is constant Full GC taking place.
>
> The result? Unresponsive nodes making the entire cluster unusable.
>
> Any insight on this issue from anyone that is using this version would be
> appreciated.
>
> Ma
>
> On Fri, Aug 24, 2018, 04:30 Mohamadreza Rostami <
> mohamadrezarosta...@gmail.com> wrote:
>
>> You have a very large heap, so GC takes most of the CPU time. You should
>> cap the heap at 12 GB and enable the row cache to make your cluster
>> faster.
>>
>> On Friday, 24 August 2018, Mun Dega  wrote:
>>
>>> 120G data
>>> 28G heap out of 48 on system
>>> 9 node cluster, RF3
>>>
>>>
>>> On Thu, Aug 23, 2018, 17:19 Mohamadreza Rostami <
>>> mohamadrezarosta...@gmail.com> wrote:
>>>
 Hi,
 How much data do you have? How much RAM do your servers have? How large
 is your heap?
 On Thu, Aug 23, 2018 at 10:14 PM Mun Dega  wrote:

> Hello,
>
> We recently upgraded from Cassandra 2.1 to 3.11.2 on one cluster. The
> process went OK, including upgradesstables, but afterwards we started to
> experience high read/write latency, occasional OOMs, and long GC pauses.
>
> For the same cluster on 2.1, we didn't have any issues like this. We also
> kept the server specs, heap, and all settings the same post-upgrade.
>
> Has anyone else had similar issues going to 3.11, and what major changes
> could cause such a setback in the new version?
>
> Ma Dega
>