Re: Bursts of Thrift threads make cluster unresponsive

2019-06-27 Thread Avinash Mandava
> …y help in this situation?
>
> org.apache.cassandra.metrics:type=ThreadPools,path=transport,scope=Native-Transport-Requests,name=TotalBlockedTasks.Count
> This metric is 0 on all cluster nodes.
>
> On Fri, Jun 28, 2019 at 00:34 Avinash Mandava wrote:
>> Have you tried i…

Re: Get information about GC pause (Stop the world) via JMX, it's possible ?

2019-06-27 Thread Avinash Mandava
Here are the metrics you want. It depends on which GC you're using, as Dimo said above. 1) If you're using CMS: Collection time / Collection count (average time per collection). ParNew (java.lang:type=GarbageCollector,name=ParNew.CollectionTime /…
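Since `CollectionTime` and `CollectionCount` are cumulative counters, the average pause has to be computed from the delta between two samples. A small sketch of that arithmetic (sample numbers are invented; how you fetch the two JMX samples is up to your tooling):

```python
def avg_pause_ms(t0_ms: int, c0: int, t1_ms: int, c1: int) -> float:
    """Average GC pause between two JMX samples of the cumulative
    CollectionTime (ms) and CollectionCount attributes."""
    collections = c1 - c0
    if collections == 0:
        return 0.0  # no collections happened between samples
    return (t1_ms - t0_ms) / collections

# Two ParNew samples taken, say, 60s apart:
# 1200ms of additional pause time over 40 additional collections
print(avg_pause_ms(120_000, 4_000, 121_200, 4_040))  # -> 30.0
```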

Re: Bursts of Thrift threads make cluster unresponsive

2019-06-27 Thread Avinash Mandava
Have you tried increasing concurrent reads until you see more activity on disk? If you've always got 32 active reads and high pending reads, it could just be dropping the reads because the queues are saturated; you could be artificially bottlenecking at the C* process level. Also, what does this metric…
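The knob being suggested lives in cassandra.yaml. A hedged sketch of the change, assuming the 3.x defaults (32 is the shipped default for `concurrent_reads`; the commonly cited rule of thumb is roughly 16 × number of data drives, with more headroom on SSDs, so treat the values below as a starting point, not a recommendation):

```yaml
# cassandra.yaml -- raise concurrent_reads in steps, re-checking disk
# utilization and pending/dropped reads in `nodetool tpstats` each time.
concurrent_reads: 32        # default; try 64, then 96, measuring at each step
concurrent_writes: 32
```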

Re: One of the cassandra node is going down.

2019-05-30 Thread Avinash Mandava
Have you checked the system log for GC messages on the node that's going down? On Thu, May 30, 2019 at 1:53 PM Kunal wrote:
> Hi All,
>
> I am facing a situation in my 3-node Cassandra cluster wherein one of the nodes is going down after around 5-10 mins.
>
> Below messages are seen in…
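GC pauses show up in system.log as GCInspector lines. A small sketch for pulling the collector name and pause duration out of those lines; the log format assumed here ("ParNew GC in NNNms") matches 3.x and varies slightly between versions:

```python
import re

# Example line shape (format differs a little across versions):
#   WARN  [Service Thread] ... GCInspector.java:282 - ParNew GC in 2758ms. ...
GC_LINE = re.compile(r"GCInspector.*?(\w+) GC in (\d+)ms")

def gc_pauses(log_text: str) -> list[tuple[str, int]]:
    """Return (collector, pause_ms) pairs found in system.log text."""
    return [(m.group(1), int(m.group(2))) for m in GC_LINE.finditer(log_text)]

sample = "WARN  [Service Thread] 2019-05-30 GCInspector.java:282 - ParNew GC in 2758ms.  CMS Old Gen: ..."
print(gc_pauses(sample))  # -> [('ParNew', 2758)]
```

Long pauses (seconds) right before the node drops off are a strong hint the crash is GC-driven rather than Cassandra-driven.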

Re: Data Inconsistencies - Tables Vs Materialized Views.

2019-05-14 Thread Avinash Mandava
Hi Bharath, not sure if you've seen the known limitations: https://docs.datastax.com/en/cql/3.3/cql/cql_using/knownLimitationsMV.html

Re: Cassandra cross dc replication row isolation

2019-05-07 Thread Avinash Mandava
Are you asking if writes are atomic at the partition level? If so, yes. If you have N columns in a simple k/v schema and you send a write with X/N of those columns set, all X will be updated at the same time wherever that write goes. The CL thing is more about how tolerant you are to stale data…
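The semantics described above can be sketched with a toy model (illustrative only, not driver code): every cell touched by one write shares that write's timestamp, and conflict resolution is last-write-wins per cell.

```python
# Toy model of a Cassandra partition: column -> (value, write_timestamp).
partition = {"a": (1, 100), "b": (2, 100)}

def apply_write(partition: dict, columns: dict, ts: int) -> None:
    """Apply one write: all cells it touches land together under one timestamp."""
    for col, val in columns.items():
        # last-write-wins per cell, but every cell in this write shares ts
        if col not in partition or partition[col][1] < ts:
            partition[col] = (val, ts)

apply_write(partition, {"a": 9, "c": 3}, ts=200)
print(partition)  # -> {'a': (9, 200), 'b': (2, 100), 'c': (3, 200)}
```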

Re: CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Avinash Mandava
In scenario 2 it's lost. If both nodes die and get replaced entirely, there's no history anywhere that the write ever happened, as it wouldn't be in the commitlog, a memtable, or an sstable on node 3. Surviving that failure scenario (two nodes with the same data failing simultaneously) requires upping CL or…
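The failure sequence can be sketched as a toy model (illustrative only): RF=3, a QUORUM write acked by two replicas, then exactly those two replicas are replaced with empty disks before any hint delivery or repair reaches node 3.

```python
# Toy model: three replicas, each a key-value store on "disk".
replicas = {"n1": {}, "n2": {}, "n3": {}}

def write_quorum(key: str, value: str, acked_by: list) -> None:
    """A QUORUM write with RF=3 succeeds once 2 replicas persist it."""
    assert len(acked_by) >= 2, "QUORUM with RF=3 needs 2 acks"
    for node in acked_by:
        replicas[node][key] = value

write_quorum("k", "v", acked_by=["n1", "n2"])   # n3 never saw the write

# Both acking nodes fail and are rebuilt with empty data directories,
# with no hints, commitlog, or repair carrying the write to n3:
replicas["n1"] = {}
replicas["n2"] = {}

# A read at any CL now finds no trace of the write on any replica:
print(any("k" in data for data in replicas.values()))  # -> False
```

This is the "lost" outcome described above; surviving it means either a higher CL (so more replicas persist the write before the ack) or not losing both acking replicas' data at once.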