y help in this situation?
>
> >
> org.apache.cassandra.metrics.type=ThreadPools.path=transport.scope=Native-Transport-Requests.name=TotalBlockedTasks.Count
> This metric is 0 on all cluster nodes.
>
> Fri, Jun 28, 2019 at 00:34, Avinash Mandava :
>
>> Have you tried i
Here are the metrics you want. It depends on which GC you're using, as Dimo
said above.
*1) If you're using CMS - Collection time / Collection count (Avg time per
collection)*
*ParNew*
(java.lang.type=GarbageCollector.name=ParNew.CollectionTime /
java.lang.type=GarbageCollector.name=ParNew.CollectionCount)
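
For reference, the same CollectionTime / CollectionCount counters that JMX exposes can be read in-process from the standard GarbageCollector MXBeans. A minimal sketch (on a ParNew/CMS JVM the bean names will be "ParNew" and "ConcurrentMarkSweep"; other collectors report different names):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcAvgPause {
    public static void main(String[] args) {
        // Each GC bean carries the same two counters as the JMX metric:
        // total collections so far and cumulative pause time in ms.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();  // -1 if unavailable
            long timeMs = gc.getCollectionTime();  // -1 if unavailable
            double avgMs = count > 0 ? (double) timeMs / count : 0.0;
            System.out.printf("%s: count=%d totalMs=%d avgMs=%.2f%n",
                    gc.getName(), count, timeMs, avgMs);
        }
    }
}
```

A rising average here (total time growing faster than count) is the signal to look for, rather than either raw counter alone.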
Have you tried increasing concurrent reads until you see more activity on
disk? If you've always got 32 active reads and high pending reads, it could
just be dropping the reads because the queues are saturated. That could be
artificially bottlenecking at the C* process level.
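
One rough way to reason about that: compare the ReadStage pool's active tasks against its maximum size (concurrent_reads, which defaults to 32) alongside the pending count. A toy helper, assuming you've already pulled ActiveTasks and PendingTasks from the ThreadPools JMX metrics; the pending threshold is arbitrary:

```java
public class ReadStageCheck {
    /**
     * Heuristic: the pool looks like the bottleneck when every worker is
     * busy (active >= max pool size) while work keeps queueing behind it.
     * Inputs would come from the ThreadPools ReadStage metrics
     * (ActiveTasks / PendingTasks); this is an illustration, not an API.
     */
    static boolean looksSaturated(long active, long pending, long maxPoolSize) {
        return active >= maxPoolSize && pending > 0;
    }

    public static void main(String[] args) {
        // concurrent_reads defaults to 32
        System.out.println(looksSaturated(32, 1500, 32)); // true: saturated
        System.out.println(looksSaturated(4, 0, 32));     // false: healthy
    }
}
```

If the check fires while disks are idle, raising concurrent_reads (and re-checking disk activity) is the experiment being suggested above.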
Also what does this metric
Have you checked system log for GC messages on the node that’s going down?
On Thu, May 30, 2019 at 1:53 PM Kunal wrote:
> Hi All,
>
> I am facing a situation in my 3-node Cassandra cluster wherein one of the
> Cassandra nodes is going down after around 5-10 mins.
>
> Below messages are seen in
Hi Bharath,
Not sure if you've seen the known limitations:
https://docs.datastax.com/en/cql/3.3/cql/cql_using/knownLimitationsMV.html
Are you asking if writes are atomic at the partition level? If so, yes. If
you have N columns in a simple k/v schema and you send a write with X of N
of those columns set, all X will be updated at the same time wherever that
write goes.
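
To make that concrete, here is a toy model of the guarantee (an illustration only, not Cassandra's implementation): a write that sets X of a partition's columns lands as one atomic step, so a reader never sees some of those X columns applied and others not, while columns outside the write are untouched:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PartitionWriteSketch {
    // partition key -> row (column name -> value); toy stand-in for a table
    private final ConcurrentHashMap<String, Map<String, String>> table = new ConcurrentHashMap<>();

    /** Apply every column of one write to one partition as a single atomic step. */
    void write(String pk, Map<String, String> columns) {
        table.compute(pk, (k, row) -> {
            Map<String, String> next = row == null ? new HashMap<>() : new HashMap<>(row);
            next.putAll(columns); // all columns in this write land together
            return next;
        });
    }

    Map<String, String> read(String pk) {
        return table.getOrDefault(pk, Map.of());
    }

    public static void main(String[] args) {
        PartitionWriteSketch t = new PartitionWriteSketch();
        t.write("user:1", Map.of("name", "kunal", "city", "pune"));
        t.write("user:1", Map.of("city", "mumbai")); // sets 1 of 2 columns
        System.out.println(t.read("user:1")); // name untouched, city updated
    }
}
```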
The CL thing is more about how tolerant you are to stale data,
In scenario 2 it's lost: if both nodes die and get replaced entirely,
there's no history anywhere that the write ever happened, as it wouldn't be
in the commitlog, memtable, or an sstable on node 3. Surviving that failure
scenario, where two nodes holding the same data fail simultaneously,
requires upping the CL or