Re: default_time_to_live vs TTL on insert statement

2018-07-11 Thread kurt greaves
The Datastax documentation is wrong. It won't error, and it shouldn't. If you want to fix that documentation I suggest contacting Datastax.

On 11 July 2018 at 19:56, Nitan Kainth wrote:
> Hi DuyHai,
>
> Could you please explain in what case C* will error based on documented statement:
>
> You

Re: Inconsistent Quorum Read after Quorum Write

2018-07-11 Thread Mick Semb Wever
Li,

> I did not reset repairedAt and ran repair with -pr directly. That’s probably why the inconsistency occurred.

Yes, this will be a likely cause. There's enough docs out there to help you with this. Shout out if not.

> As our tables are pretty big, full repair takes many days to finish.
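For context, the commands involved might look like the following. This is a sketch only: the keyspace name and data paths are placeholders, and `sstablerepairedset` must be run with the node stopped.

```shell
# Repair just this node's primary token ranges (run on every node in turn):
nodetool repair -pr my_keyspace

# Reset the repairedAt flag on a table's SSTables (node must be stopped),
# e.g. when moving between incremental and full repair strategies:
sstablerepairedset --really-set --is-unrepaired \
    /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db
```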

Re: Check Cluster Health

2018-07-11 Thread Thouraya TH
Hi, Ok. Thanks a lot for the answers. Kind regards.

2018-07-11 16:13 GMT+01:00 Furkan Cifci:
> No, you don't need to install Prometheus on each node. Install Prometheus on one machine and configure it. For basic configuration use this documentation:

Re: default_time_to_live vs TTL on insert statement

2018-07-11 Thread Nitan Kainth
Hi DuyHai,

Could you please explain in what case C* will error based on the documented statement:

You can set a default TTL for an entire table by setting the table's default_time_to_live

Re: default_time_to_live vs TTL on insert statement

2018-07-11 Thread DuyHai Doan
The default_time_to_live property applies if you don't specify any TTL on your CQL statement. However, you can always override the default_time_to_live
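A sketch in CQL of the behaviour described above (the table and column names are made up for illustration):

```sql
-- Hypothetical table; default_time_to_live is in seconds.
CREATE TABLE events (
    id uuid PRIMARY KEY,
    payload text
) WITH default_time_to_live = 86400;

-- Inherits the table default (expires after 86400 s):
INSERT INTO events (id, payload) VALUES (uuid(), 'a');

-- Overrides the table default with an explicit per-statement TTL:
INSERT INTO events (id, payload) VALUES (uuid(), 'b') USING TTL 3600;
```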

default_time_to_live vs TTL on insert statement

2018-07-11 Thread Nitan Kainth
Hi,

As per the document: https://docs.datastax.com/en/cql/3.3/cql/cql_using/useExpireExample.html

You can set a default TTL for an entire table by setting the table's default_time_to_live

Re: Inconsistent Quorum Read after Quorum Write

2018-07-11 Thread Visa
Hi Mick,

Thanks for replying! I did not reset repairedAt and ran repair with -pr directly. That’s probably why the inconsistency occurred. As our tables are pretty big, full repair takes many days to finish. Given the 10-day gc grace period, it means repair will run almost all the time.

Re: Check Cluster Health

2018-07-11 Thread Furkan Cifci
No, you don't need to install Prometheus on each node. Install Prometheus on one machine and configure it. For basic configuration use this documentation: https://www.robustperception.io/monitoring-cassandra-with-prometheus/

You need to run an exporter on each node for collecting metrics. Node
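A minimal prometheus.yml scrape section for this setup might look like the following. The host names and the exporter port here are assumptions; the linked article covers the actual exporter configuration.

```yaml
scrape_configs:
  - job_name: 'cassandra'
    static_configs:
      - targets:
          # One metrics exporter per Cassandra node; port is an example.
          - 'cassandra-node1:7070'
          - 'cassandra-node2:7070'
          - 'cassandra-node3:7070'
```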

Re: Write Time of a Row in Multi DC Cassandra Cluster

2018-07-11 Thread Jeff Jirsa
For tracking delay in very recent versions, this exists: https://issues.apache.org/jira/browse/CASSANDRA-13289

-- Jeff Jirsa

> On Jul 11, 2018, at 1:05 AM, Simon Fontana Oscarsson wrote:
>
> I thought you just wanted to test how big a delay you have, that's why I suggested trace.
>
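For reference, the write timestamp of a column can also be read directly in CQL, which is one way to compare when a row landed in each DC (the table and column names here are hypothetical):

```sql
-- WRITETIME() returns the cell's write timestamp in microseconds
-- since the epoch.
SELECT id, WRITETIME(payload)
FROM events
WHERE id = 123e4567-e89b-12d3-a456-426655440000;
```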

Re: Inconsistent Quorum Read after Quorum Write

2018-07-11 Thread Mick Semb Wever
Li,

> I’ve confirmed that the inconsistency issues disappeared after repair finished.
>
> Anything changed with repair in 3.11.1? One difference I noticed is that the validation step during repair could bring the node down on large tables, which never happened in 3.10. I had to throttle

Re: Check Cluster Health

2018-07-11 Thread Thouraya TH
Hi all,

Please, I have a history of the state of each node in my data center, a history of the failures of my cluster (UN: node up, DN: node down). Here are some lines of the history:

08:51:36 UN 127.0.0.1
08:51:36 UN 127.0.0.2
08:51:36 UN 127.0.0.3
08:53:50 DN 127.0.0.1
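A short sketch of how such a history could be summarized, e.g. to find each node's latest known state. The function and sample data are illustrative only, matching the line format above; this is not part of any Cassandra tooling.

```python
def parse_status_history(lines):
    """Return the most recent (timestamp, status) seen for each node.

    Expects lines like "08:51:36 UN 127.0.0.1".
    """
    latest = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 3:
            timestamp, status, node = parts
            latest[node] = (timestamp, status)
    return latest

history = [
    "08:51:36 UN 127.0.0.1",
    "08:51:36 UN 127.0.0.2",
    "08:51:36 UN 127.0.0.3",
    "08:53:50 DN 127.0.0.1",
]
print(parse_status_history(history)["127.0.0.1"])  # ('08:53:50', 'DN')
```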

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-11 Thread Jürgen Albersdorfer
And by all means, do not treat Cassandra as a relational database. Beware of the limitations of CQL in contrast to SQL. I don't want to argue against Cassandra, because I like it for what it was primarily designed for: horizontal scalability for HUGE amounts of data. It is good to access your Data

Re: Write Time of a Row in Multi DC Cassandra Cluster

2018-07-11 Thread Simon Fontana Oscarsson
I thought you just wanted to test how big a delay you have, that's why I suggested trace. Your best option is to write with EACH_QUORUM as Alain said. That way you will get a response when the write is successful in all DCs. The downside is that the request will fail if one DC is down. As usual