RE: [EXTERNAL] Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Durity, Sean R
https://issues.apache.org/jira/browse/CASSANDRA-9620 has something similar that
was determined to be a driver error. I would start by looking at the driver
version and the RetryPolicy that is in effect for the Cluster. Secondly, I would
look at whether a batch is really needed for the statements. Cassandra batches
are for atomicity – not speed.
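
For reference, the retry policy is set on the Cluster builder in the DataStax
Java driver 3.x, and a non-default or custom policy configured there can retry
a timed-out write differently than the statement requested. A minimal sketch,
with a placeholder contact point:

// DataStax Java driver 3.x; the RetryPolicy set here applies to every
// statement unless a statement overrides it. "127.0.0.1" is a placeholder.
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .build();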


Sean Durity
Staff Systems Engineer – Cassandra
MTC 2250
#cassandra - for the latest news and updates



From: Mahesh Daksha 
Sent: Thursday, April 11, 2019 5:21 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Getting Consistency level TWO when it is requested 
LOCAL_ONE

Hi Jean,

I want to understand how you are setting the write consistency level to
LOCAL_ONE. That is, are you specifying the consistency level with every query,
or have you set the Spring Cassandra config with the provided consistency level?
Like this:
cluster.setQueryOptions(new 
QueryOptions().setConsistencyLevel(ConsistencyLevel.valueOf(cassandraConsistencyLevel)));

The only possibility I see for such behavior is that it is getting overridden
from somewhere.
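
For illustration (driver 3.x API; the keyspace, table, values, and open session
here are made up), a per-statement setting overrides that cluster-wide default
and would look like this:

// Per-statement consistency wins over the QueryOptions default above.
// my_ks.my_table, id, val, and session are hypothetical.
Statement stmt = new SimpleStatement(
        "INSERT INTO my_ks.my_table (id, val) VALUES (?, ?)", id, val)
        .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
session.execute(stmt);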

Thanks,
Mahesh Daksha

On Thu, Apr 11, 2019 at 1:43 PM Jean Carlo <jean.jeancar...@gmail.com> wrote:
Hello everyone,

I have a case where the developers are using the Spring Data framework for
Cassandra. We are writing batches with the consistency level set to LOCAL_ONE,
but we got a timeout like this:

Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra 
timeout during BATCH_LOG write query at consistency TWO (2 replica were 
required but only 1 acknowledged the write)

Is it Cassandra that somehow writes to the system.batchlog using consistency
TWO, or is it Spring Data doing something dirty behind the scenes?
(I want to believe it is the second one)

Cheers

Jean Carlo

"The best way to predict the future is to invent it" Alan Kay





Moving a node to another RAC in the same DC.

2019-04-11 Thread R. T.
Hi,

I have accidentally bootstrapped a node into the wrong RAC (RAC11) and I would
like to move it to the correct RAC with the remaining nodes (RAC1). The status
now is:

x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC11

Due to issues with free storage, I think it would be dangerous to decommission
the node and bootstrap it back into RAC1. So my question is:

Can I move the last node directly to RAC1?

My settings:
Cassandra v3.0.9
2 DCs (4/3 nodes respectively) (RF=2)
endpoint_snitch: PropertyFileSnitch
vnodes setup: num_tokens: 265
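
For reference, with PropertyFileSnitch the rack assignment lives in
cassandra-topology.properties on every node, using the same ip=DC:RAC format
shown above. The intended end state would be an edit like this (addresses are
placeholders), kept identical across all nodes; whether the node's data
placement stays correct after such a change is exactly the question here:

# cassandra-topology.properties (placeholder addresses)
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC1
10.0.0.3=DC1:RAC1
10.0.0.4=DC1:RAC1   # was DC1:RAC11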

Thank you,

Robert

Re: cass-2.2 trigger - how to get clustering columns and value?

2019-04-11 Thread Carl Mueller
Thank you all.

On Thu, Apr 11, 2019 at 4:35 AM Paul Chandler  wrote:

> Hi Carl,
>
> I know this is not exactly answering your question, but it may help with
> the split.
>
> I have split a multi-tenancy cluster several times using a similar
> process to TLP’s Data Centre Switch:
> http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
>
> However, instead of phase 3, we split the cluster by changing the
> seeds definition to only point at nodes within their own DC, and changing the
> cluster name of the new DC. This last step does require a short downtime of
> the cluster.
>
> We have had success with this method, and if you only want to track
> the updates to feed into the new cluster, then this will work; however, if
> you want it for anything else then it doesn’t help at all.
>
> I can supply more details later if this method is of interest.
>
> Thanks
>
> Paul Chandler
>
> > On 10 Apr 2019, at 22:52, Carl Mueller wrote:
> >
> > We have a multitenant cluster that we can't upgrade to 3.x easily, and
> > we'd like to migrate some apps off of the shared cluster to dedicated
> > clusters.
> >
> > This is a 2.2 cluster.
> >
> > So I'm trying a trigger to track updates while we transition and will
> > send via kafka. Right now I'm just trying to extract all the data from the
> > incoming updates
> >
> > so for
> >
> > public Collection<Mutation> augment(ByteBuffer key, ColumnFamily
> > update) {
> >
> > the names returned by update.getColumnNames() for an update of a
> > table with two clustering columns and a regular column update produced
> > two CellName/Cells:
> >
> > one has no name and no apparent raw value (the ByteBuffer is empty)
> >
> > the other is the data column.
> >
> > I can extract the primary key from the key field
> >
> > But how do I get the values of the two clustering columns? They aren't
> > listed in the iterator, and they don't appear to be in the key field. Since
> > clustering columns are encoded into the name of a cell, I'd imagine there
> > might be some "unpacking" trick to that.


Re: cass-2.2 trigger - how to get clustering columns and value?

2019-04-11 Thread Paul Chandler
Hi Carl,

I know this is not exactly answering your question, but it may help with the
split.

I have split a multi-tenancy cluster several times using a similar process to
TLP’s Data Centre Switch:
http://thelastpickle.com/blog/2019/02/26/data-center-switch.html

However, instead of phase 3, we split the cluster by changing the seeds
definition to only point at nodes within their own DC, and changing the cluster
name of the new DC. This last step does require a short downtime of the cluster.
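
For illustration, the relevant settings on the nodes of the split-off DC would
end up looking something like this; the cluster name and addresses are made up:

# cassandra.yaml on each node of the DC being split off (illustrative values)
cluster_name: 'orders_dedicated'        # changed during the downtime window
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds now point only at nodes within this DC
          - seeds: "10.0.1.1,10.0.1.2"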

We have had success with this method, and if you only want to track the
updates to feed into the new cluster, then this will work; however, if you want
it for anything else then it doesn’t help at all.

I can supply more details later if this method is of interest. 

Thanks 

Paul Chandler 

> On 10 Apr 2019, at 22:52, Carl Mueller wrote:
> 
> We have a multitenant cluster that we can't upgrade to 3.x easily, and we'd 
> like to migrate some apps off of the shared cluster to dedicated clusters.
> 
> This is a 2.2 cluster.
> 
> So I'm trying a trigger to track updates while we transition and will send 
> via kafka. Right now I'm just trying to extract all the data from the 
> incoming updates
> 
> so for 
> 
> public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {
> 
> the names returned by update.getColumnNames() for an update of a table
> with two clustering columns and a regular column update produced two
> CellName/Cells:
> 
> one has no name and no apparent raw value (the ByteBuffer is empty)
> 
> the other is the data column. 
> 
> I can extract the primary key from the key field
> 
> But how do I get the values of the two clustering columns? They aren't listed 
> in the iterator, and they don't appear to be in the key field. Since 
> clustering columns are encoded into the name of a cell, I'd imagine there 
> might be some "unpacking" trick to that. 





Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Mahesh Daksha
Hi Jean,

I want to understand how you are setting the write consistency level to
LOCAL_ONE. That is, are you specifying the consistency level with every query,
or have you set the Spring Cassandra config with the provided consistency level?
Like this:
cluster.setQueryOptions(new
QueryOptions().setConsistencyLevel(ConsistencyLevel.valueOf(cassandraConsistencyLevel)));

The only possibility I see for such behavior is that it is getting overridden
from somewhere.

Thanks,
Mahesh Daksha

On Thu, Apr 11, 2019 at 1:43 PM Jean Carlo wrote:

> Hello everyone,
>
> I have a case where the developers are using the Spring Data framework for
> Cassandra. We are writing batches with the consistency level set to LOCAL_ONE,
> but we got a timeout like this:
>
> *Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
> Cassandra timeout during BATCH_LOG write query at consistency TWO (2
> replica were required but only 1 acknowledged the write)*
>
> Is it Cassandra that somehow writes to the *system.batchlog* using
> consistency TWO, or is it Spring Data doing something dirty behind
> the scenes?
> (I want to believe it is the second one)
>
> Cheers
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>


Re: All time blocked in nodetool tpstats

2019-04-11 Thread Paul Chandler
Hi Abdul,

That all depends on the cluster, so it really is best to experiment.

By adding more threads you will use more of the system resources, so before you
start you need to know if there is spare capacity in the CPU usage and the disk
throughput. If there is spare capacity, then increase the threads in steps; I
normally go in steps of 32, but that is based on the size of the machines I
normally work with.
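
For concreteness, the knobs in question, with the sizing guidance from the
comments in the stock cassandra.yaml; these are illustrative starting points,
not targets:

# cassandra.yaml; raise in steps while watching CPU and disk utilisation
concurrent_reads: 64       # stock yaml guidance: ~16 * number of drives
concurrent_writes: 128     # stock yaml guidance: ~8 * number of cores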

But as Anthony said, if it is a high read system, then it could easily be 
tombstones or garbage collection. 

Thanks 

Paul Chandler

> On 11 Apr 2019, at 03:57, Abdul Patel  wrote:
> 
> Do we have any recommendations on concurrent reads and writes settings?
> Mine is an 18-node, 3-DC cluster with 20-core CPUs.
> 
> On Wednesday, April 10, 2019, Anthony Grasso wrote:
> Hi Abdul,
> 
> Usually we get no noticeable improvement from tuning concurrent_reads and
> concurrent_writes above 128. I generally try to keep concurrent_reads no
> higher than 64 and concurrent_writes no higher than 128. Increasing the
> values beyond that, you might start running into issues where the kernel IO
> scheduler and/or the disk become saturated. As Paul mentioned, it will depend
> on the size of your nodes though.
> 
> If the client is timing out, it is likely that the node that is selected as 
> the coordinator for the read has a resource contention somewhere. The root 
> cause is usually due to a number of things going on though. As Paul 
> mentioned, one of the issues could be the query design. It is worth 
> investigating if a particular read query is timing out.
> 
> I would also inspect the Cassandra logs and garbage collection logs on the 
> node where you are seeing the timeouts. The things to look out for is high 
> garbage collection frequency, long garbage collection pauses, and high 
> tombstone read warnings.
> 
> Regards,
> Anthony
> 
> On Thu, 11 Apr 2019 at 06:01, Abdul Patel wrote:
> Yes, the queries are all select queries, as it is more of a read-intensive app.
> Last night I rebooted the cluster and today they are fine (I know it's
> temporary), as I still see all time blocked values.
> I am thinking of increasing concurrent
> 
> On Wednesday, April 10, 2019, Paul Chandler wrote:
> Hi Abdul,
> 
> When I have seen dropped messages, I normally double check to ensure the node
> is not CPU bound.
> 
> If you have a high CPU idle value, then it is likely that tuning the thread 
> counts will help.
> 
> I normally start with concurrent_reads and concurrent_writes, so in your case,
> as reads are being dropped, increase concurrent_reads; I normally change it
> to 96 to start with, but it will depend on the size of your nodes.
> 
> Otherwise it might be badly designed queries, have you investigated which 
> queries are producing the client timeouts?
> 
> Regards 
> 
> Paul Chandler 
> 
> 
> 
> > On 9 Apr 2019, at 18:58, Abdul Patel wrote:
> > 
> > Hi,
> > 
> > My nodetool tpstats is showing high all time blocked numbers and also read
> > dropped messages at 400.
> > Clients are experiencing high timeouts.
> > I checked a few online forums; they recommend increasing
> > native_transport_max_threads.
> > As of now it is commented out at 128.
> > Is it advisable to increase this, and can this fix the timeout issue?
> > 
> 
> 
> 



Cassandra client side versus server side query timestamp.

2019-04-11 Thread Mahesh Daksha
Hello all,

As far as I know, Spring Data Cassandra (recent versions) uses the Cassandra
client-side query timestamp by default.
I am just curious to know which one is preferable and recommended: the
client-side or the server-side query timestamp.
Also, is there any logical reason for the choice?
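
For context, the knob in question in the DataStax Java driver 3.x (which Spring
Data Cassandra builds on): the timestamp generator decides whether the client
or the coordinator assigns the write timestamp. A minimal sketch, with a
placeholder contact point:

// Client-side timestamps are the 3.x default (monotonic per client process);
// ServerSideTimestampGenerator leaves assignment to the coordinator.
Cluster clientSide = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withTimestampGenerator(new AtomicMonotonicTimestampGenerator())
        .build();

Cluster serverSide = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withTimestampGenerator(ServerSideTimestampGenerator.INSTANCE)
        .build();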

Thanks,
Mahesh Daksha


Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Jean Carlo
Hello everyone,

I have a case where the developers are using the Spring Data framework for
Cassandra. We are writing batches with the consistency level set to LOCAL_ONE,
but we got a timeout like this:

*Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
Cassandra timeout during BATCH_LOG write query at consistency TWO (2
replica were required but only 1 acknowledged the write)*

Is it Cassandra that somehow writes to the *system.batchlog* using
consistency TWO, or is it Spring Data doing something dirty behind
the scenes?
(I want to believe it is the second one)
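
One way to confirm from the client side which write actually timed out (a
sketch against the DataStax Java driver 3.x; "batch" and "session" stand in
for whatever you execute): the WriteType carried by the exception
distinguishes the internal batch-log write from the batched statements
themselves.

// WriteType.BATCH_LOG on the exception means the batch-log write timed out,
// not the batch contents.
try {
    session.execute(batch);
} catch (WriteTimeoutException e) {
    System.out.printf("type=%s cl=%s acked=%d required=%d%n",
            e.getWriteType(), e.getConsistencyLevel(),
            e.getReceivedAcknowledgements(), e.getRequiredAcknowledgements());
}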

Cheers

Jean Carlo

"The best way to predict the future is to invent it" Alan Kay


Re: cass-2.2 trigger - how to get clustering columns and value?

2019-04-11 Thread Jacques-Henri Berthemet
Hi,

You should take a look at how Stratio’s Lucene index decodes CFs and keys,
starting from the RowService.doIndex() implementations:
https://github.com/Stratio/cassandra-lucene-index/tree/branch-2.2.13/plugin/src/main/java/com/stratio/cassandra/lucene/service

Note that in some cases an update without values is a delete of the Cell.
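
For what it's worth, here is a minimal sketch of that unpacking inside a 2.2
trigger, based on how I recall the 2.2 storage API (CellName extends Composite,
and the leading components of the composite name are the clustering values in
declaration order). Untested, so treat it as a starting point rather than
working code:

// Sketch only (Cassandra 2.2 trigger API). The table metadata supplies the
// clustering column types needed to decode the raw name components.
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update)
{
    CFMetaData cfm = update.metadata();
    List<ColumnDefinition> clustering = cfm.clusteringColumns();
    for (Cell cell : update)
    {
        CellName name = cell.name();
        for (int i = 0; i < name.clusteringSize(); i++)
        {
            ByteBuffer raw = name.get(i);                        // raw bytes
            String text = clustering.get(i).type.getString(raw); // printable
            // ... collect (clustering.get(i).name, text) for the outgoing event
        }
    }
    return Collections.emptyList(); // the trigger adds no extra mutations
}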

Regards,
Jacques-Henri Berthemet

From: Carl Mueller 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday 10 April 2019 at 23:53
To: "user@cassandra.apache.org" 
Subject: cass-2.2 trigger - how to get clustering columns and value?

We have a multitenant cluster that we can't upgrade to 3.x easily, and we'd 
like to migrate some apps off of the shared cluster to dedicated clusters.

This is a 2.2 cluster.

So I'm trying a trigger to track updates while we transition and will send via 
kafka. Right now I'm just trying to extract all the data from the incoming 
updates

so for
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {

the names returned by update.getColumnNames() for an update of a table with
two clustering columns and a regular column update produced two
CellName/Cells:

one has no name and no apparent raw value (the ByteBuffer is empty)

the other is the data column.

I can extract the primary key from the key field

But how do I get the values of the two clustering columns? They aren't listed 
in the iterator, and they don't appear to be in the key field. Since clustering 
columns are encoded into the name of a cell, I'd imagine there might be some 
"unpacking" trick to that.