Re: All time blocked in nodetool tpstats

2019-04-10 Thread Jean Carlo
In my cluster, I have it at 4096. I think you can start with 1024 and check
that you no longer see blocked native transport requests.

I believe this parameter depends on the cluster traffic
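
For reference, a minimal sketch of the relevant cassandra.yaml change; the
value shown is only an illustrative starting point, not a recommendation for
every cluster:

  # cassandra.yaml -- uncomment and raise from the 128 default, then watch
  # "Native-Transport-Requests" in nodetool tpstats before raising it further
  native_transport_max_threads: 1024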

Cheers

Jean Carlo

"The best way to predict the future is to invent it" Alan Kay


On Tue, Apr 9, 2019 at 7:59 PM Abdul Patel  wrote:

> Hi,
>
> My nodetool tpstats are showing high all-time-blocked numbers and also
> read dropped messages at 400.
> Clients are experiencing high timeouts.
> A few online forums I checked recommend increasing
> native_transport_max_threads.
> As of now it is commented out with a value of 128.
> Is it advisable to increase this, and can it fix the timeout issue?
>
>


Re: Questions about C* performance related to tombstone

2019-04-10 Thread Alok Dwivedi
Your delete query
>> "DELETE FROM myTable WHERE course_id = 'C' AND assignment_id = 'A1';"
will generate multi-row range tombstones. Since you are reading the entire
partition, which will effectively be read in pages (the equivalent of a slice
query), you may get tombstones in certain pages depending on how many deletes
you are doing. However, looking at your use case, I don't think you will end up
with a very high ratio of deleted to live data, so normal deletes should be
fine, as is already pointed out below. Note that range tombstones are more
efficient storage-wise, as they store a start/end range rather than deletion
info for every deleted row. So I also don't think your workaround of using an
'active' flag is really needed, unless it is for auditing. Another thing to
note: if you have a use case where you want to be more aggressive in evicting
tombstones, here are some settings worth exploring
- tombstone_threshold
- unchecked_tombstone_compaction
- tombstone_compaction_interval
Additionally, gc_grace_seconds can be looked at, but it must be handled very
carefully: we must ensure that repair completes in an interval less than this
setting to prevent any deleted data reappearing.
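
As a hedged illustration, the three settings above are per-table compaction
subproperties; the table name and values below come from this thread's example
and are illustrative only (note that setting the compaction map replaces it
entirely, so 'class' must be included):

  ALTER TABLE myTable WITH compaction = {
      'class': 'SizeTieredCompactionStrategy',
      'tombstone_threshold': '0.2',
      'unchecked_tombstone_compaction': 'true',
      'tombstone_compaction_interval': '86400'
  };

  -- gc_grace_seconds is a separate table property; 4 days here is only an
  -- example and must stay longer than your repair cycle
  ALTER TABLE myTable WITH gc_grace_seconds = 345600;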

Regards
Alok


> On 9 Apr 2019, at 15:56, Jon Haddad  wrote:
> 
> Normal deletes are fine.
> 
> Sadly there's a lot of hand wringing about tombstones in the generic
> sense which leads people to try to work around *every* case where
> they're used.  This is unnecessary.  A tombstone over a single row
> isn't a problem, especially if you're only fetching that one row back.
> Tombstones can be quite terrible under a few conditions:
> 
> 1. When a range tombstone shadows hundreds / thousands / millions of
> rows.  This wasn't even detectable prior to Cassandra 3 unless you
> were either looking for it specifically or were doing CPU profiling:
> http://thelastpickle.com/blog/2018/07/05/undetectable-tombstones-in-apache-cassandra.html
> 2. When rows were frequently created then deleted, and scanned over.
> This is the queue pattern that we detest so much.
> 3. When they'd be created as a side effect of overwriting
> collections.  This is typically an accident.
> 
> The 'active' flag is good if you want to be able to go back and look
> at old deleted assignments.  If you don't care about that, use a
> normal delete.
> 
> Jon
> 
> On Tue, Apr 9, 2019 at 7:00 AM Li, George  wrote:
>> 
>> Hi,
>> 
>> I have a table defined like this:
>> 
>> CREATE TABLE myTable (
>> course_id text,
>> assignment_id text,
>> assignment_item_id text,
>> data text,
>> active boolean,
>> PRIMARY KEY (course_id, assignment_id, assignment_item_id)
>> );
>> i.e. course_id as the partition key and assignment_id, assignment_item_id as 
>> clustering keys.
>> 
>> After data is populated, some delete queries by course_id and assignment_id 
>> occur, e.g. "DELETE FROM myTable WHERE course_id = 'C' AND assignment_id = 
>> 'A1';". This would create tombstones, so the query "SELECT * FROM myTable WHERE 
>> course_id = 'C';" would be affected, right? Would the query "SELECT * FROM 
>> myTable WHERE course_id = 'C' AND assignment_id = 'A2';" be affected too?
>> 
>> For the query "SELECT * FROM myTable WHERE course_id = 'C';", to work around the 
>> tombstone problem, we are thinking about not doing hard deletes and instead 
>> doing soft deletes. So instead of doing "DELETE FROM myTable WHERE course_id 
>> = 'C' AND assignment_id = 'A1';", we do "UPDATE myTable SET active = false 
>> WHERE course_id = 'C' AND assignment_id = 'A1';". Then in the application, 
>> we query "SELECT * FROM myTable WHERE course_id = 'C';" and filter out 
>> records where "active" is "false". I am not really sure this would 
>> improve performance, because C* still has to scan through all records with 
>> the partition key 'C'. It is just that instead of scanning through X records + 
>> Y tombstone records with hard deletes, it now scans through X + Y records 
>> with soft deletes and no tombstones. Am I right?
>> 
>> Thanks.
>> 
>> George
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
> 



Re: All time blocked in nodetool tpstats

2019-04-10 Thread Paul Chandler
Hi Abdul,

When I have seen dropped messages, I normally double check to ensure the node 
is not CPU bound. 

If you have a high CPU idle value, then it is likely that tuning the thread 
counts will help.

I normally start with concurrent_reads and concurrent_writes, so in your case, 
as reads are being dropped, increase concurrent_reads. I normally change it 
to 96 to start with, but it will depend on the size of your nodes.
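
For reference, a sketch of that change in cassandra.yaml; 96 is just the
starting point suggested above, and the yaml's own guidance (roughly
16 * number_of_drives for reads, 8 * number_of_cores for writes) is worth
checking against your hardware:

  # cassandra.yaml (illustrative)
  concurrent_reads: 96
  # leave concurrent_writes at its current value unless writes are also
  # being dropped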

Otherwise it might be badly designed queries, have you investigated which 
queries are producing the client timeouts?

Regards 

Paul Chandler 



> On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
> 
> Hi,
> 
> My nodetool tpstats are showing high all-time-blocked numbers and also read 
> dropped messages at 400.
> Clients are experiencing high timeouts.
> A few online forums I checked recommend increasing 
> native_transport_max_threads.
> As of now it is commented out with a value of 128.
> Is it advisable to increase this, and can it fix the timeout issue?
> 


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Questions about C* performance related to tombstone

2019-04-10 Thread Li, George
Thanks for the information.

George.

On Wed, Apr 10, 2019 at 3:14 AM Alok Dwivedi 
wrote:

> Your delete query
>
> "DELETE FROM myTable WHERE course_id = 'C' AND assignment_id = 'A1';"
>
> will generate multi-row range tombstones. Since you are reading the entire
> partition, which will effectively be read in pages (the equivalent of a slice
> query), you may get tombstones in certain pages depending on how many deletes
> you are doing. However, looking at your use case, I don't think you will end up
> with a very high ratio of deleted to live data, so normal deletes should be
> fine, as is already pointed out below. Note that range tombstones are more
> efficient storage-wise, as they store a start/end range rather than deletion
> info for every deleted row. So I also don't think your workaround of using an
> 'active' flag is really needed, unless it is for auditing. Another thing to
> note: if you have a use case where you want to be more aggressive in evicting
> tombstones, here are some settings worth exploring
> - tombstone_threshold
> - unchecked_tombstone_compaction
> - tombstone_compaction_interval
> Additionally, gc_grace_seconds can be looked at, but it must be handled very
> carefully: we must ensure that repair completes in an interval less than
> this setting to prevent any deleted data reappearing.
>
> Regards
> Alok
>
>
> On 9 Apr 2019, at 15:56, Jon Haddad  wrote:
>
> Normal deletes are fine.
>
> Sadly there's a lot of hand wringing about tombstones in the generic
> sense which leads people to try to work around *every* case where
> they're used.  This is unnecessary.  A tombstone over a single row
> isn't a problem, especially if you're only fetching that one row back.
> Tombstones can be quite terrible under a few conditions:
>
> 1. When a range tombstone shadows hundreds / thousands / millions of
> rows.  This wasn't even detectable prior to Cassandra 3 unless you
> were either looking for it specifically or were doing CPU profiling:
>
> http://thelastpickle.com/blog/2018/07/05/undetectable-tombstones-in-apache-cassandra.html
> 
> 2. When rows were frequently created then deleted, and scanned over.
> This is the queue pattern that we detest so much.
> 3. When they'd be created as a side effect of overwriting
> collections.  This is typically an accident.
>
> The 'active' flag is good if you want to be able to go back and look
> at old deleted assignments.  If you don't care about that, use a
> normal delete.
>
> Jon
>
> On Tue, Apr 9, 2019 at 7:00 AM Li, George 
> wrote:
>
>
> Hi,
>
> I have a table defined like this:
>
> CREATE TABLE myTable (
> course_id text,
> assignment_id text,
> assignment_item_id text,
> data text,
> active boolean,
> PRIMARY KEY (course_id, assignment_id, assignment_item_id)
> );
> i.e. course_id as the partition key and assignment_id, assignment_item_id
> as clustering keys.
>
> After data is populated, some delete queries by course_id and
> assignment_id occur, e.g. "DELETE FROM myTable WHERE course_id = 'C' AND
> assignment_id = 'A1';". This would create tombstones, so the query "SELECT *
> FROM myTable WHERE course_id = 'C';" would be affected, right? Would the query
> "SELECT * FROM myTable WHERE course_id = 'C' AND assignment_id = 'A2';" be
> affected too?
>
> For the query "SELECT * FROM myTable WHERE course_id = 'C';", to work around
> the tombstone problem, we are thinking about not doing hard deletes and
> instead doing soft deletes. So instead of doing "DELETE FROM myTable WHERE
> course_id = 'C' AND assignment_id = 'A1';", we do "UPDATE myTable SET
> active = false WHERE course_id = 'C' AND assignment_id = 'A1';". Then in
> the application, we query "SELECT * FROM myTable WHERE course_id = 'C';"
> and filter out records where "active" is "false". I am not really
> sure this would improve performance, because C* still has to scan through
> all records with the partition key 'C'. It is just that instead of scanning
> through X records + Y tombstone records with hard deletes, it now scans
> through X + Y records with soft deletes and no tombstones. Am I right?
>
> Thanks.
>
> George
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
>


Topology settings before/after decommission node

2019-04-10 Thread rastrent
Hi there,

I am running a Cassandra cluster (v3.0.9) with 2 DCs (4/3 nodes respectively) 
using endpoint_snitch: PropertyFileSnitch, and I would like to decommission one 
node in DC1, but I wonder what kind of actions I need to take related to 
the topology settings.
My cassandra-topology.properties has these simple settings below:

x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC1
x.x.x.x=DC1:RAC1
x.x.x.x=DC2:RAC1
x.x.x.x=DC2:RAC1
x.x.x.x=DC2:RAC1

default=DC1:r1

My action plan is to:

1) Decommission a node in DC1
2) After the node leaves the cluster, edit cassandra-topology.properties on every 
node in the cluster
3) Question: Do I now need to restart all nodes in the cluster? (one at a time, of 
course)

Bonus question: Do I need to change the cassandra-topology.properties before 
moving/removing nodes?
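
For reference, a sketch of step 1; run on the node being removed, and expect
it to take a while depending on data size:

  nodetool decommission   # streams this node's data to the remaining replicas
  nodetool netstats       # from another shell, to watch streaming progress
  # afterwards, confirm the node no longer appears in `nodetool status`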

Cheers,

Robert,

Sent with ProtonMail (https://protonmail.com) Secure Email.

Driver Debug trace log

2019-04-10 Thread Rajasekhar Kommineni
Hi All,


We are seeing some timeouts for web calls that go to the remote DC, so we 
enabled debug tracing for the DataStax Java driver and are getting the messages 
below in the application log file. However, we are not able to determine the 
exact reason for the timeout. Need assistance in finding the same.

DataStax Java driver version : 3.3.1
Apache Cassandra Version :  3.11.1
Cluster Setup : 2 DC (4+4)


2019-04-10T09:29:57,228 DEBUG [Connection] (cluster1-nio-worker-2:)  
Connection[/hostname:port-1, inFlight=0, closed=false] Response received on 
stream 6784 but no handler set anymore (either the request has timed out or it 
was closed due to another error). Received message is ROWS [1 columns]
 | 
0x7b22636172644964223a7b2266616d696c794964223a22353634222c2262616e6e65724964223a225f6361746368616c6c5f222c22636172644964223a223130303030363939353037227d2c226f657273223a5b7b226f65724964223a2231353338303838222c226f65725075624964223a223131303034363036323c2273636f7265223a33383231363739343031322c226461746552616e6765223a7b22666972737444617465223a22323031392d30342d3031222c226c61737444617465223a22323031392d30342d3330227d7d2c7b226f65724964223a2231353430303930222c22...
 [message of size 82098 truncated]

2019-04-10T09:29:57,226 ERROR [CassandraBucket] (qtp1395262169-485:) 
trn-daa3ac4a5713402bb79078a12733ae72 get caught OperationTimedOutException: 
tryNumber=1 id=primarykey 
class=com.datastax.driver.core.exceptions.OperationTimedOutException 
ex=[/hostname:port] Timed out waiting for server response
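
The "no handler set anymore" debug line means the driver had already given up
on that stream before the server's response arrived, i.e. the client-side read
timeout fired first (12 seconds by default in driver 3.x). A hedged sketch of
raising that timeout and of pinning queries to the local DC so they stop
crossing to the remote one; the contact point and DC name are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class ClusterFactory {
    public static Cluster build() {
        return Cluster.builder()
                .addContactPoint("10.0.0.1")          // placeholder
                // raise the per-request client timeout (default is 12000 ms)
                .withSocketOptions(new SocketOptions().setReadTimeoutMillis(20000))
                // route requests to the local DC only
                .withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder()
                        .withLocalDc("DC1")           // placeholder DC name
                        .build())
                .build();
    }
}

Raising the timeout only hides slow reads, of course; if the remote-DC hops are
the real cause, the load balancing policy (or the consistency level in use) is
the thing to fix.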

Thanks,



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: All time blocked in nodetool tpstats

2019-04-10 Thread Abdul Patel
Yes, the queries are all select queries, as it is more of a read-intensive
app.
Last night I rebooted the cluster and today they are fine (I know it's
temporary), as I still see high all-time-blocked values.
I am thinking of increasing concurrent reads and writes to 256 and native
transport threads to 256 and seeing how it performs.

On Wednesday, April 10, 2019, Paul Chandler  wrote:

> Hi Abdul,
>
> When I have seen dropped messages, I normally double check to ensure the
> node is not CPU bound.
>
> If you have a high CPU idle value, then it is likely that tuning the
> thread counts will help.
>
> I normally start with concurrent_reads and concurrent_writes, so in your
> case, as reads are being dropped, increase concurrent_reads. I normally
> change it to 96 to start with, but it will depend on the size of your nodes.
>
> Otherwise it might be badly designed queries, have you investigated which
> queries are producing the client timeouts?
>
> Regards
>
> Paul Chandler
>
>
>
> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
> >
> > Hi,
> >
> > My nodetool tpstats are showing high all-time-blocked numbers and also
> read dropped messages at 400.
> > Clients are experiencing high timeouts.
> > A few online forums I checked recommend increasing
> native_transport_max_threads.
> > As of now it is commented out with a value of 128.
> > Is it advisable to increase this, and can it fix the timeout issue?
> >
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


cass-2.2 trigger - how to get clustering columns and value?

2019-04-10 Thread Carl Mueller
We have a multitenant cluster that we can't upgrade to 3.x easily, and we'd
like to migrate some apps off of the shared cluster to dedicated clusters.

This is a 2.2 cluster.

So I'm trying a trigger to track updates while we transition; the updates will
be sent via Kafka. Right now I'm just trying to extract all the data from the
incoming updates.

so for

public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {

the names returned by update.getColumnNames() for an update of a table
with two clustering columns and one regular column produced two
CellName/Cells:

one has no name and no apparent raw value (its ByteBuffer is empty);

the other is the data column.

I can extract the partition key from the key field.

But how do I get the values of the two clustering columns? They aren't
listed in the iterator, and they don't appear to be in the key field. Since
clustering columns are encoded into the name of a cell, I'd imagine there
might be some "unpacking" trick for that.
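
If it helps, here is a hedged sketch of that unpacking against the 2.2
internals as I remember them: the CellName is a composite whose leading
components are the clustering values, and the table's comparator knows each
component's type. Treat the exact method names as assumptions to verify
against your 2.2 source tree:

import java.nio.ByteBuffer;
import java.util.Collection;
import java.util.Collections;
import org.apache.cassandra.config.CFMetaData;
import org.apache.cassandra.db.Cell;
import org.apache.cassandra.db.ColumnFamily;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.composites.CellName;
import org.apache.cassandra.triggers.ITrigger;

public class AuditTrigger implements ITrigger {
    public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {
        CFMetaData cfm = update.metadata();
        for (Cell cell : update) {
            CellName name = cell.name();
            // each clustering value is one component of the composite name
            for (int i = 0; i < name.clusteringSize(); i++) {
                Object clusteringValue =
                        cfm.comparator.subtype(i).compose(name.get(i));
                // ... add clusteringValue to the kafka payload here
            }
        }
        return Collections.emptyList(); // no extra mutations
    }
}

The empty-name cell you are seeing is probably the row marker that CQL tables
write alongside the regular columns, which would explain its empty value.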


Re: How to install an older minor release?

2019-04-10 Thread Carl Mueller
You'll have to set up a local repo like Artifactory.
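
If a full repo manager is overkill, a hedged workaround is to pull the .deb
straight out of the pool linked below and install it by hand; the filename
pattern is an assumption taken from the pool listing:

  wget http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/cassandra_3.0.17_all.deb
  sudo dpkg -i cassandra_3.0.17_all.deb
  sudo apt-mark hold cassandra   # stop apt from upgrading it back to 3.0.18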

On Wed, Apr 3, 2019 at 4:33 AM Kyrylo Lebediev 
wrote:

> Hi Oleksandr,
>
> Yes, that was always the case. All older versions are removed from the Debian
> repo index :(
>
>
>
> *From: *Oleksandr Shulgin 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Tuesday, April 2, 2019 at 20:04
> *To: *User 
> *Subject: *How to install an older minor release?
>
>
>
> Hello,
>
>
>
> We've just noticed that we cannot install older minor releases of Apache
> Cassandra from Debian packages, as described on this page:
> http://cassandra.apache.org/download/
>
>
>
> Previously we were doing the following at the last step: apt-get install
> cassandra=3.0.17
>
>
>
> Today it fails with error:
>
> E: Version '3.0.17' for 'cassandra' was not found
>
>
>
> And `apt-get show cassandra` reports only one version available, the
> latest released one: 3.0.18
>
> The packages for the older versions are still in the pool:
> http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/
>
>
>
> Was it always the case that only the latest version is available to be
> installed directly with apt or did something change recently?
>
>
>
> Regards,
>
> --
>
> Alex
>
>
>


Re: All time blocked in nodetool tpstats

2019-04-10 Thread Anthony Grasso
Hi Abdul,

Usually we see no noticeable improvement from tuning concurrent_reads and
concurrent_writes above 128. I generally try to keep concurrent_reads no
higher than 64 and concurrent_writes no higher than 128. Increasing the
values beyond that, you might start running into issues where the kernel IO
scheduler and/or the disk become saturated. As Paul mentioned, it will
depend on the size of your nodes though.

If the client is timing out, it is likely that the node that is selected as
the coordinator for the read has resource contention somewhere. The root
cause is usually due to a number of things going on though. As Paul
mentioned, one of the issues could be the query design. It is worth
investigating if a particular read query is timing out.

I would also inspect the Cassandra logs and garbage collection logs on the
node where you are seeing the timeouts. The things to look out for are high
garbage collection frequency, long garbage collection pauses, and high
tombstone read warnings.
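
A quick, hedged way to check for the first two; the log path varies by
install, and GCInspector only reports pauses above its configured threshold:

  # long GC pauses reported by Cassandra itself
  grep GCInspector /var/log/cassandra/system.log | tail -20
  # tombstone-heavy read warnings
  grep -i tombstone /var/log/cassandra/system.log | tail -20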

Regards,
Anthony

On Thu, 11 Apr 2019 at 06:01, Abdul Patel  wrote:

> Yes, the queries are all select queries, as it is more of a read-intensive
> app.
> Last night I rebooted the cluster and today they are fine (I know it's
> temporary), as I still see high all-time-blocked values.
> I am thinking of increasing concurrent
>
> On Wednesday, April 10, 2019, Paul Chandler  wrote:
>
>> Hi Abdul,
>>
>> When I have seen dropped messages, I normally double check to ensure the
>> node is not CPU bound.
>>
>> If you have a high CPU idle value, then it is likely that tuning the
>> thread counts will help.
>>
>> I normally start with concurrent_reads and concurrent_writes, so in your
>> case, as reads are being dropped, increase concurrent_reads. I normally
>> change it to 96 to start with, but it will depend on the size of your nodes.
>>
>> Otherwise it might be badly designed queries, have you investigated which
>> queries are producing the client timeouts?
>>
>> Regards
>>
>> Paul Chandler
>>
>>
>>
>> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
>> >
>> > Hi,
>> >
>> > My nodetool tpstats are showing high all-time-blocked numbers and also
>> read dropped messages at 400.
>> > Clients are experiencing high timeouts.
>> > A few online forums I checked recommend increasing
>> native_transport_max_threads.
>> > As of now it is commented out with a value of 128.
>> > Is it advisable to increase this, and can it fix the timeout issue?
>> >
>>
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>


Re: All time blocked in nodetool tpstats

2019-04-10 Thread Abdul Patel
Do we have any recommendations on concurrent reads and writes settings?
Mine is an 18-node, 3-DC cluster with 20-core CPUs.
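
Applying the stock cassandra.yaml heuristics to a 20-core node gives the
arithmetic below; these are the yaml's own rules of thumb, not measured
recommendations, and Anthony's lower caps in the reply below are the safer
guide:

  # illustrative arithmetic for a 20-core node
  concurrent_writes: 160   # 8 * number_of_cores = 8 * 20
  concurrent_reads: 64     # ~16 * number_of_drives, capped per the advice below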

On Wednesday, April 10, 2019, Anthony Grasso 
wrote:

> Hi Abdul,
>
> Usually we see no noticeable improvement from tuning concurrent_reads and
> concurrent_writes above 128. I generally try to keep concurrent_reads no
> higher than 64 and concurrent_writes no higher than 128. Increasing
> the values beyond that, you might start running into issues where the kernel
> IO scheduler and/or the disk become saturated. As Paul mentioned, it will
> depend on the size of your nodes though.
>
> If the client is timing out, it is likely that the node that is selected
> as the coordinator for the read has resource contention somewhere. The
> root cause is usually due to a number of things going on though. As Paul
> mentioned, one of the issues could be the query design. It is worth
> investigating if a particular read query is timing out.
>
> I would also inspect the Cassandra logs and garbage collection logs on the
> node where you are seeing the timeouts. The things to look out for are high
> garbage collection frequency, long garbage collection pauses, and high
> tombstone read warnings.
>
> Regards,
> Anthony
>
> On Thu, 11 Apr 2019 at 06:01, Abdul Patel  wrote:
>
>> Yes, the queries are all select queries, as it is more of a read-intensive
>> app.
>> Last night I rebooted the cluster and today they are fine (I know it's
>> temporary), as I still see high all-time-blocked values.
>> I am thinking of increasing concurrent
>>
>> On Wednesday, April 10, 2019, Paul Chandler  wrote:
>>
>>> Hi Abdul,
>>>
>>> When I have seen dropped messages, I normally double check to ensure the
>>> node is not CPU bound.
>>>
>>> If you have a high CPU idle value, then it is likely that tuning the
>>> thread counts will help.
>>>
>>> I normally start with concurrent_reads and concurrent_writes, so in your
>>> case, as reads are being dropped, increase concurrent_reads. I normally
>>> change it to 96 to start with, but it will depend on the size of your nodes.
>>>
>>> Otherwise it might be badly designed queries, have you investigated
>>> which queries are producing the client timeouts?
>>>
>>> Regards
>>>
>>> Paul Chandler
>>>
>>>
>>>
>>> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
>>> >
>>> > Hi,
>>> >
>>> > My nodetool tpstats are showing high all-time-blocked numbers and also
>>> read dropped messages at 400.
>>> > Clients are experiencing high timeouts.
>>> > A few online forums I checked recommend increasing
>>> native_transport_max_threads.
>>> > As of now it is commented out with a value of 128.
>>> > Is it advisable to increase this, and can it fix the timeout issue?
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>>
>>>


Re: Topology settings before/after decommission node

2019-04-10 Thread Anthony Grasso
Hi Robert,

Your action plan looks good.

You can think of the *cassandra-topology.properties* file as a map for the
cluster. The map must be consistent between the nodes because each node
uses it to determine where it is meant to be located logically.

It is good hygiene to maintain the *cassandra-topology.properties* file so it
contains only the IPs (broadcast addresses) currently used in the cluster.
Technically, you could leave the entry for the decommissioned node in
there. The problem is that if that IP address is later used by a new node, it
will be placed back in DC1, which could be the wrong logical placement for it.
So I would advise removing the address once the node is inactive.

For step 3, there is no need to restart all the nodes. However, if you do
want them to reload the configuration, you will need to perform a rolling
restart on the cluster (i.e. restart one node at a time).
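
For completeness, a sketch of one iteration of that rolling restart; the
service name assumes a packaged install:

  nodetool drain                    # flush memtables, stop accepting traffic
  sudo service cassandra restart
  # wait for the node to show UN in `nodetool status` before the next node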

Regards,
Anthony

On Thu, 11 Apr 2019 at 03:38, rastrent 
wrote:

> Hi there,
>
> I am running a Cassandra cluster (v3.0.9) with 2 DCs (4/3 nodes
> respectively) using endpoint_snitch: PropertyFileSnitch, and I would like to
> decommission one node in DC1, but I wonder what kind of actions I need
> to take related to the topology settings.
> My cassandra-topology.properties has these simple settings below:
>
> x.x.x.x=DC1:RAC1
> x.x.x.x=DC1:RAC1
> x.x.x.x=DC1:RAC1
> x.x.x.x=DC1:RAC1
> x.x.x.x=DC2:RAC1
> x.x.x.x=DC2:RAC1
> x.x.x.x=DC2:RAC1
>
> default=DC1:r1
>
> My action plan is to:
>
> 1) Decommission a node in DC1
> 2) After the node leaves the cluster, edit cassandra-topology.properties on every
> node in the cluster
> 3) Question: Do I now need to restart all nodes in the cluster? (one at a time, of
> course)
>
> Bonus question: Do I need to change the cassandra-topology.properties
> before moving/removing nodes?
>
> Cheers,
>
> Robert,
>
>
> Sent with ProtonMail  Secure Email.
>
>