Optimal Heap Size Cassandra Configuration

2019-05-20 Thread Akshay Bhardwaj
Hi Experts,

I have a 5-node cluster; each node has an 8-core CPU and 32 GiB of RAM.

With a write throughput of 5K TPS and a read throughput of 8K TPS, I want to
know the optimal heap size configuration for each Cassandra node.

Currently, the heap size is set to 8 GB. How can I tell whether Cassandra
requires more or less heap memory?
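
As a rough sketch of where to look (paths and values assume a stock package
install; the 8G/800M figures are illustrative, not recommendations, and exact
output varies by Cassandra version):

    # Current heap usage (used / total, in MB) on a node
    nodetool info | grep -i heap

    # GC behaviour; long or frequent pauses suggest heap pressure
    nodetool gcstats

    # Heap size is set in conf/cassandra-env.sh (or conf/jvm.options on
    # newer versions), e.g. for a fixed 8 GB heap:
    #   MAX_HEAP_SIZE="8G"
    #   HEAP_NEWSIZE="800M"
    # The node must be restarted for changes to take effect.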

Akshay Bhardwaj
+91-97111-33849


Re: Decommissioning a new node when the state is JOINING

2019-04-30 Thread Akshay Bhardwaj
Thank you for prompt replies. The solutions worked!

Akshay Bhardwaj
+91-97111-33849


On Tue, Apr 30, 2019 at 5:56 PM ZAIDI, ASAD A  wrote:

> Just stop the server / kill the C* process, as the node never fully joined
> the cluster yet – that should be enough. You can then safely remove the
> data that was streamed in to the new node, so you can reuse the node for
> the other, new cluster.
>
> *From:* Akshay Bhardwaj [mailto:akshay.bhardwaj1...@gmail.com]
> *Sent:* Tuesday, April 30, 2019 6:35 AM
> *To:* user@cassandra.apache.org
> *Subject:* Decommissioning a new node when the state is JOINING
>
>
> Hi Experts,
>
> I have a Cassandra cluster running with 5 nodes. I was setting up a new,
> separate Cassandra cluster, but one of the nodes intended for the new
> cluster had the same cassandra.yaml file as the existing cluster. This
> resulted in the new node joining the existing cluster, bringing the total
> number of nodes to 6.
>
> As of now, "nodetool status" shows the new node in the JOINING state,
> streaming data from the other nodes.
>
> What is the best way to decommission the node?
>
>1. Can I execute "nodetool decommission" on the new node immediately?
>2. Should I wait for the new node to finish syncing, and decommission
>    it only after that?
>3. Is there any other quick approach that avoids data loss for the
>    existing cluster?
>
> Thanks in advance!
>
> Akshay Bhardwaj
>
> +91-97111-33849
>
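
A minimal sketch of the stop-and-wipe approach suggested above, assuming a
package install with the default data directories (adjust paths and the
service name to your setup):

    # On the joining node only: stop the Cassandra process
    sudo service cassandra stop

    # Remove the partially streamed data, commit logs and saved caches so
    # the node can start clean in the new cluster
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*

    # Fix cluster_name and seeds in cassandra.yaml to point at the new
    # cluster before starting Cassandra again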


Decommissioning a new node when the state is JOINING

2019-04-30 Thread Akshay Bhardwaj
Hi Experts,

I have a Cassandra cluster running with 5 nodes. I was setting up a new,
separate Cassandra cluster, but one of the nodes intended for the new
cluster had the same cassandra.yaml file as the existing cluster. This
resulted in the new node joining the existing cluster, bringing the total
number of nodes to 6.

As of now, "nodetool status" shows the new node in the JOINING state,
streaming data from the other nodes.
What is the best way to decommission the node?

   1. Can I execute "nodetool decommission" on the new node immediately?
   2. Should I wait for the new node to finish syncing, and decommission it
   only after that?
   3. Is there any other quick approach that avoids data loss for the
   existing cluster?


Thanks in advance!

Akshay Bhardwaj
+91-97111-33849


Re: Cassandra | Cross Data Centre Replication Status

2018-10-30 Thread Akshay Bhardwaj
Hi Jonathan,

That makes sense. Thank you for the explanation.

Another quick question: as the cluster is still operative and the data from
the past 2 weeks (since updating the replication factor) is present in both
data centres, should I run "nodetool rebuild" or "nodetool repair"?

I read that nodetool rebuild is faster but is only appropriate while the new
data centre is still empty, before any partition keys are present. So when
is the right time to use each command, and what impact can they have on
data centre operations?
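
For reference, a rough sketch of the two commands (run on the new data
centre's nodes; "AWS_Sgp" is the existing data centre from this thread, and
repair flags vary by Cassandra version):

    # rebuild: bulk-streams existing data from another DC; intended for a
    # newly added, still-empty data centre
    nodetool rebuild -- AWS_Sgp

    # repair: compares replicas and fixes inconsistencies; appropriate once
    # the new DC already holds (possibly divergent) data
    nodetool repair -full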

Thanks and Regards
Akshay Bhardwaj
+91-97111-33849


On Wed, Oct 31, 2018 at 2:34 AM Jonathan Haddad  wrote:

> You need to run "nodetool rebuild -- <existing-dc-name>" on each node in
> the new DC to get the old data to replicate.  It doesn't happen
> automatically because Cassandra has no way of knowing whether you're done
> adding nodes, and if it were to migrate automatically, it could cause a
> lot of problems. Imagine streaming 100 nodes' data to 3 nodes in the new
> DC, not fun.
>
> On Tue, Oct 30, 2018 at 1:59 PM Akshay Bhardwaj <
> akshay.bhardwaj1...@gmail.com> wrote:
>
>> Hi Experts,
>>
>> I previously had one Cassandra data centre in the AWS Singapore region
>> with 5 nodes, with my keyspace's replication factor set to 3 using
>> NetworkTopologyStrategy.
>>
>> After this cluster had been running smoothly for 4 months (500 GB of data
>> on each node's disk), I added a 2nd data centre in the AWS Mumbai region,
>> again with 5 nodes.
>>
>> After updating my keyspace's replication factor to
>> {"AWS_Sgp":3,"AWS_Mum":3}, my expectation was that the data present in
>> the Sgp region would immediately start replicating onto the Mum region's
>> nodes. However, even after 2 weeks the historical data has not been
>> replicated, although new data written in the Sgp region does appear in
>> the Mum region as well.
>>
>> Any help or suggestions to debug this issue will be highly appreciated.
>>
>> Regards
>> Akshay Bhardwaj
>> +91-97111-33849
>>
>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


Cassandra | Cross Data Centre Replication Status

2018-10-30 Thread Akshay Bhardwaj
Hi Experts,

I previously had one Cassandra data centre in the AWS Singapore region with
5 nodes, with my keyspace's replication factor set to 3 using
NetworkTopologyStrategy.

After this cluster had been running smoothly for 4 months (500 GB of data
on each node's disk), I added a 2nd data centre in the AWS Mumbai region,
again with 5 nodes.

After updating my keyspace's replication factor to
{"AWS_Sgp":3,"AWS_Mum":3}, my expectation was that the data present in the
Sgp region would immediately start replicating onto the Mum region's nodes.
However, even after 2 weeks the historical data has not been replicated,
although new data written in the Sgp region does appear in the Mum region
as well.
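
For reference, a hedged sketch of the two steps involved ("my_keyspace" is a
placeholder; the data centre names must match those shown by "nodetool
status"):

    # 1. Update the replication settings (as already done in this thread)
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'AWS_Sgp': 3, 'AWS_Mum': 3};"

    # 2. Altering the keyspace only affects new writes; pre-existing data
    # must be streamed explicitly by running this on each node in the new DC
    nodetool rebuild -- AWS_Sgp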

Any help or suggestions to debug this issue will be highly appreciated.

Regards
Akshay Bhardwaj
+91-97111-33849