I rarely see Cassandra CPU-bound for my use cases. These are primarily write-heavy use cases with a low number of clients and far fewer reads. There is just a lot of data to keep.
Sean Durity
From: Alex Ott
Sent: Saturday, March 20, 2021 1:01 PM
To: user
Subject: [EXTERNAL] Re: Changing num_tokens
On 2021-03-22 01:27, Kane Wilson wrote:
You should be able to get repairs working fine if you use a tool such as
cassandra-reaper to manage it for you for such a small cluster. I would
look into that before doing major cluster topology changes, as these can
be complex and risky.
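For what it's worth, Reaper essentially schedules per-node, primary-range repairs for you; the manual equivalent would be something like the following (the keyspace name `my_ks` is a placeholder):

```shell
# Primary-range repair, run on each node in turn so every range
# is repaired exactly once across the cluster.
nodetool repair -pr my_ks
```

Reaper's value is splitting these into subrange segments and pacing them, so the scheduling doesn't overload a small cluster.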
If the nodes are almost the same except for disk space, then giving them more data may make the situation worse - they will get more requests than the other nodes, and won't have the resources to process them.
In Cassandra the disk size isn't the main "success" factor - memory, CPU, disk type (SSD), etc. matter more.
Hi, thanks for the suggestions!
I'll definitely migrate to 4.0 after all this is done, then.
I fear the old prod DC can't afford to lose a node right now (a few
nodes have their disks 70% full), but I can maybe find a third node for
the new DC right away.
BTW the new nodes have got 3× the disk space,
There are several things to consider here:
- You can't have a DC of two nodes with RF=3...
- Are you sure that the new DC will handle all production traffic?
- If the new nodes are much more powerful than the others (memory/CPU/disk
type), that could also cause unpredictable spikes when requests hit them.
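On the first point above: with RF=3 per DC, the new DC needs at least three nodes before the keyspace can replicate there. A sketch of the CQL involved - the keyspace and DC names (`my_ks`, `old_dc`, `new_dc`) are placeholders, not from this thread:

```
ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'old_dc': 3,
  'new_dc': 3
};
```

After altering replication, each node in the new DC would typically stream the existing data with `nodetool rebuild -- old_dc`.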
I have a 6-node production cluster running 3.11.9 with the default
num_tokens=256… which is fine, but I later discovered it is a bit of a
hassle to do repairs, and it is probably better to lower that to 16.
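A rough way to see why fewer vnodes makes repair less of a hassle: with vnodes, each node owns `num_tokens` ranges, so a full-cluster repair has on the order of num_tokens × nodes distinct ranges to walk. A back-of-the-envelope sketch using the cluster size from this thread:

```python
# Approximate number of token ranges a full-cluster repair must cover:
# each node owns num_tokens ranges, so the cluster has roughly
# num_tokens * nodes distinct ranges.

def total_token_ranges(num_tokens: int, nodes: int) -> int:
    return num_tokens * nodes

# 6-node cluster from this thread:
print(total_token_ranges(256, 6))  # 1536 ranges with the default
print(total_token_ranges(16, 6))   # 96 ranges after lowering num_tokens
```

Lowering num_tokens (via `num_tokens` in cassandra.yaml, set before a node first bootstraps) cuts that range count proportionally, which is why repair tooling copes much better with 16.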
I'm adding two new nodes with much larger storage and I was
wondering which migration