On Sat, Feb 3, 2018 at 11:23 AM, Kyrylo Lebediev <kyrylo_lebed...@epam.com>
wrote:

> Just tested on 3.11.1 and it worked for me (you may see the logs below).
>
> Just realized that there is one important prerequisite for this method to
> work: the new node MUST be located in the same rack (in C* terms) as the
> old one. Otherwise the correct replica placement order will be violated
> (i.e. replicas of the same token range should be placed in different racks).
>

Correct.
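(For reference: with GossipingPropertyFileSnitch the rack is taken from
cassandra-rackdc.properties, so it is worth double-checking that file on the
new node before its first start. The dc/rack names below are just example
values.)

```properties
# cassandra-rackdc.properties on the NEW node (example values).
# The replacement node must advertise the same dc and rack as the old one,
# otherwise replica placement for NetworkTopologyStrategy would change.
dc=dc1
rack=rack1
```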

> Anyway, even after a successful run of the node replacement in a sandbox,
> I'm still in doubt.
>
> Just wondering why this procedure, which seems much easier than the
> documented ways for live node replacement ([add/remove node] or [replace a
> node]), has never been included in the documentation.
>
> Does anybody in the ML know the reason for this?
>

There are a number of reasons why one would need to replace a node.  Losing
a disk is probably the most frequent one.  In that case, using
replace_address is the way to go, since it allows you to avoid any
ownership changes.

At the same time, on EC2 you might be replacing nodes in order to apply
security updates to your base machine image, etc.  In that case it is
possible to use the described procedure to migrate the data to the new
node.  However, if your nodes are small enough, simply using
replace_address seems like a more straightforward way to me.
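(For reference, replace_address is passed as a JVM system property at the
first boot of the replacement node. A minimal sketch of the relevant
cassandra-env.sh line; 10.0.0.5 stands in for the dead node's address:)

```shell
# In cassandra-env.sh on the replacement node, before its first start.
# JVM_OPTS is built up incrementally in that script; we start from an
# empty value here so the snippet is self-contained.
JVM_OPTS=""
# Tell the new node to take over the dead node's tokens and address.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.5"
echo "$JVM_OPTS"
```

Once the node has fully joined, the flag should be removed again so it is
not re-applied on later restarts.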

> Also, for some reason, in his article Carlos drops the files of the system
> keyspace (which contains the system.local table):
>
> In the new node, delete all system tables except for the schema ones. This
> will ensure that the new Cassandra node will not have any corrupt or
> previous configuration assigned.
>
>    1. sudo cd /var/lib/cassandra/data/system && sudo ls | grep -v schema
>    | xargs -I {} sudo rm -rf {}
>
>
Ah, this sounds like the wrong thing to do.  That would remove the
system.local table, which I expect makes the node forget its tokens.

I wouldn't do that: the node's state on disk should be just like after a
normal restart.
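(If in doubt, one can check that the node still knows its tokens after a
restart; they are stored in the system.local table and can be inspected
from cqlsh:)

```sql
-- Run via cqlsh against the replacement node; the tokens set should be
-- non-empty and should match the old node's tokens.
SELECT host_id, tokens FROM system.local;
```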

--
Alex
