A few weeks ago, we moved a keyspace to another DC within the same cluster.
Original: cluster_1: DC1 holds ks1 + ks2
After:    cluster_1: DC1 holds ks1; DC2 holds ks2


Following the guide at http://www.planetcassandra.org/blog/cassandra-migration-to-ec2, our steps were:


1. On all new nodes (DC2):
$ vi /usr/install/cassandra/conf/cassandra.yaml:
 cluster_name: cluster_1
 endpoint_snitch: GossipingPropertyFileSnitch
 - seeds: "DC1 Nodes,DC2 Node"
 auto_bootstrap: false
$ sudo -u admin vi /usr/install/cassandra/conf/cassandra-rackdc.properties
 dc=DC2
 rack=RAC1
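For reference, once a DC2 node is started, the DC/rack it advertises can be double-checked like this (a quick sketch assuming the same install path; the exact label wording may vary by Cassandra version):
$ /usr/install/cassandra/bin/nodetool info | grep -E 'Data Center|Rack'
Data Center       : DC2
Rack              : RAC1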


2. Add DC2 to ks2's replication:
ALTER KEYSPACE ks2 WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 
'DC2' : 3, 'DC1' : 3};
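The new replication settings can then be confirmed from cqlsh (just a quick check, not part of the original guide):
cqlsh> DESCRIBE KEYSPACE ks2;
-- should show replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'}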


3. Migrate data from the original DC by running the following on each DC2 node:
/usr/install/cassandra/bin/nodetool rebuild DC1
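Rebuild streams the existing ks2 data over from DC1, so while it runs the streaming progress can be watched with (same install path assumed):
$ /usr/install/cassandra/bin/nodetool netstats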


4. Drop DC1 from ks2's replication:
ALTER KEYSPACE ks2 WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC2' : 3};
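After DC1 is dropped from ks2's replication, the DC1 nodes still keep the old ks2 data on disk; my understanding is that a cleanup on each DC1 node should remove it and reclaim the space (a sketch, not from the original guide):
$ /usr/install/cassandra/bin/nodetool cleanup ks2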


After that, I thought we were done, because ks1 and ks2 now live in different DCs, so their data should be separated.


But recently we hit an issue: nodes in DC1 still talk to DC2.
Using iftop, I can see traffic flowing from DC1 nodes to DC2.
I suspect the cause is that the seed list on the DC2 nodes still contains DC1 nodes from step 1; the change I have in mind is sketched below.

Our leader wants to totally separate ks1 and ks2, so my plan is to change the DC2 nodes to use a different cluster_name.
But after simply restarting the DC2 nodes with the new name, they fail to start:
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name cluster_1 != configured name cluster_2


Following the workaround from http://stackoverflow.com/questions/22006887/cassandra-saved-cluster-name-test-cluster-configured-name, the node can be started, but it logs a warning:
WARN 16:41:35,824 ClusterName mismatch from /192.168.47.216 cluster_1!=cluster2
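For completeness, the workaround from that answer is basically to overwrite the saved cluster name in the system keyspace before restarting with the new name (my understanding of it, use with care):
cqlsh> UPDATE system.local SET cluster_name = 'cluster_2' WHERE key = 'local';
$ /usr/install/cassandra/bin/nodetool flush system
then set cluster_name: cluster_2 in cassandra.yaml and restart the node.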


This approach also causes problems in nodetool status.
nodetool status on a DC1 node:
[qihuang.zheng@spark047219 ~]$ /usr/install/cassandra/bin/nodetool status
Datacenter: DC1
===============
--  Address         Load      Tokens  Owns  Host ID                               Rack
UN  192.168.47.219  62.61 GB  256     7.2%  953dda8c-3908-401f-9adb-aa59c4cb92d1  RAC1
Datacenter: DC2
===============
--  Address         Load      Tokens  Owns  Host ID                               Rack
DN  192.168.47.223  49.49 GB  256     7.5%  a4b91faf-3e1f-46df-a1cc-39bb267bc683  RAC1


nodetool status on a DC2 node after changing its cluster_name:
[qihuang.zheng@spark047223 ~]$ /usr/install/cassandra/bin/nodetool status
Datacenter: DC1
===============
--  Address         Load      Tokens  Owns  Host ID                               Rack
DN  192.168.47.219  ?         256     7.2%  953dda8c-3908-401f-9adb-aa59c4cb92d1  r1
DN  192.168.47.218  ?         256     7.7%  42b8bfae-a6ee-439b-b101-4a0963c9aaa0  r1
...
Datacenter: DC2
===============
--  Address         Load      Tokens  Owns  Host ID                               Rack
UN  192.168.47.223  49.49 GB  256     7.5%  a4b91faf-3e1f-46df-a1cc-39bb267bc683  RAC1


As you can see, DC1 thinks the DC2 node is down (DN), and DC2 thinks all DC1 nodes are down with unknown load (?), but in reality all nodes are up.

So here are my questions:
1. If I remove the DC1 nodes from the DC2 seed list but keep the same cluster_name, will iftop still show traffic flowing from DC1 nodes to DC2?
2. If I do want to move the DC2 nodes to a new cluster_name, what should I do next?
I think the first option is easier, but it may not match our leader's requirement.


Thanks, qihuang.zheng
