[ https://issues.apache.org/jira/browse/CASSANDRA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073592#comment-14073592 ]

T Jake Luciani commented on CASSANDRA-7601:
-------------------------------------------

I think this comes down to what the point of taketoken is.  In effect you are 
de-commissioning the data from that node, which in reality means the code 
should be streaming the data to the new replicas.

Let's say you have 3 nodes and a keyspace with RF=2.

If you say nodetool -h node3 taketoken [tokenfromnode1], the replicas may end 
up being on node2 and node3.  This means we need a way to push data from node1 
to node2 with node3 as the coordinator.  What ends up happening with taketoken 
now is that we stream nothing, which in the case of this test leaves only the 
"missing" data on node3.

 


> Data loss after nodetool taketoken
> ----------------------------------
>
>                 Key: CASSANDRA-7601
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7601
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core, Tests
>         Environment: Mac OSX Mavericks. Ubuntu 14.04
>            Reporter: Philip Thompson
>            Priority: Minor
>         Attachments: consistent_bootstrap_test.py, taketoken.tar.gz
>
>
> The dtest 
> consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test
>  is failing on HEAD of the git branches 2.1 and 2.1.0.
> The test performs the following actions:
> - Create a cluster of 3 nodes
> - Create a keyspace with RF 2
> - Take node 3 down
> - Write 980 rows to node 2 with CL ONE
> - Flush node 2
> - Bring node 3 back up
> - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3
> - Check for data loss
> When the check for data loss is performed, only ~725 of the 980 rows can be read via CL ALL.
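
For reference, the quoted steps boil down to roughly the following, assuming
ccm and the DataStax Python driver.  The cluster, keyspace and table names are
placeholders rather than what the dtest actually uses, and the taketoken step
is left as a comment because the token list has to be read from `nodetool
ring` first:

import subprocess
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# 3-node ccm cluster on 2.1.0, started immediately
subprocess.check_call("ccm create taketoken-repro -v 2.1.0 -n 3 -s".split())
subprocess.check_call("ccm node3 stop".split())

session = Cluster(["127.0.0.2"]).connect()
session.execute("CREATE KEYSPACE ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 2}")
session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, v int)")

insert = SimpleStatement("INSERT INTO ks.cf (k, v) VALUES (%s, %s)",
                         consistency_level=ConsistencyLevel.ONE)
for i in range(980):                 # 980 rows at CL ONE while node3 is down
    session.execute(insert, (i, i))

subprocess.check_call("ccm node2 flush".split())
subprocess.check_call("ccm node3 start".split())   # bring node3 back up

# Relocate ~80% of node1's tokens to node3; the token list comes from
# `nodetool ring`, so it is omitted here:
# subprocess.check_call(["ccm", "node3", "nodetool", "taketoken", "<tokens...>"])

check = SimpleStatement("SELECT k FROM ks.cf",
                        consistency_level=ConsistencyLevel.ALL)
rows = list(session.execute(check))
assert len(rows) == 980, "only %d of 980 rows readable at CL ALL" % len(rows)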



--
This message was sent by Atlassian JIRA
(v6.2#6252)
