This is not a problem with the token assignments. Here are the ideal assignments 
from the tools/bin/token-generator script:

DC #1:
  Node #1:                                        0
  Node #2:   56713727820156410577229101238628035242
  Node #3:  113427455640312821154458202477256070484
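
For what it's worth, those numbers are just multiples of the ring size divided 
by the node count. A rough Python sketch that reproduces them (assuming the 
RandomPartitioner 0..2**127 ring, which your tokens indicate):

  # token-generator style spacing: token_i = i * (2**127 // node_count)
  RING_SIZE = 2 ** 127

  def ideal_tokens(node_count):
      spacing = RING_SIZE // node_count
      return [i * spacing for i in range(node_count)]

  for node, token in enumerate(ideal_tokens(3), start=1):
      print("Node #%d: %40d" % (node, token))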

You are pretty close, but the order of the nodes in the output is a little odd; 
I would normally expect node 2 (token 210) to be listed first, since it has the 
lowest token.

The first step would be to check the logs on node 1 to see if it’s failing at 
compaction, and to check whether it’s holding a lot of hints. Then make sure 
repair is running so the data gets evenly distributed.
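
If it helps, here is a rough sketch of those checks as a small Python script 
wrapping nodetool (the node address, log path, and grep pattern are assumptions, 
adjust them for your setup):

  import subprocess

  NODE = "*.*.*.1"                       # the overloaded node (placeholder)
  LOG = "/var/log/cassandra/system.log"  # assumed default log location

  def run(cmd):
      print("$ " + " ".join(cmd))
      print(subprocess.run(cmd, capture_output=True, text=True).stdout)

  # 1. Look for compaction errors in the logs (run this on the node itself).
  run(["grep", "-iE", "compact.*(error|exception)", LOG])

  # 2. Pending / active hinted handoff shows up in the thread pool stats.
  run(["nodetool", "-h", NODE, "tpstats"])

  # 3. Any compactions currently running or backed up?
  run(["nodetool", "-h", NODE, "compactionstats"])

  # 4. Once the node looks healthy, run repair so data is redistributed.
  run(["nodetool", "-h", NODE, "repair"])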

Hope that helps. 
Aaron

-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

On 12/05/2014, at 11:58 pm, Oleg Dulin <oleg.du...@gmail.com> wrote:

> I have a cluster that looks like this:
> 
> Datacenter: us-east
> ==========
> Replicas: 2
> 
> Address   Rack  Status  State   Load       Owns     Token
>                                                      113427455640312821154458202477256070484
> *.*.*.1   1b    Up      Normal  141.88 GB  66.67%    56713727820156410577229101238628035242
> *.*.*.2   1a    Up      Normal  113.2 GB   66.67%    210
> *.*.*.3   1d    Up      Normal  102.37 GB  66.67%    113427455640312821154458202477256070484
> 
> 
> Obviously, the first node in 1b has about 40% more data than the others. If I 
> wanted to rebalance this cluster, how would I go about that? Would shifting 
> the tokens accomplish what I need, and which tokens?
> 
> Regards,
> Oleg
> 
> 
