[ 
https://issues.apache.org/jira/browse/CASSANDRA-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Klopp updated CASSANDRA-12557:
-------------------------------------
    Description: 
Hello,

We are running Cassandra 3.0.6 and have added a fifth node to our four-node 
cluster.  Early on, the streams kept failing; I tweaked some cassandra.yaml 
settings and the failures stopped.  However, we have noticed strange behavior 
during the sync.  Please see the output of nodetool:

ubuntu@ip-172-28-4-238:~$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UJ  172.28.4.238  1.48 TB    256          ?       a797ed18-1d50-4b19-924a-f6b37b8859af  rack1
UN  172.28.4.79   988.83 GB  256          ?       9eec70ec-5d7a-4ba8-bba8-f7d229d00358  rack1
UN  172.28.4.69   891.9 GB   256          ?       1d429d87-ec4a-4e14-92d7-df2aa129041e  rack1
UN  172.28.4.129  985.48 GB  256          ?       677c7585-ed31-4afc-b17c-288a3a1e3666  rack1
UN  172.28.4.146  760.38 GB  256          ?       13ab7037-ec9b-4031-8d6c-4db95b91fa21  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
ubuntu@ip-172-28-4-238:~$ 
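For reference, here is a sketch of the checks that should show per-keyspace ownership (since the aggregate ownership above is flagged as meaningless) and the progress of the ongoing streams, run from the joining node; our_keyspace below is just a placeholder for one of our application keyspaces:

ubuntu@ip-172-28-4-238:~$ nodetool status our_keyspace   # ownership per keyspace (placeholder name)
ubuntu@ip-172-28-4-238:~$ nodetool netstats              # active/pending streams on this node
ubuntu@ip-172-28-4-238:~$ nodetool compactionstats       # compactions queued up while the node is joining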


The fifth node is 172.28.4.238.  Why is its load 1.48 TB when each of the 
original four nodes is under 1 TB?  I can also see this in disk usage: the 
original four nodes use 900 GB to 1100 GB on their data volumes, while the 
fifth node has ballooned to 2380 GB.  I had to stop the sync and add a second 
disk to support it.
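For completeness, the disk figures above can be reproduced with checks along these lines on each node; the path below assumes the default /var/lib/cassandra layout, so substitute the actual data volume mount point:

ubuntu@ip-172-28-4-238:~$ df -h /var/lib/cassandra               # overall usage of the data volume (path is an assumption)
ubuntu@ip-172-28-4-238:~$ sudo du -sh /var/lib/cassandra/data/*  # on-disk size per keyspace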

I've attached our cassandra.yaml file.  What could be causing this?
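For context, the stream-related knobs in cassandra.yaml are of this general shape; the values below are illustrative only and not necessarily what we changed (our actual settings are in the attached file):

# cassandra.yaml (excerpt, illustrative values only)
stream_throughput_outbound_megabits_per_sec: 200   # throttle for outbound streaming
streaming_socket_timeout_in_ms: 86400000           # how long an idle streaming socket is allowed before it fails
phi_convict_threshold: 8                           # failure-detector sensitivity; sometimes raised on flaky networks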



  was:
Hello,

We are running Cassandra 3.0.6 and have added a fifth node to our four-node 
cluster.  Early on, the streams kept failing; I tweaked some cassandra.yaml 
settings and the failures stopped.  However, we have noticed strange behavior 
during the sync.  Please see the output of nodetool:

ubuntu@ip-172-28-4-238:~$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UJ  172.28.4.238  1.48 TB    256          ?       a797ed18-1d50-4b19-924a-f6b37b8859af  rack1
UN  172.28.4.79   988.83 GB  256          ?       9eec70ec-5d7a-4ba8-bba8-f7d229d00358  rack1
UN  172.28.4.69   891.9 GB   256          ?       1d429d87-ec4a-4e14-92d7-df2aa129041e  rack1
UN  172.28.4.129  985.48 GB  256          ?       677c7585-ed31-4afc-b17c-288a3a1e3666  rack1
UN  172.28.4.146  760.38 GB  256          ?       13ab7037-ec9b-4031-8d6c-4db95b91fa21  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
ubuntu@ip-172-28-4-238:~$ 


The fifth node is 172.28.4.238.  Why is its load 1.48 TB when each of the 
original four nodes is under 1 TB?  I can also see this in disk usage: the 
original four nodes use 900 GB to 1100 GB on their data volumes, while the 
fifth node has ballooned to 2380 GB.  I had to stop the sync and add a second 
disk to support it.

I've attached our cassandra.yaml file and some of our log outputs.  What could 
be causing this?




> Cassandra 3.0.6 New Node Perpetually in UJ State and Streams More Data Than 
> Any Node
> ------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12557
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12557
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>         Environment: Ubuntu 14.04, AWS EC2, m4.2xlarge, 2TB dedicated data 
> disks per node (except node 5, with 2x2TB dedicated data disks), Cassandra 
> 3.0.6
>            Reporter: Daniel Klopp
>             Fix For: 3.x
>
>         Attachments: cassandra.yaml
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
