[ https://issues.apache.org/jira/browse/CASSANDRA-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114133#comment-14114133 ]
Robert Stupp commented on CASSANDRA-7845:
-----------------------------------------

Will do the upgrade again - but with data/cl/sc/cfg backed up before and after the upgrade, to be able to reproduce it. Maybe it's reproducible on a single node with the data from one of the "failing" nodes.
We did not see any data loss - just these strange load numbers.

> Negative load of C* nodes
> -------------------------
>
>                 Key: CASSANDRA-7845
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7845
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Robert Stupp
>
> I've completed two C* workshops. Both groups also upgraded a 6-node multi-DC cluster from C* 2.0.9 to 2.1.0rc6.
> Both groups hit the same phenomenon: "nodetool status" and OpsCenter report a negative load (data size) for most (but not all) nodes. I did not take the phenomenon seriously for the first group, because that group consisted only of operations people who "did their best to crash the cluster". But the second group did nothing seriously wrong.
> The 2.0.9 configuration was the default one with only the directories (data, commit log, caches) and the cluster name changed. The 2.1.0rc6 configurations matched the 2.0.9 config - the groups only removed the 5 config parameters that were dropped in 2.1. They did not run any repair or force a compaction.
> After a rolling restart, both "nodetool status" and OpsCenter reported the correct load.
> I was not able to reproduce this locally.
> I have a third group tomorrow and hope to have some time to do the upgrade again. Anything I should check? I think it would be possible to grab the data files from at least one node for further analysis. Anything else I can do to investigate this?

--
This message was sent by Atlassian JIRA
(v6.2#6252)
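One way to check the reported numbers on a single node, independently of nodetool and OpsCenter, is to read the LoadString and LoadMap attributes of the StorageService MBean over JMX; the LoadMap should reflect the same per-endpoint load that "nodetool status" prints. Below is a minimal sketch of that check - the host, the default JMX port 7199 and the LoadCheck class name are illustrative assumptions, not anything from the ticket.

{code:java}
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: dump the load values a single node reports over JMX.
public class LoadCheck
{
    public static void main(String[] args) throws Exception
    {
        // Host is an assumption; 7199 is the default Cassandra JMX port.
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");

            // Load of the local node as a human-readable string.
            String localLoad = (String) mbs.getAttribute(ss, "LoadString");
            System.out.println("local load: " + localLoad);

            // Load of all known nodes as seen via gossip: endpoint -> load.
            @SuppressWarnings("unchecked")
            Map<String, String> loadMap = (Map<String, String>) mbs.getAttribute(ss, "LoadMap");
            for (Map.Entry<String, String> e : loadMap.entrySet())
                System.out.println(e.getKey() + " -> " + e.getValue());
        }
        finally
        {
            connector.close();
        }
    }
}
{code}

Running this against one of the "failing" nodes before and after the rolling restart would show whether the negative value is what the node itself reports and gossips, or something introduced by the tooling on top of it.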