Hi,

Generally, upgradesstables is only recommended when you plan to move to a new 
major version, e.g. from 2.0 to 2.1 or from 2.1 to 2.2. Since you are doing a 
minor version upgrade, there is no need to run the upgradesstables utility.
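
For reference, on a major version upgrade you would typically run it once per 
node after installing the new binaries. A minimal sketch (the keyspace and 
table names below are hypothetical):

    # Rewrite sstables into the current on-disk format after a MAJOR
    # version upgrade; run once on each node:
    nodetool upgradesstables

    # Optionally restrict it to one keyspace or table (example names):
    nodetool upgradesstables my_keyspace my_table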

This link from DataStax might be helpful:

https://support.datastax.com/hc/en-us/articles/208040036-Nodetool-upgradesstables-FAQ

From: Kathiresan S [mailto:kathiresanselva...@gmail.com]
Sent: Wednesday, January 04, 2017 12:22 AM
To: user@cassandra.apache.org
Subject: Re: Incremental repair for the first time

Thank you!

We are planning to upgrade to 3.0.10 for this issue.

From the NEWS.txt file 
(https://github.com/apache/cassandra/blob/trunk/NEWS.txt), it looks like there 
is no need for sstableupgrade when we upgrade from 3.0.4 to 3.0.10 (i.e., just 
installing Cassandra 3.0.10 would suffice, and it would work with the sstables 
created by 3.0.4?).
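
One rough way to sanity-check this on disk is to look at the sstable file 
names, which encode the format version (the data path, keyspace, and table 
below are hypothetical examples):

    # 3.0.x writes BigFormat sstables; files written by 3.0.4 should stay
    # readable as-is after a minor upgrade to 3.0.10:
    ls /var/lib/cassandra/data/my_keyspace/my_table-*/ | grep 'big-Data.db'
    # e.g. ma-1-big-Data.db   (format version written by early 3.0.x)
    #      mc-2-big-Data.db   (format version written by later 3.0.x)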

Could you please confirm whether I'm reading the upgrade instructions correctly?

Thanks,
Kathir

On Tue, Dec 20, 2016 at 5:28 PM, kurt Greaves <k...@instaclustr.com> wrote:
No workarounds; your best/only option is to upgrade (plus you get the benefit 
of loads of other bug fixes).
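
If it helps, a rough per-node rolling-upgrade sketch (the service name and 
install step are illustrative; adapt them to your packaging):

    nodetool drain               # flush memtables; node stops accepting writes
    sudo service cassandra stop
    # install the 3.0.10 binaries/packages here, then:
    sudo service cassandra start
    nodetool version             # confirm the node reports the new version

Repeat node by node, letting each come back up healthy before moving on.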

On 16 December 2016 at 21:58, Kathiresan S <kathiresanselva...@gmail.com> wrote:
Thank you!

Is any workaround available for this version?

Thanks,
Kathir


On Friday, December 16, 2016, Jake Luciani <jak...@gmail.com> wrote:
This was fixed post-3.0.4; please upgrade to the latest 3.0 release.

On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <kathiresanselva...@gmail.com> wrote:
Hi,

We have a brand new Cassandra cluster (version 3.0.4) and we have nodetool 
repair scheduled to run every day (without any repair options). As per the 
documentation, incremental repair is the default in this case.
Should we do a full repair once on each node the very first time, and then 
leave it to run incremental repairs afterwards?
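
For reference, the two invocations in question (a sketch; check nodetool help 
repair for the exact flags in your version):

    nodetool repair          # incremental repair, the default in 3.0 with no options
    nodetool repair -full    # full repair, e.g. as a one-time first pass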

Problem we are facing:

On a random node, the repair process throws a validation failed error, 
pointing to some other node.

For example, Node A, where the repair is run (without any options), throws the error below:

Validation failed in /Node B

When we check the logs on Node B, the exception below is seen at the exact 
same time:

java.lang.RuntimeException: Cannot start multiple repair sessions over the same sstables
        at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087) ~[apache-cassandra-3.0.4.jar:3.0.4]
        at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80) ~[apache-cassandra-3.0.4.jar:3.0.4]
        at org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700) ~[apache-cassandra-3.0.4.jar:3.0.4]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_73]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_73]

Can you please advise on how this can be fixed?

Thanks,
Kathir



--
http://twitter.com/tjake

