[ https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251628#comment-14251628 ]
Marcus Eriksson edited comment on CASSANDRA-8316 at 12/18/14 1:13 PM:
----------------------------------------------------------------------

To summarize:
* we had a bug in compaction marking that could put a node into an infinite loop; fixed in the branch linked above
* we allowed multiple repairs over the same sstables; fixed
* we had a situation where we did not remove the parent repair sessions; fixed

And, to describe the "final" problem:
# Node A sends a PrepareMessage to overloaded node B
# B starts preparing
# A times out waiting for B to prepare
# B finishes preparing and marks a bunch of sstables as being repaired
# The user retries the repair on node A
# B receives the new PrepareMessage but sees that the sstables it wants to repair are already being repaired, and refuses to start

One solution could be to have A send out a cancel message; another could be to have B remove any parent repair sessions after 5 (or so) minutes if it has not received a validation message by then. Need [~yukim]'s input.
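The second proposed solution above can be sketched as a periodic reaper on node B. This is only an illustrative sketch, not Cassandra's actual repair code: all class, field, and method names here (ParentSessionReaper, onPrepare, onValidation, reap) are hypothetical, and the 5-minute timeout is the value suggested in the comment.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the timeout-based cleanup proposed above: node B
// records when each parent repair session was prepared, and a periodic task
// drops any session that has not seen a validation message within a timeout,
// so the sstables it marked "being repaired" become repairable again.
public class ParentSessionReaper
{
    static final long TIMEOUT_MILLIS = 5 * 60 * 1000; // 5 minutes, as proposed

    static class ParentSession
    {
        final long preparedAtMillis;
        volatile boolean validationReceived = false;
        ParentSession(long preparedAtMillis) { this.preparedAtMillis = preparedAtMillis; }
    }

    final Map<String, ParentSession> sessions = new ConcurrentHashMap<>();

    // B finished preparing: remember the session and when it was prepared.
    void onPrepare(String sessionId, long nowMillis)
    {
        sessions.put(sessionId, new ParentSession(nowMillis));
    }

    // A validation message arrived for the session: the repair is alive,
    // so the reaper must leave this session alone.
    void onValidation(String sessionId)
    {
        ParentSession s = sessions.get(sessionId);
        if (s != null)
            s.validationReceived = true;
    }

    // Called periodically; returns how many stale sessions were removed.
    int reap(long nowMillis)
    {
        int removed = 0;
        for (Map.Entry<String, ParentSession> e : sessions.entrySet())
        {
            ParentSession s = e.getValue();
            if (!s.validationReceived && nowMillis - s.preparedAtMillis > TIMEOUT_MILLIS)
            {
                // remove(key, value) avoids racing with a concurrent re-prepare
                if (sessions.remove(e.getKey(), s))
                    removed++;
            }
        }
        return removed;
    }
}
```

With this in place, the retried repair in step 5 would succeed on B once the stale session expires; the trade-off versus an explicit cancel message from A is that B must wait out the full timeout before the sstables are released.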
> "Did not get positive replies from all endpoints" error on incremental repair
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8316
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: cassandra 2.1.2
>            Reporter: Loic Lambiel
>            Assignee: Marcus Eriksson
>             Fix For: 2.1.3
>
>         Attachments: 0001-patch.patch, 8316-v2.patch, CassandraDaemon-2014-11-25-2.snapshot.tar.gz, CassandraDaemon-2014-12-14.snapshot.tar.gz, test.sh
>
> Hi,
> I've got an issue with incremental repairs on our production 15-node 2.1.2 cluster (new cluster, not yet loaded, RF=3).
> After having successfully performed an incremental repair (-par -inc) on 3 nodes, I started receiving "Repair failed with error Did not get positive replies from all endpoints." from nodetool on all remaining nodes:
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges for keyspace xxxx (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive replies from all endpoints.
> All the nodes are up and running, and the local system log shows that the repair commands got started, and that's it.
> I've also noticed that soon after the repair, several nodes started showing elevated CPU load indefinitely without any particular reason (no tasks / queries, nothing in the logs). I then restarted C* on these nodes and retried the repair on several nodes, which succeeded until the issue appeared again.
> I tried to reproduce this on our 3-node preproduction cluster, without success.
> It looks like I'm not the only one having this issue:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)