On Wed, Sep 26, 2012 at 12:36 PM, Peter Schuller
<peter.schul...@infidyne.com> wrote:
>> What is strange every time I run repair data takes almost 3 times more
>> - 270G, then I run compaction and get 100G back.
>
> https://issues.apache.org/jira/browse/CASSANDRA-2699 outlines the
> main issues with repair. In short - in your case the limited
> granularity of merkle trees is causing too much data to be streamed
> (effectively duplicate data).
> https://issues.apache.org/jira/browse/CASSANDRA-3912 may be a bandaid
> for you in that it allows granularity to be much finer, and the
> process to be more incremental.
>
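(To illustrate the granularity point above, here is a toy sketch - not Cassandra code - of why coarse Merkle-tree leaves over-stream: one mismatched row forces the whole leaf's token range to be streamed, so coarse trees ship mostly-identical data. The function name and numbers are made up for the illustration.)

```python
# Toy model: rows are spread evenly over Merkle-tree leaves; any leaf
# containing at least one out-of-sync row streams ALL of its rows.

def bytes_streamed(num_rows, row_size, mismatched_rows, num_leaves):
    """Return bytes streamed during repair at a given tree granularity."""
    rows_per_leaf = num_rows // num_leaves
    # A leaf is "dirty" if it contains any mismatched row.
    dirty_leaves = {r // rows_per_leaf for r in mismatched_rows}
    return len(dirty_leaves) * rows_per_leaf * row_size

rows = 1_000_000
# 100 out-of-sync rows scattered across the token range:
mismatches = range(0, rows, 10_000)

coarse = bytes_streamed(rows, 1024, mismatches, num_leaves=128)
fine = bytes_streamed(rows, 1024, mismatches, num_leaves=32_768)
print(coarse, fine)  # the coarse tree streams far more duplicate data
```

With only 100 rows actually differing, the coarse tree streams hundreds of megabytes while the fine-grained tree streams a few megabytes - which is the effect CASSANDRA-3912's finer granularity addresses.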
Thank you, Peter!
It looks like exactly what I need. A couple of questions:
Does it work with RandomPartitioner only? I use ByteOrderedPartitioner.
I don't see it in any release. Am I supposed to build my own
version of Cassandra?

Andrey
