Thanks guys for the advice. We were running parallel repairs earlier, on
Cassandra version 2.0.14. As was pointed out, setting the replication factor
so high for system_auth was what caused the repair to take so long.
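
For anyone who finds this thread later, shrinking it back down looks roughly
like this (just a sketch: the DC names and the RF of 3 are placeholders, not
our actual values):

    -- in cqlsh, set a per-datacenter replication factor for system_auth
    ALTER KEYSPACE system_auth
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'DC1': 3, 'DC2': 3};

and then run "nodetool repair system_auth" on every node so the auth data is
redistributed under the new settings.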

thanks
Sai

On Fri, Oct 16, 2015 at 9:56 AM, Victor Chen <victor.h.c...@gmail.com>
wrote:

> To elaborate on what Robert said, as with most things technology related,
> the answer to these sorts of questions (i.e. "ideal settings") is usually
> "it depends." Remember that technology is a tool we use to accomplish
> something we want; it's just a mechanism we as humans use to exert our
> wishes on other things. In this case, Cassandra lets us exert our wishes
> on the data we need to have available. So think for a second about what
> you actually want. To be less philosophical and more practical: how many
> nodes are you comfortable losing, or likely to lose? How many copies of
> your system_auth keyspace do you want to always have available?
>
> Also, what do you mean by "really long"? What version of Cassandra are you
> using? If you are on 2.1, look at migrating to incremental repair. The fact
> that it takes so long for such a small keyspace leads me to believe you're
> running sequential repair ...
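>
> For example, on 2.1 an incremental, parallel repair of just that keyspace
> would look something like this (a sketch; double-check the flags against
> your version's nodetool help before running it):
>
>     nodetool repair -par -inc system_auth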
>
> -V
>
> On Thu, Oct 15, 2015 at 7:46 PM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi <
>> pskraj...@gmail.com> wrote:
>>
>>>   We are deploying a new cluster with 2 datacenters, 48 nodes in each
>>> DC. For the system_auth keyspace, what should the replication_factor be
>>> set to?
>>>
>>> We tried setting the replication factor equal to the number of nodes in
>>> a datacenter, and the repair for the system_auth keyspace took a really
>>> long time. Your suggestions would be of great help.
>>>
>>
>> More than 1 and a lot less than 48.
>>
>> =Rob
>>
>>
>
