Hi Benjamin,
I reverted to the old RF of 2 by restarting all the nodes with RF 2 and then
running cleanup, and the load came back down to the RF 2 level.
This time I changed the RF to 3 on all machines and restarted all the
nodes.

I then ran repair on all the machines one by one, tracking through
jconsole that the (read-only) compaction was happening, and moved on to the
next node only when there was no compaction going on and nothing in the AES
stage.
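
For reference, the loop I followed by hand would look roughly like the Python
sketch below if scripted. It is only a sketch: the host names are placeholders,
and the -h flag, the bare "repair" command, and the anti-entropy pool names
matched out of tpstats are assumptions on my part, since in practice I was
watching jconsole.

import subprocess
import time

HOSTS = ["ip1", "ip2", "ip3", "ip4", "ip5", "ip6", "ip7"]   # placeholder addresses

def nodetool(host, *args):
    # Run one nodetool command against a host and return its stdout as text.
    out = subprocess.check_output(["nodetool", "-h", host] + list(args))
    return out.decode()

def ae_stage_idle(host):
    # True when the anti-entropy pool reports no active or pending tasks.
    # (Pool naming varies by version: AE-SERVICE-STAGE or AntiEntropyStage.)
    for line in nodetool(host, "tpstats").splitlines():
        if "AE-SERVICE" in line or "AntiEntropy" in line:
            fields = line.split()        # pool name, active, pending, completed
            if int(fields[1]) or int(fields[2]):
                return False
    return True

for host in HOSTS:
    nodetool(host, "repair")             # kick off anti-entropy repair on this node
    while not ae_stage_idle(host):       # wait for the AE stage to drain before moving on
        time.sleep(60)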

The process has finished on all machines, and it took a long time, I have to say.
However, my ring now shows almost 8 times the load. Here it is.
The original data size was about 100 GB, and with RF 2 it was about 200 GB, but
here it's almost 800 GB. What could I be doing wrong?

Address       Status     Load          Range                                       Ring
                                       128045052799293308972222897669231312075
ip1           Up         100.09 GB     418948754358022090024968091731975038       |<--|
ip2           Up         90.83 GB      11057191649494821574112431245768166381     |   ^
ip3           Up         105.9 GB      21705674247570520134925875025273670789     v   |
ip4           Up         122.7 GB      42980788726886850121690234508345696103     |   ^
ip5           Up         106.24 GB     85510669386552904928042350150568641153     v   |
ip6           Up         194.27 GB     106767287274351479790232508363491106683    |   ^
ip7           Up         87.3 GB       128045052799293308972222897669231312075    |-->|
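
A quick tally of the loads above in Python (the 100 GB raw size is my rough figure):

loads_gb = [100.09, 90.83, 105.9, 122.7, 106.24, 194.27, 87.3]   # per-node loads from the ring
raw_gb = 100.0        # approximate size of the unreplicated data set
rf = 3                # target replication factor

total = sum(loads_gb)             # ~807 GB actually reported by the ring
expected = raw_gb * rf            # ~300 GB expected if every row is held 3 times
print("total = %.1f GB, expected = %.0f GB, ratio = %.1fx" % (total, expected, total / expected))

So the ring is holding about 807 GB against the roughly 300 GB that RF 3 should
need, which is close to 8 times the raw data.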

/G

On Thu, Sep 16, 2010 at 11:56 PM, Gurpreet Singh
<gurpreet.si...@gmail.com> wrote:

> Thanks Benjamin. I realised that; I have reverted using cleanup, got it
> back to the old state, and am testing the scenario exactly the way you put it.
>
>
> On Thu, Sep 16, 2010 at 10:56 PM, Benjamin Black <b...@b3k.us> wrote:
>
>> On Thu, Sep 16, 2010 at 3:19 PM, Gurpreet Singh
>> <gurpreet.si...@gmail.com> wrote:
>> > 1.  I was looking to increase the RF to 3. This process entails changing
>> the
>> > config and calling repair on the keyspace one at a time, right?
>> > So, I started with one node at a time, changed the config file on the
>> first
>> > node for the keyspace, restarted the node. And then called a nodetool
>> repair
>> > on the node.
>>
>> You need to change the RF on _all_ nodes in the cluster _before_
>> running repair on _any_ of them.  If nodes disagree on which nodes
>> should have replicas for keys, repair will not work correctly.
>> Different RF for the same keyspace creates that disagreement.
>>
>>
>> b
>>
>
>
