Philippe,

Besides the system keyspace, we have only one user keyspace.  However, I take
it we can also try repairing one CF at a time.
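For what it's worth, I think this is roughly the command we'd use for a single
CF (just a sketch; "MyKeyspace" and "MyColumnFamily" are placeholder names, and
I'm assuming 0.8's nodetool repair takes an optional keyspace and CF list):

    # repair a single column family in one keyspace on this node
    nodetool -h localhost repair MyKeyspace MyColumnFamily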

We have two concurrent compactors configured.  Will change that to one.
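If I have the setting right, that is this change in cassandra.yaml (assuming
the 0.8 concurrent_compactors option; please correct me if the name differs):

    # cassandra.yaml: allow at most one compaction to run at a time
    concurrent_compactors: 1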

Huy


On Wed, Aug 17, 2011 at 6:10 PM, Philippe <watche...@gmail.com> wrote:

> Huy,
> Have you tried repairing one keyspace at a time and then giving it some
> breathing time to compact?
> My current observation is that the streams from repairs are triggering
> massive compactions which are filling up my disks too. Another idea I'd like
> to try is to limit the number of concurrent compactions, because it appears
> that "Validation" compactions are not constrained by it. That way, I would
> have fewer concurrent compactions and the disk would not fill up (as fast?)
>
>
> 2011/8/17 Huy Le <hu...@springpartners.com>
>
>> I restarted the cluster and kicked off repair on the same node again.  It
>> only made matters worse.  It filled up the 830GB partition, and Cassandra
>> crashed on the node that repair ran on.  I restarted it, and now I am
>> running compaction to reduce disk usage.
>>
>> Repair after upgrading to 0.8.4 is still a problem.  Does anyone have any
>> further info on the issue?  Thanks!
>>
>> Huy
>>
>>
>> On Wed, Aug 17, 2011 at 11:13 AM, Huy Le <hu...@springpartners.com> wrote:
>>
>>> Sorry for the duplicate thread.  I saw the issue being referenced to
>>> https://issues.apache.org/jira/browse/CASSANDRA-2280.   However, I am
>>> running version 0.8.4.   I saw your comment in one of the threads that the
>>> issue is not reproducible, but multiple users have the same issue.  Is
>>> there anything that I should do to determine the cause of this issue
>>> before I do a rolling restart and try to run repair again?  Thanks!
>>>
>>> Huy
>>>
>>>
>>> On Wed, Aug 17, 2011 at 11:03 AM, Philippe <watche...@gmail.com> wrote:
>>>
>>>> Look at my last two or three threads. I've encountered the same thing
>>>> and got some pointers/answers.
>>>> On Aug 17, 2011 4:03 PM, "Huy Le" <hu...@springpartners.com> wrote:
>>>> > Hi,
>>>> >
>>>> > After upgrading to cass 0.8.4 from cass 0.6.11, I ran scrub. That
>>>> > worked fine. Then I ran nodetool repair on one of the nodes. The disk
>>>> > usage on the data directory increased from 40GB to 480GB, and it's
>>>> > still growing.
>>>> >
>>>> > The cluster has 4 nodes with replication factor 3. The ring shows:
>>>> >
>>>> > Address         DC           Rack   Status  State   Load      Owns    Token
>>>> >                                                                        141784319550391026443072753096570088103
>>>> > 10.245.xxx.xxx  datacenter1  rack1  Up      Normal  39.89 GB  25.00%  14178431955039102644307275309657008807
>>>> > 10.242.xxx.xxx  datacenter1  rack1  Up      Normal  80.98 GB  25.00%  56713727820156410577229101238628035239
>>>> > 10.242.xxx.xxx  datacenter1  rack1  Up      Normal  80.08 GB  25.00%  99249023685273718510150927167599061671
>>>> > 10.244.xxx.xxx  datacenter1  rack1  Up      Normal  80.23 GB  25.00%  141784319550391026443072753096570088103
>>>> >
>>>> > Does anyone know what might be causing this issue?
>>>> >
>>>> > Huy
>>>> >
>>>> > --
>>>> > Huy Le
>>>> > Spring Partners, Inc.
>>>> > http://springpadit.com
>>>>
>>>
>>>
>>>
>>> --
>>> Huy Le
>>> Spring Partners, Inc.
>>> http://springpadit.com
>>>
>>
>>
>>
>> --
>> Huy Le
>> Spring Partners, Inc.
>> http://springpadit.com
>>
>
>


-- 
Huy Le
Spring Partners, Inc.
http://springpadit.com
