The version here really matters. If it's higher than 3.2, it's probably related
to this change, which places sstables for a given token range in the same
directory to avoid data loss on a single-drive failure:
https://issues.apache.org/jira/browse/CASSANDRA-6696
--
Jeff Jirsa
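A rough sketch of the placement scheme Jeff is referring to (illustrative only; the function names and the even split below are assumptions, not Cassandra's actual code):

```python
# Illustrative sketch (not Cassandra's real implementation): after
# CASSANDRA-6696, each data directory owns a contiguous slice of the node's
# token range, so all sstables for a given range land on the same disk.
from bisect import bisect_right

def split_local_range(start: int, end: int, num_disks: int) -> list[int]:
    """Evenly split [start, end) into per-disk boundary tokens."""
    step = (end - start) // num_disks
    return [start + step * i for i in range(1, num_disks)]

def disk_for_token(token: int, boundaries: list[int]) -> int:
    """Index of the data directory responsible for this token."""
    return bisect_right(boundaries, token)

boundaries = split_local_range(0, 1200, 3)   # two boundary tokens: [400, 800]
assert disk_for_token(10, boundaries) == 0
assert disk_for_token(500, boundaries) == 1
assert disk_for_token(1100, boundaries) == 2
```

One consequence of pinning ranges to disks this way: if the data itself is skewed across token ranges, one directory can fill noticeably faster than the others, which is exactly the kind of imbalance discussed in this thread.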
Yes, it will help. Thanks, James, for correcting me.
On Mar 9, 2018, at 9:52 PM, James Shaw wrote:
Per my testing, repair does not help.
Repair builds a Merkle tree to compare data and only writes out a new file
where there are differences, which ends up being a very, very small file
(meaning, of course, that most of the data is already in sync).
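James's point about Merkle trees can be sketched like this (toy code with hypothetical helper names; real repair hashes token ranges rather than individual rows, and streams only the mismatched ranges):

```python
# Toy sketch of how repair narrows down differences with Merkle trees.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Bottom-up list of levels; levels[-1][0] is the root hash.
    Assumes the number of leaves is a power of two."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def diff_leaves(a: list[list[bytes]], b: list[list[bytes]]) -> list[int]:
    """Indices of leaf ranges whose hashes disagree: only these get streamed."""
    return [i for i, (x, y) in enumerate(zip(a[0], b[0])) if x != y]

node1 = build_tree([b"r1", b"r2", b"r3", b"r4"])
node2 = build_tree([b"r1", b"XX", b"r3", b"r4"])
assert node1[-1][0] != node2[-1][0]      # roots differ: some range out of sync
assert diff_leaves(node1, node2) == [1]  # only one small range needs streaming
```

This is why a mostly-synced cluster produces only tiny files from repair: the tree comparison localizes the mismatch, and everything else is skipped.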
We have a similar issue and I am working to solve it this weekend.
In our case, STCS keeps making one huge table's sstable files bigger and
bigger after compaction (this is the nature of STCS compaction, nothing
wrong), and even though almost all data has a 30-day TTL, tombstones are not
evicted since the largest file is
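The eviction condition holding those tombstones back can be sketched as follows (hypothetical names; the real check in Cassandra works on sstable token/key overlap and timestamps): a tombstone can only be dropped during compaction if gc_grace has passed and no sstable outside the compaction might still contain the shadowed data.

```python
# Toy sketch (hypothetical names): a tombstone in an sstable being compacted
# can be dropped only if it is older than gc_grace AND no sstable outside
# the compaction overlaps the same partition key.
def can_drop_tombstone(ts_age_s: int, gc_grace_s: int, key: str,
                       other_sstable_key_ranges: list[tuple[str, str]]) -> bool:
    if ts_age_s <= gc_grace_s:
        return False
    # Any overlapping sstable outside the compaction blocks eviction.
    return not any(lo <= key <= hi for lo, hi in other_sstable_key_ranges)

# One huge STCS sstable that overlaps almost every key keeps tombstones alive:
huge = [("a", "z")]
assert can_drop_tombstone(40 * 86400, 10 * 86400, "m", huge) is False
assert can_drop_tombstone(40 * 86400, 10 * 86400, "m", []) is True
```

This is why a single ever-growing STCS sstable is so troublesome: it overlaps nearly everything, so tombstones elsewhere can rarely be purged.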
Yasir,
I think you need to run a full repair during off-peak hours
Thanks,
Madhu
Yasir,
How many nodes are in the cluster?
What is num_tokens set to in the cassandra.yaml file?
Is it just this one node doing this?
What replication factor are you using? That affects which ranges end up on
that disk.
Kenneth Brotman
From: Kyrylo Lebediev [mailto:kyrylo_lebed...@epam.com]
Not sure where I heard this, but AFAIK data imbalance when multiple
data_directories are in use is a known issue for older versions of Cassandra.
This might be the root-cause of your issue.
Which version of C* are you using?
Unfortunately, I don't remember in which version this imbalance issue
Hi Alex,
there is no active compaction right now.
On Fri, Mar 9, 2018 at 3:47 PM, Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
On Fri, Mar 9, 2018 at 11:40 AM, Yasir Saleem wrote:
Thanks, Nicolas Guyomar
I am new to Cassandra; here are the properties I can see in the yaml file:
# of compaction, including validation compaction.
compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
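For scale, that 16 MB/s throttle puts a floor on how long large compactions can take. A quick back-of-the-envelope (the 500 GB figure below is illustrative, not from this cluster):

```python
# Rough time to compact a given data volume under
# compaction_throughput_mb_per_sec (ignores merge overhead and disk limits).
def compaction_hours(total_gb: float, throughput_mb_per_sec: float) -> float:
    return total_gb * 1024 / throughput_mb_per_sec / 3600

# e.g. rewriting 500 GB at the default 16 MB/s:
hours = compaction_hours(500, 16)
assert round(hours, 1) == 8.9  # roughly 9 hours of pure compaction I/O
```

If large compactions are falling behind, this throttle is one of the first knobs to revisit.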
On Fri, Mar 9, 2018 at 3:33 PM, Nicolas Guyomar wrote:
Hi,
This might be a compaction that is running; have you checked that?
On 9 March 2018 at 11:29, Yasir Saleem wrote:
Hi Team,
we are facing an issue of uneven data distribution across the Cassandra data
disks, specifically disk03 in our case: all the disks are consuming around
60% of space, but disk03 is at 87%. Here is the configuration in the yaml and
the current disk space:
data_file_directories:
-
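A quick way to confirm this kind of imbalance across data_file_directories (the directory paths below are hypothetical; substitute your own):

```python
# Report percent-used for each data directory to spot a skewed disk.
import shutil

def disk_usage_pct(paths: list[str]) -> dict[str, float]:
    out = {}
    for p in paths:
        u = shutil.disk_usage(p)
        out[p] = 100.0 * u.used / u.total
    return out

# Example (hypothetical mount points):
# print(disk_usage_pct(["/data/disk01", "/data/disk02", "/data/disk03"]))
# A single directory far above the others (e.g. 87% vs ~60%) is the
# imbalance described in this thread.
```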