Hi Preetika,
After thinking about your scenario, I believe your small SSTable sizes
might be due to data compression. By default, all tables enable SSTable
compression.
Let's go through your scenario. Say you have allocated 4GB of heap to
your Cassandra node. Your *memtable_heap_space_in_mb* and
*memtable_offheap_space_in_mb* then default to a quarter of the heap,
i.e. 1GB each. When a memtable fills up and is flushed, the resulting
SSTable is written with compression enabled, so the file on disk ends
up considerably smaller than the data held in memory.
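If you want to check whether compression explains what you're seeing,
the setting is visible in the table schema and can be changed per
table. A minimal CQL sketch (3.x syntax; the keyspace and table names
here are made up):

    -- In cqlsh, DESCRIBE shows the effective compression options;
    -- by default every table gets LZ4:
    DESCRIBE TABLE my_ks.my_table;
    -- ... AND compression = {'chunk_length_in_kb': '64',
    --     'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}

    -- For comparison, compression can be disabled per table:
    ALTER TABLE my_ks.my_table
      WITH compression = {'enabled': 'false'};

Note that existing SSTables keep their current encoding until they are
rewritten by compaction, so size comparisons only make sense after a
flush/compaction cycle.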
The cluster is running with RF=3; right now each node is storing about
3-4 TB of data. I'm using r4.2xlarge EC2 instances, which have 8 vCPUs
and 61 GB of RAM, and the data drives are gp2 SSD EBS volumes with 10k
IOPS. I guess this brings up the question of what's a good marker for
how much data a single node should be storing.
Hi Daniel,
This is not normal; possibly a capacity problem. What's the RF, how
much data do you store per node, and what kind of servers do you use
(core count, RAM, disk, ...)?
Cheers,
Tommaso
On Mon, May 29, 2017 at 6:22 PM, Daniel Steuernol wrote:
>
> I am running a 6 node cluster, and I have
I am running a 6 node cluster, and I have noticed that the reported
load on each node rises throughout the week and grows way past the
actual disk space used and available on each node. Eventually, latency
for operations suffers and the nodes have to be restarted. A couple of
questions on this: is this behavior normal, and if not, what's the best
way to track down the cause?
My approach is the obvious one: take a big outage window, especially
since at work we are using 1.2 with single token ranges. I am generally
a believer that (1) patches should be applied, but (2) we routinely
replace each host with a new EC2 instance, so that I know my
infrastructure code (puppet/chef/ansible/salt stack) still works.
Hi,
Is it possible to extract from the repair logs the writetime of the
writes that needed to be repaired?
I have some processes I would like to re-trigger from a time point if
repair found problems.
Is that useful? Possible?
Jan
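For reference, the writetime Jan is asking about is the per-cell
timestamp that CQL exposes through the writetime() function. I'm not
aware of the repair logs recording it (they report out-of-sync token
ranges rather than individual cells), but the timestamps themselves can
be read back directly. A minimal sketch against a hypothetical table
events(id, payload):

    -- writetime() returns the microsecond-precision timestamp at
    -- which the given cell was last written:
    SELECT id, payload, writetime(payload)
    FROM my_ks.events
    WHERE id = 42;

One could re-read the affected ranges after a repair and filter
client-side on the returned writetime values to find writes from a
given time point onwards.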