Thanks Reid,

We currently only have ~1GB data per node with a replication factor of 3.
The amount of data will certainly grow, though I have no solid projections
at this time. The current memory and CPU resources are quite low (for
Cassandra), so along with the upgrade we plan to increase both, which
seems to be the strong recommendation from this user group.
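
In case it helps frame answers: the G1 settings we would start from are
the ones that ship commented out in the stock jvm.options, with values
illustrative and to be tuned once we can test on the new nodes:

-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500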

On Fri, Nov 1, 2019 at 4:52 PM Reid Pinchback <rpinchb...@tripadvisor.com>
wrote:

> Maybe I’m missing something.  You’re expecting less than 1 gig of data per
> node?  Unless this is some situation of super-high data churn/brief TTL, it
> sounds like you’ll end up with your entire database in memory.
>
>
>
> From: Ben Mills <b...@bitbrew.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Friday, November 1, 2019 at 3:31 PM
> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Subject: Memory Recommendations for G1GC
>
>
> Greetings,
>
>
>
> We are planning a Cassandra upgrade from 3.7 to 3.11.5 and considering a
> change to the GC config.
>
>
>
> What is the minimum amount of memory that needs to be allocated to heap
> space when using G1GC?
>
>
>
> For GC, we currently use CMS. Along with the version upgrade, we'll be
> running the StatefulSet of Cassandra pods on new machine types in a new
> node pool with 12Gi memory per node. Not a lot of memory but an
> improvement. We may be able to go up to 16Gi memory per node. We'd like to
> continue using these heap settings:
>
>
> -XX:+UnlockExperimentalVMOptions
> -XX:+UseCGroupMemoryLimitForHeap
> -XX:MaxRAMFraction=2
>
>
>
> which (if 12Gi per node) would provide 6Gi memory for heap (i.e. half of
> total available).
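>
> (As a quick sanity check, the JVM can be asked what heap it will
> actually pick up inside the container; this invocation is illustrative:
>
> java -XX:+UnlockExperimentalVMOptions \
>      -XX:+UseCGroupMemoryLimitForHeap \
>      -XX:MaxRAMFraction=2 \
>      -XX:+PrintFlagsFinal -version | grep -i maxheapsize
>
> which should report MaxHeapSize at half the cgroup memory limit.)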
>
>
>
> Here are some details on the environment and configs in the event that
> something is relevant.
>
>
>
> Environment: Kubernetes
> Environment Config: StatefulSet of 3 replicas
> Storage: Persistent Volumes
> Storage Class: SSD
> Node OS: Container-Optimized OS
> Container OS: Ubuntu 16.04.3 LTS
> Data Centers: 1
> Racks: 3 (one per zone)
> Nodes: 3
> Tokens: 4
> Replication Factor: 3
> Replication Strategy: NetworkTopologyStrategy (all keyspaces)
> Compaction Strategy: STCS (all tables)
> Read/Write Requirements: Blend of both
> Data Load: <1GB per node
> gc_grace_seconds: default (10 days - all tables)
>
> GC Settings: (CMS)
>
> -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC
> -XX:+CMSParallelRemarkEnabled
> -XX:SurvivorRatio=8
> -XX:MaxTenuringThreshold=1
> -XX:CMSInitiatingOccupancyFraction=75
> -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSWaitDuration=30000
> -XX:+CMSParallelInitialMarkEnabled
> -XX:+CMSEdenChunksRecordAlways
>
>
>
> Any ideas are much appreciated.
>
