I run a 10-node Cassandra cluster in production: 99% writes, 1% reads, 0%
deletes. The nodes have 32 GB RAM; C* runs with an 8 GB heap. Each node has
an SSD for the commitlog and 2x4 TB spinning disks for data (SSTables). The
schema uses key caching only. C* version is 2.1.2.

The cluster is predicted to run out of free disk space before long, so its
storage capacity needs to be increased. The client prefers larger disks over
adding more nodes, so the plan is to replace the 2x4 TB spinning disks in
each node with 3x6 TB spinning disks.
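
For concreteness, this is how I expect data_file_directories in
cassandra.yaml to change on each node (the mount points are made up for
illustration):

    # cassandra.yaml, today (2x4 TB):
    data_file_directories:
        - /data1/cassandra
        - /data2/cassandra

    # cassandra.yaml, after the swap (3x6 TB):
    data_file_directories:
        - /data1/cassandra
        - /data2/cassandra
        - /data3/cassandra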

   - Are there any obvious pitfalls/caveats to be aware of here? For
   example:

      - Can C* handle up to 18 TB of data per node with this amount of RAM?

      - Is it feasible to increase the disk size by mounting a new (larger)
      disk, copying all SSTables to it, and then mounting it on the same
      mount point as the original (smaller) disk (to replace it)? (A rough
      sketch of the steps I have in mind follows below.)
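
For the disk swap itself, here is roughly the procedure I have in mind, one
node (and one disk) at a time. This is only a sketch; device names and mount
points are invented for illustration:

    nodetool drain                    # flush memtables, stop accepting writes
    sudo service cassandra stop
    sudo mount /dev/sdc1 /mnt/new     # new, larger disk (hypothetical device)
    sudo rsync -a /data1/cassandra/ /mnt/new/cassandra/
    sudo umount /mnt/new
    sudo umount /data1                # old disk
    sudo mount /dev/sdc1 /data1      # new disk now at the old mount point
    # (update /etc/fstab accordingly)
    sudo service cassandra start

Once both old disks are replaced this way, the third disk would be added to
data_file_directories as sketched above.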


(Also posted on StackOverflow:
<http://stackoverflow.com/questions/29509595/whats-to-think-of-when-increasing-disk-size-on-cassandra-nodes>)

Thanks in advance.


Med venlig hilsen / Best regards,


Thomas Borg Salling
Freelance IT architect and programmer.
Java and open source specialist.

tbsall...@tbsalling.dk :: +45 4063 2353 :: @tbsalling
<http://twitter.com/tbsalling> :: tbsalling.dk :: linkedin.com/in/tbsalling
