First off, I agree that the preferred path is adding nodes, but running
nodes this dense is possible.

> Can C* handle up to 18 TB data size per node with this amount of RAM?

Depends on how deep in the weeds you want to get with tuning and testing.
See below.

>
> Is it feasible to increase the disk size by mounting a new (larger) disk,
copy all SS tables to it, and then mount it on the same mount point as the
original (smaller) disk (to replace it)?

Yes (with C* off, of course).
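Roughly, something like this (device names and paths are hypothetical;
adjust for your layout, and update /etc/fstab to match):

  nodetool drain                           # flush memtables to SSTables
  sudo service cassandra stop
  sudo mount /dev/sdc1 /mnt/newdata        # the new, larger disk
  sudo rsync -a /var/lib/cassandra/data/ /mnt/newdata/
  sudo umount /mnt/newdata
  sudo umount /var/lib/cassandra/data      # retire the old disk
  sudo mount /dev/sdc1 /var/lib/cassandra/data
  sudo service cassandra start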

As for tuning, you will need to look at, experiment with, and get a good
understanding of the following (a sketch of where these live is after the
list):
- index_interval (turn this up now anyway if you have not already ~ start at
512 and go up from there)
- bloom filter space usage via bloom_filter_fp_chance
- compression metadata storage via chunk_length_kb
- repair time, and how compaction_throughput_in_mb_per_sec and
stream_throughput_outbound_megabits_per_sec will affect it
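As a rough sketch only (starting values for experimentation, not
recommendations; exact option names vary a bit between versions, so check
your cassandra.yaml and the CQL docs for your release; keyspace/table names
below are placeholders):

  # cassandra.yaml
  index_interval: 512                    # default 128; higher = less heap
  compaction_throughput_mb_per_sec: 16
  stream_throughput_outbound_megabits_per_sec: 200

  -- per table, via CQL
  ALTER TABLE my_ks.my_table
    WITH bloom_filter_fp_chance = 0.1    -- higher = smaller filters, but
                                         -- more false-positive disk reads
    AND compression = {'sstable_compression': 'LZ4Compressor',
                       'chunk_length_kb': 64};  -- bigger chunks = less offset
                                                -- metadata, larger reads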

The first three will have a direct negative impact on read performance.

You will definitely want to use JBOD so you don't have to repair everything
if you lose a single disk, but you will still be degraded for *a very long
time* when you lose a disk.
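(JBOD here just means listing each disk as its own data directory in
cassandra.yaml rather than striping them together, e.g., with hypothetical
mount points:)

  data_file_directories:
      - /mnt/disk1/cassandra/data
      - /mnt/disk2/cassandra/data
      - /mnt/disk3/cassandra/data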

This is hard and takes experimentation and research (I can't emphasize this
part enough), but I've seen it work. That said, the engineering time spent
is probably more than buying and deploying additional hardware in the first
place. YMMV.


--
-----------------
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
