Yikes, 18 TB/node is a very bad idea.

I don't like to go over 2-3 TB per node personally, and you have to be careful 
with JBOD. See one of Ellis's latest posts on this and the suggested use of 
LVM. It's a reversal of his previous position on JBOD.
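
If you do go the LVM route, a minimal sketch of pooling the data disks into a 
single volume (device names, filesystem, and mount point are illustrative, not 
taken from that post):

    pvcreate /dev/sdb /dev/sdc
    vgcreate cassandra_vg /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n data cassandra_vg
    mkfs.ext4 /dev/cassandra_vg/data
    mount /dev/cassandra_vg/data /var/lib/cassandra/data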

--
Colin 
+1 612 859 6129
Skype colin.p.clark

> On Apr 8, 2015, at 3:11 PM, Jack Krupansky <jack.krupan...@gmail.com> wrote:
> 
> I can certainly sympathize if you have IT staff/management who will willingly 
> spring for some disk drives but not for full machines, even if they are 
> relatively commodity boxes. It seems penny-wise and pound-foolish to me, but 
> management has its own priorities, plus there is the pre-existing Oracle 
> mindset that prefers dense/fat nodes.
> 
> -- Jack Krupansky
> 
>> On Wed, Apr 8, 2015 at 2:00 PM, Nate McCall <n...@thelastpickle.com> wrote:
>> First off, I agree that the preferred path is adding nodes, but it is 
>> possible. 
>> 
>> > Can C* handle up to 18 TB data size per node with this amount of RAM?
>> 
>> Depends on how deep into the weeds you want to get with tuning and testing. 
>> See below. 
>> 
>> >
>> > Is it feasible to increase the disk size by mounting a new (larger) disk, 
>> > copying all SSTables to it, and then mounting it on the same mount point 
>> > as the original (smaller) disk (to replace it)? 
>> 
>> Yes (with C* off of course). 
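>> 
>> For example, a rough outline of the swap (device names and mount points are 
>> illustrative, and this is a sketch rather than a verified runbook):
>> 
>>     nodetool drain                      # flush memtables, stop accepting writes
>>     service cassandra stop
>>     mount /dev/sdc1 /mnt/newdata        # the new, larger disk
>>     rsync -a /var/lib/cassandra/data/ /mnt/newdata/
>>     umount /mnt/newdata
>>     umount /var/lib/cassandra/data      # assumes the data dir is its own mount point
>>     mount /dev/sdc1 /var/lib/cassandra/data
>>     service cassandra start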
>> 
>> As for tuning, you will need to look at, experiment with, and get a good 
>> understanding of the following (a rough sketch of where these settings live 
>> follows the list):
>> - index_interval (turn this up now anyway if you have not already ~ start at 
>> 512 and go up from there)
>> - bloom filter space usage via bloom_filter_fp_chance 
>> - compression metadata storage via chunk_length_kb 
>> - repair time, and how compaction_throughput_mb_per_sec and 
>> stream_throughput_outbound_megabits_per_sec will affect it
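>> 
>> A minimal sketch of where those knobs live (values are illustrative starting 
>> points, not recommendations; the keyspace/table names are placeholders, and 
>> exact option names vary a bit by Cassandra version, so check against yours):
>> 
>>     # cassandra.yaml (also adjustable at runtime via
>>     # nodetool setcompactionthroughput / setstreamthroughput)
>>     compaction_throughput_mb_per_sec: 64
>>     stream_throughput_outbound_megabits_per_sec: 400
>> 
>>     -- per-table properties via CQL; on newer versions index_interval is
>>     -- split into min_index_interval / max_index_interval
>>     ALTER TABLE my_ks.my_table
>>       WITH bloom_filter_fp_chance = 0.1
>>       AND compression = {'sstable_compression': 'LZ4Compressor',
>>                          'chunk_length_kb': 64};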
>> 
>> The first three will have a direct negative impact on read performance.
>> 
>> You will definitely want to use JBOD so you don't have to repair everything 
>> if you lose a single disk, but you will still be degraded for *a very long 
>> time* when you do lose one.  
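>> 
>> JBOD here just means listing one data directory per physical disk in 
>> cassandra.yaml (paths are illustrative):
>> 
>>     data_file_directories:
>>         - /mnt/disk1/cassandra/data
>>         - /mnt/disk2/cassandra/data
>>         - /mnt/disk3/cassandra/data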
>> 
>> This is hard and takes experimentation and research (I can't emphasize this 
>> part enough), but I've seen it work. That said, the engineering time spent 
>> is probably more than the cost of buying and deploying additional hardware 
>> in the first place. YMMV. 
>> 
>> 
>> --
>> -----------------
>> Nate McCall
>> Austin, TX
>> @zznate
>> 
>> Co-Founder & Sr. Technical Consultant
>> Apache Cassandra Consulting
>> http://www.thelastpickle.com
> 
