On Wed, Aug 24, 2011 at 2:54 PM, Jacob, Arun <[email protected]> wrote:
> I'm trying to determine a node configuration for Cassandra. From what I've
> been able to determine from reading around:
>
> 1. we need to cap data size at 50% of total node storage capacity for
>    compaction
> 2. with RF=3, that means that I need to effectively assume that I have
>    1/6th of total storage capacity.
> 3. SSDs are preferred, but of course reduce storage capacity
> 4. using standard storage means you bump up your RAM to keep as much in
>    memory as possible.
>
> Right now we are looking at storage requirements of 42-60TB. Assuming a
> baseline of 3TB/day and expiring data after 14-20 days (depending on use
> case), I would assume based on the above that we need 252-360TB total
> storage max.
>
> My questions:
>
> 1. is 8TB (meaning 1.33 actual TB storage/node) a reasonable per-node
>    storage size for Cassandra? I don't want to use SSDs due to reduced
>    storage capacity -- I don't want to buy 100s of nodes to support the
>    reduced storage capacity of SSDs. Given that I will be using standard
>    drives, what is a reasonable/effective per-node storage capacity?
> 2. other than splitting the commit log onto a separate drive, is there any
>    other drive allocation I should be doing?
> 3. Assuming I'm not using SSDs, what would be a good memory size for a
>    node? I've heard anything from 32-48 GB, but need more guidance.
>
> Anything else that anyone has run into? What are common configurations
> being used by others?
>
> Thanks in advance,
>
> -- Arun

I would suggest checking out:

http://wiki.apache.org/cassandra/CassandraHardware
http://wiki.apache.org/cassandra/LargeDataSetConsiderations
http://www.slideshare.net/edwardcapriolo/real-world-capacity

> 1. we need to cap data size at 50% of total node storage capacity for
> compaction

False. You need 50% of the capacity of your largest column family free,
plus some other room for overhead. That changes all your numbers (see the
sizing sketch at the end of this mail).

> 3. SSDs are preferred, but of course reduce storage capacity

Avoid generalizations. Many use cases get little benefit from SSDs.

> 4. using standard storage means you bump up your RAM to keep as much in
> memory as possible.

In most cases you want to maintain some RAM-to-disk ratio regardless of
disk type; SSD setups still likely need sizable RAM.

Your three questions are hard to answer because the hardware you need is
workload dependent. It really depends on your active set: what percentage
of the data is being read and written at any given time. It also depends
on your latency requirements. Something like the Wayback Machine has a
different usage profile than a stock-ticker application, which in turn is
different from the usage patterns of an email system.

Generally people come to Cassandra because they are looking for
low-latency reads and writes, and that is hard to achieve with 8TB of data
per node. The bloom filters and index files alone are substantial at that
size (see the second sketch below), and you would also need a very large
amount of RAM per node to minimize disk seeks -- or something like an SSD
RAID-0 (does that sound like a bad idea to you? It does to me :)).

The only way to answer the question of how much hardware you need is with
load testing. The Yahoo! Cloud Serving Benchmark (YCSB) can help you fill
up a node and test it with different load patterns to see how it performs.
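To make the compaction-headroom point concrete, here is a rough
back-of-the-envelope sizing sketch in Python. The 3TB/day ingest, 14-20
day retention, and RF=3 come from Arun's mail; the flat 2x headroom factor
and the 2TB-per-node capacity are simplifying assumptions of mine (real
headroom depends on the size of your largest column family), so treat the
output as a starting point, not a recommendation:

    # Back-of-the-envelope Cassandra capacity sizing.
    # Assumption: compaction headroom modeled as a flat 2x factor;
    # in reality you need ~50% of your largest CF free, plus overhead.

    def nodes_needed(tb_per_day, retention_days, rf,
                     node_capacity_tb, headroom_factor=2.0):
        """Estimate node count: raw data * retention * replication,
        then reserve compaction headroom on every node."""
        raw_tb = tb_per_day * retention_days        # live data, one copy
        replicated_tb = raw_tb * rf                 # all replicas
        usable_per_node = node_capacity_tb / headroom_factor
        return replicated_tb, replicated_tb / usable_per_node

    # Arun's numbers: 3 TB/day, 14-20 day retention, RF=3, and a
    # hypothetical 2 TB of raw disk per node.
    for days in (14, 20):
        total, nodes = nodes_needed(3, days, 3, node_capacity_tb=2)
        print("%d-day retention: %.0f TB replicated, ~%.0f nodes" %
              (days, total, nodes))

With those assumptions you land around 126-180 TB of replicated data and
a node count in the low hundreds, which is exactly why the per-node
capacity and headroom numbers matter so much.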

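For the bloom filter point, the textbook size of an optimal bloom filter
is m = -n * ln(p) / (ln 2)^2 bits for n keys at false-positive rate p.
The average row size and false-positive rates below are illustrative
assumptions only (Cassandra's actual per-key overhead depends on the
version and configuration), but they show why an 8TB node gets expensive
in RAM:

    import math

    def bloom_filter_bytes(n_keys, fp_rate):
        """Textbook optimal bloom filter size:
        m = -n * ln(p) / (ln 2)^2 bits, returned as bytes."""
        bits = -n_keys * math.log(fp_rate) / (math.log(2) ** 2)
        return bits / 8

    # Illustrative assumption: 8 TB of data at an average 2 KB per row
    # is roughly 4 billion keys on the node.
    n = 8 * 2**40 // (2 * 2**10)
    for p in (0.01, 0.001):
        gb = bloom_filter_bytes(n, p) / 2**30
        print("fp=%.3f: ~%.1f GB of bloom filter in RAM" % (p, gb))

That is several GB of RAM for bloom filters alone, before the index
files, page cache, and heap are accounted for.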