On 6/7/2011 9:32 PM, Edward Capriolo wrote:
<snip>

I do not like large-disk setups. I think they end up not being economical. Most low-latency use cases want a high RAM-to-disk ratio, and two machines with 32GB RAM each are usually less expensive than one machine with 64GB RAM.

For a machine with 1TB drives (or multiple 1TB drives), it is going to be difficult to get enough RAM to help with random read patterns.

Also, cluster operations like joining, decommissioning, or repair can take a *VERY* long time, maybe a day. More, smaller servers (blade style) are more agile.


Is there some rule of thumb for how much RAM is needed per GB of data? I know it probably "depends", but if you could explain it as best you can, that would be great! I too am projecting "big data" requirements.
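
There is no single ratio given in this thread, but here is a minimal back-of-envelope sketch of the kind of reasoning involved. All numbers (the data size, the hot-data fraction, and the assumption that only half of each node's RAM is usable for caching) are hypothetical placeholders, not recommendations:

import math

def nodes_needed(total_data_gb, ram_per_node_gb, hot_fraction):
    """Estimate how many nodes it takes for the hot working set to fit in RAM."""
    hot_data_gb = total_data_gb * hot_fraction
    # Assume only about half of each node's RAM is usable for caching data
    # (the rest goes to heap, OS, etc.) -- another placeholder assumption.
    usable_ram_gb = ram_per_node_gb * 0.5
    return math.ceil(hot_data_gb / usable_ram_gb)

total_data_gb = 2000   # projected data set size (placeholder)
hot_fraction = 0.10    # assumed fraction of data read randomly/frequently

for ram in (32, 64):
    print(f"{ram}GB-RAM nodes: {nodes_needed(total_data_gb, ram, hot_fraction)} needed")

With these made-up inputs the 32GB nodes come out to 13 machines and the 64GB nodes to 7, which is why the "high RAM-to-disk ratio" point above matters so much for random-read workloads.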
