On Tue, Jun 7, 2011 at 12:43 PM, Ryan King <r...@twitter.com> wrote:

> On Tue, Jun 7, 2011 at 4:34 AM, Erik Forsberg <forsb...@opera.com> wrote:
> > On Tue, 31 May 2011 13:23:36 -0500
> > Jonathan Ellis <jbel...@gmail.com> wrote:
> >
> >> Have you read http://wiki.apache.org/cassandra/CassandraHardware ?
> >
> > I had, but it was a while ago so I guess I kind of deserved an RTFM! :-)
> >
> > After re-reading it, I still want to know:
> >
> > * If we disregard the performance hit caused by having the commitlog on
> >  the same physical device as parts of the data, are there any other
> >  grave effects on Cassandra's functionality with a setup like that?
>
> You'll take a performance hit if you have a high write load. I'd
> recommend doing your own benchmarks (with an existing benchmark
> framework like YCSB) against the configuration you'd like to use.
>
> > * How does Cassandra handle a case where one of the disks in a striped
> >  RAID0 partition goes bad and is replaced? Is the only option to wipe
> >  everything from that node and reinit the node, or will it handle
> >  corrupt files?
>
> Don't plan on being able to recover any data on that node.
>
> > I.e, what's the recommended thing to do from an
> >  operations point of view when a disk dies on one of the nodes in a
> >  RAID0 Cassandra setup? What will cause the least risk for data loss?
> >  What will be the fastest way to get the node up to speed with the
> >  rest of the cluster?
>
> Decommission (or removetoken) on the dead node, replace the drive and
> rebootstrap.
>
> -ryan
>
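For anyone who hasn't driven YCSB before, a run against a test cluster is just a load phase followed by a transaction phase. A minimal sketch (assuming a YCSB checkout with a Cassandra binding on the path; host address and workload file are placeholders, and the exact binding name varies by YCSB version):

```shell
# Load phase: populate the target cluster with the workload's dataset.
bin/ycsb load cassandra-cql -P workloads/workloada \
    -p hosts=10.0.0.1 -p recordcount=1000000

# Run phase: replay the workload's read/write mix and report latencies.
bin/ycsb run cassandra-cql -P workloads/workloada \
    -p hosts=10.0.0.1 -p operationcount=1000000 -threads 32
```

Run it once with the commitlog on its own spindle and once sharing a device with the data directories, and compare the write-latency percentiles YCSB prints.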

I do not like large-disk setups. I think they end up not being economical.
Most low-latency use cases want a high RAM-to-disk ratio, and two machines with
32GB of RAM are usually less expensive than one machine with 64GB of RAM.

For a machine with 1TB drives (or multiple 1TB drives), it is going to be
difficult to get enough RAM to help with random read patterns.

Also, cluster operations like joining, decommissioning, or repair can take a
*VERY* long time on big nodes, maybe a day. More, smaller servers (blade
style) are more agile.
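Ryan's recovery procedure above can be sketched as a command sequence. This is only an outline, assuming default data/commitlog paths and a cassandra.yaml you edit by hand; the dead node's token and hostnames are placeholders:

```shell
# From any healthy node, remove the failed node's token from the ring
# (use "nodetool -h <dead-host> decommission" instead if the node is still up).
nodetool -h healthy-node removetoken <token-of-dead-node>

# On the repaired machine, after replacing the drive and rebuilding the RAID0
# set, wipe any surviving state so the node comes back clean.
rm -rf /var/lib/cassandra/data/* \
       /var/lib/cassandra/commitlog/* \
       /var/lib/cassandra/saved_caches/*

# Ensure auto_bootstrap: true in cassandra.yaml, then restart; the node will
# rebootstrap and stream its ranges from the rest of the cluster.
service cassandra start
```

Expect the bootstrap streaming itself to be the slow part, which is the point about large nodes: the more data per node, the longer this takes.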
