How large are these writes?

Are you using asynchbase or another alternative client implementation?

Are you batching updates?
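
To be concrete about that last question: by batching I mean something along
these lines with the stock Java client, rather than one autoflushed Put per
row. The table name, column family, and buffer/batch sizes below are made up
for illustration.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BatchedWriteSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");   // placeholder table name
        table.setAutoFlush(false);                    // don't send one RPC per Put
        table.setWriteBufferSize(4 * 1024 * 1024);    // ~4MB client-side write buffer

        List<Put> batch = new ArrayList<Put>();
        for (int i = 0; i < 100000; i++) {
          Put p = new Put(Bytes.toBytes("row-" + i));
          p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
          batch.add(p);
          if (batch.size() == 1000) {   // hand the client 1000 Puts at a time; they
            table.put(batch);           // accumulate in the write buffer until it fills
            batch.clear();
          }
        }
        table.put(batch);               // leftovers
        table.flushCommits();           // push out anything still buffered
        table.close();
      }
    }

asynchbase attacks the same problem with a non-blocking client, which is part
of why I ask which client you are on.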

On Wed, May 25, 2011 at 2:44 PM, Wayne <wav...@gmail.com> wrote:

> What are your write levels? We are pushing 30-40k writes/sec/node on 10
> nodes for 24, 36, 48, even 72 hours straight. We have only 4 writers per
> node, so we are hardly overwhelming the nodes. Disk utilization runs at
> 10-20%, load is at most 50% including some app code, and memory use is the
> 8GB JVM heap out of 24GB. We run 20TB in MySQL in production and see 90%
> disk utilization for hours every day. A database must be able to accept
> being pounded 24/7. If the hardware can handle it, so should the
> database...this is not true for Java-based databases. The reality is that
> Java is not nearly ready for what a real database will expect of it. Sure,
> we could "back off" the volume and add 100 more nodes like Cassandra
> requires, but then we might as well have used something else given that
> hardware spend.
>
> Our problem is that we have invested so much time with HBase that it is
> hard for us to walk away and go to the sharded PostgreSQL we should have
> used 9 months back. Sorry for the negativity, but considering giving up
> after having invested all of this time is painful.
>
>
> On Wed, May 25, 2011 at 4:21 PM, Erik Onnen <eon...@gmail.com> wrote:
>
> > On Wed, May 25, 2011 at 11:39 AM, Ted Dunning <tdunn...@maprtech.com>
> > wrote:
> > > It should be recognized that your experiences are a bit out of the norm
> > > here. Many HBase installations use more recent JVMs without problems.
> >
> > Indeed, we run u25 on CentOS 5.6, and over several days of uptime it's
> > common to never see a full GC across an 8GB heap.
> >
> > What we never see is a ParNew taking .1 seconds; they're usually .01,
> > and we never have full collections lasting 92 seconds. The only time
> > I've ever seen a JVM at 8GB take that long is when running on puny
> > (read: virtualized) cores or when there are Ubuntu kernel bugs at play.
> > The same is true for our Cassandra deploys.
> >
>
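
If it helps with the GC side of this, GC logging will show exactly what kind
of pause you are hitting. Something along these lines in hbase-env.sh works;
the heap size, occupancy fraction, and log path here are illustrative, not a
recommendation for your cluster:

    # illustrative CMS/ParNew settings plus GC logging for a region server
    export HBASE_OPTS="$HBASE_OPTS -Xms8g -Xmx8g \
      -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
      -XX:CMSInitiatingOccupancyFraction=70 \
      -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
      -Xloggc:/var/log/hbase/gc-regionserver.log"

Grepping the resulting log for "ParNew" versus "Full GC" entries makes it
obvious whether the long pauses are young-gen collections or full
collections, and how long each one actually takes.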
