Slava Gorelik wrote:
Hi. It happened yesterday: after 28 hours of running, I started the hadoop
balancer. After 2 hours of it working (with some exceptions - can't move block), my
hbase started to throw exceptions that it can't create new blocks. It happened
on a couple of region servers, and then the master failed to connect to them
and eventually it crashed. Tomorrow I can cut the last 2-3 hours from the logs (they
are huge) and send them to you.

Sorry about that. I presumed it safe given that the balancer has been around a few releases and we're running it here continuously w/o issue. Did a restart fix things, or are there now missing blocks?
BTW, I sent an email to the list a couple of days ago about the blockCache parameter
on the column family descriptor - what is it, and how does it affect performance?
Sorry, missed it.

The blockcache is client-side caching of pieces of store files. You can set the size of the blocks to cache client-side. It uses java Soft References. Blocks are evicted on roughly an LRU basis when memory is low. It was added a good while ago by Tom White.
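To illustrate the mechanism described above - blocks held via java Soft References so the JVM can reclaim them under memory pressure, with roughly LRU eviction - here is a minimal, self-contained sketch. The class name BlockCache, the CAPACITY constant, and the method names are illustrative only, not HBase's actual API:

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the idea: cached blocks are wrapped in
// SoftReferences (so the garbage collector may clear them when memory
// is low) and kept in an access-ordered LinkedHashMap, which gives a
// rough LRU eviction policy via removeEldestEntry.
public class BlockCache {
    private static final int CAPACITY = 4; // tiny, for demonstration only

    private final Map<String, SoftReference<byte[]>> cache =
        new LinkedHashMap<String, SoftReference<byte[]>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(
                    Map.Entry<String, SoftReference<byte[]>> eldest) {
                // Evict the least-recently-accessed entry once over capacity.
                return size() > CAPACITY;
            }
        };

    public void put(String blockName, byte[] block) {
        cache.put(blockName, new SoftReference<>(block));
    }

    public byte[] get(String blockName) {
        SoftReference<byte[]> ref = cache.get(blockName);
        // Null if the block was LRU-evicted or its SoftReference was cleared.
        return ref == null ? null : ref.get();
    }

    public static void main(String[] args) {
        BlockCache c = new BlockCache();
        for (int i = 0; i < 6; i++) {
            c.put("block-" + i, new byte[]{(byte) i});
        }
        // With capacity 4 and 6 inserts, the two oldest blocks are gone.
        System.out.println(c.get("block-0") == null); // evicted
        System.out.println(c.get("block-5") != null); // still cached
    }
}
```

The real cache in HBase is more involved, but this captures why a memory-starved regionserver sees cache entries vanish: the Soft References are cleared before the JVM throws an OOME.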

By default it has been off, but as of the HBASE-953 commit about a week or so ago, after some playing and tuning, the default has been flipped: blockcache is now on by default. Block caching, along with other performance improvements including rpc fixes and J-D's scanner pre-fetching and batch writing, will make the 0.19.0 release run faster than its predecessors in many regards.

Some rough benchmarking running our performance test -- keep in mind this is not very real-world, just a single client going against a single regionserver (see wiki for more) -- shows writes running at ~3X speed, scans at ~7X, sequential reads at ~2X, and random reads anywhere from slower to 2-3 times faster, depending on how well the block cache is helping (or hindering). If the regionserver has more memory, random reads run faster. If there is not enough, the regionserver just spins filling the cache and random read times plummet. I'll put up some numbers when we come closer to the 0.19.0 release.

If you do enable block cache, be sure to update your hbase-default.xml. The old block size tends to provoke OOMEs.
St.Ack
