Hi
I am using hbase-0.90.4 and hadoop-0.20.203; for both of these the user is
the same, i.e. vamshi. I am running both in pseudo-distributed mode. I
replaced the hadoop-0.20.0-append.jar found in hbase/lib with
hadoop-0.20.203-core.jar.
For hadoop-0.20.203, everything is working fine. When I execute
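The jar swap described above can be sketched as a small helper. This is only an illustration of the step, not HBase's own tooling; the HBASE_HOME/HADOOP_HOME paths and the function name are assumptions.

```python
# Hedged sketch of the jar swap described above: remove the bundled
# hadoop-*-append jar from hbase/lib and copy in the core jar from the
# Hadoop install, so HBase and Hadoop use the same client classes.
import glob
import os
import shutil

def swap_hadoop_jar(hbase_home, hadoop_home):
    lib = os.path.join(hbase_home, "lib")
    # Drop the Hadoop jar that HBase shipped with.
    for old in glob.glob(os.path.join(lib, "hadoop-*-append*.jar")):
        os.remove(old)
    # Install the cluster's own core jar in its place.
    src = os.path.join(hadoop_home, "hadoop-0.20.203-core.jar")
    shutil.copy(src, lib)
    return sorted(os.listdir(lib))
```

After this, hbase/lib should contain only the core jar matching the running Hadoop version.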
See https://builds.apache.org/job/HBase-TRUNK/2208/changes
Changes:
[todd] HBASE-4381 Refactor split decisions into a split policy class.
[todd] HBASE-4287 If region opening fails, change region in transition into a
FAILED_OPEN state so that it can be retried quickly.
[stack] HBASE-4394 Add
Look at OpenTSDB, which collects exactly those kinds of measurements and
inserts them into HBase.
It already comes with a collector that grabs tons of OS metrics, and it is
fairly easy to add more yourself.
Cheers,
Joep
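Adding a collector of your own is mostly a matter of printing data points in the line format tcollector expects. A minimal sketch (the metric name and host tag are illustrative, and the /proc/loadavg read is Linux-specific):

```python
#!/usr/bin/env python
# Hedged sketch of an OpenTSDB tcollector-style collector. tcollector
# runs collector scripts and forwards each stdout line, in the form
#   <metric> <unix-timestamp> <value> [tag=value ...]
# on to OpenTSDB.
import time

def format_datapoint(metric, value, tags=""):
    # One data point per line, space-separated, as tcollector expects.
    return ("%s %d %s %s" % (metric, int(time.time()), value, tags)).strip()

def read_loadavg():
    # 1-minute load average, first field of /proc/loadavg (Linux only).
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

if __name__ == "__main__":
    import os
    if os.path.exists("/proc/loadavg"):
        print(format_datapoint("proc.loadavg.1min", read_loadavg(),
                               "host=myhost"))
```

Dropping a script like this into tcollector's collectors directory is the usual way to pick up extra metrics.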
From: stable29 [arpita...@gmail.com]
Hi,
I plan to integrate the following JIRAs by Friday:
HBASE-4330, HBASE-4383 - SlabCache should be in a usable state after these
HBASE-4351
If you have review comments, please share.
This sounds like you will have about 500 million rows in your database after
6 months. To my mind, this is at a level that is inconvenient for a
conventional database, but hardly impossible.
HBase will definitely hold this much data. It would probably help you to do
some slightly clever tricks to
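For a sense of the scale being discussed, a quick back-of-the-envelope check of the average write rate those numbers imply (a sketch assuming uniform ingest and 30-day months):

```python
# Rough ingest rate implied by ~500 million rows over 6 months,
# assuming a uniform write rate and 30-day months.
rows = 500_000_000
seconds = 6 * 30 * 24 * 3600  # about 15.6 million seconds
rate = rows / float(seconds)
print(int(round(rate)))  # roughly 32 rows per second on average
```

A few tens of writes per second on average is modest, which is consistent with the point above that this volume is well within HBase's range.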
Matt,
Thanks a lot for the code. Great job!
As I mentioned in JIRA, I work full time on the delta encoding [1]. Right
now the code and integration are almost done. Most of the parts are under
review. Since it is a big change, we plan to test it very carefully.
After that, it will be ported to
See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-23/28/changes
Changes:
[tedyu] HBASE-4330 Fix races in slab cache (Li Pi Todd)
[dmeil] HBASE-4409. book. Fixed cycle in config section and wiki with too
many open files error.
[dmeil] HBASE-4408. book.xml, faq
[todd] Amend
See https://builds.apache.org/job/HBase-TRUNK/2209/changes