I think we should make the BlockCache pluggable for HBase. A simple
reflection-based enhancement to CacheConfig.instantiateBlockCache should do
the trick, shouldn't it? If people think that this is valuable, I can submit a
patch.
This will enable people to play with their own versions of the
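The reflective plug point could look roughly like this. This is a minimal standalone sketch, not the actual CacheConfig code: the BlockCache interface here is a toy stand-in, and in real HBase the class name would come from the Configuration rather than a plain string.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal BlockCache interface, for illustration only.
interface BlockCache {
    void cacheBlock(String key, byte[] block);
    byte[] getBlock(String key);
}

// A trivial map-backed implementation someone might plug in.
class SimpleBlockCache implements BlockCache {
    private final Map<String, byte[]> blocks = new ConcurrentHashMap<>();
    public void cacheBlock(String key, byte[] block) { blocks.put(key, block); }
    public byte[] getBlock(String key) { return blocks.get(key); }
}

public class PluggableCacheSketch {
    // The reflective enhancement: resolve a configured class name and
    // instantiate it once via its no-arg constructor.
    static BlockCache instantiateBlockCache(String className) throws Exception {
        Class<?> clazz = Class.forName(className);
        return (BlockCache) clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        BlockCache cache = instantiateBlockCache("SimpleBlockCache");
        cache.cacheBlock("block-0", new byte[] {1, 2, 3});
        System.out.println(cache.getBlock("block-0").length); // prints 3
    }
}
```

Anyone could then ship their own BlockCache implementation on the classpath and select it by name in the configuration.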
One of my colleagues has developed an extensive, functional HBase client in
C++, and we are in the process of open-sourcing it. It uses the Thrift
interface to interact with the regionservers.
-dhruba
On Thu, Feb 16, 2012 at 3:09 PM, Todd Lipcon t...@cloudera.com wrote:
Most of our garbage is from block cache, not directly from the KVs. Is
that what you see?
thanks,
dhruba
On Thu, Dec 1, 2011 at 11:06 AM, Stack st...@duboce.net wrote:
On Thu, Dec 1, 2011 at 10:57 AM, lars hofhansl lhofha...@yahoo.com
wrote:
To try this out I changed the server side code to
This reflection should occur only once, not at every write to the HLog, so
the performance impact should be minimal, shouldn't it?
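The once-only pattern is just caching the reflective handle at class load and reusing it on every subsequent write. A standalone sketch, with `String.length` standing in for whatever method the HLog code actually probes for:

```java
import java.lang.reflect.Method;

public class ReflectOnceSketch {
    // Resolve the Method a single time, in a static initializer, and
    // reuse it for every call; only the cheap invoke() stays on the
    // per-write path, not the expensive lookup.
    private static final Method LENGTH;
    static {
        Method m = null;
        try {
            m = String.class.getMethod("length");
        } catch (NoSuchMethodException e) {
            // Method absent on this classpath; callers would fall back
            // to a non-reflective code path.
        }
        LENGTH = m;
    }

    static int invokeCached(String s) throws Exception {
        return (int) LENGTH.invoke(s);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeCached("hlog")); // prints 4
    }
}
```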
Why are you seeing the exception now? Are you using a new unit test or a new
HDFS jar?
On Thu, Dec 1, 2011 at 9:59 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com
What Hadoop version are you using?
On Thu, Dec 1, 2011 at 11:12 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
After fixing the getFileLength() method access bug, the error I'm seeing in
my local multi-process cluster load test is different. Do we ever expect to
see checksum
Good stuff K^2 - Karthik and Kannan!!!
http://www.everestnews.com/k2history.htm
-dhruba
On Tue, Nov 1, 2011 at 9:41 AM, Stack st...@duboce.net wrote:
On Mon, Oct 31, 2011 at 11:05 PM, Li Pi l...@idle.li wrote:
K^2 is an awesome moniker. Congrats!
cheez
It also refers to the mountainous
I have been experimenting with the WAL settings too. It is obvious that
turning off the WAL makes your transactions go faster; HDFS write/sync is not
yet well optimized for high-throughput small writes.
However, irrespective of whether I have one WAL or two, I am seeing the
same throughput. I
+1 for Option 4.
-dhruba
On Sat, Oct 1, 2011 at 11:26 AM, Ted Yu yuzhih...@gmail.com wrote:
I prefer the fourth option.
Only truly broken APIs should be removed.
Cheers
On Oct 1, 2011, at 10:02 AM, Doug Meil doug.m...@explorysmedical.com
wrote:
Be very careful about code-changes
This increases the efficiency of an HBase cluster tremendously. +1 for
committing it earlier rather than later.
-dhruba
On Tue, Jul 26, 2011 at 7:29 PM, Andrew Purtell apurt...@apache.org wrote:
+1
It's a big patch, but it has seen much more testing than is typical and is of
high quality as you
Doesn't the wide-area HBase replication use the .oldlogs to keep the slave
HBase cluster in sync?
thanks
dhruba
On Tue, Jul 5, 2011 at 7:27 AM, Lars George lars.geo...@gmail.com wrote:
Hi,
Ah, I see Ted has that also questioned in HBASE-4010. Good.
And I was slightly wrong below, as there
I completely agree with Ryan. Most of the measurements in HDFS-347 are point
comparisons: data rate over a socket, single-threaded sequential read from a
datanode, single-threaded random read from a datanode, etc. These measurements
are good, but when you run the entire HBase system at load, you
Hi Andrew,
I have been doing a set of experiments for the last month on a workload
that is purely increments. I too have seen that the performance drops when
the memstore fills up. My guess is that although the complexity is O(log n),
when n is large the time needed to insert/lookup
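The memstore is skip-list backed (a ConcurrentSkipListMap underneath), so the O(log n) per-operation cost can be reproduced with the same data structure. A standalone sketch, not HBase code:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class MemstoreCostSketch {
    public static void main(String[] args) {
        // A skip list keeps keys sorted; each put/get walks O(log n)
        // levels, so per-operation cost grows slowly but steadily as
        // the map fills up between flushes.
        ConcurrentSkipListMap<Long, Long> memstore = new ConcurrentSkipListMap<>();
        for (long i = 0; i < 100_000; i++) {
            memstore.put(i, i * 2);
        }
        System.out.println(memstore.size());       // prints 100000
        System.out.println(memstore.get(99_999L)); // prints 199998
    }
}
```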
Hi Jonathan,
Nice wiki. You mention that
2. Generate a new, special kind of WALEdit for secondary table update
Is it possible to store these secondary WAL edits as the contents of another
HBase table (say, indexTable)? The advantage is that this can then be
implemented as a pure layer wrapping the
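A client-side layer over an index table could look roughly like this toy model, with in-memory sorted maps standing in for the primary and index tables (the name indexTable comes from the question above; everything else here is illustrative, not a real coprocessor or WALEdit change):

```java
import java.util.Map;
import java.util.TreeMap;

public class IndexLayerSketch {
    private final TreeMap<String, String> primary = new TreeMap<>();
    // The "indexTable": rows keyed by value + "/" + rowkey, so a prefix
    // scan by value finds all matching primary rows.
    private final TreeMap<String, String> indexTable = new TreeMap<>();

    // Pure layering: every write to the primary table also writes an
    // inverted row to the index table; no new WALEdit type is required.
    void put(String rowKey, String value) {
        primary.put(rowKey, value);
        indexTable.put(value + "/" + rowKey, rowKey);
    }

    String firstRowWithValue(String value) {
        Map.Entry<String, String> e = indexTable.ceilingEntry(value + "/");
        return (e != null && e.getKey().startsWith(value + "/"))
                ? e.getValue() : null;
    }

    public static void main(String[] args) {
        IndexLayerSketch tables = new IndexLayerSketch();
        tables.put("row1", "apple");
        tables.put("row2", "banana");
        System.out.println(tables.firstRowWithValue("banana")); // prints row2
    }
}
```

The trade-off versus a server-side WALEdit approach is that the two writes are not atomic, which is exactly why the coprocessor discussion matters.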
This is definitely interesting to me. Related discussion you can find here:
http://issues.apache.org/jira/browse/HBASE-3434
I plan on doing this via a coprocessor and will upload the patch in a few
days.
thanks
dhruba
On Mon, Jan 17, 2011 at 2:14 AM, js...@email.de wrote:
Hi,
we are
I am looking at Hregion.incrementColumnValue(). It has the following piece
of code
// build the KeyValue now:
KeyValue newKv = new KeyValue(row, family,
    qualifier, EnvironmentEdgeManager.currentTimeMillis(),
    Bytes.toBytes(result));
---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/219/
---
Review request for hbase.
Summary
---
This utility scans the META table and