See https://builds.apache.org/job/HBase-TRUNK/2017/changes
Changes:
[stack] Added note on diff between snappy in hbase and snappy in hadoop
[stack] HBASE-4019 troubleshooting.xml - adding section under NameNode for
where to find hbase objects on HDFS
--
MapR does help with the GC because it *does* have a JNI interface into an
external block cache.
Typical configurations with MapR trim HBase down to the minimal viable size
and increase the file system cache correspondingly.
On Fri, Jul 8, 2011 at 7:52 PM, Jason Rutherglen
See https://builds.apache.org/job/HBase-TRUNK/2018/changes
Changes:
[tedyu] Added timeout for tests in TestScannerTimeout
--
[...truncated 1286 lines...]
[INFO] Surefire report directory:
On Fri, Jul 8, 2011 at 6:47 PM, Jason Rutherglen jason.rutherg...@gmail.com
wrote:
There are a couple of things here: one is using direct byte buffers to put the
blocks outside of the heap; the other is mmap'ing the blocks directly from
the underlying HDFS file.
I think they both make sense. And I'm
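Both approaches can be illustrated with nothing but the plain JDK: a direct ByteBuffer keeps block bytes off the Java heap where the collector never scans them, and FileChannel.map memory-maps a block-sized region of a local file so reads are served from the OS page cache. This is only an illustrative sketch, not HBase code; the class and method names (OffHeapBlockSketch, cacheBlockOffHeap, mapBlock) are hypothetical.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OffHeapBlockSketch {

    // Approach 1: copy block bytes into a direct buffer. The data lives
    // outside the Java heap, so the GC never scans or relocates it.
    static ByteBuffer cacheBlockOffHeap(byte[] blockBytes) {
        ByteBuffer buf = ByteBuffer.allocateDirect(blockBytes.length);
        buf.put(blockBytes);
        buf.flip();
        return buf;
    }

    // Approach 2: mmap a block-sized region of a local file. The mapping
    // is backed by the OS page cache, so repeated reads avoid heap churn.
    static MappedByteBuffer mapBlock(Path file, long offset, long length)
            throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, offset, length);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] block = "example-block-data".getBytes();

        ByteBuffer offHeap = cacheBlockOffHeap(block);
        System.out.println("direct=" + offHeap.isDirect());   // direct=true

        Path tmp = Files.createTempFile("block", ".dat");
        tmp.toFile().deleteOnExit();
        Files.write(tmp, block);

        MappedByteBuffer mapped = mapBlock(tmp, 0, block.length);
        byte[] readBack = new byte[block.length];
        mapped.get(readBack);
        System.out.println("match="
                + java.util.Arrays.equals(readBack, block)); // match=true
    }
}
```

In practice the mmap route only helps when the DFS client can reach the block file locally, which is exactly what the HDFS-347 discussion below is about.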
It's not clear from HBASE-3904 what the issues are. If there's some code
relying on isTableAvailable, that code is inherently broken.
1. isTableAvailable() is never reliable, because
(a) if it returns true, the table can disappear immediately after the
call finishes, or
(b) the table can
I resolved HBASE-3904 because there was no solution that everyone agreed on.
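The check-then-act hazard described above is a classic time-of-check/time-of-use race, and nothing about it is HBase-specific. The hypothetical sketch below uses a plain file in place of a table to show why any boolean availability check is stale by the time the caller acts on it; the names (CheckThenActRace, isAvailable) are invented for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckThenActRace {

    // An isTableAvailable()-style check: the answer is stale the moment it
    // is returned, because another actor can remove the resource between
    // the check and the subsequent use.
    static boolean isAvailable(Path resource) {
        return Files.exists(resource);
    }

    public static void main(String[] args) throws IOException {
        Path resource = Files.createTempFile("table", ".meta");

        boolean available = isAvailable(resource); // check: returns true
        Files.delete(resource);                    // another client drops it
        boolean stillThere = Files.exists(resource); // act: "true" is now worthless

        System.out.println("checked=" + available
                + " actual=" + stillThere); // checked=true actual=false
    }
}
```

The robust pattern is to attempt the operation and handle the failure, rather than asking first and trusting the answer.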
On Sat, Jul 9, 2011 at 12:48 PM, M. C. Srivas mcsri...@gmail.com wrote:
It's not clear from HBASE-3904 what the issues are. If there's some code
relying on isTableAvailable, that code is inherently broken.
1.
I think my general point is we could hack up the HBase source, add
refcounting, circumvent the GC, etc., or we could demand more from the DFS.
If a variant of HDFS-347 were committed, reads could come from the Linux
buffer cache and life would be good.
The choice isn't fast hbase vs slow hbase,
I'm a little confused; I was told none of the HBase code changed with MapR.
If the HBase (not the OS) block cache has a JNI implementation, then that
part of the HBase code changed.
On Jul 9, 2011 11:19 AM, Ted Dunning tdunn...@maprtech.com wrote:
MapR does help with the GC because it *does* have
re: If a variant of hdfs-347 was committed,
I agree with what Ryan is saying here, and I'd like to second (third?
fourth?) the call to keep pushing for HDFS improvements. Anything else is
coding around the bigger I/O issue.
On 7/9/11 6:13 PM, Ryan Rawson ryano...@gmail.com wrote:
I think my general
No lines of HBase were changed to run on MapR. MapR implements the HDFS API
and uses JNI to get local data. If HDFS wanted to, it could use more
sophisticated methods to get data rapidly from local disk to a client's
memory space... as MapR does.
On Jul 9, 2011 6:05 PM, Doug Meil