OK, it is HDFS-12574; it has also been ported to 2.8.4. Let's
revive HBASE-20244.
2018-07-03 9:07 GMT+08:00 张铎(Duo Zhang) :
I think it is fine to just use the original Hadoop jars in HBase 2.0.1 to
communicate with HDFS 2.8.4 or above?
The async WAL has hacked into the internals of DFSClient, so it can easily
break when HDFS is upgraded.
I can take a look at the 2.8.4 problem, but for 3.x there is no production
ready
That's just a warning. Checking on HDFS-11644: it's only present in
Hadoop 2.9+, so its absence with HDFS 2.8.4 is expected.
(Presuming you are deploying on top of HDFS and not e.g.
LocalFileSystem.)
Are there any ERROR messages in the regionserver or master logs?
Could you post the
It's now stuck at "Master Initializing" and the regionservers are
complaining with:
18/07/02 21:12:20 WARN util.CommonFSUtils: Your Hadoop installation does
not include the StreamCapabilities class from HDFS-11644, so we will skip
checking if any FSDataOutputStreams actually support hflush/hsync. If you
You are lucky that HBase 2.0.1 worked with Hadoop 2.8.
I tried HBase 2.0.1 with Hadoop 3.1 and there were endless problems with the
region server crashing because of a WAL file system issue. See the
thread "Hbase hbase-2.0.1, region server does not start on Hadoop 3.1".
Decided to roll back to HBase 1.2.6 that
Setting
hbase.wal.provider
to
filesystem
seems to fix it, but it would be nice to actually try the fanout WAL with
Hadoop 2.8.4.
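For reference, a minimal sketch of that workaround as it would appear in hbase-site.xml (using the property name from the message above; "filesystem" selects the classic FSHLog-based WAL provider instead of the asyncfs fanout one):

```xml
<!-- hbase-site.xml: fall back to the FileSystem-based WAL provider -->
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
```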
On Mon, Jul 2, 2018 at 1:03 PM, Andrey Elenskiy wrote:
Hello, we are running HBase 2.0.1 with the official Hadoop 2.8.4 jars and
the Hadoop 2.8.4 client (
http://central.maven.org/maven2/org/apache/hadoop/hadoop-client/2.8.4/).
Got the following exception on a regionserver, which brings it down:
18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was
Hi Sean,
Many thanks for the clarification. I read some notes on GitHub and in JIRAs
about HBase and Hadoop 3 integration.
So my decision was to revert to an earlier stable version of HBase, as
I did not have the bandwidth to try to make HBase work with Hadoop 3+.
In fairness to Ted, he has always
Hi Mich,
Please check out the section of our reference guide on Hadoop versions:
http://hbase.apache.org/book.html#hadoop
The short version is that there is not yet a Hadoop 3 version that the
HBase community considers appropriate for running HBase. If you'd like
to get into the details and work aro
Please see the following two constants defined in TableInputFormat:
/** Column Family to Scan */
public static final String SCAN_COLUMN_FAMILY =
"hbase.mapreduce.scan.column.family";
/** Space delimited list of columns and column families to scan. */
public static final String SCAN_COL
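In case it helps, here is a minimal sketch of how those keys are typically set on the job configuration before using TableInputFormat. To keep the snippet dependency-free, java.util.Properties stands in for Hadoop's Configuration (in a real job you would call conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, ...) on an org.apache.hadoop.conf.Configuration); the key strings are taken from the constants quoted above:

```java
import java.util.Properties;

public class ScanConfigSketch {
    public static void main(String[] args) {
        // Properties stands in for org.apache.hadoop.conf.Configuration
        // so this example stays self-contained.
        Properties conf = new Properties();

        // TableInputFormat.SCAN_COLUMN_FAMILY: restrict the scan to one family.
        conf.setProperty("hbase.mapreduce.scan.column.family", "cf");

        // TableInputFormat.SCAN_COLUMNS: a space-delimited list of
        // columns and/or column families to scan.
        conf.setProperty("hbase.mapreduce.scan.columns", "cf:a cf:b other_cf");

        System.out.println(conf.getProperty("hbase.mapreduce.scan.column.family"));
    }
}
```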
Hi,
I am using HBase with Spark, and as I have wide columns (> 1) I wanted to
use the setBatch(num) option to not read all the columns of a row at once
but in batches.
I can create a scan and set the batch size I want with
TableInputFormat.SCAN_BATCHSIZE, but I am a bit confused how this would wor
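One thing worth keeping in mind with batching: when a batch limit is set, a single wide row can come back split across several Result objects, each carrying at most that many cells, so any per-row logic downstream has to be prepared for partial rows. A minimal sketch of setting the key (assuming TableInputFormat.SCAN_BATCHSIZE resolves to "hbase.mapreduce.scan.batchsize"; java.util.Properties again stands in for Hadoop's Configuration to keep the example self-contained):

```java
import java.util.Properties;

public class BatchScanSketch {
    public static void main(String[] args) {
        Properties conf = new Properties();

        // With a batch size set, a wide row may be split across several
        // Result objects, each holding at most this many cells.
        conf.setProperty("hbase.mapreduce.scan.batchsize", "100");

        System.out.println(conf.getProperty("hbase.mapreduce.scan.batchsize"));
    }
}
```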