Your current problem is caused by this phoenix jar:
hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
org/apache/hadoop/fs/StreamCapabilities.class
org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
I don't know what version of Hadoop it's bundling or why, but it's one
that includes the StreamCapabilities interface, so HBase takes that to
mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
implement any, HBase throws its hands up.
I'd recommend you ask on the phoenix list how to properly install
phoenix such that you don't need to copy the jars into the HBase
installation. Hopefully the jar pointed out here is meant to be client
facing only and not installed into the HBase cluster.
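Until the Phoenix list confirms the proper install procedure, a stop-gap sketch might look like the following. This is illustrative only, not a tested Phoenix install method; the jar name and paths are assumed from the `find` output above and should be adjusted to your layout:

```shell
# Sketch only: jar name and paths taken from the output above; adjust
# HBASE_HOME to your layout before running anything like this.
HBASE_HOME="${HBASE_HOME:-/data6/hduser/hbase-2.0.0}"
JAR="$HBASE_HOME/lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar"

if [ -f "$JAR" ]; then
  # Move the client jar out of lib/ so the HBase servers stop loading the
  # Hadoop classes it bundles; keep it elsewhere for client-side use.
  mkdir -p "$HBASE_HOME/phoenix-client"
  mv "$JAR" "$HBASE_HOME/phoenix-client/"
  echo "moved $(basename "$JAR") out of lib/"
else
  echo "no phoenix client jar under $HBASE_HOME/lib"
fi
```

After moving it, restart the HBase daemons and re-run the `find` diagnostic to confirm only `hbase-common` still mentions StreamCapabilities.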
On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:
Hi,
Under the HBase home directory I get:
hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
org/apache/hadoop/fs/StreamCapabilities.class
org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
--
./lib/hbase-common-2.0.0.jar
org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
For the Hadoop home directory I get nothing:
hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
Dr Mich Talebzadeh
LinkedIn *https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw*
http://talebzadehmich.wordpress.com
*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.
On 7 June 2018 at 15:39, Sean Busbey <bus...@apache.org> wrote:
Somehow, HBase is getting confused by your installation and thinks it
can check whether or not the underlying FileSystem implementation
(i.e. HDFS) provides hflush/hsync even though that ability is not
present in Hadoop 2.7. Usually this means there's a mix of Hadoop
versions on the classpath. While you do have both Hadoop 2.7.3 and
2.7.4, that mix shouldn't cause this kind of failure[1].
Please run this command and copy/paste the output in your HBase and
Hadoop installation directories:
find . -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
[1]: As an aside, you should follow the guidance in our reference
guide from the section "Replace the Hadoop Bundled With HBase!" in the
Hadoop chapter: http://hbase.apache.org/book.html#hadoop
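The replacement that guide describes can be sketched as follows. This is an illustration only, using the paths and versions reported in this thread (`/data6/hduser/hbase-2.0.0` and `/home/hduser/hadoop-2.7.3`), not a verified procedure; stop HBase first and keep backups of anything you delete:

```shell
# Sketch only, illustrating the "Replace the Hadoop Bundled With HBase!"
# advice; paths and versions are the ones reported in this thread.
HBASE_HOME="${HBASE_HOME:-/data6/hduser/hbase-2.0.0}"
HADOOP_HOME="${HADOOP_HOME:-/home/hduser/hadoop-2.7.3}"

# 1. Remove the Hadoop 2.7.4 jars that ship inside HBase's lib/
rm -f "$HBASE_HOME"/lib/hadoop-*-2.7.4*.jar

# 2. Copy in the jars from the Hadoop that actually runs the cluster (2.7.3)
if [ -d "$HADOOP_HOME/share/hadoop" ]; then
  find "$HADOOP_HOME/share/hadoop" -name 'hadoop-*-2.7.3.jar' \
    ! -name '*tests*' ! -name '*sources*' \
    -exec cp {} "$HBASE_HOME/lib/" \;
fi
```

The point is that every hadoop-* jar on the HBase classpath then matches the version of the running HDFS, so version-probing logic like the StreamCapabilities check sees a consistent picture.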
But as I mentioned, I don't think it's the underlying cause in this
case.
On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:
Hi,
Please find below
*bin/hbase version*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
HBase 2.0.0
Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git
revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
From source with checksum a59e806496ef216732e730c746bbe5ac
*ls -lah lib/hadoop**
-rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26 lib/hadoop-annotations-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29 lib/hadoop-client-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28 lib/hadoop-common-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26 lib/hadoop-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29 lib/hadoop-distcp-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29 lib/hadoop-hdfs-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29 lib/hadoop-mapreduce-client-app-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29 lib/hadoop-mapreduce-client-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27 lib/hadoop-mapreduce-client-core-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29 lib/hadoop-mapreduce-client-hs-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29 lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28 lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28 lib/hadoop-minicluster-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27 lib/hadoop-yarn-api-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28 lib/hadoop-yarn-client-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27 lib/hadoop-yarn-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28 lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28 lib/hadoop-yarn-server-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28 lib/hadoop-yarn-server-nodemanager-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29 lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28 lib/hadoop-yarn-server-tests-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29 lib/hadoop-yarn-server-web-proxy-2.7.4.jar
Also I am on Hadoop 2.7.3
*hadoop version*
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using
/home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
On 7 June 2018 at 14:20, Sean Busbey <sean.bus...@gmail.com> wrote:
HBase needs HDFS syncs to avoid data loss during component failure.
What's the output of the command "bin/hbase version"?
What's the result of doing the following in the hbase install?
ls -lah lib/hadoop*
On Jun 7, 2018 00:58, "Mich Talebzadeh" <mich.talebza...@gmail.com>
wrote:
Yes, correct. I am using HBase on HDFS with hadoop-2.7.3.
The file system is ext4.
I was hoping I could avoid the sync option.
many thanks
On 7 June 2018 at 01:43, Sean Busbey <bus...@apache.org> wrote:
On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:
So the region server started OK, but then I had a problem with the master :(
java.lang.IllegalStateException: The procedure WAL relies on the ability to
hsync for proper operation during component failures, but the underlying
filesystem does not support doing so. Please check the config value of
'hbase.procedure.store.wal.use.hsync' to set the desired level of
robustness and ensure the config value of 'hbase.wal.dir' points to a
FileSystem mount that can provide it.
This error means that you're running on top of a Filesystem that
doesn't provide sync.
Are you using HDFS? What version?
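For completeness: the setting named in that stack trace is read from hbase-site.xml. An illustrative fragment follows; note that turning the check off trades away durability during component failure, so it is a last resort, not a substitute for running on a filesystem that supports hsync:

```xml
<!-- hbase-site.xml fragment (illustrative, using the setting named in the
     stack trace). false = accept hflush instead of hsync for the procedure
     WAL, weakening durability guarantees during component failure. -->
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
```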