That did it! I pulled down:
https://www.apache.org/dyn/closer.lua/hbase/2.4.7/hbase-2.4.7-src.tar.gz
and built the software with Maven using Java 8:

mvn package assembly:single -Dhadoop.profile=3.0 \
  -Dhadoop-three.version=3.2.2 -DskipTests

I took the gz file from:
/hbase-2.4.7/hbase-assembly/target/hbase-2.4.7-bin.tar.gz
and deployed that to my cluster.
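
For anyone else hitting this, the full sequence was roughly the following
(the extraction directory and the way the tarball gets pushed out to the
cluster are specific to my setup, so treat it as a sketch):

# unpack the hbase-2.4.7-src.tar.gz downloaded from the mirror link above
tar xzf hbase-2.4.7-src.tar.gz
cd hbase-2.4.7

# build against Hadoop 3.2.2 with a Java 8 JDK as JAVA_HOME
mvn package assembly:single -Dhadoop.profile=3.0 \
  -Dhadoop-three.version=3.2.2 -DskipTests

# the rebuilt binary tarball lands under hbase-assembly/target
ls hbase-assembly/target/hbase-2.4.7-bin.tar.gz

# copy the tarball out to the cluster and unpack it there
# (the destination host and path below are just examples)
scp hbase-assembly/target/hbase-2.4.7-bin.tar.gz admin@hbase-master:/opt/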

Worked like a charm. Thanks, Josh.
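
For anyone who finds this thread later, a quick way to confirm that the
Hadoop jars bundled with HBase and the cluster itself are in sync (the
check Josh suggests below); $HBASE_HOME here is just wherever your install
lives:

# the Hadoop version HBase bundles shows up in the jar file names
ls $HBASE_HOME/lib/hadoop-*.jar

# the Hadoop version the cluster is actually running
hadoop version

After the rebuild above, both should report 3.2.2.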

On Tue, Oct 19, 2021 at 7:32 AM Josh Elser <els...@apache.org> wrote:

> Keeping the Hadoop client libraries exactly in sync is one of those
> things that, within a major version, is not likely to matter. However, if
> you are having a problem, syncing them is the first thing I would do.
>
> If you're in that territory, it's also a good idea to recompile HBase
> against that exact version of Hadoop (we operate under those same
> assumptions because we can't realistically monitor builds of HBase
> against every single Hadoop version). Something like:
>
> `mvn package assembly:single -Dhadoop.profile=3.0
> -Dhadoop-three.version=3.2.2 -DskipTests`
>
> The specific error about HdfsFileStatus is a known breakage from Hadoop
> 2 to Hadoop 3: HdfsFileStatus was a class in Hadoop 2 and was changed to
> an interface in Hadoop 3. Code compiled against the class won't link
> against the interface, which is why the recompilation is important
> (that's just how Java works).
>
> On 10/18/21 11:42 AM, Damillious Jones wrote:
> > Thanks for the response. They are not. Hadoop 3.2.2 is running 3.2.2
> > libs, while HBase is using the 2.10 Hadoop libs. Do these need to be in
> > sync?
> >
> > I did try syncing them up and adding 3.2.2 libs into HBase, replacing all
> > of the hadoop-* files and I got this error:
> >
> > Unhandled: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus,
> > but class was expected
> >
> > In searching for an answer to this I read that if you are using Hadoop
> > 3.1 or higher you need to compile HBase with a special flag:
> > https://issues.apache.org/jira/browse/HBASE-22394
> > https://issues.apache.org/jira/browse/HBASE-24154
> >
> > Do I have to compile HBase in order to get HBase to work with Hadoop
> > 3.1 or higher?
> >
> > On Mon, Oct 18, 2021 at 7:13 AM Josh Elser <els...@apache.org> wrote:
> >
> >> Are the Hadoop JARs which you're using inside HBase the same as the
> >> Hadoop version you're running? (e.g. in $HBASE_HOME/lib)
> >>
> >> On 10/15/21 6:18 PM, Damillious Jones wrote:
> >>> Hi all, I am seeing an issue similar to the one noted in HBASE-26007,
> >>> where HBase will not start if dfs.encrypt.data.transfer in HDFS is set
> >>> to true. When I start HBase I see the following error message on the
> >>> master node:
> >>>
> >>> java.io.IOException: Invalid token in javax.security.sasl.qop:
> >>>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> >>>
> >>> I am using Hadoop 3.2.2 and HBase 2.4.5 with Java 1.8. If I use Hadoop
> >>> 3.1.1 it works fine. Has anyone else encountered this issue?
> >>>
> >>> Any help would be appreciated, thanks.
> >>>
> >>
> >
>
