Thanks, Keith, for all your inputs. FYI, this cluster was deployed via Muchos,
and that accumulo-site template has:
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/mapreduce/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/yarn/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/yarn/lib/jersey.*.jar
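If I'm reading your suggestion right, the modified property would look
something like this (just a sketch of the change; the non-Hadoop classpath
entries that Muchos also generates are elided here):

```xml
<!-- Sketch: drop the per-component Hadoop entries above and keep only the
     shaded client jars. Non-Hadoop entries (Accumulo, ZooKeeper, etc.)
     are elided. -->
<property>
  <name>general.classpaths</name>
  <value>
    $HADOOP_PREFIX/share/hadoop/client/[^.].*.jar
  </value>
</property>
```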
I will try modifying this and get back.
Thanks again!
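PS: To double-check which Guava actually wins at runtime, I used a quick
stdlib-only reflection sketch. Nothing Accumulo-specific is assumed; the class
and method names come straight from the stack trace below, and the probed
overload is the one whose descriptor appears in the NoSuchMethodError:

```java
import java.lang.reflect.Method;

// Sketch: report whether the Guava on the classpath has the
// checkArgument(boolean, String, Object) overload that Hadoop 3 calls
// (descriptor (ZLjava/lang/String;Ljava/lang/Object;)V in the trace).
// If Guava is absent entirely, say so instead of failing.
public class GuavaOverloadCheck {
    public static void main(String[] args) {
        try {
            Class<?> pre = Class.forName("com.google.common.base.Preconditions");
            Method m = pre.getMethod("checkArgument",
                    boolean.class, String.class, Object.class);
            System.out.println("Guava has the needed checkArgument overload: " + m);
        } catch (ClassNotFoundException e) {
            System.out.println("Guava is not on the classpath");
        } catch (NoSuchMethodException e) {
            System.out.println("Guava is too old for this checkArgument overload");
        }
    }
}
```

Running this with the same classpath as the Accumulo processes shows whether
the old Guava 14.0 jar is shadowing the newer one.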
-----Original Message-----
From: Keith Turner <[email protected]>
Sent: Thursday, November 21, 2019 9:59 AM
To: Accumulo Dev List <[email protected]>
Subject: Re: Issues building 1.9-snapshot and Hadoop 3.1.3
Can you check that your accumulo-site.xml only adds
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar for the Hadoop deps in the
general.classpaths setting? Not completely sure, but I think this will use the
Hadoop shaded jars.
You do not want the non-shaded Hadoop jars, like
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar, on the path.
On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar
<[email protected]> wrote:
>
> Hello!
> Per this issue (https://github.com/apache/accumulo/issues/569), building
> 1.9.x with Hadoop 3 support needs hadoop.profile=3. So I checked
> out current 1.9 branch and built with -Dhadoop.profile=3. When I deployed
> this "custom" Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError:
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>     at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>     at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>     at org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
>     at org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.<init>(AbstractNNFailoverProxyProvider.java:70)
>     at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. Hadoop 3.1.3 uses Guava 27.0-jre, while Accumulo
> 1.9 still depends on (and includes) Guava 14.0. So I set about building 1.9
> with Guava 27.0-jre, which required setting the compiler version to 1.8. As
> Christopher mentioned in the 1.10 thread, I also ran into problems with
> modernizer; without disabling modernizer, the refactoring involved looks
> non-trivial. I also had issues with outdated interfaces in
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, and
> RemoveCompleteReplicationRecords.java, but those were relatively easy fixes.
> FWIW, I pushed my changes here:
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27
>
> So my question is: are these known issues with the current 1.9 branch and
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.