-- Forwarded message --
From: Manoj Samel manojsamelt...@gmail.com
Date: Mon, Mar 23, 2015 at 6:32 PM
Subject: Invalid ContainerId ... Caused by: java.lang.NumberFormatException: For input string: e04
To: user

Spark 1.3, CDH 5.3.2, Kerberos

The setup works fine with the base configuration; spark-shell can be used in yarn-client mode, etc. When the work-preserving recovery feature is enabled via
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/admin_ha_yarn_work_preserving_recovery.html,
jobs fail with "Invalid ContainerId ... Caused by: java.lang.NumberFormatException: For input string: e04".

x-post to CDH list for any insight ...

Thanks,

On 24 Mar 2015, at 02:10, Marcelo Vanzin van...@cloudera.com wrote:

This happens most probably because the Spark 1.3 you have downloaded is built against an older version of the Hadoop libraries than those used by CDH, and those libraries cannot parse the container IDs generated by CDH.

You can try to work around this by manually adding the CDH jars to the front of the classpath.

Steve, that's correct, but the problem only shows up when different versions of the YARN jars are included on the classpath.

-Sandy

On Tue, Mar 24, 2015 at 6:29 AM, Steve Loughran ste...@hortonworks.com wrote:

On Tue, Mar 24, 2015 at 1:40 PM, Manoj Samel manojsamelt...@gmail.com wrote:

When I run any query, it gives java.lang.NoSuchMethodError:
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;

On Tue, Mar 24, 2015 at 1:50 PM, Marcelo Vanzin van...@cloudera.com wrote:

Hi there,

Are you running a custom-compiled Spark by any chance?

Thanks Marcelo - I was using the SBT-built Spark per the earlier thread. I have now switched to the distro (with the conf changes to put the CDH path in front), and the guava issue is gone.

Thanks,
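For context on the NumberFormatException itself: with YARN work-preserving recovery enabled, container IDs carry an epoch field (e.g. `container_e04_...`), and older Hadoop client libraries try to read that field as a number. A minimal sketch of the failure mode — the ID string and class name below are illustrative, not the actual Hadoop parser:

```java
// Sketch: why an old-style container-ID parser rejects epoch-style IDs.
// IDs like "container_e04_..." appear when YARN work-preserving recovery
// is on; older parsers expect a numeric cluster timestamp in that slot.
public class ContainerIdParseDemo {
    public static void main(String[] args) {
        String epochId = "container_e04_1427000000000_0001_01_000001"; // illustrative ID
        String[] parts = epochId.split("_");
        try {
            // An old parser reads the field after "container" as a number:
            long ts = Long.parseLong(parts[1]);
            System.out.println("parsed: " + ts);
        } catch (NumberFormatException e) {
            // With an epoch-style ID this throws, matching the thread's error:
            System.out.println("NumberFormatException: For input string: " + parts[1]);
            // prints: NumberFormatException: For input string: e04
        }
    }
}
```

This is why running against the CDH-provided (newer) Hadoop jars, as Marcelo suggests, makes the error go away.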
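Marcelo's workaround (putting the CDH jars ahead of Spark's bundled ones) is typically done through Spark's extraClassPath settings. A sketch for spark-defaults.conf, assuming a parcel-based CDH layout — the /opt/cloudera/parcels/CDH paths are an assumption and vary by install:

```
# spark-defaults.conf -- prepend CDH's Hadoop/YARN client jars to the classpath
# (paths are illustrative; adjust to your CDH installation)
spark.driver.extraClassPath    /opt/cloudera/parcels/CDH/lib/hadoop/client/*
spark.executor.extraClassPath  /opt/cloudera/parcels/CDH/lib/hadoop/client/*
```

Because these entries are prepended to the JVM classpath, the CDH versions of the YARN classes win over the ones bundled with the Spark download, which is exactly the condition Sandy describes.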