The original version numbers I reported were indeed what we had, so let me
clarify the situation.
Our application had Guava 14 because that's what Spark depends on. But we
had added an in-house library to both the Hadoop cluster and the Spark
cluster to add a new FileSystem (think hdfs://, s3n://
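For readers following along: a custom FileSystem scheme is normally wired into Hadoop through configuration rather than code changes. A minimal sketch in core-site.xml — the `myfs://` scheme and the class name are hypothetical stand-ins for the in-house library:

```xml
<!-- core-site.xml (sketch): map a hypothetical myfs:// scheme
     to the FileSystem implementation that handles it. -->
<property>
  <name>fs.myfs.impl</name>
  <value>com.example.fs.MyFileSystem</value>
</property>
```

Because the jar carrying that class sits on the cluster-wide classpath, any Guava it bundles can shadow (or be shadowed by) the Guava that Spark ships, which is how this kind of version conflict sneaks in.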
Andrew - I think Spark is using Guava 14... are you using Guava 16 in your
user app (i.e. you inverted the versions in your earlier e-mail)?
- Patrick
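One quick way to settle which-version-is-where questions like Patrick's is to ask the JVM where it actually loaded a class from. A small stdlib-only sketch — it probes `java.lang.String` since Guava may not be on the local classpath; in the real session you would probe e.g. `com.google.common.collect.ImmutableList.class` instead:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the jar/directory a class was loaded from, or a marker
    // for bootstrap classes (JDK classes report no code source).
    static String locationOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap classpath" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(String.class));   // bootstrap classpath
        System.out.println(locationOf(WhichJar.class)); // path to this class
    }
}
```

Running this inside the driver and inside an executor shows whether the application's Guava or the cluster's Guava wins on each side.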
On Fri, Aug 1, 2014 at 4:15 PM, Colin McCabe wrote:
On Fri, Aug 1, 2014 at 2:45 PM, Andrew Ash wrote:
> After several days of debugging, we think the issue is that we have
> conflicting versions of Guava. Our application was running with Guava 14
> and the Spark services (Master, Workers, Executors) had Guava 16. We had
> custom Kryo serializers
After several days of debugging, we think the issue is that we have
conflicting versions of Guava. Our application was running with Guava 14
and the Spark services (Master, Workers, Executors) had Guava 16. We had
custom Kryo serializers for Guava's ImmutableLists, and commenting out
those regist
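For context on how such serializers get wired in: Spark picks up custom Kryo registrations through its configuration. A hedged sketch — `com.example.MyKryoRegistrator` is a hypothetical class implementing `org.apache.spark.serializer.KryoRegistrator` that would register the Guava `ImmutableList` serializers:

```properties
# spark-defaults.conf (sketch): enable Kryo and point Spark at a
# registrator that adds the custom Guava serializers.
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator  com.example.MyKryoRegistrator
```

Since the registrator runs inside the executors, it is compiled against whichever Guava the executor classpath resolves — so a driver on Guava 14 and executors on Guava 16 can disagree about the very classes being registered.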
Hi everyone,
I'm seeing the below exception coming out of Spark 1.0.1 when I call it
from my application. I can't share the source to that application, but the
quick gist is that it uses Spark's Java APIs to read from Avro files in
HDFS, do processing, and write back to Avro files. It does this