@Sean perhaps I could leverage this when
http://openjdk.java.net/jeps/261 (JEP 261: Module System) becomes available.
On Fri, Dec 16, 2016 at 4:05 AM, Steve Loughran wrote:
FWIW, although the underlying Hadoop declared Guava dependency is pretty low,
everything in org.apache.hadoop is set up to run against later versions. It
just sticks with the old one to avoid breaking anything downstream which does
expect a low version number. See HADOOP-10101 for the ongoing pa
Yes, that's the problem. Guava isn't generally mutually compatible across
more than a couple of major releases. You may have to hunt for a version that
happens to have the functionality that both dependencies want, and hope
that it exists. Spark should shade Guava at this point, but that doesn't mean
that you
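If you control the build, one way to sidestep this is to shade and relocate
Guava inside your own jar. A minimal sketch with the Gradle Shadow plugin;
the plugin version and the relocated package prefix are illustrative
assumptions, not something this thread specifies:

plugins {
    id 'java'
    // Shadow plugin version is an assumption; use whatever matches your Gradle.
    id 'com.github.johnrengelman.shadow' version '1.2.4'
}

shadowJar {
    // Relocate Guava so the copy bundled into your app jar cannot collide
    // with whatever Guava version Spark/Hadoop put on the runtime classpath.
    relocate 'com.google.common', 'myapp.shaded.com.google.common'
}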
I replaced guava-14.0.1.jar with guava-19.0.jar in SPARK_HOME/jars and it
seems to work OK, but I am not sure if it is the right thing to do. My fear
is that if Spark uses features from Guava that are present in 14.0.1 but
not in 19.0, my app will break.
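A build-side alternative to swapping jars by hand is to pin Guava in Gradle;
a minimal sketch using Gradle's resolutionStrategy. Note this only controls
your app's own resolved dependency graph, not the jars Spark ships under
SPARK_HOME/jars at runtime:

configurations.all {
    resolutionStrategy {
        // Resolve every configuration to a single Guava version instead of
        // letting the two dependencies pull in conflicting ones.
        force 'com.google.guava:guava:19.0'
    }
}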
On Fri, Dec 16, 2016 at 2:22
Hi Guys,
Here is a simplified version of my problem (I'm new to Gradle). I have the
following dependencies:
dependencies {
    compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.2'
    compile group: 'com.github.brainlag', name: 'nsq-client', version: '1.0.0.RC2'
}
I took ou
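A common workaround for this kind of clash is to exclude the transitive Guava
from one side so a single version wins; a sketch, assuming (as the thread
suggests but never shows) that nsq-client is the one dragging in a newer
Guava. To confirm which versions each dependency actually pulls, Gradle's
built-in report helps: ./gradlew dependencyInsight --dependency guava --configuration compile

dependencies {
    compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.2'
    // Drop nsq-client's transitive Guava so only Spark's version is resolved.
    compile(group: 'com.github.brainlag', name: 'nsq-client', version: '1.0.0.RC2') {
        exclude group: 'com.google.guava', module: 'guava'
    }
}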