On Fri, Aug 1, 2014 at 2:45 PM, Andrew Ash and...@andrewash.com wrote:
After several days of debugging, we think the issue is that we have conflicting versions of Guava. Our application was running with Guava 14 and the Spark services (Master, Workers, Executors) had Guava 16. We had custom …
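Conflicts like the Guava 14 vs. 16 clash described above are commonly resolved by shading: relocating the library's packages inside your own artifact so the two copies can coexist on one classpath. A sketch with the Maven Shade plugin follows; the `myapp.shaded` prefix is illustrative, not something from this thread:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- Rewrite our own Guava 14 references into a private package so
             they cannot collide with the Guava 16 on Spark's classpath. -->
        <pattern>com.google.common</pattern>
        <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The bytecode of the shaded jar is rewritten so every reference to `com.google.common` points at the relocated copy; Spark's own Guava is untouched.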
I have a similar issue with SPARK-1767. There are basically three ways to
resolve the issue:
1. Use reflection to access classes newer than 0.21 (or whatever the oldest version of Hadoop is that Spark supports).
2. Add a build variant (in Maven this would be a profile) that deals with this.
3. … to any value and it will work.
- Patrick
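Option 1 above (reflection) usually takes the following shape: look a method up by name at runtime, and fall back gracefully when it is absent, so code compiled against an old dependency still works when a newer one is on the classpath. A minimal self-contained sketch, using `String` methods as stand-ins for the Hadoop classes being discussed:

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    // Invoke a zero-argument method by name if it exists on the target's
    // class; otherwise return the supplied fallback value.
    static Object invokeIfPresent(Object target, String methodName, Object fallback) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return m.invoke(target);
        } catch (NoSuchMethodException e) {
            return fallback;  // older dependency: the method simply isn't there
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // String.isEmpty() exists, so the real result comes back ...
        System.out.println(invokeIfPresent("", "isEmpty", Boolean.FALSE));  // true
        // ... while a missing method yields the fallback instead of a linkage error.
        System.out.println(invokeIfPresent("", "notARealMethod", "fallback"));
    }
}
```

The same pattern applies to loading whole classes via `Class.forName` inside a try/catch when an entire API is version-dependent.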
On Sat, May 31, 2014 at 8:34 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
things to consider... simplifying our classpath is definitely an avenue worth exploring!
On Fri, May 30, 2014 at 2:56 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Fri, May 30, 2014 at 2:11 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey guys, thanks for the insights. Also …
Spark currently supports two build systems, sbt and Maven. sbt will download the correct version of Scala, but with Maven you need to supply it yourself and set SCALA_HOME.
It sounds like the instructions need to be updated -- perhaps create a JIRA?
best,
Colin
On Sat, May 31, 2014 at 7:06 PM,
First of all, I think it's great that you're thinking about this. API stability is super important, and it would be good to see Spark get on top of this.
I want to clarify a bit about Hadoop. The problem that Hadoop faces is that the Java package system isn't very flexible. If you have a method …
… but not the whole kit and caboodle like with Hadoop.
best,
Colin
- Patrick
On Fri, May 30, 2014 at 12:30 PM, Marcelo Vanzin van...@cloudera.com wrote:
On Fri, May 30, 2014 at 12:05 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
I don't know if Scala provides any mechanisms to do this beyond …
The FileSystem cache is something that has caused a lot of pain over the years. Unfortunately we (in Hadoop core) can't change the way it works now, because there are too many users depending on the current behavior. Basically, the idea is that when you request a FileSystem with certain options … be that much code. This might be nicer because you could implement things like closing FileSystem objects that haven't been used in a while.
cheers,
Colin
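The idle-closing cache suggested above can be sketched in a few lines. This is not Hadoop's actual `FileSystem` cache; it is a generic, self-contained illustration (class and method names are invented for the example) of caching closeable objects by key and closing entries that have sat unused too long. The current time is passed in explicitly to keep the sketch deterministic:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.function.Function;

// Illustrative cache of closeable values (think: FileSystem instances keyed
// by scheme/authority/user). Entries idle longer than maxIdleMillis are
// closed and evicted on the next access.
class ExpiringCache<K, V extends AutoCloseable> {
    private static final class Entry<V> {
        final V value;
        long lastUsed;
        Entry(V value, long now) { this.value = value; this.lastUsed = now; }
    }

    private final Map<K, Entry<V>> entries = new HashMap<>();
    private final long maxIdleMillis;

    ExpiringCache(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

    // Return the cached value for key, creating it with loader on a miss.
    synchronized V get(K key, Function<K, V> loader, long now) {
        evictIdle(now);
        Entry<V> e = entries.get(key);
        if (e == null) {
            e = new Entry<>(loader.apply(key), now);
            entries.put(key, e);
        }
        e.lastUsed = now;
        return e.value;
    }

    // Close and drop every entry that has been idle longer than maxIdleMillis.
    private void evictIdle(long now) {
        for (Iterator<Entry<V>> it = entries.values().iterator(); it.hasNext(); ) {
            Entry<V> e = it.next();
            if (now - e.lastUsed > maxIdleMillis) {
                try { e.value.close(); } catch (Exception ignored) { }
                it.remove();
            }
        }
    }

    synchronized int size() { return entries.size(); }
}
```

Repeated gets for the same key return the same instance; a key left idle past the threshold is closed and rebuilt on its next request, which is the behavior the message above says the real cache cannot adopt without breaking existing users.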
On Thu, May 22, 2014 at 12:06 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
The FileSystem cache is something that has caused a lot …
Hi Kevin,
Can you try https://issues.apache.org/jira/browse/SPARK-1898 to see if it fixes your issue?
Running in YARN cluster mode, I had a similar issue where Spark was able to create a Driver and an Executor via YARN, but then it stopped making any progress.
Note: I was using a pre-release …