On 25 Mar 2015, at 21:54, roni <roni.epi...@gmail.com> wrote:
> Is there any way that I can install the new one and remove the previous version?
> I installed Spark 1.3 on my EC2 master and set the Spark home to the new one.
> But when I start the spark-shell I get -
I have an EC2 cluster created using Spark version 1.2.1,
and I have an SBT project.
Now I want to upgrade to Spark 1.3 and use the new features.
Below are the issues.
Sorry for the long post.
Appreciate your help.
Thanks
-Roni
Question - Do I have to create a new cluster using Spark 1.3?
For the Spark SQL parts, 1.3 breaks backwards compatibility, because before
1.3, Spark SQL was considered experimental, where API changes were allowed.
So, H2O and ADAM code compatible with 1.2.x might not work with 1.3.
dean
Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
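(A concrete illustration of the 1.3 break, as a sketch rather than anything
from the thread: Spark SQL's experimental SchemaRDD type became DataFrame in
1.3, so 1.2-era code that names SchemaRDD no longer compiles. The table name
below is invented.)

    import org.apache.spark.sql.{DataFrame, SQLContext}

    // Spark 1.2.x: sql(...) returned the experimental SchemaRDD type:
    //   val result: SchemaRDD = sqlContext.sql("SELECT * FROM kmers")
    // Spark 1.3.x: the same call returns a DataFrame, so code that
    // names SchemaRDD breaks when rebuilt against 1.3.
    def query(sqlContext: SQLContext): DataFrame =
      sqlContext.sql("SELECT * FROM kmers")  // "kmers" is an invented table name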
What version of Spark do the other dependencies (ADAM and H2O) rely on? That
could be it.
Or try sbt clean compile.
—
Sent from Mailbox
On Wed, Mar 25, 2015 at 5:58 PM, roni <roni.epi...@gmail.com> wrote:
> I have an EC2 cluster created using Spark version 1.2.1.
> And I have an SBT project.
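(For reference, a minimal build.sbt sketch that pins every Spark artifact to
a single version; the version string and the "provided" scope are assumptions
about this setup, not details from the thread.)

    // build.sbt (sketch): keep all Spark modules on one version, and mark
    // them "provided" because the cluster supplies the jars at runtime.
    val sparkVersion = "1.3.0"  // must match the version running on the cluster

    libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion % "provided"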
Even if H2O and ADAM depend on 1.2.1, they should be backward
compatible, right?
So using 1.3 should not break them.
And the code is not using the classes from those libs.
I tried sbt clean compile - same error.
Thanks
_R
On Wed, Mar 25, 2015 at 9:26 AM, Nick Pentreath wrote:
Ah, I see now you are trying to use a Spark 1.2 cluster - you will need to be
running Spark 1.3 on your EC2 cluster in order to run programs built against
Spark 1.3.
You will need to terminate and restart your cluster with Spark 1.3.
—
Sent from Mailbox
On Wed, Mar 25, 2015 at 6:39 PM, roni wrote:
Thanks Dean and Nick.
So, I removed ADAM and H2O from my SBT file, as I was not using them.
I got the code to compile - only for it to fail while running with:
SparkContext: Created broadcast 1 from textFile at kmerIntersetion.scala:21
Exception in thread "main" java.lang.NoClassDefFoundError:
Weird. Are you running using the SBT console? It should have the spark-core jar
on the classpath. Similarly, spark-shell or spark-submit should work, but
be sure you're using the same version of Spark when running as when
compiling. Also, you might need to add spark-sql to your SBT dependencies,
but ...
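(If the missing class is a Spark SQL one, the extra dependency would look
something like this sketch; the version shown is an assumption and must match
whatever the cluster actually runs.)

    // Add alongside spark-core in build.sbt, on the same version as the cluster.
    libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.3.0"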
Yes, that's the problem. The RDD class exists in both binary jar files, but
the signatures probably don't match. The bottom line, as always for tools
like this, is that you can't mix versions.
Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
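(A generic JVM trick, not Spark-specific, to confirm which jar a class is
really loaded from at runtime; paste it into the shell that shows the error.)

    // Prints the jar that supplied the RDD class on this classpath. If it
    // names a 1.2.1 assembly while the project was built against 1.3.0,
    // that is exactly the version mix described above.
    println(classOf[org.apache.spark.rdd.RDD[_]]
      .getProtectionDomain.getCodeSource.getLocation)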
My cluster is still on Spark 1.2, and in SBT I am using 1.3.
So probably it is compiling with 1.3 but running with 1.2?
On Wed, Mar 25, 2015 at 12:34 PM, Dean Wampler <deanwamp...@gmail.com>
wrote:
> Weird. Are you running using the SBT console? It should have the spark-core
> jar on the classpath.
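(A quick check, as a sketch, assuming a spark-shell attached to the cluster:
print the runtime version and compare it to the one in build.sbt.)

    // sc is the SparkContext that spark-shell predefines. If this prints
    // 1.2.1 while build.sbt says 1.3.x, the compile/run mismatch is real.
    println(sc.version)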
Is there any way that I can install the new one and remove the previous version?
I installed Spark 1.3 on my EC2 master and set the Spark home to the new
one.
But when I start the spark-shell I get -
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
You could stop the running processes and run the same processes using
the new version, starting with the master and then the slaves. You would
have to snoop around a bit to get the command-line arguments right, but
it's doable. Use `ps -efw` to find the command lines used. Be sure to rerun ...