I wish to use the pivot table feature of DataFrame, which has been available
since Spark 1.6. But the Spark on our current cluster is version 1.5. Can we
install Spark 2.0 on the master node to work around this?
Thanks!
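For context, the pivot in question (`DataFrame.pivot`, added in Spark 1.6) turns the distinct values of one column into columns of the result, aggregating the rest. The semantics can be sketched in plain Python with no Spark installed; this is a simplified illustration, not Spark's implementation, and the column names are made up:

```python
from collections import defaultdict

def pivot_sum(rows, group_col, pivot_col, value_col):
    # rough analogue of df.groupBy(group_col).pivot(pivot_col).sum(value_col)
    out = defaultdict(dict)
    for r in rows:
        key, col = r[group_col], r[pivot_col]
        out[key][col] = out[key].get(col, 0) + r[value_col]
    return dict(out)

rows = [
    {"year": 2015, "course": "math", "earnings": 100},
    {"year": 2015, "course": "english", "earnings": 50},
    {"year": 2016, "course": "math", "earnings": 80},
]
print(pivot_sum(rows, "year", "course", "earnings"))
# {2015: {'math': 100, 'english': 50}, 2016: {'math': 80}}
```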
You really shouldn't mix different versions of Spark between the master and
worker nodes; if you're going to upgrade, upgrade all of them. Otherwise you
may get very confusing failures.
On Monday, September 5, 2016, Rex X wrote:
> Wish to use the Pivot Table feature of data frame which is available since Spark 1.6 …
You should be able to get it to work with 2.0 as an uber jar.

What type of cluster are you running on? YARN? And what distribution?

On Sun, Sep 4, 2016 at 8:48 PM -0700, "Holden Karau" <hol...@pigscanfly.ca> wrote:

> You really shouldn't mix different versions of Spark between the master and …
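For what it's worth, "uber jar" here means bundling Spark 2.0 together with the application into a single assembly jar instead of relying on the cluster's Spark. A minimal sbt sketch of that idea (version numbers and merge rules are illustrative, assuming the sbt-assembly plugin is installed):

```scala
// build.sbt (sketch): pull Spark 2.0 into the application jar by NOT
// marking it "provided", then build the fat jar with `sbt assembly`
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"

// resolve duplicate files when merging all the dependency jars
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```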
I don't think a 2.0 uber jar will play nicely on a 1.5 standalone cluster.
On Saturday, September 10, 2016, Felix Cheung wrote:

> You should be able to get it to work with 2.0 as uber jar.
>
> What type cluster you are running on? YARN? And what distribution?
>
> On Sun, Sep 4, 2016 at 8:48 PM -0700 …
you'll see errors like this...
"java.lang.RuntimeException: java.io.InvalidClassException:
org.apache.spark.rpc.netty.RequestMessage; local class incompatible: stream
classdesc serialVersionUID = -2221986757032131007, local class
serialVersionUID = -5447855329526097695"
...when mixing versions of Spark.
Well, uber jar works in YARN, but not with standalone ;)
On Sun, Sep 18, 2016 at 12:44 PM -0700, "Chris Fregly" <ch...@fregly.com> wrote:

> you'll see errors like this...
>
> "java.lang.RuntimeException: java.io.InvalidClassException:
> org.apache.spark.rpc.netty.RequestMessage; local class …
Yes, I have a Cloudera cluster with YARN. Any more details on how to make
this work with an uber jar?

Thank you.

On Sun, Sep 18, 2016 at 2:13 PM, Felix Cheung wrote:

> Well, uber jar works in YARN, but not with standalone ;)
>
> On Sun, Sep 18, 2016 at 12:44 PM -0700, "Chris Fregly" wrote:
In YARN you submit the whole application, so unless the distribution
provider does strange classpath "optimisations", you can simply submit a
Spark 2 application alongside Spark 1.5 or 1.6. It is YARN's responsibility
to deliver the application files and the Spark assembly to the workers.
What's more, it is also easy to launch many different Spark versions on YARN
by simply having them installed side-by-side.
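A sketch of what side-by-side looks like in practice (install paths and jar names are illustrative; `--master yarn` and `--deploy-mode cluster` are standard spark-submit options):

```
# each Spark version unpacked into its own directory; pick one per job
export HADOOP_CONF_DIR=/etc/hadoop/conf

/opt/spark-1.5.2/bin/spark-submit --master yarn --deploy-mode cluster old-job.jar
/opt/spark-2.0.0/bin/spark-submit --master yarn --deploy-mode cluster new-job.jar
```

Each submission ships its own Spark bits to the YARN containers, so the two jobs never share a Spark runtime.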
1) build spark for your cdh version. for example for cdh 5 i do:

$ git checkout v2.0.0
$ dev/make-distribution.sh --name cdh5.4-hive --tgz -Phadoop-2.6 \
    -Dhadoop.version=2.6.0-cdh5.4.4 -Pyarn
oh i forgot: in step 1 you will have to modify spark's pom.xml to include
the cloudera repo so it can find the cloudera artifacts

anyhow we found this process to be pretty easy and we stopped using the
spark versions bundled with the distros
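The repository addition he mentions looks roughly like this inside the `<repositories>` section of Spark's pom.xml (the URL is Cloudera's public artifact repo; verify it against Cloudera's current docs):

```xml
<repository>
  <id>cloudera</id>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
```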
On Mon, Sep 26, 2016 at 3:57 PM, Koert Kuipers wrote:
> it