In YARN you submit the whole application, so unless the distribution
provider does strange classpath "optimisations", you can simply submit
a Spark 2 application alongside a Spark 1.5 or 1.6 installation.

It is YARN's responsibility to deliver the application files and the
Spark assembly to the workers. What's more, you only have to install
Spark on the node from which you are going to start the application.
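
For example, assuming a Spark 2.x distribution unpacked locally on the
launch node (the paths, class name and jar name below are illustrative):

    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7
    $SPARK_HOME/bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyApp \
      my-app-assembly.jar

YARN ships the Spark 2 jars with the application, so the cluster-wide
1.5/1.6 installation is not touched.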

Procedure tested with Hortonworks.

HTH,
Piotr


On Mon, Sep 26, 2016 at 5:48 PM, Rex X <dnsr...@gmail.com> wrote:

> Yes, I have a Cloudera cluster with YARN. Any more details on how to make
> this work with an uber jar?
>
> Thank you.
>
>
> On Sun, Sep 18, 2016 at 2:13 PM, Felix Cheung <felixcheun...@hotmail.com>
> wrote:
>
>> Well, an uber jar works in YARN, but not with standalone ;)
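>>
>> One reading of the uber-jar approach, as a minimal build sketch (sbt
>> with the sbt-assembly plugin; the version and merge rules are
>> illustrative): the key point is that Spark 2 is bundled in the jar
>> rather than marked "provided".
>>
>>   // build.sbt (assumes sbt-assembly is enabled in project/plugins.sbt)
>>   libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>>   assemblyMergeStrategy in assembly := {
>>     case PathList("META-INF", xs @ _*) => MergeStrategy.discard
>>     case _                             => MergeStrategy.first
>>   }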
>>
>> On Sun, Sep 18, 2016 at 12:44 PM -0700, "Chris Fregly" <ch...@fregly.com>
>> wrote:
>>
>> you'll see errors like this...
>>
>> "java.lang.RuntimeException: java.io.InvalidClassException:
>> org.apache.spark.rpc.netty.RequestMessage; local class incompatible:
>> stream classdesc serialVersionUID = -2221986757032131007, local class
>> serialVersionUID = -5447855329526097695"
>>
>> ...when mixing versions of spark.
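>>
>> a quick sanity check before digging further (just a sketch) is to
>> compare the versions on both ends:
>>
>>   # on the submitting node
>>   $SPARK_HOME/bin/spark-submit --version
>>
>> ...and print sc.version from inside a shell or job on the cluster.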
>>
>> i'm actually seeing this right now while testing across Spark 1.6.1 and
>> Spark 2.0.1 for my all-in-one, hybrid cloud/on-premise Spark + Zeppelin +
>> Kafka + Kubernetes + Docker + One-Click Spark ML Model Production
>> Deployments initiative documented here:
>>
>> https://github.com/fluxcapacitor/pipeline/wiki/Kubernetes-Docker-Spark-ML
>>
>> and check out my upcoming meetup on this effort either in-person or
>> online:
>>
>> http://www.meetup.com/Advanced-Spark-and-TensorFlow-Meetup/events/233978839/
>>
>> we're throwing in some GPU/CUDA just to sweeten the offering!  :)
>>
>> On Sat, Sep 10, 2016 at 2:57 PM, Holden Karau <hol...@pigscanfly.ca>
>> wrote:
>>
>>> I don't think a 2.0 uber jar will play nicely on a 1.5 standalone
>>> cluster.
>>>
>>>
>>> On Saturday, September 10, 2016, Felix Cheung <felixcheun...@hotmail.com>
>>> wrote:
>>>
>>>> You should be able to get it to work with 2.0 as an uber jar.
>>>>
>>>> What type of cluster are you running on? YARN? And which distribution?
>>>>
>>>> On Sun, Sep 4, 2016 at 8:48 PM -0700, "Holden Karau" <
>>>> hol...@pigscanfly.ca> wrote:
>>>>
>>>> You really shouldn't mix different versions of Spark between the master
>>>> and worker nodes; if you're going to upgrade, upgrade all of them.
>>>> Otherwise you may get very confusing failures.
>>>>
>>>> On Monday, September 5, 2016, Rex X <dnsr...@gmail.com> wrote:
>>>>
>>>>> I wish to use the pivot table feature of DataFrames, which has been
>>>>> available since Spark 1.6. But the current cluster runs Spark 1.5. Can
>>>>> we install Spark 2.0 on the master node to work around this?
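>>>>>
>>>>> For reference, a minimal sketch of the 1.6+ pivot API I am after
>>>>> (the column names are made up):
>>>>>
>>>>>   import org.apache.spark.sql.functions.sum
>>>>>   // given a DataFrame df with (year, month, amount) columns:
>>>>>   val pivoted = df.groupBy("year").pivot("month").agg(sum("amount"))
>>>>>   pivoted.show()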
>>>>>
>>>>> Thanks!
>>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>>
>>
>>
>> --
>> *Chris Fregly*
>> Research Scientist @ *PipelineIO* <http://pipeline.io>
>> *Advanced Spark and TensorFlow Meetup*
>> <http://www.meetup.com/Advanced-Spark-and-TensorFlow-Meetup/>
>> *San Francisco* | *Chicago* | *Washington DC*
>>
>
