Re: Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.

2016-08-03 Thread Saisai Shao
satyajit vegesna <satyajit.apas...@gmail.com> wrote: Hi All, I am trying to run a Spark job using YARN, and I specify the --executor-cores value as 20. But when I go to check the "nodes of the cluster" page at http://hostname:8088/cluster/nodes I see 4 containers created on each node.

Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.

2016-08-02 Thread satyajit vegesna
Hi All, I am trying to run a Spark job using YARN, and I specify the --executor-cores value as 20. But when I go to check the "nodes of the cluster" page at http://hostname:8088/cluster/nodes I see 4 containers created on each node in the cluster, but can only see 1 vcore allocated to each container.
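
A common cause of this symptom: YARN's capacity scheduler defaults to the DefaultResourceCalculator, which schedules by memory alone and reports every container as one vcore no matter what --executor-cores asks for. A minimal sketch of the usual fix, assuming Hadoop 2.x with the capacity scheduler (edit capacity-scheduler.xml on the ResourceManager and restart it):

    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>

With the dominant-resource calculator in place, the "nodes of the cluster" page should account for the cores actually requested.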

Re: problem running spark with yarn-client not using spark-submit

2016-06-26 Thread Saisai Shao
cc| > |"user @spark" <user@spark.apache.org> > | > | >

Re: problem running spark with yarn-client not using spark-submit

2016-06-26 Thread sychungd
Subject: [Spam][SMG] Re: problem running spark with yarn

Re: problem running spark with yarn-client not using spark-submit

2016-06-24 Thread Mich Talebzadeh
Hi, regarding "trying to run spark with yarn-client not using spark-submit here": what are you using to submit the job, spark-shell, spark-sql, or anything else? Dr Mich Talebzadeh

Re: Error Invoking Spark on Yarn on using Spark Submit

2016-06-24 Thread Mich Talebzadeh
On 24 June 2016 at 08:14, puneet kumar <puneetkumar.2...@gmail.com> wrote: I am getting below error thrown when I submit Spark Job using Spark Submit on Yarn.

problem running spark with yarn-client not using spark-submit

2016-06-24 Thread sychungd
Hello guys, trying to run spark with yarn-client without using spark-submit here, but the jobs keep failing while the AM launches executors. The error collected by YARN is like below. Looks like some environment setting is missing? Could someone help me out with this? Thanks in advance! HY Chung. Java

Re: Error Invoking Spark on Yarn on using Spark Submit

2016-06-24 Thread Jeff Zhang
On 24 June 2016 at 08:14, puneet kumar <puneetkumar.2...@gmail.com> wrote: I am getting below error thrown when I submit Spark Job using Spark Submit on Yarn. Need a quick help on what's going wrong here. 16/

Re: Error Invoking Spark on Yarn on using Spark Submit

2016-06-24 Thread Mich Talebzadeh
On 24 June 2016 at 08:14, puneet kumar <puneetkumar.2...@gmail.com> wrote: I am getting below error thrown when I submit Spark Job using Spark Submit on Yarn. Need a quick help on what's going wrong here.

Error Invoking Spark on Yarn on using Spark Submit

2016-06-24 Thread puneet kumar
I am getting the below error when I submit a Spark job using spark-submit on YARN. Need quick help on what's going wrong here. 16/06/24 01:09:25 WARN AbstractLifeCycle: FAILED org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter-791eb5d5: java.lang.IllegalStateException: class

Re: oozie and spark on yarn

2016-06-08 Thread vaquar khan
Hi Karthi, hope the following information will help you. Doc: https://oozie.apache.org/docs/4.2.0/DG_SparkActionExtension.html Example: https://developer.ibm.com/hadoop/2015/11/05/run-spark-job-yarn-oozie/ Code: http://3097fca9b1ec8942c4305e550ef1b50a.proxysheep.com/apache/oozie/blob/master
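
For orientation, a Spark action in an Oozie workflow looks roughly like the sketch below; element order follows the oozie:spark-action:0.1 schema from the linked docs, and the class, jar path, and options are placeholders:

    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <name>MySparkJob</name>
            <class>com.example.MyMain</class>
            <jar>${nameNode}/user/oozie/apps/myapp/lib/myapp.jar</jar>
            <spark-opts>--executor-memory 2G --num-executors 4</spark-opts>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>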

oozie and spark on yarn

2016-06-08 Thread pseudo oduesp
Hi, I want to ask if someone has used Oozie with Spark. Can you give me an example of how to configure it on YARN? Thanks

Re: spark on yarn

2016-05-26 Thread Steve Loughran
On 21 May 2016, at 15:14, Shushant Arora wrote: And will it allocate the rest of the executors when containers occupied by other Hadoop jobs/Spark applications get freed? Requests will go into the queue(s); they'll stay outstanding until things free

Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Thank you for your help Mich. Thanks, Rajesh. On Wednesday, May 25, 2016 3:14 PM, Mich Talebzadeh wrote: You may have some memory issues (OOM etc.) that terminated the process.

Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread Mich Talebzadeh
You may have some memory issues (OOM etc.) that terminated the process. Dr Mich Talebzadeh

Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Hi Friends, in the YARN log files of the NodeManager I can see the error below. Can I know why I am getting this error? ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM. Thanks, Rajesh. On Wednesday, May 25, 2016 1:08 PM,

Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread Mich Talebzadeh
Yes, check the YARN log files, both ResourceManager and NodeManager. Also ensure that you have set up the work directories consistently, especially yarn.nodemanager.local-dirs. HTH, Dr Mich Talebzadeh

Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread Jeff Zhang
Could you check the yarn app logs? On Wed, May 25, 2016 at 3:23 PM, wrote: Hi, I am running a Spark streaming job in yarn-client mode. If I run multiple jobs, some of them fail with the error message below. Is there any configuration missing?

run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Hi, I am running a Spark streaming job in yarn-client mode. If I run multiple jobs, some of them fail with the error message below. Is there any configuration missing? ERROR apache.spark.util.Utils - Uncaught exception in thread main java.lang.NullPointerException at

Re: spark on yarn

2016-05-21 Thread Shushant Arora
YARN is a lot more unforgiving about memory use than it is about CPU. On 20 Apr 2016, at 16:21, Shushant Arora <shushantaror...@gmail.com> wrote: I am running a spark application on a yarn cluster. Say I have available vcores in cluster

Re: spark on yarn

2016-05-21 Thread Shushant Arora
...make sure you ask for enough memory: YARN is a lot more unforgiving about memory use than it is about CPU. On 20 Apr 2016, at 16:21, Shushant Arora <shushantaror...@gmail.com> wrote: I am running a spark application on a yarn cluster.

Re: SLF4J binding error while running Spark using YARN as Cluster Manager

2016-05-18 Thread Marcelo Vanzin
I am having log4j trouble while running Spark using YARN as cluster manager in CDH 5.3.3. I get the following error: SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/data/12/yarn/nm/filecache/34/spark-assemb

SLF4J binding error while running Spark using YARN as Cluster Manager

2016-05-18 Thread Anubhav Agarwal
Hi, I am having log4j trouble while running Spark using YARN as cluster manager in CDH 5.3.3. I get the following error:- SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/data/12/yarn/nm/filecache/34/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl
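
The warning means two StaticLoggerBinder implementations are on the classpath: the one packaged inside the spark-assembly jar that YARN localizes, and one pulled in by the application itself. The usual remedy is to exclude the second binding from the application build; a sketch in sbt, where "com.example" % "some-lib" stands in for whichever dependency your tree shows dragging in slf4j-log4j12:

    // build.sbt -- leave only Spark's own slf4j binding on the classpath
    libraryDependencies += ("com.example" % "some-lib" % "1.0")
      .exclude("org.slf4j", "slf4j-log4j12")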

Re: spark on yarn

2016-04-21 Thread Steve Loughran
...YARN is a lot more unforgiving about memory use than it is about CPU. On 20 Apr 2016, at 16:21, Shushant Arora <shushantaror...@gmail.com> wrote: I am running a spark application on a yarn cluster. Say I have available vcores in the cluster as 100, and I start the spark application with --num-executors 200 --num-cores

Long(20+ seconds) startup delay for jobs when running Spark on YARN

2016-04-21 Thread Akmal Abbasov
Hi, I'm running Spark (1.6.1) on YARN (2.5.1), cluster mode. It's taking 20+ seconds for the application to move from the ACCEPTED to the RUNNING state; here are the logs: 16/04/21 09:06:56 INFO impl.YarnClientImpl: Submitted application application_1461229289298_0001 16/04/21 09:06:57 INFO yarn.Client: Application

Re: spark on yarn

2016-04-20 Thread Mail.com
I get an error with a message that states the max number of cores allowed. On Apr 20, 2016, at 11:21 AM, Shushant Arora <shushantaror...@gmail.com> wrote: I am running a spark application on a yarn cluster. Say I have available vcores in cluster

spark on yarn

2016-04-20 Thread Shushant Arora
I am running a spark application on a yarn cluster. Say the available vcores in the cluster are 100, and I start the spark application with --num-executors 200 --num-cores 2 (so I need 200*2 = 400 vcores in total) but only 100 are available in my cluster. What will happen? Will the job abort
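
As the replies above explain, the job does not abort: YARN grants what the queue allows and leaves the remaining container requests pending. A quick way to see how many executors actually came up, from spark-shell (sc.getExecutorMemoryStatus counts the driver too, hence the minus one):

    // Spark 1.x: count the executors that actually registered
    val running = sc.getExecutorMemoryStatus.size - 1
    println(s"executors running: $running")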

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-12 Thread Jon Kjær Amundsen
...when trying to launch Spark-Shell in yarn-client mode. Any suggestion?

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-12 Thread ashesh_28
I have updated all my nodes in the cluster to have 4 GB RAM, but I still face the same error when trying to launch Spark-Shell in yarn-client mode. Any suggestion?

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-11 Thread ashesh_28
And then I issued the following command to run spark-shell in yarn-client mode: spark-shell --executor-memory 512m --driver-memory 1g --num-executors 2. But I am still unable to start the Spark context and it fails with the same error. Can someone help me by explaining how to set the core, executor

RE: Unable run Spark in YARN mode

2016-04-10 Thread Yu, Yucai
Could you follow this guide http://spark.apache.org/docs/latest/running-on-yarn.html#configuration? Thanks, Yucai -Original Message- From: maheshmath [mailto:mahesh.m...@gmail.com] Sent: Saturday, April 9, 2016 1:58 PM To: user@spark.apache.org Subject: Unable run Spark in YARN mode I

Re: Unable run Spark in YARN mode

2016-04-09 Thread Ted Yu
mahesh : bq. :16: error: not found: value sqlContext Please take a look at: https://spark.apache.org/docs/latest/sql-programming-guide.html#starting-point-sqlcontext for how the import should be used. Please include version of Spark and the commandline you used in the reply.
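
For reference, the linked guide's starting point in Spark 1.x is constructing the SQLContext by hand from a healthy SparkContext; if sc itself failed to initialize (as in this thread), that has to be fixed first:

    // Spark 1.x entry point for SQL, per the linked programming guide
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._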

Re: Unable run Spark in YARN mode

2016-04-09 Thread Natu Lauchande
...process$1.apply(SparkILoop.scala:945) at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) at org.a

Unable run Spark in YARN mode

2016-04-08 Thread maheshmath

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28
Some more information on node memory and cores: ptfhadoop01v - 4GB, ntpcam01v - 1GB, ntpcam03v - 2GB. Each of the VMs has only 1 CPU core.

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28
...and then again try to re-issue the same command to start Spark on yarn-client, it does not even start and takes me back to the error message posted earlier (screenshot: http://apache-spark-user-list.1001560.n3.nabble.com/file/n26713/Spark-error2.jpg). I have no idea what is causing this.

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread ashesh_28

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread ashesh_28
...SPARK_JAR variable in the spark-env.sh file, but with no success. I also tried using the command below: spark-shell --master yarn-client --conf spark.yarn.jar=hdfs://ptfhadoop01v:8020/user/spark/share/lib/spark-assembly.jar. Issuing this command gives me the following error message (attached as Spark-Error.txt).

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread JasmineGeorge

Running Spark on Yarn-Client/Cluster mode

2016-04-06 Thread ashesh_28
...when I try to initiate the same in yarn-client mode it always fails. The command I used is: $ spark-shell --master yarn-client (error attached: http://apache-spark-user-list.1001560.n3.nabble.com/file/n26691/Spark-Error.txt). Can anyone tell me what I am doing wrong? Do I need to install Spark o

Re: Running Spark on Yarn

2016-03-30 Thread Vineet Mishra
...spawn. On Wed, Mar 30, 2016 at 3:27 AM, Alexander Pivovarov <apivova...@gmail.com> wrote: ok, start an EMR-4.3.0 or 4.2.0 cluster and look at how to configure spark on yarn properly

Re: Running Spark on Yarn

2016-03-29 Thread Alexander Pivovarov
ok, start EMR-4.3.0 or 4.2.0 cluster and look at how to configure spark on yarn properly

Re: Running Spark on Yarn

2016-03-29 Thread Vineet Mishra
:~/Downloads/package/spark-1.6.1-bin-hadoop2.6$ bin/spark-shell --master yarn-client 16/03/30 03:24:43 DEBUG ipc.Client: IPC Client (111576772) connection to myhost/192.168.1.108:8032 from myhost sending #138 16/03/30 03:24:43 DEBUG ipc.Client: IPC Client (111576772) connection to myhost

Re: Running Spark on Yarn

2016-03-29 Thread Vineet Mishra
On Wed, Mar 30, 2016 at 2:31 AM, Alexander Pivovarov <apivova...@gmail.com> wrote: check the 8088 UI - how many cores and memory are available - how many slaves are active

Re: Running Spark on Yarn

2016-03-29 Thread Alexander Pivovarov
...- how many slaves are active. Run teragen or pi from the hadoop examples to make sure that yarn works. On Tue, Mar 29, 2016 at 1:25 PM, Surendra Manchikanti <surendra.manchika...@gmail.com> wrote:
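
Spelled out, the suggested sanity check is just the bundled MapReduce example (the jar path below is the stock Apache layout; distributions vary):

    # Confirm YARN can actually launch containers before debugging Spark
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10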

Re: Running Spark on Yarn

2016-03-29 Thread Vineet Mishra
Can you please check resource (RAM, cores) availability in your local cluster, and change accordingly. Regards, Surendra M -- Surendra Manchikanti

Re: Running Spark on Yarn

2016-03-29 Thread Alexander Pivovarov
Can you please check resource (RAM, cores) availability in your local cluster, and change accordingly. Regards, Surendra M -- Surendra Manchikanti. On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra

Re: Running Spark on Yarn

2016-03-29 Thread Vineet Mishra
...and change accordingly. Regards, Surendra M -- Surendra Manchikanti. On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra <clearmido...@gmail.com> wrote: Hi All, while starting Spark on Yarn o

Re: Running Spark on Yarn

2016-03-29 Thread Alexander Pivovarov
...check resource (RAM, cores) availability in your local cluster, and change accordingly. Regards, Surendra M -- Surendra Manchikanti. On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra <clearmido...@gmail.com> wrote: Hi All,

Re: Running Spark on Yarn

2016-03-29 Thread Surendra , Manchikanti
Hi Vineeth, can you please check resource (RAM, cores) availability in your local cluster, and change accordingly. Regards, Surendra M -- Surendra Manchikanti. On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra <clearmido...@gmail.com> wrote: Hi All, while starting Spark o

Running Spark on Yarn

2016-03-29 Thread Vineet Mishra
Hi All, while starting Spark on YARN on a local cluster (single-node Hadoop 2.6 YARN) I am facing some issues. As I try to start the Spark shell it keeps iterating in an endless loop while initializing: 16/03/30 01:32:38 DEBUG ipc.Client: IPC Client (1782965120) connection to myhost/192.168.1.108

Re: Spark with Yarn Client

2016-03-11 Thread Alexander Pivovarov
...configuration of Spark with yarn-client on a Hadoop cluster. Can somebody help me, or point me to documents/blogs/books which give a deeper understanding of the above two? Thanks, Divya

Spark with Yarn Client

2016-03-11 Thread Divya Gehlot
Hi, I am trying to understand the behaviour/configuration of Spark with yarn-client on a Hadoop cluster. Can somebody help me, or point me to documents/blogs/books which give a deeper understanding of the above two? Thanks, Divya

Re: Spark on YARN memory consumption

2016-03-11 Thread Jan Štěrba
From: Jan Štěrba, Sent: Friday, March 11, 2016 8:27 AM, To: User, Subject: Spark on YARN memory consumption. Hello, I am experimenting with tuning an on-demand spark cluster on top of our Cloudera Hadoop. I am running Cloudera 5.5

RE: Spark on YARN memory consumption

2016-03-11 Thread Silvio Fiorito
From: Jan Štěrba <i...@jansterba.com>, Sent: Friday, March 11, 2016 8:27 AM, To: User <user@spark.apache.org>, Subject: Spark on YARN memory consumption. Hello, I am experimenting with tuning an on-demand spark cluster on top of our Cloudera Hadoop. I am running Cloudera 5.5.2 with Spark 1

Spark on YARN memory consumption

2016-03-11 Thread Jan Štěrba
Hello, I am experimenting with tuning an on-demand spark cluster on top of our Cloudera Hadoop. I am running Cloudera 5.5.2 with Spark 1.5 right now, and I am running Spark in yarn-client mode. Right now my main experimentation is about the spark.executor.memory property, and I have noticed a strange
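
One non-obvious piece worth ruling out here: the container YARN reserves is spark.executor.memory plus spark.yarn.executor.memoryOverhead (which in Spark 1.5 defaults to max(384 MB, 10% of executor memory)), then rounded up to the scheduler's minimum allocation. A sketch making both knobs explicit, with illustrative values:

    // Spark 1.5, yarn-client: size the executor heap and the YARN overhead explicitly
    val conf = new org.apache.spark.SparkConf()
      .set("spark.executor.memory", "4g")
      .set("spark.yarn.executor.memoryOverhead", "512") // MB; default max(384, 0.10 * executor memory)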

Re: How to display the web ui when running Spark on YARN?

2016-03-09 Thread Shady Xu
Hi all, I am running Spark in yarn-client mode, but every time I access the web UI the browser redirects me to one of the worker nodes and shows nothing. The URL looks like http://hadoop-node31.company.com:8088/proxy/application_1453797301246_120264 .

Re: How to display the web ui when running Spark on YARN?

2016-03-04 Thread Steve Loughran
On 3 Mar 2016, at 09:17, Shady Xu <shad...@gmail.com> wrote: Hi all, I am running Spark in yarn-client mode, but every time I access the web UI the browser redirects me to one of the worker nodes and shows nothing. The URL looks like http://h

How to display the web ui when running Spark on YARN?

2016-03-03 Thread Shady Xu
Hi all, I am running Spark in yarn-client mode, but every time I access the web UI the browser redirects me to one of the worker nodes and shows nothing. The URL looks like http://hadoop-node31.company.com:8088/proxy/application_1453797301246_120264 . I googled a lot and found some possible

Re: Spark on Yarn with Dynamic Resource Allocation. Container always marked as failed

2016-03-02 Thread Xiaoye Sun
On Wed, Mar 2, 2016 at 4:26 PM, Xiaoye Sun <sunxiaoy...@gmail.com> wrote: Hi all, I am very new to spark and yarn. I am running a BroadcastTest example application using spark 1.6.0 and Hadoop/Yarn 2.7.1.

Re: Spark on Yarn with Dynamic Resource Allocation. Container always marked as failed

2016-03-02 Thread Prabhu Joseph
On Wed, Mar 2, 2016 at 4:26 PM, Xiaoye Sun <sunxiaoy...@gmail.com> wrote: Hi all, I am very new to spark and yarn. I am running a BroadcastTest example application using spark 1.6.0 and Hadoop/Yarn 2.7.1 in a 5-node cluster.

Re: Spark on Yarn with Dynamic Resource Allocation. Container always marked as failed

2016-03-02 Thread Jeff Zhang
The executor may fail to start. You need to check the executor logs; if there's no executor log then you need to check the node manager log. On Wed, Mar 2, 2016 at 4:26 PM, Xiaoye Sun <sunxiaoy...@gmail.com> wrote: Hi all, I am very new to spark and yarn. I am ru

Spark on Yarn with Dynamic Resource Allocation. Container always marked as failed

2016-03-02 Thread Xiaoye Sun
Hi all, I am very new to Spark and YARN. I am running a BroadcastTest example application using Spark 1.6.0 and Hadoop/YARN 2.7.1 in a 5-node cluster. I configured my configuration files according to https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation 1. copy
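
For reference, the step of that guide that most often goes wrong is registering the shuffle service with every NodeManager; if it is misconfigured, containers die right at startup, which matches this symptom. A sketch of the yarn-site.xml entries per the Spark 1.6 docs (restart the NodeManagers afterwards):

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>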

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Nirav Patel
Awesome! It looks promising. Thanks Rishabh and Marcelo. On Wed, Feb 3, 2016 at 12:09 PM, Rishabh Wadhawan wrote: Check out this link http://spark.apache.org/docs/latest/configuration.html and check spark.shuffle.service. Thanks. On Feb 3, 2016, at 1:02 PM,

Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Nirav Patel
Hi, I have a spark job running in yarn-client mode. At some point during the join stage, an executor (container) runs out of memory and YARN kills it. Because of this the entire job restarts, and it keeps doing so on every failure. What is the best way to checkpoint? I see there's a checkpoint API and other

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Marcelo Vanzin
Without the exact error from the driver that caused the job to restart, it's hard to tell. But a simple way to improve things is to install the Spark shuffle service on the YARN nodes, so that even if an executor crashes, its shuffle output is still available to other executors. On Wed, Feb 3,
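
In configuration terms the suggestion is one application-side flag (the NodeManager-side service from the job-scheduling docs must also be installed), and dynamic allocation stays optional:

    # spark-defaults.conf -- keep shuffle files outside the executor process
    spark.shuffle.service.enabled    true
    # optional, and independent of the line above:
    # spark.dynamicAllocation.enabled  true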

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Nirav Patel
Do you mean this setup? https://spark.apache.org/docs/1.5.2/job-scheduling.html#dynamic-resource-allocation On Wed, Feb 3, 2016 at 11:50 AM, Marcelo Vanzin wrote: Without the exact error from the driver that caused the job to restart, it's hard to tell. But a simple

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Marcelo Vanzin
Yes, but you don't necessarily need to use dynamic allocation (just enable the external shuffle service). On Wed, Feb 3, 2016 at 11:53 AM, Nirav Patel wrote: Do you mean this setup? https://spark.apache.org/docs/1.5.2/job-scheduling.html#dynamic-resource-allocation

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Rishabh Wadhawan
Hi Nirav, there is a difference between dynamic resource allocation and the shuffle service. With dynamic allocation, once you enable the configuration for it, every time you run a task Spark will determine the number of executors required to run it for you, which means decreasing the

Re: Spark 1.5.2 Yarn Application Master - resiliencey

2016-02-03 Thread Rishabh Wadhawan
Check out this link http://spark.apache.org/docs/latest/configuration.html and check spark.shuffle.service. Thanks. On Feb 3, 2016, at 1:02 PM, Marcelo Vanzin wrote: Yes, but you don't necessarily need to use

Re: Spark 1.5.2 - Programmatically launching spark on yarn-client mode

2016-01-30 Thread Nirav Patel
...1.3.1 artifact/dependency leaked into your app? Cheers. On Thu, Jan 28, 2016 at 7:36 PM, Nirav Patel <npa...@xactlycorp.com> wrote: Hi, we were using spark 1.3.1 and launching our spark jobs in yarn-client mode programmatically via cr

Programmatically launching spark on yarn-client mode no longer works in spark 1.5.2

2016-01-28 Thread Nirav Patel
Hi, we were using Spark 1.3.1 and launching our spark jobs in yarn-client mode programmatically, by creating SparkConf and SparkContext objects manually. It was inspired by the Spark self-contained application example here: https://spark.apache.org/docs/1.5.2/quick-start.html#self-contained
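
For context, the pattern being described looks like the minimal sketch below. yarn-client mode needs HADOOP_CONF_DIR or YARN_CONF_DIR visible to the launching JVM, and the spark.yarn.jar value is an assumption here, standing in for wherever the assembly has been staged:

    // Launching on YARN programmatically (Spark 1.x), no spark-submit involved
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ProgrammaticYarnApp")
      .setMaster("yarn-client")
      .set("spark.yarn.jar", "hdfs:///user/spark/share/lib/spark-assembly.jar") // placeholder path
    val sc = new SparkContext(conf)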

Re: Programmatically launching spark on yarn-client mode no longer works in spark 1.5.2

2016-01-28 Thread Nirav Patel
Hi, we were using spark 1.3.1 and launching our spark jobs in yarn-client mode programmatically via creating a sparkConf and sparkContext object manually. It was inspired from the spark self-contained application example here: https://spark.

Re: Programmatically launching spark on yarn-client mode no longer works in spark 1.5.2

2016-01-28 Thread Saisai Shao
I think I met this problem before; it might be due to some race conditions in the exit period. The way you mentioned is still valid; this problem only occurs when stopping the application. Thanks, Saisai. On Fri, Ja

Spark 1.5.2 - Programmatically launching spark on yarn-client mode

2016-01-28 Thread Nirav Patel
Hi, we were using Spark 1.3.1 and launching our spark jobs in yarn-client mode programmatically, by creating SparkConf and SparkContext objects manually. It was inspired by the Spark self-contained application example here: https://spark.apache.org/docs/1.5.2/quick-start.html#self-contained

Re: Programmatically launching spark on yarn-client mode no longer works in spark 1.5.2

2016-01-28 Thread Saisai Shao
Hi, we were using spark 1.3.1 and launching our spark jobs in yarn-client mode programmatically via creating a sparkConf and sparkContext object manually. It was inspired from the spark self-contained application example here: https://spark.apache.org/docs/1.5.2/quic

Re: Spark 1.5.2 - Programmatically launching spark on yarn-client mode

2016-01-28 Thread Ted Yu
Looks like '--properties-file' is no longer supported. Was it possible that a Spark 1.3.1 artifact/dependency leaked into your app? Cheers. On Thu, Jan 28, 2016 at 7:36 PM, Nirav Patel <npa...@xactlycorp.com> wrote: Hi, we were using spark 1.3.1 and launching our spark jobs on ya

RE: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.

2016-01-18 Thread Siddharth Ubale
To: Siddharth Ubale <siddharth.ub...@syncoms.com>, Cc: user@spark.apache.org, Subject: Re: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file. Interesting. Which HBase/Phoenix releases are you using? The following method has been removed from Put: public Put setWriteToWAL(bool

Re: Spark 1.6.0, yarn-shuffle

2016-01-18 Thread johd
Hi, no, I have not. :-/ Regards, J

Re: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.

2016-01-15 Thread Ted Yu
bq. check application tracking page: http://slave1:8088/proxy/application_1452763526769_0011/ Then, ... Have you done the above to see what error was in each attempt? Which Spark/Hadoop release are you using? Thanks. On Fri, Jan

Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.

2016-01-15 Thread Siddharth Ubale
Hi, I am trying to run a Spark streaming application in yarn-cluster mode. However, I am facing an issue where the job ends asking for a particular Hadoop_conf_**.zip file in an HDFS location. Can anyone guide me with this? The application works fine in local mode; it only stops abruptly for want of

Re: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.

2016-01-15 Thread Ted Yu
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:483)

RE: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.

2016-01-15 Thread Siddharth Ubale
To: Siddharth Ubale <siddharth.ub...@syncoms.com>, Cc: user@spark.apache.org, Subject: Re: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file. bq. check application tracking page: http://slave1:8088/proxy/application_1452763526769_0011/ Then, ...

Spark on YARN job continuously reports "Application does not exist in cache"

2016-01-13 Thread Prabhu Joseph
Hi All, when we submit Spark jobs on YARN, during RM failover we see a lot of jobs reporting the error messages below. 2016-01-11 09:41:06,682 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1450676950893_0280_01 2016-01-11

Re: [Spark on YARN] Multiple Auxiliary Shuffle Service Versions

2016-01-06 Thread Deenar Toraskar
place the spark yarn shuffle jar in the same location, but with no success. $ find . -name *shuffle*.jar ./hadoop/client/hadoop-mapreduce-client-shuffle.jar ./hadoop/client/hadoop-mapreduce-client-shuffle-2.7.1.2.3.2.0-2950.jar ./hadoop/client/spark-1.6.0-SNAPSHOT-yarn-shuffle.jar ./hadoop-mapreduce/

Re: ​Spark 1.6 - YARN Cluster Mode

2015-12-21 Thread Akhil Das
...with 1.5, using the same "spark-props.conf" and "spark-env.sh" config files, the cluster mode works as expected. Has anyone else also tried the cluster mode in 1.6? Problem reproduction: # spark-submit --master yarn --deploy-mode cluster

​Spark 1.6 - YARN Cluster Mode

2015-12-17 Thread syepes
...as expected. Has anyone else also tried the cluster mode in 1.6? Problem reproduction: # spark-submit --master yarn --deploy-mode cluster --num-executors 1 --properties-file $PWD/spark-props.conf --class org.apache.spark.examples.SparkPi /opt/spark/lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.7

Can't run spark on yarn

2015-12-17 Thread Eran Witkon
Hi, I am trying to install Spark 1.5.2 on Apache Hadoop 2.6 with Hive and YARN. spark-env.sh: export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop. bash_profile: #HADOOP VARIABLES START export JAVA_HOME=/usr/lib/jvm/java-8-oracle/ export HADOOP_INSTALL=/usr/local/hadoop export PATH=$PATH

Re: Can't run spark on yarn

2015-12-17 Thread Saisai Shao
Please check the YARN AM log to see why the AM failed to start. That's the reason why using `sc` gets such a complaint. On Fri, Dec 18, 2015 at 4:25 AM, Eran Witkon <eranwit...@gmail.com> wrote: Hi, I am trying to install spark 1.5.2 on Apache hadoop 2.6 and Hive and yarn

Re: Can't run spark on yarn

2015-12-17 Thread Alexander Pivovarov
...check the YARN AM log to see why the AM failed to start. That's the reason why using `sc` gets such a complaint. On Fri, Dec 18, 2015 at 4:25 AM, Eran Witkon <eranwit...@gmail.com> wrote: Hi, I am trying to install spark 1.5.2 on Apache hadoop 2.6 and Hive and yarn

Re: Spark on YARN multitenancy

2015-12-15 Thread Ben Roling
...any further information on this matter. On the other hand, I feel this must be a pretty common issue for a lot of users. So: 1. What is your experience when dealing with a multitenant (multiple users) Spark cluster with YARN? 2. Is Spark architecturally adept to suppor

Re: Spark on YARN multitenancy

2015-12-15 Thread Ashwin Sai Shankar
...while it's BUSY. What I am looking for is a similar approach to MapReduce, where a new user obtains a fair share of resources. I haven't been able to locate any further information on this matter. On the other hand, I feel this must be pretty

Re: Spark on YARN multitenancy

2015-12-15 Thread Ben Roling
On the other hand, I feel this must be a pretty common issue for a lot of users. So: 1. What is your experience when dealing with a multitenant (multiple users) Spark cluster with YARN? 2. Is Spark architecturally adept to support releasing resources while it's busy? Is this a planned feature, or is it something that conflicts with the idea of Spark executors? Thanks

Spark on YARN multitenancy

2015-12-15 Thread David Fox
...1. What is your experience when dealing with a multitenant (multiple users) Spark cluster with YARN? 2. Is Spark architecturally adept to support releasing resources while it's busy? Is this a planned feature, or is it something that conflicts with the idea of Spark executors? Thanks

Re: Spark on YARN: java.lang.ClassCastException SerializedLambda to org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1

2015-12-06 Thread Mohamed Nadjib Mami
Your jars are not delivered to the workers. Have a look at this: http://stackoverflow.com/questions/24052899/how-to-make-it-easier-to-deploy-my-jar-to-spark-cluster-in-standalone-mode
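
The linked answer boils down to shipping the application jar along with the context; a sketch of the two usual routes (both paths are placeholders):

    // Route 1: programmatic -- register the jar on the SparkConf
    val conf = new org.apache.spark.SparkConf()
      .setAppName("MyApp")
      .setJars(Seq("target/scala-2.10/myapp-assembly.jar"))
    // Route 2: let spark-submit distribute it instead:
    //   spark-submit --master yarn --class com.example.Main myapp.jar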

Re: Spark on yarn vs spark standalone

2015-11-30 Thread Jacek Laskowski
Hi, My understanding of Spark on YARN and even Spark in general is very limited so keep that in mind. I'm not sure why you compare yarn-cluster and spark standalone? In yarn-cluster a driver runs on a node in the YARN cluster while spark standalone keeps the driver on the machine you launched
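
The driver-placement difference shows up directly in the submit command; illustrative Spark 1.x invocations:

    # yarn-cluster: driver runs inside a YARN container on the cluster
    spark-submit --master yarn-cluster --class com.example.Main myapp.jar
    # yarn-client: driver stays on the submitting machine, executors run on YARN
    spark-submit --master yarn-client --class com.example.Main myapp.jar
    # standalone: executors come from the standalone workers; driver is local in client mode
    spark-submit --master spark://master:7077 --class com.example.Main myapp.jar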

Re: Spark on yarn vs spark standalone

2015-11-30 Thread Jacek Laskowski
...applications. Also, http://spark.apache.org/docs/latest/spark-standalone.html#high-availability On Mon, Nov 30, 2015 at 9:47 AM, Jacek Laskowski <ja...@japila.pl> wrote: Hi, my understanding of Spark on YARN and even Spark in general is very

Re: Spark on yarn vs spark standalone

2015-11-30 Thread Mark Hamstra
On Mon, Nov 30, 2015 at 9:47 AM, Jacek Laskowski <ja...@japila.pl> wrote: Hi, my understanding of Spark on YARN and even Spark in general is very limited so keep that in mind. I'm not sure why you compare yarn-cluster and spark standalone? In yarn-cluster a driver runs on a
