Hi All,
I am trying to submit my application using spark-submit in YARN mode,
but it is failing with an "unknown queue: default" error. We specified the
queue name in spark-defaults.conf as spark.yarn.queue SecondaryQueue.
It fails for one application but not for another, and I don't know the
reason.
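For reference, a minimal sketch of the two usual ways to set the queue (the
queue name SecondaryQueue is taken from above; the queue itself must exist in
the cluster's scheduler configuration, and my-app.jar is a placeholder):

```shell
# In spark-defaults.conf (one whitespace-separated line):
#   spark.yarn.queue  SecondaryQueue
# Or per submission, overriding the default:
spark-submit --master yarn --queue SecondaryQueue my-app.jar
```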
Can you try increasing the number of partitions of the base RDD/DataFrame
that you are working on?
On Tue, May 8, 2018 at 5:05 PM, Debabrata Ghosh
wrote:
> Hi Everyone,
> I have been trying to run spark-shell in YARN client mode, but I am getting
> a lot of ClosedChannelException
Hi Everyone,
I have been trying to run spark-shell in YARN client mode, but I am getting
a lot of ClosedChannelException errors; the program works fine in local
mode. I am using Spark 2.2.0 built for Hadoop 2.7.3. If you are familiar
with this error, can you please help with the possible cause?
I may have found my problem. We have a Scala wrapper on top of spark-submit
that runs the shell command through Scala.
We were effectively swallowing the exit code from spark-submit in that
wrapper. When I stripped the wrapper away and looked at the actual exit
code, I got 1.
So I think spark-submit is
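For anyone wrapping spark-submit the same way, a minimal sketch of a shell
wrapper that propagates the child's exit code instead of swallowing it (the
function name run_job is hypothetical, and `false` stands in here for a
failing spark-submit invocation):

```shell
# Run the wrapped command and propagate its exit code to the caller.
run_job() {
  "$@"              # e.g. spark-submit --master yarn ... my-app.jar
  local rc=$?
  echo "wrapped command exited with $rc" >&2
  return "$rc"      # do not swallow the code
}

# Usage sketch: run_job false exits nonzero, so the || branch fires.
run_job false || echo "job failed with exit code $?"
```

The key point is returning `$rc` rather than letting the wrapper's own last
command (an echo, a log call) reset the status to 0.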
Hi,
➜ spark git:(master) ✗ ./bin/spark-submit whatever || echo $?
Error: Cannot load main class from JAR file:/Users/jacek/dev/oss/spark/whatever
Run with --help for usage help or --verbose for debug output
1
I see exit code 1, and there are other failure cases that return 1 too.
Pozdrawiam,
Jacek Laskowski
Hello,
+1, I have exactly the same issue. I need the exit code so that Oozie can
decide which action to execute next. spark-submit always returns 0 when it
catches the exception. From Spark 1.5 to 1.6.x I still have the same issue.
It would be great to fix it, or to know whether there is a workaround.
Hi,
An interesting case. You don't use Spark resources whatsoever.
Creating a SparkConf does not use YARN...yet. I think any run mode
would have the same effect. So, although spark-submit could have
returned exit code 1, the use case touches Spark very little.
What version is that? Do you see
Hi All,
I wrote a test script that always throws an exception, as below:

import org.apache.spark.SparkConf

object Test {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Test")
    throw new RuntimeException("Some Exception")
    println("all done!") // never reached: the throw above always fires
  }
}
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/need-info-on-Spark-submit-on-yarn-cluster-mode-tp22420.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hey Tobias,
Can you try using the YARN Fair Scheduler and set
yarn.scheduler.fair.continuous-scheduling-enabled to true?
-Sandy
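For reference, a sketch of the corresponding yarn-site.xml entries (this
assumes the cluster is already running, or is being switched to, the Fair
Scheduler; property names are from Hadoop's YARN configuration):

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.scheduler.fair.continuous-scheduling-enabled</name>
  <value>true</value>
</property>
```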
On Sun, Dec 7, 2014 at 5:39 PM, Tobias Pfeiffer t...@preferred.jp wrote:
Hi,
thanks for your responses!
On Sat, Dec 6, 2014 at 4:22 AM, Sandy Ryza
Hi,
On Tue, Dec 9, 2014 at 4:39 AM, Sandy Ryza sandy.r...@cloudera.com wrote:
Can you try using the YARN Fair Scheduler and set
yarn.scheduler.fair.continuous-scheduling-enabled to true?
I'm using Cloudera 5.2.0 and my configuration says
yarn.resourcemanager.scheduler.class =
Hi,
thanks for your responses!
On Sat, Dec 6, 2014 at 4:22 AM, Sandy Ryza sandy.r...@cloudera.com wrote:
What version are you using? In some recent versions, we had a couple of
large hardcoded sleeps on the Spark side.
I am using Spark 1.1.1.
As Andrew mentioned, I guess most of the 10
Great to hear!
-Sandy
On Fri, Dec 5, 2014 at 11:17 PM, Denny Lee denny.g@gmail.com wrote:
Okay, my bad for not testing out the documented arguments - once I use the
correct ones, the query completes in ~55s (I can probably make it faster).
Thanks for the help, eh?!
On Fri
Hi, all:
According to https://github.com/apache/spark/pull/2732, when a Spark job
fails or exits nonzero in yarn-cluster mode, spark-submit should return the
corresponding exit code of the Spark job. But when I tried on a spark-1.1.1
YARN cluster, spark-submit returned zero anyway.
Here is my spark
I tried in Spark client mode, where spark-submit gets the correct return
code from the Spark job. But in yarn-cluster mode it fails.
From: lin_q...@outlook.com
To: u...@spark.incubator.apache.org
Subject: Issue on [SPARK-3877][YARN]: Return code of the spark-submit in
yarn-cluster mode
Date: Fri, 5
spark-submit cannot
get the second return code, 100. What's the difference between these two
`exit` calls? I am confused.
From: lin_q...@outlook.com
To: u...@spark.incubator.apache.org
Subject: RE: Issue on [SPARK-3877][YARN]: Return code of the spark-submit in
yarn-cluster mode
Date: Fri, 5 Dec 2014 17
--
From: lin_q...@outlook.com
To: u...@spark.incubator.apache.org
Subject: RE: Issue on [SPARK-3877][YARN]: Return code of the spark-submit
in yarn-cluster mode
Date: Fri, 5 Dec 2014 17:11:39 +0800
I tried in spark client mode, spark-submit can get the correct return code
Hey Tobias,
As you suspect, the reason why it's slow is because the resource manager in
YARN takes a while to grant resources. This is because YARN needs to first
set up the application master container, and then this AM needs to request
more containers for Spark executors. I think this accounts
Hi Tobias,
What version are you using? In some recent versions, we had a couple of
large hardcoded sleeps on the Spark side.
-Sandy
On Fri, Dec 5, 2014 at 11:15 AM, Andrew Or and...@databricks.com wrote:
Hey Tobias,
As you suspect, the reason why it's slow is because the resource manager
My submissions of Spark on YARN (CDH 5.2) resulted in a few thousand steps.
If I ran this in standalone cluster mode the query finished in 55s, but on
YARN the query was still running 30 minutes later. Could the hardcoded
sleeps potentially be in play here?
On Fri, Dec 5, 2014 at 11:23 Sandy
Just an FYI - I can submit the SparkPi app to YARN in cluster mode on a
1-node m3.xlarge EC2 instance and the app finishes running successfully in
about 40 seconds. I had just figured the 30-40 sec run time was normal
because of the submission overhead that Andrew mentioned.
Denny, you can
Hi Denny,
Those sleeps were only at startup, so if jobs are taking significantly
longer on YARN, that should be a different problem. When you ran on YARN,
did you use the --executor-cores, --executor-memory, and --num-executors
arguments? When running against a standalone cluster, by default
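A sketch of a YARN submission with those flags spelled out (all values are
hypothetical placeholders; tune them to the cluster):

```shell
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8g \
  my-app.jar
```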
Hey Sandy,
What are those sleeps for and do they still exist? We have seen about a
1min to 1:30 executor startup time, which is a large chunk for jobs that
run in ~10min.
Thanks,
Arun
On Fri, Dec 5, 2014 at 3:20 PM, Sandy Ryza sandy.r...@cloudera.com wrote:
Hi Denny,
Those sleeps were only
Likely this is not the case here, but one thing to point out with YARN
parameters like --num-executors is that they must be specified *before* the
app jar and app args on the spark-submit command line; otherwise the app
only gets the default number of executors, which is 2.
On Dec 5, 2014 12:22 PM, Sandy
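To illustrate the ordering point above (jar name and values are
hypothetical):

```shell
# Correct: Spark/YARN flags come before the application jar
spark-submit --master yarn --num-executors 8 my-app.jar arg1 arg2

# Incorrect: anything after the jar is passed to the application itself,
# so --num-executors here is ignored by spark-submit
spark-submit --master yarn my-app.jar --num-executors 8 arg1 arg2
```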
Hey Arun,
The sleeps would only add at most about 5 seconds of overhead. The idea was
to give executors some time to register. In more recent versions, they
were replaced with spark.scheduler.minRegisteredResourcesRatio and
spark.scheduler.maxRegisteredResourcesWaitingTime. As of 1.1, by
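A sketch of those replacement settings in spark-defaults.conf (the values
below are hypothetical illustrations, not recommended defaults):

```
spark.scheduler.minRegisteredResourcesRatio        0.8
spark.scheduler.maxRegisteredResourcesWaitingTime  30s
```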
Sorry for the delay in my response - for my Spark jobs on standalone and
YARN, I am using --executor-memory and --total-executor-cores for the
submission. In standalone mode, my baseline query completes in ~40s, while
on YARN it completes in ~1800s. It does not appear from the RM web UI that
Okay, my bad for not testing out the documented arguments - once I use the
correct ones, the query completes in ~55s (I can probably make it faster).
Thanks for the help, eh?!
On Fri Dec 05 2014 at 10:34:50 PM Denny Lee denny.g@gmail.com wrote:
Sorry for the delay in my response
Hi,
I am using spark-submit to submit my application to YARN in yarn-cluster
mode. I have both the Spark assembly jar file as well as my application jar
file put in HDFS and can see from the logging output that both files are
used from there. However, it still takes about 10 seconds for my
From: Xiangrui Meng men...@gmail.com
Sent: Sunday, September 07, 2014 11:40 PM
To: Victor Tso-Guillen
Cc: Penny Espinoza; Spark
Subject: Re: prepending jars to the driver class path for spark-submit on YARN
There is an undocumented configuration to put user jars
I'm struggling with some dependency issues with
org.apache.httpcomponents httpcore and httpclient when using spark-submit
with YARN running Spark 1.0.2 on a Hadoop 2.2 cluster. I've seen several
posts about this issue, but no resolution.
The error message is this:
Caused by: java.lang.NoSuchMethodError
I don't understand what you mean. Can you be more specific?
From: Victor Tso-Guillen v...@paxata.com
Sent: Saturday, September 06, 2014 5:13 PM
To: Penny Espinoza
Cc: Spark
Subject: Re: prepending jars to the driver class path for spark-submit on YARN
I ran
When you submit the job to YARN with spark-submit, set
--conf spark.yarn.user.classpath.first=true.
On Mon, Sep 8, 2014 at 10:46 AM, Penny Espinoza
pesp...@societyconsulting.com wrote:
I don't understand what you mean. Can you be more specific?
From: Victor
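A sketch of the suggested submission (jar names and paths are hypothetical
placeholders):

```shell
spark-submit \
  --master yarn \
  --conf spark.yarn.user.classpath.first=true \
  --jars /path/to/httpcore.jar,/path/to/httpclient.jar \
  my-app.jar
```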
...@gmail.com
Sent: Sunday, September 07, 2014 11:40 PM
To: Victor Tso-Guillen
Cc: Penny Espinoza; Spark
Subject: Re: prepending jars to the driver class path for spark-submit on YARN
There is an undocumented configuration to put user jars in front of the
Spark assembly jar. But I'm not very certain that it works
with some dependency issues with org.apache.httpcomponents
httpcore and httpclient when using spark-submit with YARN running Spark 1.0.2
on a Hadoop 2.2 cluster. I've seen several posts about this issue, but no
resolution.
The error message is this:
Caused by: java.lang.NoSuchMethodError
Hey - I’m struggling with some dependency issues with org.apache.httpcomponents
httpcore and httpclient when using spark-submit with YARN running Spark 1.0.2
on a Hadoop 2.2 cluster. I’ve seen several posts about this issue, but no
resolution.
The error message is this:
Caused
Is there more documentation on using spark-submit with Yarn? Trying to
launch a simple job does not seem to work.
My run command is as follows:
/opt/cloudera/parcels/CDH/bin/spark-submit \
--master yarn \
--deploy-mode client \
--executor-memory 10g \
--driver-memory 10g
On Tue, Aug 19, 2014 at 2:34 PM, Arun Ahuja aahuj...@gmail.com wrote:
/opt/cloudera/parcels/CDH/bin/spark-submit \
--master yarn \
--deploy-mode client \
This should be enough.
But when I view the job's web UI on port 4040, there is a single executor
(just the driver node) and I see
...@gmail.com wrote:
/opt/cloudera/parcels/CDH/bin/spark-submit \
--master yarn \
--deploy-mode client \
This should be enough.
But when I view the job 4040 page, SparkUI, there is a single executor
(just
the driver node) and I see the following in the environment:
spark.master