Re: Spark Standalone Mode, application runs, but executor is killed

2018-01-26 Thread Chandu
Reply from Marco, posted in another thread:

Re: Best active groups, forums or contacts for Spark ?
Posted by Marco Mistroni on Jan 26, 2018; 9:08am 
URL:
http://apache-spark-user-list.1001560.n3.nabble.com/Best-active-groups-forums-or-contacts-for-Spark-tp30744p30748.html

Hi
From personal experience, and I might be asking an obvious question:
1. Does it work in standalone mode (no cluster)?
2. Can you break the app down into pieces and see at which step the code gets killed?
3. Have you had a look at the Spark GUI to see if the executors go OOM?

I might be oversimplifying what Spark does, but if your logic works standalone
and does not work in a cluster, the cluster might be your problem (apart from
modules not being serializable).
If it breaks in non-cluster mode, it's easier to debug.
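
As a minimal sketch of point 1 (the app name and the commented-out master URL
below are placeholders, not something from your setup), the same logic can be
run with a local master first and only pointed at the standalone cluster once
that works:

import org.apache.spark.sql.SparkSession

object DebugLocally {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("debug-locally")                // placeholder name
      .master("local[*]")                      // no cluster: everything runs in one JVM
      // .master("spark://<master-host>:7077") // switch to this once local mode works
      .getOrCreate()

    // put the suspect piece of the job here and grow it step by step
    val count = spark.sparkContext.parallelize(1 to 1000).count()
    println(s"count = $count")

    spark.stop()
  }
}
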
I am in no way an expert, just talking from my little personal experience.
I'm sure someone here can give more hints on how to debug a Spark app properly.
Hth






Re: Spark Standalone Mode, application runs, but executor is killed

2018-01-26 Thread Chandu
@Marco
Thank you.
I thought Standalone and Standalone cluster were the same?
The app is not a huge app; it's just the Pi calculation example.
The value of Pi is calculated and passed to the driver successfully.
When I issue spark.stop() from my driver, that is when I see the KILLED
message on the worker.
I checked the Spark GUI for the master and the worker and did not see any
errors/exceptions reported.

18/01/26 08:30:32,058 INFO Worker: Asked to kill executor app-20180126082722-0001/0
18/01/26 08:30:32,064 INFO ExecutorRunner: Runner thread for executor app-20180126082722-0001/0 interrupted
18/01/26 08:30:32,065 INFO ExecutorRunner: Killing process!
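
For reference, here is a rough sketch of a Pi-style driver along the lines of
what I am running (not the exact bundled example; the slice count is arbitrary
and the master URL is assumed to come from spark-submit). The worker messages
above show up right at the spark.stop() call:

import org.apache.spark.sql.SparkSession

object PiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pi-sketch").getOrCreate()
    val slices = 2                      // arbitrary for this sketch
    val n = 100000 * slices

    // Monte Carlo estimate of Pi; the result comes back to the driver
    val hits = spark.sparkContext.parallelize(1 to n, slices).map { _ =>
      val x = scala.util.Random.nextDouble() * 2 - 1
      val y = scala.util.Random.nextDouble() * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * hits / n}")

    // the "Asked to kill executor ... Killing process!" worker log lines above
    // appear at this point, when the application shuts down
    spark.stop()
  }
}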










Spark Standalone Mode, application runs, but executor is killed

2018-01-25 Thread Chandu
Hi,
I tried asking my question on stackoverflow.com
(https://stackoverflow.com/questions/48445145/spark-standalone-mode-application-runs-but-executor-is-killed-with-exitstatus),
but it is yet to be answered, so I thought I would try the user group.

I am new to Apache Spark and was trying to run the example Pi calculation
application on my local Spark setup (using a standalone cluster). The master,
the slave, and the driver are all running on my local machine.

What I am noticing is that Pi is calculated successfully, however in the
slave logs I see that the worker/executor is being killed with exitStatus 1.
I do not see any errors/exceptions logged to the console otherwise. I tried
finding help on similar issues, but most of the search hits were referring to
exitStatus 137 etc. (e.g. Spark application kills executor:
https://stackoverflow.com/questions/40910952/spark-application-kills-executor).

I have failed miserably to understand why the worker is being killed instead
of completing the execution with an EXITED state. I think it's related to how
I am executing the app, but I am not quite clear on what I am doing wrong.

The code and logs are available at
https://gist.github.com/Chandu/a83c13c045f1d1b480d8839e145b2749
(trying to keep the email content short).

I wanted to understand whether my assumption that an executor should end up
in the EXITED state when there are no errors in execution is correct, or
whether it is always set to KILLED when a Spark job is completed.

I tried to understand the flow by looking at the source code, and with my
limited understanding of it, I found that the executor would always end up
with KILLED status (most likely my conclusion is wrong), based on the code at
https://github.com/apache/spark/blob/39e2bad6a866d27c3ca594d15e574a1da3ee84cc/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala#L118
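
For what it's worth, the pattern I read into that part of the code is a runner
thread that gets interrupted when the worker is asked to kill the executor, and
the kill path then reports the state as KILLED. Below is only a simplified,
self-contained analogue of that pattern (not Spark's actual ExecutorRunner
code); the printed messages just echo the log lines quoted earlier:

object RunnerSketch {
  sealed trait ExecState
  case object RUNNING extends ExecState
  case object EXITED  extends ExecState
  case object KILLED  extends ExecState

  @volatile private var state: ExecState = RUNNING

  def main(args: Array[String]): Unit = {
    val runner = new Thread(new Runnable {
      override def run(): Unit = {
        try {
          // stand-in for launching the executor process and waiting on it
          Thread.sleep(60000)
          state = EXITED                         // normal completion path
        } catch {
          case _: InterruptedException =>
            println("Runner thread interrupted") // cf. the quoted worker log
            state = KILLED                       // the kill path always reports KILLED
            println("Killing process!")
        }
      }
    })
    runner.start()

    Thread.sleep(500)
    runner.interrupt()  // roughly what "Asked to kill executor" ends up triggering
    runner.join()
    println(s"final state: $state")
  }
}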

Can someone guide me on identifying the root cause of this issue, or let me
know if my assumption that the executor should have a status of EXITED at the
end of execution is not correct?





