Re: Worker is KILLED for no reason

2015-06-24 Thread Demi Ben-Ari
Hi,
I've opened a bug on the Spark project's JIRA:
https://issues.apache.org/jira/browse/SPARK-8557

Would really appreciate some insights on the issue,
*It's strange that no one else has encountered this problem.*

Have a great day!

On Mon, Jun 15, 2015 at 12:03 PM, nizang ni...@windward.eu wrote:

 hi,

 I'm using a fresh 1.4.0 installation and ran a job there. The job finished
 and everything seemed fine, but when I open the application page, I can see
 that the executor is marked as KILLED:

 Removed Executors

 ExecutorID  Worker                                     Cores  Memory  State   Logs
 0           worker-20150615080550-172.31.11.225-51630  4      10240   KILLED  stdout stderr

 When I look at the worker itself, I can see the executor marked as EXITED:


 ExecutorID  Cores  State   Memory   Job Details              Logs
 0           4      EXITED  10.0 GB  ID: app-20150615080601-  stdout stderr
                                     Name: dev.app.name
                                     User: root

 Nothing interesting appears in stdout or stderr.

 Why is the executor marked as KILLED on the application page?

 This is the only job I ran, and the only one that used this executor. Also,
 checking the logs and output, everything seems to have run fine.

 thanks, nizan



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/Worker-is-KILLED-for-no-reason-tp23314.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org




-- 
Best regards,
Demi Ben-Ari http://il.linkedin.com/in/demibenari
Senior Software Engineer
Windward Ltd. http://windward.eu/


Worker is KILLED for no reason

2015-06-15 Thread nizang
hi,

I'm using a fresh 1.4.0 installation and ran a job there. The job finished
and everything seemed fine, but when I open the application page, I can see
that the executor is marked as KILLED:

Removed Executors

ExecutorID  Worker                                     Cores  Memory  State   Logs
0           worker-20150615080550-172.31.11.225-51630  4      10240   KILLED  stdout stderr

When I look at the worker itself, I can see the executor marked as EXITED:


ExecutorID  Cores  State   Memory   Job Details              Logs
0           4      EXITED  10.0 GB  ID: app-20150615080601-  stdout stderr
                                    Name: dev.app.name
                                    User: root

Nothing interesting appears in stdout or stderr.
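Not a definitive diagnosis, but one way to dig further: in a standard standalone deployment the worker's own log usually records the executor's final state and exit status, and the per-executor work directory holds the same stdout/stderr the UI links to. A minimal sketch, assuming default paths (SPARK_HOME and the log file name pattern are assumptions; adjust them to your installation):

```shell
# Paths below are assumptions -- adjust SPARK_HOME and the work
# directory to match your installation.
SPARK_HOME="${SPARK_HOME:-/opt/spark}"

# The worker log typically notes each executor's final state and exit status.
grep -hiE "finished with state|exitStatus" \
    "$SPARK_HOME"/logs/spark-*worker*.out 2>/dev/null || true

# The per-executor work directory keeps the stdout/stderr shown in the UI.
ls "$SPARK_HOME"/work/app-*/0/ 2>/dev/null || true
```

If the worker log shows a nonzero exitStatus, that is the value worth including in a bug report; if the executor was torn down only after the application finished, the KILLED state in the UI may be cosmetic.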

Why is the executor marked as KILLED on the application page?

This is the only job I ran, and the only one that used this executor. Also,
checking the logs and output, everything seems to have run fine.

thanks, nizan


