[jira] [Updated] (SPARK-19026) local directories cannot be cleaned up when creating the "executor-***" directory throws an IOException, e.g. when there is no more free disk space to create it

2017-01-08 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-19026:
--
  Assignee: zuotingbing
  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)

> local directories cannot be cleaned up when creating the "executor-***" 
> directory throws an IOException, e.g. when there is no more free disk space 
> to create it
> ---
>
> Key: SPARK-19026
> URL: https://issues.apache.org/jira/browse/SPARK-19026
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 1.5.2, 2.0.2
> Environment: linux 
>Reporter: zuotingbing
>Assignee: zuotingbing
>Priority: Minor
> Fix For: 2.2.0
>
>
> I set the SPARK_LOCAL_DIRS variable like this:
> SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp
> When there is no more free disk space on "/data4/spark/tmp", the other local 
> directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
> application finishes.
> We should catch the IOException when creating the local dirs; otherwise the 
> variable "appDirectories(appId)" is not set, and the "executor-***" local 
> directories that have already been created on /data2/spark/tmp and 
> /data3/spark/tmp will not be deleted.
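
For illustration, the behaviour being asked for can be sketched as follows. This is a minimal, self-contained Scala sketch, not Spark's actual Worker code: the object, method, and helper names below are hypothetical, and Spark's real bookkeeping lives in the Worker's appDirectories map. The idea is to attempt an "executor-*" directory under every configured local root, skip any root whose creation fails with an IOException (for example a full disk), and record whatever was created so it can always be deleted when the application finishes.

import java.io.{File, IOException}
import java.nio.file.Files
import scala.collection.mutable

// Hypothetical, self-contained illustration (not Spark's actual Worker code).
object ExecutorDirsSketch {
  // appId -> executor directories successfully created for that application
  val appDirectories = mutable.HashMap.empty[String, Seq[String]]

  def createExecutorDirs(appId: String, localRootDirs: Seq[String]): Seq[String] = {
    val dirs = localRootDirs.flatMap { root =>
      try {
        // May throw IOException, e.g. "No space left on device" on a full disk.
        val appDir = Files.createTempDirectory(new File(root).toPath, "executor-")
        Some(appDir.toAbsolutePath.toString)
      } catch {
        case e: IOException =>
          // Skip this root instead of aborting, so the directories already
          // created under the other roots are still tracked below.
          System.err.println(s"${e.getMessage}. Ignoring $root.")
          None
      }
    }
    if (dirs.isEmpty) {
      throw new IOException(
        s"No executor directory could be created in ${localRootDirs.mkString(",")}.")
    }
    // Record whatever was created, even if some roots failed, so cleanup can find it.
    appDirectories(appId) = dirs
    dirs
  }

  // When the application finishes, delete everything that was recorded for it.
  def cleanupApp(appId: String): Unit = {
    appDirectories.remove(appId).foreach(_.foreach(path => new File(path).delete()))
  }
}

The key point is that the per-application bookkeeping is updated even when only some of the roots succeed, so the worker can still delete the directories it did manage to create instead of leaking them on /data2/spark/tmp and /data3/spark/tmp.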



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-19026) local directories cannot be cleaned up when creating the "executor-***" directory throws an IOException, e.g. when there is no more free disk space to create it

2017-01-03 Thread zuotingbing (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zuotingbing updated SPARK-19026:

Description: 
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories that have already been created on /data2/spark/tmp and 
/data3/spark/tmp will not be deleted.


  was:
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories that have been created on /data2/spark/tmp and /data3/spark/tmp 
cannot be deleted.



> local directories cannot be cleaned up when creating the "executor-***" 
> directory throws an IOException, e.g. when there is no more free disk space 
> to create it
> ---
>
> Key: SPARK-19026
> URL: https://issues.apache.org/jira/browse/SPARK-19026
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.2, 2.0.2
> Environment: linux 
>Reporter: zuotingbing
>
> I set the SPARK_LOCAL_DIRS variable like this:
> SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp
> When there is no more free disk space on "/data4/spark/tmp", the other local 
> directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
> application finishes.
> We should catch the IOException when creating the local dirs; otherwise the 
> variable "appDirectories(appId)" is not set, and the "executor-***" local 
> directories that have already been created on /data2/spark/tmp and 
> /data3/spark/tmp will not be deleted.






[jira] [Updated] (SPARK-19026) local directories cannot be cleaned up when creating the "executor-***" directory throws an IOException, e.g. when there is no more free disk space to create it

2017-01-03 Thread zuotingbing (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zuotingbing updated SPARK-19026:

Description: 
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories that have been created on /data2/spark/tmp and /data3/spark/tmp 
cannot be deleted.


  was:
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories cannot be deleted for this application. If the number of 
"executor-***" folders exceeds 32k, we cannot create any more executors on 
this worker node.



> local directories cannot be cleaned up when creating the "executor-***" 
> directory throws an IOException, e.g. when there is no more free disk space 
> to create it
> ---
>
> Key: SPARK-19026
> URL: https://issues.apache.org/jira/browse/SPARK-19026
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.2, 2.0.2
> Environment: linux 
>Reporter: zuotingbing
>
> I set the SPARK_LOCAL_DIRS variable like this:
> SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp
> When there is no more free disk space on "/data4/spark/tmp", the other local 
> directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
> application finishes.
> We should catch the IOException when creating the local dirs; otherwise the 
> variable "appDirectories(appId)" is not set, and the "executor-***" local 
> directories that have been created on /data2/spark/tmp and /data3/spark/tmp 
> cannot be deleted.






[jira] [Updated] (SPARK-19026) local directories cannot be cleaned up when creating the "executor-***" directory throws an IOException, e.g. when there is no more free disk space to create it

2016-12-29 Thread zuotingbing (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zuotingbing updated SPARK-19026:

Description: 
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories cannot be deleted for this application. If the number of 
"executor-***" folders exceeds 32k, we cannot create any more executors on 
this worker node.


  was:
I set the SPARK_LOCAL_DIRS variable like this:
SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp

When there is no more free disk space on "/data4/spark/tmp", the other local 
directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
application finishes.

We should catch the IOException when creating the local dirs; otherwise the 
variable "appDirectories(appId)" is not set, and the "executor-***" local 
directories cannot be deleted for this application. If the number of 
"executor-***" folders exceeds 32k, we cannot create any more executors on 
this worker node.



> local directories cannot be cleaned up when creating the "executor-***" 
> directory throws an IOException, e.g. when there is no more free disk space 
> to create it
> ---
>
> Key: SPARK-19026
> URL: https://issues.apache.org/jira/browse/SPARK-19026
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.2, 2.0.2
> Environment: linux 
>Reporter: zuotingbing
>
> I set the SPARK_LOCAL_DIRS variable like this:
> SPARK_LOCAL_DIRS=/data2/spark/tmp,/data3/spark/tmp,/data4/spark/tmp
> When there is no more free disk space on "/data4/spark/tmp", the other local 
> directories (/data2/spark/tmp, /data3/spark/tmp) cannot be cleaned up when my 
> application finishes.
> We should catch the IOException when creating the local dirs; otherwise the 
> variable "appDirectories(appId)" is not set, and the "executor-***" local 
> directories cannot be deleted for this application. If the number of 
> "executor-***" folders exceeds 32k, we cannot create any more executors on 
> this worker node.


