[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

BELUGA BEHR updated HDFS-14107:
---
Status: Patch Available  (was: Open)

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-14107.2.patch, HDFS-14107.1.patch
>
>
> {code:java|FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> }
> {code}
> # Include the {{IOException}} in the log message so that admins can see why the 
> file was not deleted.
> # Do not bother clearing out the data structure.  This code only runs when the 
> JVM is shutting down, so the time is better spent letting other shutdown 
> hooks run than cleaning up a map that is about to disappear.
> # Use a Guava {{Multimap}} for readability.
> # Paths are currently stored in a {{TreeSet}}, which keeps the files ordered by 
> name.  The ordering provides no real benefit here, so use a faster 
> {{HashSet}} instead.  A rough sketch of these changes follows below.
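>
> For illustration only, a minimal sketch of what the shutdown-hook body could 
> look like with all four changes applied, assuming Guava's {{HashMultimap}} 
> replaces the current map of sets.  The class and field names are hypothetical 
> and not taken from the attached patches:
> {code:java}
> import java.io.IOException;
> import java.util.Map.Entry;
>
> import org.apache.hadoop.fs.FileContext;
> import org.apache.hadoop.fs.Path;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> import com.google.common.collect.HashMultimap;
> import com.google.common.collect.Multimap;
>
> /** Hypothetical shutdown hook shaped like FileContext's delete-on-exit logic. */
> class DeleteOnExitSketch implements Runnable {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(DeleteOnExitSketch.class);
>
>   // HashMultimap stores its values in hash-based sets, which covers both the
>   // Multimap-for-readability point and the TreeSet -> HashSet point.
>   static final Multimap<FileContext, Path> DELETE_ON_EXIT = HashMultimap.create();
>
>   @Override
>   public void run() {
>     synchronized (DELETE_ON_EXIT) {
>       for (Entry<FileContext, Path> entry : DELETE_ON_EXIT.entries()) {
>         try {
>           entry.getKey().delete(entry.getValue(), true);
>         } catch (IOException e) {
>           // Pass the exception to the logger so the cause is not lost
>           LOG.warn("Ignoring failure to deleteOnExit for path "
>               + entry.getValue(), e);
>         }
>       }
>       // No DELETE_ON_EXIT.clear(): the JVM is exiting anyway, so the remaining
>       // time is better spent letting other shutdown hooks run.
>     }
>   }
> }
> {code}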






[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

BELUGA BEHR updated HDFS-14107:
---
Status: Open  (was: Patch Available)




[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-12-02 Thread BELUGA BEHR (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

BELUGA BEHR updated HDFS-14107:
---
Attachment: HADOOP-14107.2.patch




[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

BELUGA BEHR updated HDFS-14107:
---
Attachment: HDFS-14107.1.patch




[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

BELUGA BEHR updated HDFS-14107:
---
Status: Patch Available  (was: Open)
