[jira] [Updated] (STORM-2745) Hdfs Open Files problem
[ https://issues.apache.org/jira/browse/STORM-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

P. Taylor Goetz updated STORM-2745:
-----------------------------------
    Fix Version/s:     (was: 1.x)
                       (was: 2.0.0)

> Hdfs Open Files problem
> -----------------------
>
>                 Key: STORM-2745
>                 URL: https://issues.apache.org/jira/browse/STORM-2745
>             Project: Apache Storm
>          Issue Type: New Feature
>          Components: storm-hdfs
>    Affects Versions: 2.0.0, 1.x
>            Reporter: Shoeb
>            Priority: Major
>              Labels: features, pull-request-available, starter
>   Original Estimate: 48h
>          Time Spent: 50m
>  Remaining Estimate: 47h 10m
>
> Issue:
> The problem arises when there are multiple HDFS writers in the writersMap. Each writer keeps an open HDFS handle to its file. In the case of an inactive writer (i.e., one that has not consumed any data for a long period), the file is never closed and remains open indefinitely.
> Ideally, these files should be closed and the corresponding HDFS writers removed from the writersMap.
> Solution:
> Implement a ClosingFilesPolicy based on tick tuple intervals. On each tick tuple, all writers are checked and closed if they have existed for too long.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
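The proposed policy amounts to an eviction pass over the writers map on every tick tuple. A minimal stdlib-only sketch of that pass is below; `TrackedWriter`, `StaleWriterCloser`, and the idle-time threshold are hypothetical names for illustration, not the actual storm-hdfs classes (which would extend `AbstractHdfsBolt` and its `Writer` type):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

// Hypothetical stand-in for an HDFS writer: wraps an open file handle
// and remembers when it last received data.
class TrackedWriter implements Closeable {
    private final Closeable handle;   // stands in for the open HDFS file handle
    private long lastUsedMs;
    private boolean closed = false;

    TrackedWriter(Closeable handle, long nowMs) {
        this.handle = handle;
        this.lastUsedMs = nowMs;
    }

    /** Call whenever this writer consumes a tuple. */
    void touch(long nowMs) { lastUsedMs = nowMs; }

    boolean isStale(long nowMs, long maxIdleMs) {
        return nowMs - lastUsedMs >= maxIdleMs;
    }

    boolean isClosed() { return closed; }

    @Override
    public void close() throws IOException {
        handle.close();
        closed = true;
    }
}

// Hypothetical ClosingFilesPolicy: invoked from the bolt's tick-tuple
// branch, it closes and removes every writer idle longer than maxIdleMs.
class StaleWriterCloser {
    private final long maxIdleMs;

    StaleWriterCloser(long maxIdleMs) { this.maxIdleMs = maxIdleMs; }

    /** Returns the number of writers closed and evicted from the map. */
    int closeStale(Map<String, TrackedWriter> writersMap, long nowMs) {
        int closedCount = 0;
        Iterator<Map.Entry<String, TrackedWriter>> it =
                writersMap.entrySet().iterator();
        while (it.hasNext()) {
            TrackedWriter w = it.next().getValue();
            if (w.isStale(nowMs, maxIdleMs)) {
                try { w.close(); } catch (IOException ignored) { }
                it.remove();   // evict from writersMap, as the issue requests
                closedCount++;
            }
        }
        return closedCount;
    }
}
```

In a real bolt, `closeStale` would run inside `execute()` when `TupleUtils.isTick(tuple)` is true, with the tick frequency set via `Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS`; using `Iterator.remove()` avoids a `ConcurrentModificationException` while evicting entries mid-iteration.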
[jira] [Updated] (STORM-2745) Hdfs Open Files problem
[ https://issues.apache.org/jira/browse/STORM-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated STORM-2745:
----------------------------------
    Labels: features pull-request-available starter  (was: features starter)

> Hdfs Open Files problem
> -----------------------
>
>                 Key: STORM-2745
>                 URL: https://issues.apache.org/jira/browse/STORM-2745
>             Project: Apache Storm
>          Issue Type: New Feature
>          Components: storm-hdfs
>    Affects Versions: 2.0.0, 1.x
>            Reporter: Shoeb
>              Labels: features, pull-request-available, starter
>             Fix For: 2.0.0, 1.x
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)