[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-20 Kitti Nanasi (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.005.patch

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> We are increasingly seeing cases where customers run into the IOException
> "Unable to close file because the last block does not have enough number
> of replicas" when the client closes a file. The common workaround is to
> increase dfs.client.block.write.locateFollowingBlock.retries from its
> default of 5 to 10.
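
For context, a minimal sketch of the retry pattern this issue targets (not the actual DFSOutputStream code): the client re-issues the complete-file call, sleeping between attempts with a delay that doubles after each failure. The retries and initial-delay keys in the comments are existing client settings; the max-delay key and its 60 s value stand in for the cap proposed here, so treat that name and default as assumptions.

{code:java}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Sketch only: mirrors the shape of the HDFS client's completeFile retry
// loop, not its real implementation.
public class LocateFollowingBlockRetrySketch {

  // dfs.client.block.write.locateFollowingBlock.retries (default 5; the
  // common workaround raises it to 10)
  static final int RETRIES = 5;
  // dfs.client.block.write.locateFollowingBlock.initial.delay.ms (default 400)
  static final long INITIAL_DELAY_MS = 400;
  // Assumed name and default for the cap this issue proposes, e.g.
  // dfs.client.block.write.locateFollowingBlock.max.delay.ms
  static final long MAX_DELAY_MS = 60_000;

  static void completeFileWithRetries() throws IOException, InterruptedException {
    long delayMs = INITIAL_DELAY_MS;
    for (int retriesLeft = RETRIES; ; retriesLeft--) {
      // Stand-in for the namenode.complete(...) RPC, which keeps returning
      // false until the last block reaches its minimum replica count.
      if (tryCompleteFile()) {
        return;
      }
      if (retriesLeft == 0) {
        throw new IOException("Unable to close file because the last block"
            + " does not have enough number of replicas.");
      }
      // Uncapped doubling is what makes high retry counts expensive; the
      // proposed cap bounds each individual wait.
      TimeUnit.MILLISECONDS.sleep(Math.min(delayMs, MAX_DELAY_MS));
      delayMs *= 2;
    }
  }

  static boolean tryCompleteFile() {
    return false; // placeholder: always "not enough replicas yet"
  }
}
{code}

The workaround from the description corresponds to conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10) on the client Configuration, or the equivalent hdfs-site.xml entry.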






[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-17 Kitti Nanasi (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.004.patch







[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-13 Kitti Nanasi (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.003.patch







[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Kitti Nanasi (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.002.patch







[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Kitti Nanasi (JIRA)


[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kitti Nanasi updated HDFS-13882:

Summary: Set a maximum for the delay before retrying locateFollowingBlock
(was: Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10)
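
To see why capping the delay is preferable to only raising the retry count, a quick self-contained computation of the uncapped backoff (assuming the documented 400 ms initial delay doubling after each failed attempt; not actual HDFS code):

{code:java}
public class BackoffMath {
  public static void main(String[] args) {
    long delayMs = 400; // documented initial delay for this retry loop
    long totalMs = 0;
    for (int retry = 1; retry <= 10; retry++) {
      totalMs += delayMs;
      System.out.printf("retry %2d: sleep %6d ms, cumulative %6d ms%n",
          retry, delayMs, totalMs);
      delayMs *= 2; // uncapped doubling
    }
    // Retry 10 alone sleeps 204800 ms (~3.4 min) and the cumulative wait
    // reaches 409200 ms (~6.8 min), so bounding each sleep matters more
    // than simply raising the count from 5 to 10.
  }
}
{code}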




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org