[jira] [Created] (HDFS-9298) remove replica and not add replica with wrong genStamp
Chang Li created HDFS-9298:
-------------------------------

             Summary: remove replica and not add replica with wrong genStamp
                 Key: HDFS-9298
                 URL: https://issues.apache.org/jira/browse/HDFS-9298
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

Currently, in setGenerationStampAndVerifyReplicas, a replica with a wrong generation stamp is not actually removed; only the StorageLocation of that replica is removed. Moreover, we should check the genStamp before calling addReplicaIfNotPresent.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9289) check genStamp when complete file
Chang Li created HDFS-9289:
-------------------------------

             Summary: check genStamp when complete file
                 Key: HDFS-9289
                 URL: https://issues.apache.org/jira/browse/HDFS-9289
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

We have seen a case of a corrupt block caused by a file completing after a pipeline update, but completing with the old block genStamp. This caused the replicas on the two datanodes in the updated pipeline to be viewed as corrupted.
[jira] [Created] (HDFS-9193) when a datanode's usage goes above 70 percent, we can't open the Datanodes tab in the NN UI
Chang Li created HDFS-9193:
-------------------------------

             Summary: when a datanode's usage goes above 70 percent, we can't open the Datanodes tab in the NN UI
                 Key: HDFS-9193
                 URL: https://issues.apache.org/jira/browse/HDFS-9193
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li
            Priority: Blocker
[jira] [Created] (HDFS-9105) separate replication config for client to create a file or set replication
Chang Li created HDFS-9105:
-------------------------------

             Summary: separate replication config for client to create a file or set replication
                 Key: HDFS-9105
                 URL: https://issues.apache.org/jira/browse/HDFS-9105
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

One scenario where we want this new config: we want to enforce a minimum replication of 2. When a client tries to create a file, the request is checked against the replication factor from the new config, which is set to 2, so the client cannot create a file with replication 1. But when the client tries to close the file and datanode failures in the pipeline leave only one datanode, the check is against the old minReplication value, which is 1, so we still allow the client to close the file.
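A sketch of what the proposal could look like in hdfs-site.xml. Only dfs.namenode.replication.min is an existing Hadoop key; the second key name is purely hypothetical, for illustration of the split between the close-time check and the create-time check:

```xml
<!-- Sketch only; the second property name is hypothetical, not an actual Hadoop key. -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
  <!-- Close-time check: a file with one surviving replica can still be closed. -->
</property>
<property>
  <name>dfs.namenode.replication.min.for-write</name>
  <value>2</value>
  <!-- Hypothetical create/setReplication-time check: requests asking for fewer
       than 2 replicas would be rejected up front. -->
</property>
```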
[jira] [Created] (HDFS-8916) add nonDfsUsedSpace back to Name Node UI column
Chang Li created HDFS-8916:
-------------------------------

             Summary: add nonDfsUsedSpace back to Name Node UI column
                 Key: HDFS-8916
                 URL: https://issues.apache.org/jira/browse/HDFS-8916
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

nonDfsUsedSpace was taken out in HDFS-8816. Currently we can still see this info in the pop-up, but we lost the ability to sort by it. So I propose adding non-DFS usage back as a column in the NameNode UI.
[jira] [Created] (HDFS-8845) DiskChecker should not traverse entire tree
Chang Li created HDFS-8845:
-------------------------------

             Summary: DiskChecker should not traverse entire tree
                 Key: HDFS-8845
                 URL: https://issues.apache.org/jira/browse/HDFS-8845
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li
         Attachments: HDFS-8845.patch
[jira] [Created] (HDFS-8783) enable socket timeout for balancer's target connection
Chang Li created HDFS-8783:
-------------------------------

             Summary: enable socket timeout for balancer's target connection
                 Key: HDFS-8783
                 URL: https://issues.apache.org/jira/browse/HDFS-8783
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

We have hit a real case where the balancer connected to a "black hole" target datanode that accepted the connection but never sent any response back; the balancer then hung.
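The fix described here amounts to putting a read timeout on the target socket, which in plain Java is Socket.setSoTimeout. A minimal, self-contained sketch (not the actual balancer code) that reproduces the "black hole" peer locally and shows the timeout firing instead of the caller hanging forever:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class BalancerTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // A local "black hole": accepts connections at the OS level
        // but never sends a single byte back.
        try (ServerSocket blackHole = new ServerSocket(0)) {
            try (Socket s = new Socket()) {
                // Bound the time allowed to establish the connection.
                s.connect(new InetSocketAddress("127.0.0.1",
                        blackHole.getLocalPort()), 2000);
                // Bound the time any single read may block; without this,
                // read() below would block indefinitely.
                s.setSoTimeout(500);
                s.getInputStream().read();
                System.out.println("got data");
            } catch (SocketTimeoutException e) {
                System.out.println("read timed out");
            }
        }
    }
}
```

With a read timeout set, the balancer would get a SocketTimeoutException after the configured interval and could skip the bad target rather than hang.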
[jira] [Created] (HDFS-8716) introduce a new config specifically for safe mode block count
Chang Li created HDFS-8716:
-------------------------------

             Summary: introduce a new config specifically for safe mode block count
                 Key: HDFS-8716
                 URL: https://issues.apache.org/jira/browse/HDFS-8716
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

During startup, the namenode waits for n replicas of each block to be reported by datanodes before exiting safe mode. Currently n is tied to the min replication config. We may want to set min replication to more than one but still exit safe mode as soon as each block has one reported replica. This can be achieved by introducing a new config variable for the safe mode block count.
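A sketch of how the decoupling could look in hdfs-site.xml. dfs.namenode.replication.min is the existing key; the safe-mode key name below is illustrative of the proposal, not a confirmed key:

```xml
<!-- Sketch only; the safe-mode key name is illustrative. -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>2</value>
  <!-- Blocks need 2 replicas to be considered minimally replicated. -->
</property>
<property>
  <name>dfs.namenode.safemode.replication.min</name>
  <value>1</value>
  <!-- Safe mode exits once each block has 1 reported replica,
       independent of the stricter min replication above. -->
</property>
```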
[jira] [Created] (HDFS-8690) LeaseRenewer should not abort DFSClient when renew fails
Chang Li created HDFS-8690:
-------------------------------

             Summary: LeaseRenewer should not abort DFSClient when renew fails
                 Key: HDFS-8690
                 URL: https://issues.apache.org/jira/browse/HDFS-8690
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chang Li
            Assignee: Chang Li

The lease renewer special-cases SocketTimeoutException to abort the DFSClient. Aborting makes the client permanently unusable, which causes filesystem instances to stop working. All other IOExceptions do not abort. The special case should be removed, and/or abort should not completely shut down the proxy.