[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169328#comment-16169328
 ] 

Yongjun Zhang commented on HDFS-11799:
--

Thanks for the revised patch [~brahmareddy]. Patch 006 looks good except for 
two nits:

1. Suggest changing 
HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.REPLICATION to 
HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.MIN_REPLICATION.

2. Replace the string 
'dfs.client.block.write.replace-datanode-on-failure.min.replication'
in the comment with 
'HdfsClientConfigKeys.BlockWrite.ReplaceDatanodeOnFailure.MIN_REPLICATION'.

  


> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799.patch
>
>
> During pipeline recovery, if enough DNs cannot be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even with a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  
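The proposed gate can be sketched as a toy check (the class and method names here are hypothetical, not the actual DFSClient code): initial pipeline setup would be allowed whenever at least the configured minimum number of DNs is available, even if that is fewer than the replication factor.

```java
// Toy sketch of the proposed config gate; hypothetical names, not the
// actual Hadoop client code.
class PipelineSetupModel {
    // minReplication would come from a config such as
    // dfs.client.block.write.replace-datanode-on-failure.min.replication;
    // 0 would mean the feature is disabled.
    static boolean canSetUpPipeline(int availableDns, int replicationFactor,
                                    int minReplication) {
        // Normally the pipeline needs a full set of DNs; with the config
        // set, any pipeline of at least minReplication nodes is accepted.
        if (availableDns >= replicationFactor) {
            return true;
        }
        return minReplication > 0 && availableDns >= minReplication;
    }
}
```

With a minimum of 1, a single-DN pipeline would be accepted; with the feature disabled (0), a short pipeline is still rejected.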



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-09-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-11968:
-
Attachment: HDFS-11968.006.patch

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS 
> path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
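The fix direction described above, resolving the user path through the mount table before applying the storage-policy command, can be illustrated with a toy longest-prefix mount table (hypothetical names; the real resolution lives in ViewFileSystem, not in this sketch):

```java
import java.util.Map;
import java.util.TreeMap;

// Toy mount-table resolution; hypothetical names, not the actual ViewFS code.
class MountTableModel {
    // Maps a view-side mount point to its target HDFS URI prefix.
    private final TreeMap<String, String> mounts = new TreeMap<>();

    void addMount(String mountPoint, String hdfsTarget) {
        mounts.put(mountPoint, hdfsTarget);
    }

    // Resolve a user path to its HDFS path by longest-prefix match.
    // Iterating mount points in descending order tries the deeper
    // (longer) nested prefix before its parent.
    String resolve(String userPath) {
        for (Map.Entry<String, String> e : mounts.descendingMap().entrySet()) {
            if (userPath.startsWith(e.getKey())) {
                return e.getValue() + userPath.substring(e.getKey().length());
            }
        }
        throw new IllegalArgumentException("No mount point covers " + userPath);
    }
}
```

A storage-policy command on a path under a mount point would then operate on the resolved hdfs:// path instead of failing the instanceof check in getDFS.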






[jira] [Updated] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-17 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-11035:
---
Status: Patch Available  (was: Open)

> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementation has 
> evolved from the original design doc. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Assigned] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-17 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma reassigned HDFS-11035:
--

Assignee: Ming Ma

> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementation has 
> evolved from the original design doc. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Updated] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-17 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-11035:
---
Attachment: HDFS-11035.patch

Here is the draft patch including one doc for upgrade domain and another one 
for datanode administration in general (decommission and maintenance). cc 
[~jojochuang] [~manojg] [~eddyxu] 

> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementation has 
> evolved from the original design doc. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Comment Edited] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-17 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169365#comment-16169365
 ] 

Ming Ma edited comment on HDFS-11035 at 9/17/17 5:47 PM:
-

Here is the draft patch including one doc for upgrade domain and another one 
for datanode administration in general (decommission and maintenance). cc 
[~jojochuang] [~manojg] [~eddyxu] [~ctrezzo]  


was (Author: mingma):
Here is the draft patch including one doc for upgrade domain and another one 
for datanode administration in general (decommission and maintenance). cc 
[~jojochuang] [~manojg] [~eddyxu] 

> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementation has 
> evolved from the original design doc. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Commented] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2017-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169374#comment-16169374
 ] 

Hadoop QA commented on HDFS-11035:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887562/HDFS-11035.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 2e8a181b0ece 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d7cc22 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21188/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Better documentation for maintenance mode and upgrade domain
> ---
>
> Key: HDFS-11035
> URL: https://issues.apache.org/jira/browse/HDFS-11035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, documentation
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Ming Ma
> Attachments: HDFS-11035.patch
>
>
> HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
> documentation about these two features is scarce, and the implementation has 
> evolved from the original design doc. Looking at the code and Javadoc, I still 
> don't quite get how to put datanodes into maintenance mode or set up an 
> upgrade domain.
> Filing this jira to propose that we write an up-to-date description of these 
> two features.






[jira] [Updated] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11799:

Attachment: HDFS-11799-007.patch

Uploaded a patch as per the above suggestion.

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if enough DNs cannot be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even with a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  






[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-17 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169379#comment-16169379
 ] 

Konstantin Shvachko commented on HDFS-12323:


Committing to branch-3.0 for beta1 totally makes sense.
Starting to lose track of all the versions.

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.9.0, 2.8.3, 2.7.5, 3.1.0
>
> Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, 
> HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.
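The remaining gap can be modeled with a toy deadline check (hypothetical names, not the actual QJM code): since HDFS-10733 extends the deadline by only one extra timeout length, any pause longer than roughly twice the timeout still triggers termination.

```java
// Toy model of the deadline logic described above; hypothetical names and
// a deliberately simplified rule, not the actual Hadoop code.
class QjmTimeoutModel {
    // HDFS-10733 behavior (simplified): after a detected pause, the deadline
    // is pushed out by one additional timeout length, so the NN survives a
    // GC pause of up to ~2x the timeout but not more.
    static boolean stillTimesOut(long gcPauseMs, long timeoutMs) {
        long extendedDeadlineMs = timeoutMs + timeoutMs;
        return gcPauseMs > extendedDeadlineMs;
    }
}
```

Under this model, a 30 s pause with a 20 s timeout is absorbed, while a 50 s pause (2.5x the timeout) still leads the NN to terminate itself, matching the report above.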






[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169393#comment-16169393
 ] 

Hadoop QA commented on HDFS-11968:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 5 new + 217 unchanged 
- 3 fixed = 222 total (was 220) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11968 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887552/HDFS-11968.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  c

[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-09-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169398#comment-16169398
 ] 

Brahma Reddy Battula commented on HDFS-11576:
-

[~lukmajercak] thanks for reporting and working on this issue.

The latest patch LGTM. [~shv], do you have any comments on the latest patch?

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. After succeeding with the first recovery, the DN calls 
> commitBlockSynchronization on the NN, which fails because X < X+1
> ... 
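The race in the numbered scenario above can be simulated with a toy model (hypothetical names, not the actual NameNode code): the NN issues a fresh, strictly larger recovery ID on every heartbeat while recovery is pending, so a commit carrying an older ID is always rejected.

```java
// Toy simulation of the recovery-ID race; hypothetical names, not the
// actual NameNode code.
class RecoveryIdModel {
    private long latestRecoveryId;

    RecoveryIdModel(long initialId) {
        this.latestRecoveryId = initialId;
    }

    // NN side: each heartbeat while the block is still under recovery
    // hands the DN a new, strictly larger recovery ID.
    long issueRecoveryCommand() {
        return ++latestRecoveryId;
    }

    // NN side: the commit only succeeds with the latest ID, so a DN
    // finishing a recovery started under an older ID always fails.
    boolean commitBlockSynchronization(long recoveryId) {
        return recoveryId == latestRecoveryId;
    }
}
```

If recovery always takes longer than the heartbeat interval, every commit arrives with a stale ID, so recovery never completes.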






[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169410#comment-16169410
 ] 

Hadoop QA commented on HDFS-11799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 
156 unchanged - 0 fixed = 163 total (was 156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887563/HDFS-11799-007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a6282f784292 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk 

[jira] [Updated] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12470:
-
Status: Patch Available  (was: Open)

> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Fix For: 2.9.0
>
> Attachments: HDFS-12470.001.patch
>
>
> When I ran the HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All files created by tests should be under the target directory so that they 
> are ignored by git.






[jira] [Updated] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12470:
-
Fix Version/s: (was: 2.9.0)

> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Attachments: HDFS-12470.001.patch
>
>
> When I ran the HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All files created by tests should be under the target directory so that they 
> are ignored by git.






[jira] [Commented] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169455#comment-16169455
 ] 

Arpit Agarwal commented on HDFS-12470:
--

+1 pending Jenkins.

> DiskBalancer: Some tests create plan files under system directory
> -
>
> Key: HDFS-12470
> URL: https://issues.apache.org/jira/browse/HDFS-12470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer, test
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
> Attachments: HDFS-12470.001.patch
>
>
> When I ran the HDFS tests, plan files were created under the system directory.
> {noformat}
> $ ls -R hadoop-hdfs-project/hadoop-hdfs/system
> diskbalancer
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer:
> 2017-Sep-15-19-37-34
> hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34:
> a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json 
> a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json
> {noformat}
> All files created by tests should be under the target directory so that they 
> are ignored by git.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory

2017-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169498#comment-16169498
 ] 

Hadoop QA commented on HDFS-12470:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12470 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887445/HDFS-12470.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 15c060db7d6d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d7cc22 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21190/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21190/testReport/ |
| modules | C: hadoop-hdfs-proj

[jira] [Commented] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile

2017-09-17 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169508#comment-16169508
 ] 

Huafeng Wang commented on HDFS-12444:
-

[~drankye] the TODO mark is already removed.

> Reduce runtime of TestWriteReadStripedFile
> --
>
> Key: HDFS-12444
> URL: https://issues.apache.org/jira/browse/HDFS-12444
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, test
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
> Attachments: HDFS-12444.001.patch, HDFS-12444.002.patch, 
> HDFS-12444.003.patch
>
>
> This test takes a long time to run since it writes a lot of data, and 
> frequently times out during precommit testing. If we change the EC policy 
> from RS(6,3) to RS(3,2) then it will run a lot faster.
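A back-of-the-envelope sketch of why RS(3,2) is cheaper: each full stripe of an RS(k, m) policy touches k data cells plus m parity cells, so RS(3,2) moves roughly 5/9 the bytes of RS(6,3) for the same stripe count. The 1 MiB cell size below is an assumption for illustration only.

```java
public class StripeSizeSketch {
  // Bytes touched per full stripe for an RS(k, m) erasure coding policy:
  // k data cells plus m parity cells of cellSize bytes each.
  static long bytesPerStripe(int k, int m, long cellSize) {
    return (long) (k + m) * cellSize;
  }

  public static void main(String[] args) {
    long cell = 1L << 20;  // assumed 1 MiB cell size for illustration
    System.out.println(bytesPerStripe(6, 3, cell));  // 9437184 (9 MiB)
    System.out.println(bytesPerStripe(3, 2, cell));  // 5242880 (5 MiB)
  }
}
```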



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169510#comment-16169510
 ] 

Íñigo Goiri commented on HDFS-12381:


[~manojg], do you have any other comments?

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch, 
> HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch, 
> HDFS-12381-HDFS-10467.003.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-09-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169529#comment-16169529
 ] 

Mukul Kumar Singh commented on HDFS-11968:
--

Thanks for the awesome suggestion [~surendrasingh], I have addressed the review 
comment in the latest patch.

I have also added two tests, one for ViewFs and another for WebHdfs. We 
already have one test for HDFS.
This will help in extending the storage policy admin commands to all the 
supported filesystems.

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS 
> path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
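To illustrate the resolution step the description asks for, here is a toy, self-contained sketch of longest-prefix mount-table resolution — the kind of mapping ViewFS performs before a DistributedFileSystem can be picked. The mount entries and class name are invented for illustration; the actual patch presumably goes through the FileSystem path-resolution APIs rather than anything like this.

```java
import java.util.Map;
import java.util.TreeMap;

public class MountResolveSketch {
  // Toy mount table: ViewFS prefix -> target HDFS URI prefix (assumed values).
  static final TreeMap<String, String> MOUNTS = new TreeMap<>(Map.of(
      "/user", "hdfs://nn1/user",
      "/data", "hdfs://nn2/data"));

  // Resolve a viewfs path to its target namespace by longest-prefix match,
  // mirroring what ViewFS mount resolution does under federation.
  static String resolve(String path) {
    for (String prefix : MOUNTS.descendingKeySet()) {
      if (path.startsWith(prefix)) {
        return MOUNTS.get(prefix) + path.substring(prefix.length());
      }
    }
    throw new IllegalArgumentException("no mount point for " + path);
  }

  public static void main(String[] args) {
    System.out.println(resolve("/user/alice/file"));  // hdfs://nn1/user/alice/file
    System.out.println(resolve("/data/logs"));        // hdfs://nn2/data/logs
  }
}
```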



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-17 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12477:
--

 Summary: Ozone: Some minor text improvement in SCM web UI
 Key: HDFS-12477
 URL: https://issues.apache.org/jira/browse/HDFS-12477
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: scm, ui
Reporter: Weiwei Yang
Priority: Trivial


While trying out the SCM UI, there seem to be some small text problems:

bq. Node Manager: Minimum chill mode nodes)

It has an extra ).

bq. $$hashKey   object:9

I am not really sure what this means. Is this helpful?

bq. Node counts

Can we place the HEALTHY ones at the top of the table?

bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
nodes have reported in.

Can we refine this text a bit?




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12477:
---
Attachment: haskey.png
Revise text.png
healthy_nodes_place.png

Please see these screenshots and let me know if they make sense.

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Priority: Trivial
> Attachments: haskey.png, healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems:
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what this means. Is this helpful?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12446) FSNamesystem#internalReleaseLease throw IllegalStateException

2017-09-17 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169546#comment-16169546
 ] 

Jiandan Yang  commented on HDFS-12446:
--

The NN logs the exception info shown in the description. This happens because 
LeaseManager$Monitor#run ends up in an endless loop: every iteration throws 
IllegalStateException.
[HDFS-11817|https://issues.apache.org/jira/browse/HDFS-11817] hit the same 
exception, but did not fix it completely.

{code:java}
  boolean completed = false;
  try {
completed = fsnamesystem.internalReleaseLease(
leaseToCheck, p, iip, newHolder);
  } catch (IOException e) { 
 // NOTE:  did not catch IllegalStateException
LOG.warn("Cannot release the path " + p + " in the lease "
+ leaseToCheck + ". It will be retried.", e);
continue;
  }
{code}
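A minimal, self-contained sketch of the bug pattern and one possible fix (illustrative names, not the actual Hadoop code): a monitor loop that only catches IOException lets the unchecked IllegalStateException escape, while a broadened multi-catch keeps the loop making progress.

```java
import java.io.IOException;

public class LeaseLoopSketch {
  static int attempts = 0;

  // Stand-in for FSNamesystem#internalReleaseLease, which can throw an
  // unchecked IllegalStateException from a Preconditions.checkState call.
  static boolean releaseLease() throws IOException {
    attempts++;
    if (attempts < 3) {
      throw new IllegalStateException("block COMMITTED but not COMPLETE");
    }
    return true;
  }

  public static void main(String[] args) {
    boolean completed = false;
    while (!completed) {
      try {
        completed = releaseLease();
      } catch (IOException | IllegalStateException e) {
        // With the broadened catch, the monitor logs and retries instead of
        // letting the unchecked exception abort lease checking.
        System.out.println("retrying after: " + e.getMessage());
      }
    }
    System.out.println("completed after " + attempts + " attempts");
  }
}
```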


> FSNamesystem#internalReleaseLease throw IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> NameNode always print following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/xxx
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=xxx (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12446) FSNamesystem#internalReleaseLease throw IllegalStateException

2017-09-17 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169546#comment-16169546
 ] 

Jiandan Yang  edited comment on HDFS-12446 at 9/18/17 3:43 AM:
---

The NN logs the exception info shown in the description. This happens because 
LeaseManager$Monitor#run ends up in an endless loop: every iteration throws 
IllegalStateException.
[HDFS-11817|https://issues.apache.org/jira/browse/HDFS-11817] hit the same 
exception, but did not fix it completely.

{code:java}
  boolean completed = false;
  try {
completed = fsnamesystem.internalReleaseLease(
leaseToCheck, p, iip, newHolder);
  } catch (IOException e) { // NOTE:  did not catch 
IllegalStateException
LOG.warn("Cannot release the path " + p + " in the lease "
+ leaseToCheck + ". It will be retried.", e);
continue;
  }
{code}



was (Author: yangjiandan):
NN log exception info as description. this because LeaseManager$Monitor#run is 
in a dead loop, every run will throw IllegalStateException.
[HDFS-11817|https://issues.apache.org/jira/browse/HDFS-11817] has the same 
exception, but did not fix completely.

{code:java}
  boolean completed = false;
  try {
completed = fsnamesystem.internalReleaseLease(
leaseToCheck, p, iip, newHolder);
  } catch (IOException e) { 
 // NOTE:  did not catch IllegalStateException
LOG.warn("Cannot release the path " + p + " in the lease "
+ leaseToCheck + ". It will be retried.", e);
continue;
  }
{code}


> FSNamesystem#internalReleaseLease throw IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> NameNode always print following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/xxx
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=xxx (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI

2017-09-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169549#comment-16169549
 ] 

Anu Engineer commented on HDFS-12477:
-

[~elek] FYi .. 

> Ozone: Some minor text improvement in SCM web UI
> 
>
> Key: HDFS-12477
> URL: https://issues.apache.org/jira/browse/HDFS-12477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: scm, ui
>Reporter: Weiwei Yang
>Priority: Trivial
> Attachments: haskey.png, healthy_nodes_place.png, Revise text.png
>
>
> While trying out the SCM UI, there seem to be some small text problems:
> bq. Node Manager: Minimum chill mode nodes)
> It has an extra ).
> bq. $$hashKey object:9
> I am not really sure what this means. Is this helpful?
> bq. Node counts
> Can we place the HEALTHY ones at the top of the table?
> bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 
> nodes have reported in.
> Can we refine this text a bit?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean

2017-09-17 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169641#comment-16169641
 ] 

Lei (Eddy) Xu commented on HDFS-12472:
--

Thanks for the patch and reviews, [~bharatviswa] and [~arpitagarwal]!

> Add JUNIT timeout to TestBlockStatsMXBean 
> --
>
> Key: HDFS-12472
> URL: https://issues.apache.org/jira/browse/HDFS-12472
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lei (Eddy) Xu
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12472.00.patch, HDFS-12472.01.patch
>
>
> Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the 
> test failure report if timeout occurs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile

2017-09-17 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169657#comment-16169657
 ] 

Kai Zheng commented on HDFS-12444:
--

Oops, yes it was. Sorry, my mistake. LGTM and +1.

> Reduce runtime of TestWriteReadStripedFile
> --
>
> Key: HDFS-12444
> URL: https://issues.apache.org/jira/browse/HDFS-12444
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, test
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
> Attachments: HDFS-12444.001.patch, HDFS-12444.002.patch, 
> HDFS-12444.003.patch
>
>
> This test takes a long time to run since it writes a lot of data, and 
> frequently times out during precommit testing. If we change the EC policy 
> from RS(6,3) to RS(3,2) then it will run a lot faster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169658#comment-16169658
 ] 

Brahma Reddy Battula commented on HDFS-12395:
-

The following two tests fail after this commit:

TestNamenodeRetryCache.testRetryCacheRebuild
TestRetryCacheWithHA.testRetryCacheOnStandbyNN

*Reference:*

https://builds.apache.org/job/PreCommit-HDFS-Build/21189/testReport/

*Trace*
java.lang.AssertionError: Retry cache size is wrong expected:<26> but was:<34>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild(TestNamenodeRetryCache.java:439)

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor

2017-09-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169660#comment-16169660
 ] 

Brahma Reddy Battula commented on HDFS-11799:
-

Test failures are unrelated. Kindly review.

TestNamenodeRetryCache and TestRetryCacheWithHA failed after HDFS-12395.
TestLeaseRecoveryStriped is tracked in HDFS-12437.

The rest pass locally.

> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> --
>
> Key: HDFS-11799
> URL: https://issues.apache.org/jira/browse/HDFS-11799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799-005.patch, HDFS-11799-006.patch, 
> HDFS-11799-007.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even if there is only a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  
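For reference, a minimal hdfs-site.xml sketch of the existing best-effort knob the description refers to. This shows only the pipeline-recovery setting named above; the patch under review adds an analogous minimum-replication setting for initial pipeline setup, whose final name and default should be checked against the committed hdfs-default.xml.

```xml
<property>
  <!-- Existing knob: during pipeline recovery, continue the write on a
       best-effort basis even when replacement DNs cannot be found. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```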



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org