[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165843#comment-16165843
 ] 

Hadoop QA commented on HDFS-10701:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-10701 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887032/HDFS-10701.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6bb1e5ae0a99 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e0b3c64 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21136/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21136/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21136/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-13 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-12395:
-
Attachment: HDFS-12395.004.patch

1. Fix style issues.
2. Uncomment the piece of code.

Another note: the uploaded "editsStored" file is also part of the patch. It's 
a binary file. After applying HDFS-12395.004.patch to trunk, you need to 
replace the "editsStored" file under 
hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources with the uploaded 
one.

Otherwise one unit test will fail, just as the build system reports:
{quote}
 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored
 Error Details

Reference XML edits and parsed to XML should be same
 Stack Trace
 Standard Output
{quote}
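
That manual step can also be scripted. A minimal sketch using plain java.nio, 
assuming the working directory is the repository root and the uploaded 
attachment sits next to it (paths are illustrative):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ReplaceEditsStored {
  public static void main(String[] args) throws IOException {
    // The uploaded binary attachment and the checked-in test resource
    // it must overwrite.
    Path uploaded = Paths.get("editsStored");
    Path resource = Paths.get(
        "hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored");
    // Overwrite so TestOfflineEditsViewer#testStored compares against
    // stored edits that include the new EC policy operations.
    Files.copy(uploaded, resource, StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}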

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch, HDFS-12395.004.patch
>
>
> Support add, remove, disable, and enable erasure coding policy operations in 
> the edit log.
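
For context, the operations listed above map onto client calls like the 
following; a hedged sketch (method names follow the 3.0 DistributedFileSystem 
API, the policy name is illustrative, and in practice only user-defined 
policies can be removed):

{code:java}
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class EcPolicyOpsSketch {
  // Each call below is the client-facing side of an operation that this
  // issue persists in the NameNode edit log.
  static void exercisePolicyLifecycle(DistributedFileSystem dfs)
      throws Exception {
    dfs.enableErasureCodingPolicy("RS-3-2-1024k");   // enable op
    dfs.disableErasureCodingPolicy("RS-3-2-1024k");  // disable op
    dfs.removeErasureCodingPolicy("RS-3-2-1024k");   // remove op
  }
}
{code}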






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165789#comment-16165789
 ] 

Hadoop QA commented on HDFS-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-7859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887024/HDFS-7859.018.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 3ee6bb90b993 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e0b3c64 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21132/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21132/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Updated] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12446:
-
Description: 
The NameNode always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/xxx
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=xxx (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}

  was:
The NameNode always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/***
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=(INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}


> FSNamesystem#internalReleaseLease throws IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> The NameNode always prints the following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/xxx
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=xxx (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
> at java.lang.Thread.run(Thread.java:834)

[jira] [Updated] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12446:
-
Description: 
The NameNode always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}

  was:
The NameNode log always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}


> FSNamesystem#internalReleaseLease throws IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> The NameNode always prints the following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
> src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
> file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 

[jira] [Updated] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12446:
-
Description: 
The NameNode always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/***
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=(INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}

  was:
The NameNode always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}


> FSNamesystem#internalReleaseLease throws IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> The NameNode always prints the following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], src=/***
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, file=(INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
> at java.lang.Thread.run(Thread.java:834)

[jira] [Updated] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12446:
-
Description: 
The NameNode log always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}

  was:
The NameNonde log always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}


> FSNamesystem#internalReleaseLease throws IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> The NameNode log always prints the following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
> src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
> file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
>   

[jira] [Updated] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-12446:
-
Description: 
The NameNonde log always prints the following logs.
{code:java}

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
{code}

  was:

2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)


> FSNamesystem#internalReleaseLease throws IllegalStateException
> -
>
> Key: HDFS-12446
> URL: https://issues.apache.org/jira/browse/HDFS-12446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jiandan Yang 
>
> The NameNonde log always prints the following logs.
> {code:java}
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard 
> limit
> 2017-09-14 10:21:32,042 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
> src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
> 2017-09-14 10:21:32,042 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
> java.lang.IllegalStateException: Unexpected block state: 
> blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
> file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
> blocks=[blk_1265519060_203004758] (i=0)
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)

[jira] [Created] (HDFS-12446) FSNamesystem#internalReleaseLease throws IllegalStateException

2017-09-13 Thread Jiandan Yang (JIRA)
Jiandan Yang  created HDFS-12446:


 Summary: FSNamesystem#internalReleaseLease throws IllegalStateException
 Key: HDFS-12446
 URL: https://issues.apache.org/jira/browse/HDFS-12446
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Jiandan Yang 



2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7] has expired hard limit
2017-09-14 10:21:32,042 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-275421369_84, pending creates: 7], 
src=/user/ads/af_base_n_adf_p4p_pv/data/55f57d72-1542-4acf-b2d4-08af65b0e859
2017-09-14 10:21:32,042 WARN 
org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable:
java.lang.IllegalStateException: Unexpected block state: 
blk_1265519060_203004758 is COMMITTED but not COMPLETE, 
file=55f57d72-1542-4acf-b2d4-08af65b0e859 (INodeFile), 
blocks=[blk_1265519060_203004758] (i=0)
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.toCompleteFile(INodeFile.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.finalizeINodeFileUnderConstruction(FSNamesystem.java:3312)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3184)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:329)
at java.lang.Thread.run(Thread.java:834)
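
To make the failure mode concrete, here is a simplified paraphrase, not the 
actual HDFS source, of the Guava precondition the trace points at: lease 
recovery tries to finalize the file while its last block is still COMMITTED 
(final length reported by the client) rather than COMPLETE (enough datanode 
replicas acknowledged), and checkState turns that into the 
IllegalStateException above.

{code:java}
import com.google.common.base.Preconditions;

// Simplified paraphrase of INodeFile#assertAllBlocksComplete; the real
// method walks the file's block list. Names and types are illustrative.
enum BlockState { COMMITTED, COMPLETE }

class BlockStateSketch {
  static void assertAllBlocksComplete(BlockState[] blocks, String file) {
    for (int i = 0; i < blocks.length; i++) {
      // A COMMITTED block (client reported the final length, but not
      // enough datanode replicas have acked yet) fails this check.
      Preconditions.checkState(blocks[i] == BlockState.COMPLETE,
          "Unexpected block state: block is %s but not COMPLETE, file=%s (i=%s)",
          blocks[i], file, i);
    }
  }
}
{code}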






[jira] [Updated] (HDFS-12445) Spelling mistakes in the Hadoop java source files viz. choosen instead of chosen.

2017-09-13 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-12445:
---
Attachment: HDFS-12445.001.patch

> Spelling mistakes in the Hadoop java source files viz. choosen instead of 
> chosen.
> -
>
> Key: HDFS-12445
> URL: https://issues.apache.org/jira/browse/HDFS-12445
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HDFS-12445.001.patch
>
>
> I found spelling mistakes in the Hadoop Java source files, viz. "choosen" 
> instead of "chosen".






[jira] [Work started] (HDFS-12445) Spelling mistakes in the Hadoop java source files viz. choosen instead of chosen.

2017-09-13 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12445 started by hu xiaodong.
--
> Spelling mistakes in the Hadoop java source files viz. choosen instead of 
> chosen.
> -
>
> Key: HDFS-12445
> URL: https://issues.apache.org/jira/browse/HDFS-12445
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HDFS-12445.001.patch
>
>
> I found spelling mistakes in the Hadoop Java source files, viz. "choosen" 
> instead of "chosen".






[jira] [Created] (HDFS-12445) Spelling mistakes in the Hadoop java source files viz. choosen instead of chosen.

2017-09-13 Thread hu xiaodong (JIRA)
hu xiaodong created HDFS-12445:
--

 Summary: Spelling mistakes in the Hadoop java source files viz. 
choosen instead of chosen.
 Key: HDFS-12445
 URL: https://issues.apache.org/jira/browse/HDFS-12445
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hu xiaodong
Assignee: hu xiaodong
Priority: Trivial


I found spelling mistakes in the Hadoop Java source files, viz. "choosen" 
instead of "chosen".







[jira] [Updated] (HDFS-12378) TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12378:
---
Labels: flaky-test  (was: )

> TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk
> --
>
> Key: HDFS-12378
> URL: https://issues.apache.org/jira/browse/HDFS-12378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: flaky-test
>
> Saw on 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/org.apache.hadoop.hdfs/TestClientProtocolForPipelineRecovery/testZeroByteBlockRecovery/:
> Error Message
> {noformat}
> Failed to replace a bad datanode on the existing pipeline due to no more good 
> datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
>  The current failed datanode replacement policy is ALWAYS, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
>  The current failed datanode replacement policy is ALWAYS, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
> {noformat}
> Standard Output
> {noformat}
> 2017-08-30 18:02:37,714 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:(469)) - starting cluster: numNameNodes=1, 
> numDataNodes=3
> Formatting using clusterid: testClusterID
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSEditLog 
> (FSEditLog.java:newInstance(224)) - Edit logging is async:false
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:(742)) - KeyProvider: null
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystemLock.java:(120)) - fsLock is fair: true
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystemLock.java:(136)) - Detailed lock hold time metrics 
> enabled: false
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:(763)) - fsOwner = jenkins (auth:SIMPLE)
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:(764)) - supergroup  = supergroup
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:(765)) - isPermissionEnabled = true
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:(776)) - HA Enabled: false
> 2017-08-30 18:02:37,718 [main] INFO  common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
> 2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
> (DatanodeManager.java:(301)) - dfs.block.invalidate.limit: 
> configured=1000, counted=60, effected=1000
> 2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
> (DatanodeManager.java:(309)) - 
> dfs.namenode.datanode.registration.ip-hostname-check=true
> 2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
> (InvalidateBlocks.java:printBlockDeletionTime(76)) - 
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
> (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will 
> start around 2017 Aug 30 18:02:37
> 2017-08-30 18:02:37,719 [main] INFO  util.GSet 
> (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map 
> BlocksMap
> 2017-08-30 18:02:37,719 [main] INFO
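
The error message quoted above points out that the replacement policy is 
client-configurable. A minimal sketch of relaxing it in a test configuration; 
the NEVER value and the companion .enable key are standard HDFS client 
settings, shown purely for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodeFailureSketch {
  static Configuration relaxedClientConf() {
    Configuration conf = new Configuration();
    // Do not insist on replacing a failed datanode in the write pipeline;
    // tiny test clusters often cannot satisfy the ALWAYS policy.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
        "NEVER");
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", false);
    return conf;
  }
}
{code}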

[jira] [Updated] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10701:
-
Attachment: HDFS-10701.001.patch

1. Change the timeout from 3s to 6s.
2. Remove the assume in testBlockTokenExpired.
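
As a generic illustration of those two changes (hypothetical names and 
values, not the actual test code), doubling a hard-coded expiry wait and 
dropping an Assume that could silently skip the scenario looks roughly like 
this:

{code:java}
import org.junit.Test;

public class BlockTokenExpirySketch {
  // Doubled wait for the block token to expire (was 3_000).
  private static final long TOKEN_EXPIRY_MILLIS = 6_000;

  @Test(timeout = 120_000)
  public void testBlockTokenExpired() throws Exception {
    // Previously: Assume.assumeTrue(<parameter condition>); removing it
    // means the scenario always runs instead of being silently skipped.
    Thread.sleep(TOKEN_EXPIRY_MILLIS); // let the short-lived token lapse
    // ... keep writing to the striped file; the client should fetch a
    // fresh token and the write should still succeed ...
  }
}
{code}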

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
>  Labels: flaky-test
> Attachments: HDFS-10701.000.patch, HDFS-10701.001.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> that this test had failed a few times in the Hadoop-Hdfs-trunk build in the 
> past, but I do not have sufficient knowledge to tell whether it's a flaky 
> test or a bug in the code.






[jira] [Updated] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10701:
---
Labels: flaky-test  (was: )

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
>  Labels: flaky-test
> Attachments: HDFS-10701.000.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> that this test had failed a few times in the Hadoop-Hdfs-trunk build in the 
> past, but I do not have sufficient knowledge to tell whether it's a flaky 
> test or a bug in the code.






[jira] [Commented] (HDFS-12398) Use JUnit Parameterized test suite in TestWriteReadStripedFile

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165741#comment-16165741
 ] 

Andrew Wang commented on HDFS-12398:


FYI, I posted a little patch on HDFS-12444 to reduce this runtime. I 
noticed the same duplication that Huafeng did.

An alternative idea to parameterizing is to split the tests into subclasses, 
one that does no failures and one that does failures. This is easier to use 
than JUnit parameterized testing.
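
As a rough sketch of what the parameterized variant could look like (class 
name, parameter values, and test body here are hypothetical, not the actual 
patch):

{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// One suite instance per (fileSize, withFailure) pair, replacing the
// hand-written cross product of near-identical test methods.
@RunWith(Parameterized.class)
public class WriteReadStripedFileSketch {
  @Parameters(name = "fileSize={0}, withFailure={1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
        {1024, false}, {1024, true},
        {65536, false}, {65536, true},
    });
  }

  private final int fileSize;
  private final boolean withFailure;

  public WriteReadStripedFileSketch(int fileSize, boolean withFailure) {
    this.fileSize = fileSize;
    this.withFailure = withFailure;
  }

  @Test
  public void testWriteRead() {
    // Write fileSize bytes to a striped file, optionally stopping a
    // datanode mid-write, then read back and verify the contents.
  }
}
{code}

The subclass alternative mentioned above trades this mechanism for two plain 
classes sharing a common base, which is often easier to run and debug 
individually.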

> Use JUnit Parameterized test suite in TestWriteReadStripedFile
> --
>
> Key: HDFS-12398
> URL: https://issues.apache.org/jira/browse/HDFS-12398
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Trivial
>  Labels: flaky-test
> Attachments: HDFS-12398.001.patch, HDFS-12398.002.patch
>
>
> TestWriteReadStripedFile basically runs the full cross product of file sizes 
> with and without datanode failure. It would be better to use a JUnit 
> Parameterized test suite.






[jira] [Updated] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12444:
---
Status: Patch Available  (was: Open)

> Reduce runtime of TestWriteReadStripedFile
> --
>
> Key: HDFS-12444
> URL: https://issues.apache.org/jira/browse/HDFS-12444
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, test
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12444.001.patch
>
>
> This test takes a long time to run since it writes a lot of data, and 
> frequently times out during precommit testing. If we change the EC policy 
> from RS(6,3) to RS(3,2) then it will run a lot faster.






[jira] [Updated] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12444:
---
Attachment: HDFS-12444.001.patch

[~drankye] / [~eddyxu] / [~Sammi] mind reviewing?

> Reduce runtime of TestWriteReadStripedFile
> --
>
> Key: HDFS-12444
> URL: https://issues.apache.org/jira/browse/HDFS-12444
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, test
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-12444.001.patch
>
>
> This test takes a long time to run since it writes a lot of data, and 
> frequently times out during precommit testing. If we change the EC policy 
> from RS(6,3) to RS(3,2) then it will run a lot faster.






[jira] [Commented] (HDFS-12427) libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165738#comment-16165738
 ] 

Hadoop QA commented on HDFS-12427:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
57s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
7s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}335m  6s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}462m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_151 Failed CTEST tests | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-12427 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886955/HDFS-12427.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 600abef5998f 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 3f92e63 |
| Default Java | 1.7.0_151 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_151 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21125/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21125/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_151.txt
 |
| JDK v1.7.0_151  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21125/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21125/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine

[jira] [Created] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile

2017-09-13 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12444:
--

 Summary: Reduce runtime of TestWriteReadStripedFile
 Key: HDFS-12444
 URL: https://issues.apache.org/jira/browse/HDFS-12444
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, test
Affects Versions: 3.0.0-alpha4
Reporter: Andrew Wang
Assignee: Andrew Wang


This test takes a long time to run since it writes a lot of data, and 
frequently times out during precommit testing. If we change the EC policy from 
RS(6,3) to RS(3,2) then it will run a lot faster.
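
For illustration, a minimal sketch of such a test setup, assuming the Hadoop 3.x 
client API and the built-in policy name {{RS-3-2-1024k}}; the cluster sizing and 
class name here are illustrative, not taken from any patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class FasterStripedTestSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // RS(3,2) needs only 3 + 2 = 5 datanodes instead of the 9 required
    // by RS(6,3), and a full stripe carries half as much data.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(5).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();
      dfs.enableErasureCodingPolicy("RS-3-2-1024k");
      dfs.setErasureCodingPolicy(new Path("/"), "RS-3-2-1024k");
      // ... write and read striped test files as before ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}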



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12398) Use JUnit Parameterized test suite in TestWriteReadStripedFile

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12398:
---
Labels: flaky-test  (was: )

> Use JUnit Parameterized test suite in TestWriteReadStripedFile
> --
>
> Key: HDFS-12398
> URL: https://issues.apache.org/jira/browse/HDFS-12398
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Trivial
>  Labels: flaky-test
> Attachments: HDFS-12398.001.patch, HDFS-12398.002.patch
>
>
> TestWriteReadStripedFile basically tests the full cross product of file sizes 
> with and without datanode failure. It's better to use a JUnit Parameterized 
> test suite.
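
For reference, a minimal JUnit 4 parameterized skeleton of the shape the 
description suggests; the class name and parameter values are illustrative, not 
taken from the attached patches:

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestWriteReadStripedFileParameterized {
  @Parameters(name = "fileLength={0}, withDnFailure={1}")
  public static Collection<Object[]> data() {
    // Full cross product of file sizes and datanode-failure flags.
    return Arrays.asList(new Object[][] {
        {1024, false}, {1024, true},
        {64 * 1024, false}, {64 * 1024, true},
    });
  }

  private final int fileLength;
  private final boolean withDnFailure;

  public TestWriteReadStripedFileParameterized(int fileLength,
      boolean withDnFailure) {
    this.fileLength = fileLength;
    this.withDnFailure = withDnFailure;
  }

  @Test
  public void testWriteRead() {
    // Write a striped file of fileLength bytes, optionally kill a DN,
    // then read it back and verify (body omitted here).
  }
}
{code}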



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165725#comment-16165725
 ] 

SammiChen commented on HDFS-10701:
--

3s runs well on my local machine, but I guess it takes longer on the build 
server. Maybe I can try 6s in the next patch.

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
> Attachments: HDFS-10701.000.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> that this test had failed a few times in the Hadoop-Hdfs-trunk build in the 
> past. But I do not have sufficient knowledge to tell if it's a flaky test or 
> a bug in the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12439) TestReconstructStripedFile.testNNSendsErasureCodingTasks fails occasionally

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12439:
---
Labels: flaky-test  (was: )

> TestReconstructStripedFile.testNNSendsErasureCodingTasks fails occasionally 
> 
>
> Key: HDFS-12439
> URL: https://issues.apache.org/jira/browse/HDFS-12439
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>  Labels: flaky-test
>
> With error message:
> {code}
> Error Message
> test timed out after 6 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 6 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:917)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1199)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at org.apache.hadoop.hdfs.DFSTestUtil.writeFile(DFSTestUtil.java:835)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.writeFile(TestReconstructStripedFile.java:273)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:461)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:439)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165721#comment-16165721
 ] 

Andrew Wang commented on HDFS-10701:


Another note: I tried bumping the timeout a little more and it still works 
okay. We also need to remove the assumeTrue added by HDFS-12417 so that the 
test runs again.

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
> Attachments: HDFS-10701.000.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> that this test had failed a few times in the Hadoop-Hdfs-trunk build in the 
> past. But I do not have sufficient knowledge to tell if it's a flaky test or 
> a bug in the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12437:
---
Attachment: HDFS-12437.001.patch

The test depends on flushing all the partial blocks before the DFSClient 
detects the streamers are timed out and marks them as failed. Writing files 
with 15 stripes (135MB data+parity) was just taking too long.

This patch greatly reduces the number of stripes written. I also took the 
opportunity to turn an int[][][] into a class, and added documentation and 
logging.
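
A rough sketch of what wrapping the raw int[][][] in a class can look like; the 
name {{BlockLengths}} and the shape of the class are illustrative, not 
necessarily what the patch does:

{code}
import java.util.Arrays;

/** Illustrative wrapper for one test case's per-block lengths. */
class BlockLengths {
  private final int[] blockLengths;

  BlockLengths(int[] blockLengths) {
    this.blockLengths = blockLengths;
  }

  int[] getBlockLengths() {
    return blockLengths;
  }

  @Override
  public String toString() {
    // Self-describing output keeps test failure logs readable, unlike
    // the default toString of a nested int[][][].
    return "BlockLengths" + Arrays.toString(blockLengths);
  }
}
{code}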

> TestLeaseRecoveryStriped fails in trunk
> ---
>
> Key: HDFS-12437
> URL: https://issues.apache.org/jira/browse/HDFS-12437
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Arpit Agarwal
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HDFS-12437.001.patch
>
>
> Fails consistently for me in trunk with the following call stack.
> {code}
>   TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, 
> blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, 
> 10485760, 11534336]
> java.io.IOException: Failed: the number of failed blocks = 4 > the number of 
> parity blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}

[jira] [Updated] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12437:
---
Status: Patch Available  (was: Open)

> TestLeaseRecoveryStriped fails in trunk
> ---
>
> Key: HDFS-12437
> URL: https://issues.apache.org/jira/browse/HDFS-12437
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Arpit Agarwal
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HDFS-12437.001.patch
>
>
> Fails consistently for me in trunk with the following call stack.
> {code}
>   TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, 
> blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, 
> 10485760, 11534336]
> java.io.IOException: Failed: the number of failed blocks = 4 > the number of 
> parity blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12420:
--
Attachment: HDFS-12420.04.patch

Fixed style-check and unit test errors.

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch
>
>
> Disable NameNode format to avoid accidentally formatting the NameNode in a 
> production cluster. If someone really wants to delete the complete fsimage, 
> they can first delete the metadata dir and then run {code}hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-12437:
--

Assignee: Andrew Wang

> TestLeaseRecoveryStriped fails in trunk
> ---
>
> Key: HDFS-12437
> URL: https://issues.apache.org/jira/browse/HDFS-12437
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Arpit Agarwal
>Assignee: Andrew Wang
>Priority: Blocker
>
> Fails consistently for me in trunk with the following call stack.
> {code}
>   TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, 
> blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, 
> 10485760, 11534336]
> java.io.IOException: Failed: the number of failed blocks = 4 > the number of 
> parity blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-13 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-7859:

Attachment: HDFS-7859.018.patch

Rebased the patch.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, 
> HDFS-7859.016.patch, HDFS-7859.017.patch, HDFS-7859.018.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-13 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165688#comment-16165688
 ] 

Huafeng Wang commented on HDFS-12413:
-

I looked into the code; setting/unsetting the erasure coding policy for a file 
is actually already returned in inotify streams. It is represented as a 
{{MetadataUpdateEvent}} whose MetadataType is {{XATTRS}}.
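
For illustration, a minimal consumer of such events, assuming the standard 
{{HdfsAdmin}} inotify API (the NameNode URI is a placeholder):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class WatchEcPolicyChanges {
  public static void main(String[] args) throws Exception {
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"),
        new Configuration());
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    while (true) {
      EventBatch batch = stream.take(); // blocks until events arrive
      for (Event event : batch.getEvents()) {
        if (event.getEventType() == Event.EventType.METADATA) {
          Event.MetadataUpdateEvent mue = (Event.MetadataUpdateEvent) event;
          if (mue.getMetadataType()
              == Event.MetadataUpdateEvent.MetadataType.XATTRS) {
            // EC policy set/unset shows up here as an xattr change.
            System.out.println("XATTRS change on " + mue.getPath());
          }
        }
      }
    }
  }
}
{code}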

> Inotify should support erasure coding policy op as replica meta change
> --
>
> Key: HDFS-12413
> URL: https://issues.apache.org/jira/browse/HDFS-12413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
>
> Currently HDFS Inotify already supports meta change like replica for a file. 
> We should also support erasure coding policy setting/unsetting for a file 
> similarly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-09-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165687#comment-16165687
 ] 

Weiwei Yang commented on HDFS-11156:


Hi [~shahrs87]

Thanks for sharing your concern; I think you are raising a good point. The 
current approach tries to maintain compatibility as much as possible: we don't 
want to modify any public APIs, because that would cause a lot of problems on 
upgrade/integration paths. Ideally we could implement the following API in 
{{ClientProtocol}}

{code}
public BlockLocation[] getBlockLocations(String src,
  long start, long length) throws IOException;
{code}

so that it can be exposed via {{NamenodeRpcServer}} for 
{{NamenodeWebHdfsMethods}} to call. However, because of the compatibility 
concerns, I am not sure that is worthwhile compared to the current path: while 
the current path adds a bit of overhead by creating a DFS client, it is more 
compatible.

Thanks

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: BlockLocationProperties_JSON_Schema.jpg, 
> BlockLocations_JSON_Schema.jpg, FileStatuses_JSON_Schema.jpg, 
> HDFS-11156.01.patch, HDFS-11156.02.patch, HDFS-11156.03.patch, 
> HDFS-11156.04.patch, HDFS-11156.05.patch, HDFS-11156.06.patch, 
> HDFS-11156.07.patch, HDFS-11156.08.patch, HDFS-11156.09.patch, 
> HDFS-11156.10.patch, HDFS-11156.11.patch, HDFS-11156.12.patch, 
> HDFS-11156.13.patch, HDFS-11156.14.patch, HDFS-11156.15.patch, 
> HDFS-11156.16.patch, HDFS-11156-branch-2.01.patch, 
> Output_JSON_format_v10.jpg, SampleResponse_JSON.jpg
>
>
> Following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents for *o.a.h.h.p.LocatedBlocks*. However according to 
> *FileSystem* API, 
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be 
> fixed. Marked as Incompatible change as this will change the output of the 
> GET_BLOCK_LOCATIONS API.
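
For illustration, what the {{FileSystem}} contract looks like from a WebHDFS 
client's point of view; the hostname, port, and path below are placeholders:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsBlockLocations {
  public static void main(String[] args) throws Exception {
    // What FileSystem clients expect: an array of BlockLocation,
    // regardless of whether the backend is hdfs:// or webhdfs://.
    FileSystem fs = new Path("webhdfs://namenode:50070/")
        .getFileSystem(new Configuration());
    BlockLocation[] locs =
        fs.getFileBlockLocations(new Path("/data/file"), 0, 1024);
    for (BlockLocation loc : locs) {
      System.out.println(loc); // hosts, offset, length, corrupt flag
    }
  }
}
{code}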



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12415) Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails

2017-09-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12415:
---
Attachment: HDFS-12415-HDFS-7240.003.patch

> Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails
> 
>
> Key: HDFS-12415
> URL: https://issues.apache.org/jira/browse/HDFS-12415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12415-HDFS-7240.001.patch, 
> HDFS-12415-HDFS-7240.002.patch, HDFS-12415-HDFS-7240.003.patch
>
>
> TestXceiverClientManager seems to be occasionally failing in some jenkins 
> jobs,
> {noformat}
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.ozone.scm.node.SCMNodeManager.getNodeStat(SCMNodeManager.java:828)
>  at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.hasEnoughSpace(SCMCommonPolicy.java:147)
>  at 
> org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.lambda$chooseDatanodes$0(SCMCommonPolicy.java:125)
> {noformat}
> see more from [this 
> report|https://builds.apache.org/job/PreCommit-HDFS-Build/21065/testReport/]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12443) Ozone: Improve SCM block deletion throttling algorithm

2017-09-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165649#comment-16165649
 ] 

Weiwei Yang commented on HDFS-12443:


Hi [~anu], FYI I have listed this as a task to complete before the merge, 
because I think it is an important performance improvement to make. Please let 
me know if you have a different opinion. Thanks.

> Ozone: Improve SCM block deletion throttling algorithm 
> ---
>
> Key: HDFS-12443
> URL: https://issues.apache.org/jira/browse/HDFS-12443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
>
> Currently SCM periodically scans the delLog to send deletion transactions to 
> datanodes. The throttling algorithm is simple: it scans at most 
> {{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} (by default 50) TXs at a time. This is 
> non-optimal: in the worst case it might cache 50 TXs for 50 different DNs, so 
> each DN gets only 1 TX to proceed with per interval, which makes deletion 
> slow. An improvement is to throttle per datanode, e.g. 50 TXs per datanode 
> per interval.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12443) Ozone: Improve SCM block deletion throttling algorithm

2017-09-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12443 started by Weiwei Yang.
--
> Ozone: Improve SCM block deletion throttling algorithm 
> ---
>
> Key: HDFS-12443
> URL: https://issues.apache.org/jira/browse/HDFS-12443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
>
> Currently SCM periodically scans the delLog to send deletion transactions to 
> datanodes. The throttling algorithm is simple: it scans at most 
> {{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} (by default 50) TXs at a time. This is 
> non-optimal: in the worst case it might cache 50 TXs for 50 different DNs, so 
> each DN gets only 1 TX to proceed with per interval, which makes deletion 
> slow. An improvement is to throttle per datanode, e.g. 50 TXs per datanode 
> per interval.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12443) Ozone: Improve SCM block deletion throttling algorithm

2017-09-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12443:
---
Labels: ozoneMerge  (was: )

> Ozone: Improve SCM block deletion throttling algorithm 
> ---
>
> Key: HDFS-12443
> URL: https://issues.apache.org/jira/browse/HDFS-12443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
>
> Currently SCM periodically scans the delLog to send deletion transactions to 
> datanodes. The throttling algorithm is simple: it scans at most 
> {{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} (by default 50) TXs at a time. This is 
> non-optimal: in the worst case it might cache 50 TXs for 50 different DNs, so 
> each DN gets only 1 TX to proceed with per interval, which makes deletion 
> slow. An improvement is to throttle per datanode, e.g. 50 TXs per datanode 
> per interval.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12443) Ozone: Improve SCM block deletion throttling algorithm

2017-09-13 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12443:
--

 Summary: Ozone: Improve SCM block deletion throttling algorithm 
 Key: HDFS-12443
 URL: https://issues.apache.org/jira/browse/HDFS-12443
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, scm
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Currently SCM periodically scans the delLog to send deletion transactions to 
datanodes. The throttling algorithm is simple: it scans at most 
{{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} (by default 50) TXs at a time. This is 
non-optimal: in the worst case it might cache 50 TXs for 50 different DNs, so 
each DN gets only 1 TX to proceed with per interval, which makes deletion slow. 
An improvement is to throttle per datanode, e.g. 50 TXs per datanode per 
interval.
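
A minimal sketch of the proposed per-datanode throttling, assuming pending TXs 
can be grouped by datanode; the names and data structures here are 
illustrative, not from an actual patch:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative per-datanode throttle for SCM block deletion TXs. */
public class PerDatanodeThrottle {
  // Assumed limit, mirroring BLOCK_DELETE_TX_PER_REQUEST_LIMIT's default.
  private static final int TX_PER_DATANODE_PER_INTERVAL = 50;

  /** Select up to the limit of pending TX ids for each datanode. */
  public Map<String, List<Long>> selectTxs(
      Map<String, List<Long>> pendingTxsByDatanode) {
    Map<String, List<Long>> selected = new HashMap<>();
    for (Map.Entry<String, List<Long>> e : pendingTxsByDatanode.entrySet()) {
      List<Long> txs = e.getValue();
      // Cap per datanode, not across the whole scan, so every DN makes
      // progress each interval.
      int n = Math.min(txs.size(), TX_PER_DATANODE_PER_INTERVAL);
      selected.put(e.getKey(), new ArrayList<>(txs.subList(0, n)));
    }
    return selected;
  }
}
{code}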



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165640#comment-16165640
 ] 

Weiwei Yang commented on HDFS-12442:


Hi [~shahrs87]

Thank you for catching this and for the fix. It was a misuse of the API, as 
you pointed out; can we replace it with {{Boolean.parseBoolean()}}? The rest 
looks good to me. But I am not a committer yet, so I guess you need someone 
else to help review and commit it, thank you.
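
For illustration, a self-contained demo of the difference between the two 
calls (the map here stands in for the parsed JSON):

{code}
import java.util.HashMap;
import java.util.Map;

public class CorruptFlagDemo {
  public static void main(String[] args) {
    Map<String, Object> m = new HashMap<>();
    m.put("corrupt", "true"); // what the JSON map would contain

    // Boolean.getBoolean treats its argument as the NAME of a system
    // property, so this checks System.getProperty("true"), which is
    // almost never set: prints false regardless of the map value.
    System.out.println(Boolean.getBoolean(m.get("corrupt").toString()));

    // Boolean.parseBoolean parses the string itself: prints true.
    System.out.println(Boolean.parseBoolean(m.get("corrupt").toString()));
  }
}
{code}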

> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map<?, ?> m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to the Javadoc for {{Boolean#getBoolean}}:
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume the map value for the key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} looks up a system property named 
> by that string, so it will always return false unless a system property with 
> that name happens to be set.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165636#comment-16165636
 ] 

Hadoop QA commented on HDFS-12381:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestSeekBug |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12381 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886990/HDFS-12381-HDFS-10467.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 71f39e4f7663 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 679e31a |
| Default Java | 1.8.0_144 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21130/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21130/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21130/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-12273) Federation UI

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165633#comment-16165633
 ] 

Hadoop QA commented on HDFS-12273:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 57s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 5 new + 403 unchanged - 
5 fixed = 408 total (was 408) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat |
|   | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12273 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886991/HDFS-12273-HDFS-10467-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ea1d8202a746 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 679e31a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21128/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21128/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21128/testReport/ |

[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165628#comment-16165628
 ] 

Hadoop QA commented on HDFS-12420:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 616 unchanged - 1 fixed = 622 total (was 617) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestGenericJournalConf |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886992/HDFS-12420.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Li

[jira] [Comment Edited] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-13 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165615#comment-16165615
 ] 

SammiChen edited comment on HDFS-12395 at 9/14/17 1:56 AM:
---

Thanks [~eddyxu] and [~drankye]! OEV support has already been implemented in 
this JIRA, since several unit tests would fail without it. The code was 
commented out during debugging and I forgot to uncomment it. Will correct it 
in the next patch.


was (Author: sammi):
Thanks [~eddyxu] and [~drankye]! OEV support has already been implemented in 
this JIRA, since several unit tests would fail without it. The commented-out 
code was mis-committed. Will correct it in the next patch.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-13 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165615#comment-16165615
 ] 

SammiChen edited comment on HDFS-12395 at 9/14/17 1:56 AM:
---

Thanks [~eddyxu] and [~drankye]! OEV support has already been implemented in 
this JIRA, since several unit tests would fail without it. The code was 
commented out during debugging and I forgot to uncomment it. Will correct it 
in the next patch.


was (Author: sammi):
Thanks [~eddyxu] and [~drankye]! OEV support has already been implemented in 
this JIRA, since several unit tests would fail without it. The code was 
commented out during debugging and I forgot to uncomment it. Will correct it 
in the next patch.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-13 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165615#comment-16165615
 ] 

SammiChen commented on HDFS-12395:
--

Thanks [~eddyxu] and [~drankye]! OEV support has already been implemented in 
this JIRA, since several unit tests would fail without it. The commented-out 
code was mis-committed. Will correct it in the next patch.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165611#comment-16165611
 ] 

Hudson commented on HDFS-12414:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12868 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12868/])
HDFS-12414. Ensure to use CLI command to enable/disable erasure coding 
(sammi.chen: rev e0b3c644e186d89138d4174efe0cbe77a0200315)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSetrepIncreasing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeModeWithStripedFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlockInFBR.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddOverReplicatedStripedBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReconstructStripedBlocksWithRackAwareness.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithMissingBlocks.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerWithStripedBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSequentialBlockGroupId.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEnabledECPolicies.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestComputeInvalidateWork.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaWithStripedBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ReadStripedFileWithDecodingHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeErasureCodingMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReconstructStripedBlocks.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecoveryStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStripedINodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestUnsetAndChangeDirectoryEcPolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java

[jira] [Commented] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165610#comment-16165610
 ] 

Hadoop QA commented on HDFS-12442:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.TestReplication |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12442 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886983/HDFS-12442-1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3a07dd3f39a4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bb34ae9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21127/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21127/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: had

[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165602#comment-16165602
 ] 

Kai Zheng commented on HDFS-7859:
-

Sammi, please rebase this; applying it to the latest trunk failed.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, 
> HDFS-7859.016.patch, HDFS-7859.017.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-12414:
-
Release Note: dfs.namenode.ec.policies.enabled was removed in order to ensure 
there is only one approach to enabling/disabling erasure coding policies, 
avoiding the need to keep the configuration and the persisted state in sync.
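
For reference, a minimal sketch of the runtime path that remains (a standalone 
illustration using the 3.0 client API; the policy name is one of the built-in 
defaults):
{code:title=EcPolicySwitchSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class EcPolicySwitchSketch {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    // Enable/disable at runtime; there is no longer a static property to
    // keep in sync with this state.
    dfs.enableErasureCodingPolicy("RS-6-3-1024k");
    dfs.disableErasureCodingPolicy("RS-6-3-1024k");
  }
}
{code}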

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12414.001.patch, HDFS-12414.002.patch, 
> HDFS-12414.003.patch, HDFS-12414.004.patch, HDFS-12414.005.patch, 
> HDFS-12414.006.patch
>
>
> Currently, there are two methods for a user to enable/disable an erasure 
> coding policy. One is through the "dfs.namenode.ec.policies.enabled" 
> property, which is a static way to configure the enabled erasure coding 
> policies. The other is through the "enableErasureCodingPolicy" or 
> "disableErasureCodingPolicy" API, which can enable or disable an erasure 
> coding policy at runtime. 
> When the Namenode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved 
> in the fsImage. To resolve the conflict and simplify the operation, it's 
> better to use just one way and remove the old method of configuring the 
> property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165594#comment-16165594
 ] 

Kai Zheng commented on HDFS-12414:
--

TO CLARIFY: the commits here appear to have been made by Sammi Chen, but they 
were actually made by me. This happened because I shared a machine with Sammi. 
Sorry for the confusion :(.

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12414.001.patch, HDFS-12414.002.patch, 
> HDFS-12414.003.patch, HDFS-12414.004.patch, HDFS-12414.005.patch, 
> HDFS-12414.006.patch
>
>
> Currently, there are two methods for a user to enable/disable an erasure 
> coding policy. One is through the "dfs.namenode.ec.policies.enabled" 
> property, which is a static way to configure the enabled erasure coding 
> policies. The other is through the "enableErasureCodingPolicy" or 
> "disableErasureCodingPolicy" API, which can enable or disable an erasure 
> coding policy at runtime. 
> When the Namenode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved 
> in the fsImage. To resolve the conflict and simplify the operation, it's 
> better to use just one way and remove the old method of configuring the 
> property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-12414:
-
   Resolution: Fixed
 Hadoop Flags: Incompatible change, Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0. Thanks [~Sammi] for the contribution, 
[~HuafengWang] for the review, and [~eddyxu] for the idea of ensuring 
consistency in how erasure coding policies are enabled/disabled.

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12414.001.patch, HDFS-12414.002.patch, 
> HDFS-12414.003.patch, HDFS-12414.004.patch, HDFS-12414.005.patch, 
> HDFS-12414.006.patch
>
>
> Currently, there are two methods for a user to enable/disable an erasure 
> coding policy. One is through the "dfs.namenode.ec.policies.enabled" 
> property, which is a static way to configure the enabled erasure coding 
> policies. The other is through the "enableErasureCodingPolicy" or 
> "disableErasureCodingPolicy" API, which can enable or disable an erasure 
> coding policy at runtime. 
> When the Namenode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved 
> in the fsImage. To resolve the conflict and simplify the operation, it's 
> better to use just one way and remove the old method of configuring the 
> property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-12414:
-
Attachment: HDFS-12414.006.patch

Uploaded the revision that was checked in, with the minor unused-import issue 
fixed.

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12414.001.patch, HDFS-12414.002.patch, 
> HDFS-12414.003.patch, HDFS-12414.004.patch, HDFS-12414.005.patch, 
> HDFS-12414.006.patch
>
>
> Currently, there are two methods for a user to enable/disable an erasure 
> coding policy. One is through the "dfs.namenode.ec.policies.enabled" 
> property, which is a static way to configure the enabled erasure coding 
> policies. The other is through the "enableErasureCodingPolicy" or 
> "disableErasureCodingPolicy" API, which can enable or disable an erasure 
> coding policy at runtime. 
> When the Namenode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved 
> in the fsImage. To resolve the conflict and simplify the operation, it's 
> better to use just one way and remove the old method of configuring the 
> property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10711) Optimize FSPermissionChecker group membership check

2017-09-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-10711:
---
Fix Version/s: 2.7.5

Also committed to branch-2.7. Thank you [~daryn].

> Optimize FSPermissionChecker group membership check
> ---
>
> Key: HDFS-10711
> URL: https://issues.apache.org/jira/browse/HDFS-10711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1, 2.7.5
>
> Attachments: HDFS-10711.1.patch, HDFS-10711.patch
>
>
> HADOOP-13442 obviates the need for multiple group related object allocations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165566#comment-16165566
 ] 

Kai Zheng commented on HDFS-12414:
--

The latest patch LGTM, +1. Will commit it shortly with the minor checkstyle 
issue fixed.

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12414.001.patch, HDFS-12414.002.patch, 
> HDFS-12414.003.patch, HDFS-12414.004.patch, HDFS-12414.005.patch
>
>
> Currently, there are two methods for a user to enable/disable an erasure 
> coding policy. One is through the "dfs.namenode.ec.policies.enabled" 
> property, which is a static way to configure the enabled erasure coding 
> policies. The other is through the "enableErasureCodingPolicy" or 
> "disableErasureCodingPolicy" API, which can enable or disable an erasure 
> coding policy at runtime. 
> When the Namenode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved 
> in the fsImage. To resolve the conflict and simplify the operation, it's 
> better to use just one way and remove the old method of configuring the 
> property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12424) Datatable sorting on the Datanode Information page in the Namenode UI is broken

2017-09-13 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165541#comment-16165541
 ] 

Ajay Kumar commented on HDFS-12424:
---

LGTM.

> Datatable sorting on the Datanode Information page in the Namenode UI is 
> broken
> ---
>
> Key: HDFS-12424
> URL: https://issues.apache.org/jira/browse/HDFS-12424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Shawna Martell
>Assignee: Shawna Martell
> Attachments: HDFS-12424.1.patch, HDFS-12424.1.patch, 
> HDFS-12424-branch-2.8.1.patch
>
>
> Attempting to sort the "In operation" table by Last contact, Capacity, 
> Blocks, or Version can result in unexpected behavior. Sorting by Blocks or 
> Version actually sorts by entirely different columns, and Last contact and 
> Capacity are sorted alphabetically rather than numerically.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task

2017-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165531#comment-16165531
 ] 

Hudson commented on HDFS-12409:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12867 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12867/])
HDFS-12409. Add metrics of execution time of different stages in EC (lei: rev 
73aed34dffa5e79f6f819137b69054c1dee2d4dd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeErasureCodingMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockReconstructor.java


> Add metrics of execution time of different stages in EC recovery task
> -
>
> Key: HDFS-12409
> URL: https://issues.apache.org/jira/browse/HDFS-12409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12409.00.patch, HDFS-12409.01.patch
>
>
> Admin can use more metrics to monitor EC recovery tasks, to get insights to 
> tune recovery performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12420:
--
Attachment: HDFS-12420.03.patch

[~jnp], [~vinayrpet], thanks for the suggestion about prod/non-prod. 
[~aw], [~anu], [~arpitagarwal], thanks for the valuable feedback. I have 
updated the patch to include a new property that identifies whether a cluster 
is marked as production. By default the property value is false, so the 
existing functionality continues unchanged; a rough sketch of the idea is 
below. 
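
A rough sketch of what such a guard could look like (the property name, its 
placement, and the check are assumptions for illustration, not the actual 
patch):
{code:title=FormatGuardSketch.java|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class FormatGuardSketch {
  // Hypothetical key; the real name would be defined in DFSConfigKeys.
  static final String PROD_CLUSTER_KEY = "dfs.namenode.cluster.production";

  // The guard that would run at the start of NameNode#format: on a cluster
  // tagged as production, refuse to format outright.
  static void checkFormatAllowed(Configuration conf) throws IOException {
    if (conf.getBoolean(PROD_CLUSTER_KEY, false)) {
      throw new IOException("Refusing to format: cluster is marked as "
          + "production via " + PROD_CLUSTER_KEY);
    }
    // Default (false) keeps the existing format behavior unchanged.
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.setBoolean(PROD_CLUSTER_KEY, true);
    checkFormatAllowed(conf); // throws, demonstrating the guard
  }
}
{code}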

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch
>
>
> Disable NameNode format to avoid accidental formatting of the Namenode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12273) Federation UI

2017-09-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12273:
---
Attachment: HDFS-12273-HDFS-10467-004.patch

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12381:
---
Attachment: (was: HDFS-12381-HDFS-10467.001.patch)

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch, 
> HDFS-12381-HDFS-10467.001.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12381:
---
Attachment: HDFS-12381-HDFS-10467.001.patch

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch, 
> HDFS-12381-HDFS-10467.001.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165515#comment-16165515
 ] 

Allen Wittenauer commented on HDFS-12420:
-

bq. current format functionality is broken itself. It deletes the metadata 
while doing nothing about the data stored in data-nodes. 

Just like mkfs.  And just like it, the fact that it doesn't delete the actual 
data is a feature, not a bug.  If I restore the fsimage, then my data should 
come back too (mostly... new data, of course, is likely to be missing, etc.). 
It's why making a copy of the fsimage is Hadoop Ops 101. 

Some key advice I give to admins: you can try to prevent mistakes, but they'll 
still happen despite your best efforts.  Once the low-hanging warnings are in 
place, the energy is better spent on how to quickly recover.  But that's a 
problem that's outside of the core code.

For the record, yes, I've made HUGE mistakes like this in my career.  Every 
admin has. In my case, I brought down an entire hospital once.  Even with that 
experience, I still think requiring metadata deletion outside of the tool set 
is way overkill.

bq. may be being able to tag a cluster as "production" like discussed above is 
a better idea?

Yeah, sure, whatever.  All that's going to happen is:

{code}
hdfs --config /tmp/mymodifiedconfig namenode -format -force
{code}

If a user is too lazy/impatient/distracted to check that they are on a live 
system before hitting y, they'll just change the flag and then format.  But if 
that makes folks happy, fine.  It still sounds like the console output needs 
some work though if a user couldn't "see" it.  (Not sure I agree with that 
either, but whatever.)

BTW, a quick search for how the equivalent problem is solved in databases is 
interesting. Almost all of them that I looked at: don't give the user access. 
So yes, enough rope to hang themselves seems to be the expectation 
operationally.

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the Namenode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task

2017-09-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12409:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Thanks [~andrew.wang] for the review.

Committed to trunk.

> Add metrics of execution time of different stages in EC recovery task
> -
>
> Key: HDFS-12409
> URL: https://issues.apache.org/jira/browse/HDFS-12409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12409.00.patch, HDFS-12409.01.patch
>
>
> Admin can use more metrics to monitor EC recovery tasks, to get insights to 
> tune recovery performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12256) Ozone : handle inactive containers on DataNode

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165504#comment-16165504
 ] 

Hadoop QA commented on HDFS-12256:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop

[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165484#comment-16165484
 ] 

Anu Engineer commented on HDFS-12420:
-

[~aw] Thanks for your comments.

bq. The argument here is the same as "newfs should fail if it detects a 
partition table. You'll need to dd onto the raw disk to wipe it out first". If 
you ask any experienced admin, 9/10 they're going to tell you that makes zero 
sense.

Makes sense; let us not proceed down this path. The only difference is that in 
the case of Hadoop, the damage that a command can do is multiplied by the 
number of data nodes.

Having seen that accidental formats can happen, maybe being able to tag a 
cluster as "production", as discussed above, is a better idea?



> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the Namenode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165482#comment-16165482
 ] 

Ajay Kumar commented on HDFS-12420:
---

Hi [~aw],  What you said is true, but as [~arpitagarwal] has pointed out, the 
current format functionality is itself broken: it deletes the metadata while 
doing nothing about the data stored in the datanodes. 
We can keep the existing functionality as it is and add a new property to 
identify a prod cluster. By default this property will be set to non-prod. If 
someone marks their cluster as a prod cluster, then this can be an additional 
safeguard. This maintains backward compatibility and hopefully addresses your 
concerns as well. 

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the Namenode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12442:
--
Status: Patch Available  (was: Open)

[~cheersyang]: can you please review?

> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha2, 2.9.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume the map value for the key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} looks up a *system property* by 
> that name, so it will always return false unless such a system property 
> happens to be set to "true".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12442:
--
Attachment: HDFS-12442-1.patch

Attaching a simple patch with a test case.
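
For reference, a minimal standalone sketch of the distinction being fixed 
(assuming the fix swaps {{Boolean#getBoolean}} for {{Boolean#parseBoolean}}; 
illustrative only):
{code:title=BooleanParsingSketch.java|borderStyle=solid}
public class BooleanParsingSketch {
  public static void main(String[] args) {
    String corrupt = "true"; // the value as it arrives in the JSON map
    // Boolean.getBoolean treats its argument as a *system property name*,
    // so this prints false unless the JVM was started with -Dtrue=true.
    System.out.println(Boolean.getBoolean(corrupt));
    // Boolean.parseBoolean parses the string itself, which is the intent.
    System.out.println(Boolean.parseBoolean(corrupt));
  }
}
{code}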

> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume the map value for the key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} looks up a *system property* by 
> that name, so it will always return false unless such a system property 
> happens to be set to "true".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165444#comment-16165444
 ] 

Allen Wittenauer commented on HDFS-12420:
-

bq.  cluster owner, who was visibly distressed. 

Well sure. They screwed up.  They can either own up to the fact they made a 
mistake and learn from it or try to push blame off onto someone or something 
else, like their vendor.  Besides, who *doesn't* make a copy of the fsimage 
data on a regular basis?  That's Hadoop Ops 101.

That said: there comes a point where it becomes impossible to protect every 
admin from every mistake they may possibly make.

-format is the functional equivalent of newfs.  The argument here is the same 
as "newfs should fail if it detects a partition table.  You'll need to dd onto 
the raw disk to wipe it out first".  If you ask any experienced admin, 9/10 
they're going to tell you that makes zero sense.

The same thing here.  The code specifically warns the user that they are about 
to delete live data.  Could the messaging be improved? Sure and that's probably 
what should be happening if users are confused enough to file this drastic 
overreaction.  But the warning is there all the same.  It is up to the user to 
act upon that information and determine it is safe or not to continue with the 
operation.  If they blindly -force it, well, that's on them.  Users might 
remove data they need by always doing -skipTrash.  So we should remove it, 
right?  Of course not.

One of the key principles of operations is that admins have enough rope to 
hang themselves.  This is exactly the same case.  In this instance, the admin 
did exactly that: hung themselves because they weren't careful.

bq. How you can delete the shared edits dir in journal nodes manually?

I'm really glad you asked that question because it's a key one. It's sort of 
ridiculous to have admins go hunt down where Hadoop might be stuffing metadata. 
 Add in the complexity of HA and it is even more ludicrous.

bq. That said, if you have examples of automated deployments that will be 
broken by this change and that we haven't thought of, we can abandon the idea.

I have clients that do this on a regular basis. They regularly roll out small, 
short term clusters to external groups. Yes, this change will break them 
horribly.  


> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the Namenode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165419#comment-16165419
 ] 

Andrew Wang commented on HDFS-12437:


I bisected this to HDFS-12303. Haven't dug in further yet.

> TestLeaseRecoveryStriped fails in trunk
> ---
>
> Key: HDFS-12437
> URL: https://issues.apache.org/jira/browse/HDFS-12437
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> Fails consistently for me in trunk with the following call stack.
> {code}
>   TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, 
> blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, 
> 10485760, 11534336]
> java.io.IOException: Failed: the number of failed blocks = 4 > the number of 
> parity blocks = 3
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
>   at java.io.DataOutputStream.write(DataOutputStream.java:88)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158)
>   at 
> org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-09-13 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165387#comment-16165387
 ] 

Rushabh S Shah commented on HDFS-11156:
---

I have one concern in this patch.
{code:title=NamenodeWebHdfsMethods.java|borderStyle=solid}
// Some comments here
public String get()
{
case GETFILEBLOCKLOCATIONS:
{
  final long offsetValue = offset.getValue();
  final Long lengthValue = length.getValue();

  FileSystem fs = FileSystem.get(conf != null ?
  conf : new Configuration());
  BlockLocation[] locations = fs.getFileBlockLocations(
  new org.apache.hadoop.fs.Path(fullpath),
  offsetValue,
  lengthValue != null? lengthValue: Long.MAX_VALUE);
  final String js = JsonUtil.toJsonString("BlockLocations",
  JsonUtil.toJsonMap(locations));
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
}
{code}
In the above code snippet, why are we creating a dfsClient object that will 
again make an RPC back to itself?
We are already in the namenode; we just have to call the relevant 
NamenodeRpcServer method (see the sketch below).
Correct me if I am missing something.
Cc [~cheersyang] [~andrew.wang] [~liuml07] 

Also this fix introduced a bug. See more details here: HDFS-12442
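
Something along these lines might avoid the loopback RPC (a rough sketch 
against the same case block; it assumes the local NameNode reference is in 
scope as elsewhere in this class, and uses NamenodeRpcServer#getBlockLocations 
plus DFSUtilClient#locatedBlocks2Locations; untested):
{code:title=NamenodeWebHdfsMethods.java (sketch)|borderStyle=solid}
case GETFILEBLOCKLOCATIONS:
{
  final long offsetValue = offset.getValue();
  final Long lengthValue = length.getValue();

  // Ask the local RPC server directly instead of dialing back into
  // ourselves through a client-side FileSystem instance.
  final LocatedBlocks locatedBlocks = namenode.getRpcServer()
      .getBlockLocations(fullpath, offsetValue,
          lengthValue != null ? lengthValue : Long.MAX_VALUE);
  final BlockLocation[] locations =
      DFSUtilClient.locatedBlocks2Locations(locatedBlocks);
  final String js = JsonUtil.toJsonString("BlockLocations",
      JsonUtil.toJsonMap(locations));
  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
}
{code}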


> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: BlockLocationProperties_JSON_Schema.jpg, 
> BlockLocations_JSON_Schema.jpg, FileStatuses_JSON_Schema.jpg, 
> HDFS-11156.01.patch, HDFS-11156.02.patch, HDFS-11156.03.patch, 
> HDFS-11156.04.patch, HDFS-11156.05.patch, HDFS-11156.06.patch, 
> HDFS-11156.07.patch, HDFS-11156.08.patch, HDFS-11156.09.patch, 
> HDFS-11156.10.patch, HDFS-11156.11.patch, HDFS-11156.12.patch, 
> HDFS-11156.13.patch, HDFS-11156.14.patch, HDFS-11156.15.patch, 
> HDFS-11156.16.patch, HDFS-11156-branch-2.01.patch, 
> Output_JSON_format_v10.jpg, SampleResponse_JSON.jpg
>
>
> Following webhdfs REST API
> {code}
> http://:/webhdfs/v1/?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents *o.a.h.h.p.LocatedBlocks*. However, according to the 
> *FileSystem* API, 
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be 
> fixed. Marked as Incompatible change as this will change the output of the 
> GET_BLOCK_LOCATIONS API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12256) Ozone : handle inactive containers on DataNode

2017-09-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12256:
--
Status: Patch Available  (was: Open)

> Ozone : handle inactive containers on DataNode
> --
>
> Key: HDFS-12256
> URL: https://issues.apache.org/jira/browse/HDFS-12256
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge, tocheck
> Attachments: HDFS-12256-HDFS-7240.001.patch
>
>
> When a container gets created, corresponding metadata gets added to 
> {{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
> containerName to {{ContainerStatus}} instance map. When a datanode starts, 
> it also loads this map from on-disk file metadata. As long as the 
> containerName is found in this map, it is considered an existing container.
> An issue we saw was that, occasionally, when container creation on a 
> datanode fails, the metadata of the failed container may still get added to 
> {{containerMap}}, with the active flag set to false. But currently such 
> containers are not being handled; containers with active=false are just 
> treated as normal containers. Then when someone tries to write to such a 
> container, failures can happen.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12256) Ozone : handle inactive containers on DataNode

2017-09-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12256:
--
Attachment: HDFS-12256-HDFS-7240.001.patch

A closer look shows that the active flag seems unnecessary, since:
1. active is set to false if and only if containerData is null, and similarly 
for active = true.
2. Both the active flag and containerData are final variables, set in the 
constructor and never changed.
Because of these two points, checking the active flag is equivalent to 
checking whether containerData is null. So the v001 patch does two things:
1. Remove the active flag.
2. Since it is possible to store a null containerData in ContainerStatus, a 
couple of places need to check whether {{ContainerStatus#getContainer}} 
returns null before proceeding.
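
In other words, the handling becomes something like this (a sketch of the 
pattern only, not the literal patch; the exception type is illustrative):
{code:title=ContainerManagerImpl.java (sketch)|borderStyle=solid}
ContainerStatus status = containerMap.get(containerName);
// With the active flag removed, a null containerData is the signal that
// creation failed earlier; refuse to treat such a container as usable.
if (status == null || status.getContainer() == null) {
  throw new IOException("Container " + containerName
      + " does not exist or failed during creation");
}
{code}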

> Ozone : handle inactive containers on DataNode
> --
>
> Key: HDFS-12256
> URL: https://issues.apache.org/jira/browse/HDFS-12256
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge, tocheck
> Attachments: HDFS-12256-HDFS-7240.001.patch
>
>
> When a container gets created, corresponding metadata gets added to 
> {{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
> containerName to {{ContainerStatus}} instance map. When the datanode starts, 
> it also loads this map from on-disk metadata. As long as the containerName is 
> found in this map, it is considered an existing container.
> An issue we saw was that, occasionally, when container creation on the 
> datanode fails, the metadata of the failed container may still get added to 
> {{containerMap}} with the active flag set to false. Currently such containers 
> are not handled specially; containers with active=false are treated as normal 
> containers, so attempts to write to such a container can fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165318#comment-16165318
 ] 

Andrew Wang commented on HDFS-10701:


Hi Sammi, thanks for digging into this. Precommit still failed in 
testBlockTokenExpired with:

{noformat}
Caused by: java.io.IOException: Failed: the number of failed blocks = 8 > the 
number of parity blocks = 3
{noformat}

Do we need to further bump the timeout?

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
> Attachments: HDFS-10701.000.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> this test had failed a few times in the Hadoop-Hdfs-trunk build in the past. 
> But I do not have sufficient knowledge to tell if it's a flaky test or a bug 
> in the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-10701:
--

Assignee: SammiChen

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: SammiChen
> Attachments: HDFS-10701.000.patch
>
>
> I noticed this test failure in a recent precommit build, and I also found 
> this test had failed a few times in the Hadoop-Hdfs-trunk build in the past. 
> But I do not have sufficient knowledge to tell if it's a flaky test or a bug 
> in the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12427) libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine

2017-09-13 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12427:
---
Status: Patch Available  (was: Open)

> libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine
> --
>
> Key: HDFS-12427
> URL: https://issues.apache.org/jira/browse/HDFS-12427
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-12427.HDFS-8707.000.patch, 
> HDFS-12427.HDFS-8707.001.patch
>
>
> The lifetime of Request objects is tied to the worker thread(s) in the async 
> event loop.  In the current code there's nothing that prevents a request from 
> outliving the RpcEngine (bound to FileSystem) while it's waiting for IO.  If 
> the Request, or a task that makes a new request, outlives the RpcEngine, it 
> attempts to dereference a dangling pointer and either crashes or continues to 
> run with bad data.
> The proposed fix is to reference-count the RpcEngine via shared_ptr so that 
> Requests can hold a weak_ptr to it.  When a request, or an RpcConnection 
> attempting to make a request, needs something from the RpcEngine, like a call 
> id number, it can promote the weak_ptr to a shared_ptr.  If it's unable to 
> promote because the RpcEngine has been destroyed, the Request's handler can be 
> invoked with an appropriate error message.  A weak_ptr must be used rather 
> than a shared_ptr to avoid reference cycles.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12427) libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine

2017-09-13 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12427:
---
Attachment: HDFS-12427.HDFS-8707.001.patch

New patch: RpcConnection was taking a weak_ptr and then calling lock() in the 
initializer list to dereference it for member access.  Now it takes a shared_ptr 
to prevent a race between initializer-list evaluation and RpcEngine 
destruction, then demotes it to a weak_ptr for longer-term use.
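
As a rough illustration of the promote-or-fail pattern: the actual fix is C++ 
{{shared_ptr}}/{{weak_ptr}} in libhdfs++, and Java weak references are cleared 
by GC rather than by deterministic destruction, so the sketch below is an 
analogy only, with hypothetical names.
{code}
// Java analogy of weak_ptr::lock(): hold only a weak reference to the
// engine, take a strong reference while touching it, and fail cleanly
// if the engine is already gone.
import java.io.IOException;
import java.lang.ref.WeakReference;

class RequestSketch {
  private final WeakReference<RpcEngineSketch> engineRef;  // no strong cycle

  RequestSketch(RpcEngineSketch engine) {
    this.engineRef = new WeakReference<>(engine);
  }

  int nextCallId() throws IOException {
    RpcEngineSketch engine = engineRef.get();  // "promote"
    if (engine == null) {
      // engine went away while this request waited on IO; invoke the
      // handler with an error instead of touching dangling state
      throw new IOException("RpcEngine gone before request completed");
    }
    return engine.nextCallId();
  }
}

interface RpcEngineSketch {  // hypothetical stand-in for the C++ RpcEngine
  int nextCallId();
}
{code}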

> libhdfs++: Prevent Requests from holding dangling pointer to RpcEngine
> --
>
> Key: HDFS-12427
> URL: https://issues.apache.org/jira/browse/HDFS-12427
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-12427.HDFS-8707.000.patch, 
> HDFS-12427.HDFS-8707.001.patch
>
>
> The lifetime of Request objects is tied to the worker thread(s) in the async 
> event loop.  In the current code there's nothing that prevents a request from 
> outliving the RpcEngine (bound to FileSystem) while it's waiting for IO.  If 
> the Request, or a task that makes a new request, outlives the RpcEngine, it 
> attempts to dereference a dangling pointer and either crashes or continues to 
> run with bad data.
> The proposed fix is to reference-count the RpcEngine via shared_ptr so that 
> Requests can hold a weak_ptr to it.  When a request, or an RpcConnection 
> attempting to make a request, needs something from the RpcEngine, like a call 
> id number, it can promote the weak_ptr to a shared_ptr.  If it's unable to 
> promote because the RpcEngine has been destroyed, the Request's handler can be 
> invoked with an appropriate error message.  A weak_ptr must be used rather 
> than a shared_ptr to avoid reference cycles.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165270#comment-16165270
 ] 

Hadoop QA commented on HDFS-12323:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 8 unchanged - 1 fixed = 8 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.client.impl.TestBlockReaderRemote |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12323 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886927/HDFS-12323.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0756b0d9ef08 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5324388 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21124/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21124/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21124/console |
| Powered by | Apache Yetus 0

[jira] [Updated] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12442:
--
Description: 
Was going through {{JsonUtilClient#toBlockLocation}} code.
Below is the relevant code snippet.
{code:title=JsonUtilClient.java|borderStyle=solid}
 /** Convert a Json map to BlockLocation. **/
  static BlockLocation toBlockLocation(Map m)
  throws IOException{
...
...  
boolean corrupt = Boolean.
getBoolean(m.get("corrupt").toString());
...
...
  }
{code}
According to java docs for {{Boolean#getBoolean}}
{noformat}
Returns true if and only if the system property named by the argument exists 
and is equal to the string "true". 
{noformat}
I assume the map value for key {{corrupt}} will be populated with either 
{{true}} or {{false}}.
On the client side, {{Boolean#getBoolean}} will look for a system property 
named by that string, so it will always return false unless a system property 
named "true" or "false" happens to be set.

  was:
Was going through {{JsonUtilClient#toBlockLocation}}
Below is the relevant code snippet.
{code:title=JsonUtilClient.java|borderStyle=solid}
 /** Convert a Json map to BlockLocation. **/
  static BlockLocation toBlockLocation(Map m)
  throws IOException{
...
...  
boolean corrupt = Boolean.
getBoolean(m.get("corrupt").toString());
...
...
  }
{code}
According to java docs for {{Boolean#getBoolean}}
{noformat}
Returns true if and only if the system property named by the argument exists 
and is equal to the string "true". 
{noformat}
I assume the map value for key {{corrupt}} will be populated with either 
{{true}} or {{false}}.
On the client side, {{Boolean#getBoolean}} will look for a system property 
named by that string, so it will always return false unless a system property 
named "true" or "false" happens to be set.


> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume the map value for key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} will look for a system property 
> named by that string, so it will always return false unless a system property 
> named "true" or "false" happens to be set.
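
The difference between the two {{Boolean}} methods is easy to demonstrate; a 
self-contained example (illustrative, not part of any patch):
{code}
// Boolean.getBoolean reads a JVM *system property* named by its argument;
// Boolean.parseBoolean parses the string itself, which is presumably what
// toBlockLocation intended.
public class CorruptFlagDemo {
  public static void main(String[] args) {
    String corrupt = "true";  // value taken from the JSON map
    System.out.println(Boolean.getBoolean(corrupt));   // false: no system property named "true"
    System.out.println(Boolean.parseBoolean(corrupt)); // true: parses the string
  }
}
{code}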



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-09-13 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12442:
-

 Summary: WebHdfsFileSystem#getFileBlockLocations will always 
return BlockLocation#corrupt as false
 Key: HDFS-12442
 URL: https://issues.apache.org/jira/browse/HDFS-12442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
Priority: Critical


Was going through {{JsonUtilClient#toBlockLocation}}
Below is the relevant code snippet.
{code:title=JsonUtilClient.java|borderStyle=solid}
 /** Convert a Json map to BlockLocation. **/
  static BlockLocation toBlockLocation(Map m)
  throws IOException{
...
...  
boolean corrupt = Boolean.
getBoolean(m.get("corrupt").toString());
...
...
  }
{code}
According to java docs for {{Boolean#getBoolean}}
{noformat}
Returns true if and only if the system property named by the argument exists 
and is equal to the string "true". 
{noformat}
I assume the map value for key {{corrupt}} will be populated with either 
{{true}} or {{false}}.
On the client side, {{Boolean#getBoolean}} will look for a system property 
named by that string, so it will always return false unless a system property 
named "true" or "false" happens to be set.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12438) Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to dfs.datanode.ec.reconstruction.threads

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165248#comment-16165248
 ] 

Hadoop QA commented on HDFS-12438:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 421 unchanged - 2 fixed = 422 total (was 423) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.TestAppendDifferentChecksum |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12438 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886920/HDFS-12438.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f98243b2ee29 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5324388 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21123/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21123/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21123/tes

[jira] [Assigned] (HDFS-12256) Ozone : handle inactive containers on DataNode

2017-09-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12256:
-

Assignee: Chen Liang

> Ozone : handle inactive containers on DataNode
> --
>
> Key: HDFS-12256
> URL: https://issues.apache.org/jira/browse/HDFS-12256
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge, tocheck
>
> When a container gets created, corresponding metadata gets added to 
> {{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
> containerName to {{ContainerStatus}} instance map. When the datanode starts, 
> it also loads this map from on-disk metadata. As long as the containerName is 
> found in this map, it is considered an existing container.
> An issue we saw was that, occasionally, when container creation on the 
> datanode fails, the metadata of the failed container may still get added to 
> {{containerMap}} with the active flag set to false. Currently such containers 
> are not handled specially; containers with active=false are treated as normal 
> containers, so attempts to write to such a container can fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-09-13 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165192#comment-16165192
 ] 

Chen Liang commented on HDFS-12268:
---

I've committed this to the feature branch. Thanks [~linyiqun] for the 
contribution!

> Ozone: Add metrics for pending storage container requests
> -
>
> Key: HDFS-12268
> URL: https://issues.apache.org/jira/browse/HDFS-12268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: ozoneMerge
> Attachments: HDFS-12268-HDFS-7240.001.patch, 
> HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, 
> HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, 
> HDFS-12268-HDFS-7240.006.patch, HDFS-12268-HDFS-7240.007.patch
>
>
>  As the storage container async interface has been supported since 
> HDFS-11580, we need to keep an eye on the queue depth of pending container 
> requests. It can help us better detect performance problems.
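
For illustration, a queue-depth metric in the hadoop-metrics2 style might look 
like the sketch below; the class and metric names here are assumptions, not 
the ones defined in the attached patches.
{code}
// Hedged sketch: track async container-request queue depth with a gauge
// that is incremented on send and decremented on completion.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Storage container client metrics", context = "dfs")
class ContainerClientMetricsSketch {
  @Metric private MutableGaugeLong pendingOps;  // outstanding async requests

  void requestSent()      { pendingOps.incr(); }
  void responseReceived() { pendingOps.decr(); }
}
{code}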



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests

2017-09-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12268:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone: Add metrics for pending storage container requests
> -
>
> Key: HDFS-12268
> URL: https://issues.apache.org/jira/browse/HDFS-12268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: ozoneMerge
> Attachments: HDFS-12268-HDFS-7240.001.patch, 
> HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, 
> HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, 
> HDFS-12268-HDFS-7240.006.patch, HDFS-12268-HDFS-7240.007.patch
>
>
>  As the storage container async interface has been supported since 
> HDFS-11580, we need to keep an eye on the queue depth of pending container 
> requests. It can help us better detect performance problems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165177#comment-16165177
 ] 

Arpit Agarwal edited comment on HDFS-12420 at 9/13/17 7:43 PM:
---

Allen, thanks for bringing up the automation concern. We certainly don't want 
to break any deployment scripts. This patch will not break scripted deployment 
of new clusters since it eliminates the prompt completely.

Formatting clusters with pre-existing data was a bad idea in the first place. 
It deletes the NameNode metadata and leaves the cluster in an unusable state 
since DataNodes cannot connect anymore. I don't think any existing automation 
can depend on this behavior since it is functionally broken.

That said, if you have examples of automated deployments that will be broken by 
this change and that we haven't thought of, we can abandon the idea.


was (Author: arpitagarwal):
Allen, thanks for bringing up the automation concern. We certainly don't want 
to break any deployment scripts. This patch will not break scripted deployment 
of new clusters since it eliminates the prompt completely.

Formatting clusters with pre-existing data was a bad idea in the first place. 
It deletes the NameNode metadata and leaves the cluster in an unusable state 
since DataNodes cannot connect anymore. I don't think any existing automation 
can depend on this behavior since it is functionally broken.

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the NameNode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code}hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165177#comment-16165177
 ] 

Arpit Agarwal commented on HDFS-12420:
--

Allen, thanks for bringing up the automation concern. We certainly don't want 
to break any deployment scripts. This patch will not break scripted deployment 
of new clusters since it eliminates the prompt completely.

Formatting clusters with pre-existing data was a bad idea in the first place. 
It deletes the NameNode metadata and leaves the cluster in an unusable state 
since DataNodes cannot connect anymore. I don't think any existing automation 
can depend on this behavior since it is functionally broken.

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the NameNode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code}hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165158#comment-16165158
 ] 

Daryn Sharp commented on HDFS-12386:


# The {{FSNameSystem#serverDefaults}} object is statically built to avoid 
unnecessary construction.  It would be nice if the json-encoded string were 
also cached, to avoid repeatedly converting a static object into a map and 
serializing it into a string (a sketch follows this list).
# Not sure -1 is an appropriate value for missing parameters.  I'd double-check 
the PB defaults for parity with hdfs.
# The default checksum should probably be 0 (representing "none") instead of -1.
# Blindly casting relatively new fields like storage policy to a primitive type 
can cause exceptions.
# The test verifies that the server defaults returned from hdfs and webhdfs are 
equal.  It doesn't verify that either of them actually matches the explicit 
conf values set by the test.
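
A minimal sketch of point 1, under assumed names; {{JsonUtil.toJsonString}} 
stands in for whatever serializer the patch actually uses.
{code}
// Hedged sketch: cache the JSON encoding of the static serverDefaults
// object so it is serialized at most once.
import org.apache.hadoop.fs.FsServerDefaults;

class ServerDefaultsJsonCache {
  private static volatile String serverDefaultsJson;

  static String getServerDefaultsJson(FsServerDefaults defaults) {
    String json = serverDefaultsJson;
    if (json == null) {
      json = JsonUtil.toJsonString(defaults);  // assumed serializer helper
      serverDefaultsJson = json;               // benign race: result identical
    }
    return json;
  }
}
{code}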


> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386-1.patch, HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12441) Suppress UnresolvedPathException in namenode log

2017-09-13 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165144#comment-16165144
 ] 

Rushabh S Shah commented on HDFS-12441:
---

+1 lgtm, non-binding.

> Suppress UnresolvedPathException in namenode log
> 
>
> Key: HDFS-12441
> URL: https://issues.apache.org/jira/browse/HDFS-12441
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-12441.patch
>
>
> {{UnresolvedPathException}} is thrown as a normal part of resolving symlinks. 
> This doesn't need to be logged at all.
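
One plausible way to suppress this (not confirmed to be what the attached 
patch does) is to register the exception as "terse" on the NameNode RPC 
server, so only a one-line summary is logged instead of a stack trace:
{code}
// Hedged sketch: mark UnresolvedPathException as a terse exception so the
// RPC server logs it without a stack trace.
import org.apache.hadoop.hdfs.protocol.UnresolvedPathException;
import org.apache.hadoop.ipc.Server;

static void suppressSymlinkNoise(Server rpcServer) {
  rpcServer.addTerseExceptions(UnresolvedPathException.class);
}
{code}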



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11612) Ozone: Cleanup Checkstyle issues

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165110#comment-16165110
 ] 

Hadoop QA commented on HDFS-11612:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
30 unchanged - 95 fixed = 31 total (was 125) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886891/HDFS-11612-HDFS-7240.001.patch
 |
| Optional Tests |

[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12323:
---
Attachment: HDFS-12323.003.patch

Thanks Konstantin. Fixed the whitespace issue and an unused import issue in 
v003 patch. Sorry about that.

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, 
> HDFS-12323.002.patch, HDFS-12323.003.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.
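
Spelling out the arithmetic in the description with illustrative numbers:
{code}
// Illustrative numbers only, not the actual QJM code. With timeout T,
// extending the deadline by a single T (the HDFS-10733 fix) still cannot
// cover a GC pause of 2T.
long T = 20_000L;                            // QJM timeout, ms
long start = 0L;
long originalDeadline = start + T;           // response originally due here
long bumpedDeadline = originalDeadline + T;  // extended by one timeout length
long gcPauseEnd = start + 2 * T;             // full GC lasted twice the timeout
// gcPauseEnd == bumpedDeadline: the pause consumes the entire extension,
// so any work done after the pause lands past the deadline and the NN
// still throws TimeoutException and terminates itself.
{code}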



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout

2017-09-13 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12323:
---
Target Version/s: 2.7.5

> NameNode terminates after full GC thinking QJM unresponsive if full GC is 
> much longer than timeout
> --
>
> Key: HDFS-12323
> URL: https://issues.apache.org/jira/browse/HDFS-12323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.4
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, 
> HDFS-12323.002.patch
>
>
> HDFS-10733 attempted to fix the issue where the Namenode process would 
> terminate itself if it had a GC pause which lasted longer than the QJM 
> timeout, since it would think that the QJM had taken too long to respond. 
> However, it only bumps up the timeout expiration by one timeout length, so if 
> the GC pause was e.g. 2x the length of the timeout, a TimeoutException will 
> be thrown and the NN will still terminate itself.
> Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we 
> have also seen this issue on a real cluster even after HDFS-10733 is applied.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12438) Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to dfs.datanode.ec.reconstruction.threads

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12438:
---
Summary: Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to 
dfs.datanode.ec.reconstruction.threads  (was: Rename 
dfs.datanode.ec.reconstruction.stripedblock.threads.size to 
dfs.datanode.ec.reconstruction.stripedblock.threads)

> Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to 
> dfs.datanode.ec.reconstruction.threads
> -
>
> Key: HDFS-12438
> URL: https://issues.apache.org/jira/browse/HDFS-12438
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12438.001.patch, HDFS-12438.002.patch
>
>
> We should rename this config key to match other config keys used to size 
> thread pools.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12438) Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to dfs.datanode.ec.reconstruction.threads

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12438:
---
Release Note: 


Config key `dfs.datanode.ec.reconstruction.stripedblock.threads.size` has been 
renamed to `dfs.datanode.ec.reconstruction.threads`.

  was:


Config key `dfs.datanode.ec.reconstruction.stripedblock.threads.size` has been 
renamed to `dfs.datanode.ec.reconstruction.stripedblock.threads`.


> Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to 
> dfs.datanode.ec.reconstruction.threads
> -
>
> Key: HDFS-12438
> URL: https://issues.apache.org/jira/browse/HDFS-12438
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12438.001.patch, HDFS-12438.002.patch
>
>
> We should rename this config key to match other config keys used to size 
> thread pools.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12441) Suppress UnresolvedPathException in namenode log

2017-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165081#comment-16165081
 ] 

Hadoop QA commented on HDFS-12441:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestLargeBlock |
|   | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12441 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886894/HDFS-12441.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6aca70c8cd00 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fa6cc43 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21121/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21121/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Bu

[jira] [Updated] (HDFS-12438) Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to dfs.datanode.ec.reconstruction.stripedblock.threads

2017-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12438:
---
Attachment: HDFS-12438.002.patch

Thanks for taking a look, Kai! Sure, I like that even better. Renamed again.

> Rename dfs.datanode.ec.reconstruction.stripedblock.threads.size to 
> dfs.datanode.ec.reconstruction.stripedblock.threads
> --
>
> Key: HDFS-12438
> URL: https://issues.apache.org/jira/browse/HDFS-12438
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12438.001.patch, HDFS-12438.002.patch
>
>
> We should rename this config key to match other config keys used to size 
> thread pools.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165068#comment-16165068
 ] 

Vinayakumar B commented on HDFS-12420:
--

bq. In spite of the -force option or the prompt for Y/N, admins do make 
mistakes and end up losing data. In a real production cluster with real data, 
why would someone want to do a format? In dev/qa clusters, I can see the need 
for format.
Yes, I agree that admins can make mistakes. In a real cluster the 'format' 
command (especially with -force) should be used with the utmost attention, the 
same as 'rm -r' in linux.
bq. Another option is to configure the cluster as "production" mode, where 
format will not be allowed. Dev/test clusters can be configured with 'dev' 
mode, where format is allowed. 
Still, if you insist on disallowing format in 'production' clusters, this 
option looks good, provided the default value is set to 'dev' mode to keep the 
current 'prompt' behavior as is. A possible shape of that check is sketched 
below.
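
{code}
// Hedged sketch of the "production mode" idea. The config key below is
// hypothetical, invented here purely for illustration; it does not exist
// in Hadoop.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

static void checkFormatAllowed(Configuration conf) throws IOException {
  String mode = conf.get("dfs.namenode.cluster.mode", "dev");  // hypothetical key
  if ("production".equalsIgnoreCase(mode)) {
    throw new IOException(
        "namenode -format is disabled while the cluster is in production mode");
  }
  // 'dev' mode falls through to the existing Y/N prompt and -force handling.
}
{code}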

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of the NameNode in a 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code}hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12378) TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165067#comment-16165067
 ] 

Andrew Wang commented on HDFS-12378:


I think we can fix the flaky tests after beta1. If people would like to post 
patches to disable in the meantime, that's a good short-term fix. Of course, 
patches to fix would be even better.

> TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk
> --
>
> Key: HDFS-12378
> URL: https://issues.apache.org/jira/browse/HDFS-12378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
>
> Saw on 
> https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/org.apache.hadoop.hdfs/TestClientProtocolForPipelineRecovery/testZeroByteBlockRecovery/:
> Error Message
> {noformat}
> Failed to replace a bad datanode on the existing pipeline due to no more good 
> datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
>  The current failed datanode replacement policy is ALWAYS, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]).
>  The current failed datanode replacement policy is ALWAYS, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
> {noformat}
> Standard Output
> {noformat}
> 2017-08-30 18:02:37,714 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:<init>(469)) - starting cluster: numNameNodes=1, 
> numDataNodes=3
> Formatting using clusterid: testClusterID
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSEditLog 
> (FSEditLog.java:newInstance(224)) - Edit logging is async:false
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:<init>(742)) - KeyProvider: null
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystemLock.java:<init>(120)) - fsLock is fair: true
> 2017-08-30 18:02:37,716 [main] INFO  namenode.FSNamesystem 
> (FSNamesystemLock.java:<init>(136)) - Detailed lock hold time metrics 
> enabled: false
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:<init>(763)) - fsOwner = jenkins (auth:SIMPLE)
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:<init>(764)) - supergroup  = supergroup
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:<init>(765)) - isPermissionEnabled = true
> 2017-08-30 18:02:37,717 [main] INFO  namenode.FSNamesystem 
> (FSNamesystem.java:<init>(776)) - HA Enabled: false
> 2017-08-30 18:02:37,718 [main] INFO  common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
> 2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
> (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: 
> configured=1000, counted=60, effected=1000
> 2017-08-30 18:02:37,718 [main] INFO  blockmanagement.DatanodeManager 
> (DatanodeManager.java:<init>(309)) - 
> dfs.namenode.datanode.registration.ip-hostname-check=true
> 2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
> (InvalidateBlocks.java:printBlockDeletionTime(76)) - 
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2017-08-30 18:02:37,719 [main] INFO  blockmanagement.BlockManager 
> (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will 
> start around 2017 Aug 30

[jira] [Commented] (HDFS-12310) [SPS]: Provide an option to track the status of in progress requests

2017-09-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165061#comment-16165061
 ] 

Andrew Wang commented on HDFS-12310:


API looks good to me, sounds like what Uma and I discussed.

I was hoping we could find a way of implementing "-w" without doing a full 
recursive traversal each time. Uma alludes to this with "we could cache the 
result for 1 min or 5 min". We could fall back to a recursive traversal on 
error, on failover, or in other cases like that.
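
As a rough sketch of that idea (the class, the status type, and the 60s TTL 
are illustrative assumptions, not the actual SPS API): serve the last 
aggregated status while it is fresh, and recompute via the full traversal on 
a miss, expiry, error, or failover.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

public class CachedSpsStatusTracker {
  private static final long TTL_MS = 60_000L; // "cache the result for 1 min"

  private static final class CacheEntry {
    final String status;      // stand-in for an aggregated SPS status
    final long computedAtMs;
    CacheEntry(String status, long computedAtMs) {
      this.status = status;
      this.computedAtMs = computedAtMs;
    }
  }

  private final AtomicReference<CacheEntry> cache = new AtomicReference<>();

  public String getStatus() {
    CacheEntry e = cache.get();
    long now = System.currentTimeMillis();
    if (e != null && now - e.computedAtMs < TTL_MS) {
      return e.status; // fresh enough: skip the recursive traversal
    }
    // Miss, expiry, error, or failover: fall back to the full traversal.
    String fresh = recomputeByFullTraversal();
    cache.set(new CacheEntry(fresh, now));
    return fresh;
  }

  private String recomputeByFullTraversal() {
    // Placeholder for the expensive recursive walk over tracked paths.
    return "IN_PROGRESS";
  }
}
{code}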

> [SPS]: Provide an option to track the status of in progress requests
> 
>
> Key: HDFS-12310
> URL: https://issues.apache.org/jira/browse/HDFS-12310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
>
> As per [~andrew.wang]'s review comments in HDFS-10285, this is the JIRA for 
> tracking the options for how we track the progress of SPS requests.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165043#comment-16165043
 ] 

Jitendra Nath Pandey commented on HDFS-12420:
-

In spite of the -force option and the Y/N prompt, admins do make mistakes and 
end up losing data. In a real production cluster with real data, why would 
someone want to do a format? In dev/QA clusters, I can see the need for it. 
Another option is to configure the cluster in a "production" mode, where format 
will not be allowed; dev/test clusters can be configured in a 'dev' mode, where 
format is allowed.
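
As a rough illustration, a minimal sketch of such a mode gate follows. The 
configuration key {{dfs.namenode.cluster.mode}} and its values are 
hypothetical; HDFS has no such setting today.

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class FormatModeGuard {
  // Hypothetical key: marks a cluster as "production" or "dev".
  static final String CLUSTER_MODE_KEY = "dfs.namenode.cluster.mode";

  /** Throws if a format is attempted on a cluster marked "production". */
  public static void checkFormatAllowed(Configuration conf) {
    String mode = conf.get(CLUSTER_MODE_KEY, "dev");
    if ("production".equalsIgnoreCase(mode)) {
      throw new IllegalStateException(
          "Refusing to format: " + CLUSTER_MODE_KEY + " is 'production'. "
          + "Reconfigure the cluster as 'dev' if you really intend to format.");
    }
  }
}
{code}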

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of Namenode in 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165029#comment-16165029
 ] 

Vinayakumar B commented on HDFS-12420:
--

bq. Don't we already have the y/n check when data exists? Why do we need 
another?
Yes, we do have the prompt; it is the exact line being removed in the patch: 
{{fsImage.confirmFormat(force, isInteractive)}}.
A user can format over the existing data if they pass the -force flag or 
answer 'y' to the prompt.

I too wanted to understand the real need for completely disabling format.
bq. If someone really wants to delete the complete fsImage, they can first 
delete the metadata dir
How can you manually delete the shared edits dir on the journal nodes?

I think the current behavior of format works fine; the -force option should 
not be used too lightly.
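
As a rough illustration of the behavior described above, here is a simplified 
sketch of the confirm-format decision (illustrative only, not the actual 
FSImage.confirmFormat implementation):

{code:java}
import java.util.Scanner;

public class ConfirmFormatSketch {
  /** Returns true when formatting over existing data may proceed. */
  static boolean confirmFormat(boolean force, boolean interactive) {
    if (force) {
      return true;   // -force bypasses the prompt entirely
    }
    if (!interactive) {
      return false;  // non-interactive and no -force: refuse to format
    }
    System.out.print("Re-format filesystem? (Y or N) ");
    String answer = new Scanner(System.in).nextLine().trim();
    return answer.equalsIgnoreCase("y");
  }
}
{code}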

> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of Namenode in 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165025#comment-16165025
 ] 

Lei (Eddy) Xu commented on HDFS-12395:
--

LGTM.  

{code:java}
// assertTrue("Edits " + editsStored + " should have all op codes",
//     hasAllOpCodes(editsStored));
{code}
Is this going to be uncommented once OEV support is implemented?

+1 pending.

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch, 
> HDFS-12395.003.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165023#comment-16165023
 ] 

Anu Engineer commented on HDFS-12420:
-

bq. Don't we already have the y/n check when data exists? Why do we need 
another? 
We do, but a cluster owner, who was visibly distressed, pointed out that it is 
not very clear amid lots of other text on the screen.

We are just trying to avoid losing data through operator mistakes. I thought 
that you might have a concern about automation, which is why I flagged it for 
your consideration. Let me try to understand that a bit more: do you think 
people automate formatting their clusters? If they do, then preventing 
accidental data loss is all the more important.

With an HDFS user hat on, I think this is a good improvement to have. I would 
expect HDFS to refuse to format a cluster with data. But with a 
sysadmin/developer hat on, I do like the fact that I can format a cluster with 
data. I do that when I test and develop.

So in my mind, the question boils down to easier dev/ops cycles vs. user 
safety. The reason this is filed for 3.0 is that it might be our last 
opportunity to make this change.

bq. Completely breaks automation. Automation MUST work. 
I see that you are voting with the devops hat on, and I do not disagree. But 
this is a place where breaking the automation might avoid a disaster for some 
poor user. One more data point: this JIRA is based on real feedback from a 
real, large cluster. I am not apologizing for sloppy operation but trying to 
understand what we can do to prevent a user from making such a mistake.

I am presuming (please correct me if I am wrong) that you are not objecting to 
the change or the intent per se, but rather to the fact that we are outright 
refusing to format a cluster with Namenode metadata. Do you think adding a 
flag which says *-DothisIamReallySmart* would address the automation concern?
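
As a rough sketch of that suggestion (the flag name is from the comment above; 
the parsing is hypothetical, not actual NameNode argument handling):

{code:java}
public class FormatOverrideSketch {
  static final String OVERRIDE_FLAG = "-DothisIamReallySmart";

  /** Returns true when formatting may proceed despite existing metadata. */
  static boolean formatAllowed(String[] args, boolean dataExists) {
    if (!dataExists) {
      return true; // nothing to lose: format freely
    }
    for (String arg : args) {
      if (OVERRIDE_FLAG.equals(arg)) {
        return true; // operator explicitly accepted the risk
      }
    }
    return false; // refuse to format over existing Namenode metadata
  }
}
{code}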



> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of Namenode in 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format when data already exists

2017-09-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16165000#comment-16165000
 ] 

Allen Wittenauer commented on HDFS-12420:
-

The more I think about this, the more I'm -1:

Completely breaks automation. Automation MUST work.  

bq. Let's also make the -force option a no-op. We can continue to accept it but 
it should have no effect and we should print a warning saying that the force 
option is being ignored.

This just makes it worse.  HDFS-5138 was a disaster for automation when 
-finalize was made a no-op.  See HDFS-8241 for the follow-up to clean it up.  



> Disable Namenode format when data already exists
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch
>
>
> Disable NameNode format to avoid accidental formatting of Namenode in 
> production cluster. If someone really wants to delete the complete fsImage, 
> they can first delete the metadata dir and then run {code} hdfs namenode 
> -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


