[jira] [Commented] (HDFS-15020) Add a test case of storage type quota to TestHdfsAdmin.

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984193#comment-16984193
 ] 

Hadoop QA commented on HDFS-15020:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15020 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987031/HDFS-15020.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6af568fc1bd0 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2b452b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28419/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28419/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28419/testReport/ |
| Max. process+thread count | 2635 

[jira] [Comment Edited] (HDFS-13571) Dead DataNode Detector

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984186#comment-16984186
 ] 

Yiqun Lin edited comment on HDFS-13571 at 11/28/19 7:09 AM:


Dead node detection is useful and will be a nice improvement on the client side. 
I helped review and merge the changes in the recent sub-tasks. This feature is 
disabled by default. Any further comments/suggestions for this improvement are 
welcome.

Thanks [~leosun08] for the hard work, and thanks [~xiegang112], 
[~hexiaoqiao], [~weichiu], [~zhangchen] and [~zhangduo] for the discussions.

BTW, [~leosun08], can you help add a release note for this JIRA?


was (Author: linyiqun):
Dead node detection is useful and will be a nice improvement on the client side. 
I helped review and merge the changes in the recent sub-tasks. This feature is 
disabled by default. Any further comments/suggestions for this improvement are 
welcome.

Thanks [~leosun08] for the hard work. BTW, can you help add a release note 
for this JIRA?

> Dead DataNode Detector
> --
>
> Key: HDFS-13571
> URL: https://issues.apache.org/jira/browse/HDFS-13571
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DeadNodeDetectorDesign.pdf, HDFS-13571-2.6.diff, node 
> status machine.png
>
>
> Currently, the information about dead datanodes in DFSInputStream is stored 
> locally, so it cannot be shared among the input streams of the same 
> DFSClient. In our production environment, some datanodes die every day for 
> different causes. After the first input stream is blocked and detects this, 
> it cannot share this information with the others in the same DFSClient; 
> thus, the other input streams are still blocked by the dead node for some 
> time, which can cause bad service latency.
> To eliminate this impact of dead datanodes, we designed a dead datanode 
> detector, which detects the dead ones in advance and shares this information 
> among all the input streams in the same client. This improvement has been 
> online for some months and works fine. So, we decided to port it to 3.0 (the 
> versions used in our production environment are 2.4 and 2.6).
> I will do the porting work and upload the code later.
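For illustration, a minimal, purely generic Java sketch of the sharing idea
described above (this is not the actual HDFS-13571 API; the class and method
names are invented): one dead-node set kept at client scope and consulted or
updated by every input stream of that client.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a client-scoped registry of suspected dead datanodes,
// shared by all input streams belonging to the same client.
class SharedDeadNodeRegistry {
  private final Set<String> deadNodes = ConcurrentHashMap.newKeySet();

  // Called by the stream that first detects a dead/unresponsive datanode.
  void reportDead(String datanodeId) {
    deadNodes.add(datanodeId);
  }

  // Consulted by every stream before choosing a datanode to read from.
  boolean isDead(String datanodeId) {
    return deadNodes.contains(datanodeId);
  }

  // Allows a background probe to clear nodes that came back.
  void markAlive(String datanodeId) {
    deadNodes.remove(datanodeId);
  }
}
{code}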



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13571) Dead DataNode Detector

2019-11-27 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDFS-13571.
--
Resolution: Fixed

> Dead DataNode Detector
> --
>
> Key: HDFS-13571
> URL: https://issues.apache.org/jira/browse/HDFS-13571
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DeadNodeDetectorDesign.pdf, HDFS-13571-2.6.diff, node 
> status machine.png
>
>
> Currently, the information about dead datanodes in DFSInputStream is stored 
> locally, so it cannot be shared among the input streams of the same 
> DFSClient. In our production environment, some datanodes die every day for 
> different causes. After the first input stream is blocked and detects this, 
> it cannot share this information with the others in the same DFSClient; 
> thus, the other input streams are still blocked by the dead node for some 
> time, which can cause bad service latency.
> To eliminate this impact of dead datanodes, we designed a dead datanode 
> detector, which detects the dead ones in advance and shares this information 
> among all the input streams in the same client. This improvement has been 
> online for some months and works fine. So, we decided to port it to 3.0 (the 
> versions used in our production environment are 2.4 and 2.6).
> I will do the porting work and upload the code later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13571) Dead DataNode Detector

2019-11-27 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13571:
-
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed

Dead node detection is useful and will be a nice improvement on the client side. 
I helped review and merge the changes in the recent sub-tasks. This feature is 
disabled by default. Any further comments/suggestions for this improvement are 
welcome.

Thanks [~leosun08] for the hard work. BTW, can you help add a release note 
for this JIRA?

> Dead DataNode Detector
> --
>
> Key: HDFS-13571
> URL: https://issues.apache.org/jira/browse/HDFS-13571
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0, 3.0.2
>Reporter: Gang Xie
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DeadNodeDetectorDesign.pdf, HDFS-13571-2.6.diff, node 
> status machine.png
>
>
> Currently, the information about dead datanodes in DFSInputStream is stored 
> locally, so it cannot be shared among the input streams of the same 
> DFSClient. In our production environment, some datanodes die every day for 
> different causes. After the first input stream is blocked and detects this, 
> it cannot share this information with the others in the same DFSClient; 
> thus, the other input streams are still blocked by the dead node for some 
> time, which can cause bad service latency.
> To eliminate this impact of dead datanodes, we designed a dead datanode 
> detector, which detects the dead ones in advance and shares this information 
> among all the input streams in the same client. This improvement has been 
> online for some months and works fine. So, we decided to port it to 3.0 (the 
> versions used in our production environment are 2.4 and 2.6).
> I will do the porting work and upload the code later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984183#comment-16984183
 ] 

Hudson commented on HDFS-15019:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17707/])
HDFS-15019. Refactor the unit test of TestDeadNodeDetection. Contributed 
(yqlin: rev c3659f8f94bef7cfad0c3fb04391a7ffd4221679)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java


> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-15019:
-
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed this to trunk with the checkstyle issue fixed.
Thanks [~leosun08] for the contribution.

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984175#comment-16984175
 ] 

Yiqun Lin commented on HDFS-15019:
--

LGTM, +1.

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984170#comment-16984170
 ] 

Aiphago commented on HDFS-14986:


Thanks a lot for the review [~linyiqun].

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Affects Versions: 2.10.0
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}
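For illustration, a minimal Java sketch (not the HDFS code itself; the names are
invented) of why copying a live replica collection while another thread mutates
it can throw ConcurrentModificationException, and how taking the copy under the
same lock as the writers avoids it:

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: the copy constructor iterates the source collection, so a
// concurrent structural modification can trigger ConcurrentModificationException.
class ReplicaSnapshotSketch {
  private final List<String> replicas = new ArrayList<>();

  // Unsafe if another thread adds/removes replicas while this copy runs.
  Set<String> unsafeDeepCopy() {
    return new HashSet<>(replicas);
  }

  // Safer: the copy and the mutations are serialized on the same lock.
  synchronized Set<String> lockedDeepCopy() {
    return new HashSet<>(replicas);
  }

  synchronized void addReplica(String replica) {
    replicas.add(replica);
  }

  synchronized void removeReplica(String replica) {
    replicas.remove(replica);
  }
}
{code}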



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14546) Document block placement policies

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984166#comment-16984166
 ] 

Ayush Saxena edited comment on HDFS-14546 at 11/28/19 6:17 AM:
---

bq.  how this will be applied to the master?
The person who commits your patch will apply it locally and commit it to 
the master (trunk).
You handle the comments in the git PR and get a combined patch.


was (Author: ayushtkn):
bq.  how this will be applied to the master?
The person who commits your patch will apply it locally and commit it to 
the master (trunk).


> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them explaining their 
> particularities and probably how to setup each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984166#comment-16984166
 ] 

Ayush Saxena commented on HDFS-14546:
-

bq.  how this will be applied to the master?
The person who commits your patch will apply it locally and commit it to 
the master (trunk).


> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them explaining their 
> particularities and probably how to setup each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Amithsha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984164#comment-16984164
 ] 

Amithsha commented on HDFS-14546:
-

[~ayushtkn] since the ticket has been marked as Patch Available, how will this 
be applied to the master? I just want to know the process.

Also, for the git comments, I will take those up separately and fix them.

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them explaining their 
> particularities and probably how to setup each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984152#comment-16984152
 ] 

Hadoop QA commented on HDFS-15019:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15019 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987014/HDFS-15019.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f33d23284ca7 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82ad9b5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28417/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28417/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984147#comment-16984147
 ] 

Ayush Saxena commented on HDFS-14960:
-

[~Jim_Brennan] you can use the method {{verifyBlockPlacement(..)}} by providing 
dummy locations, say locations in different racks but in the same node group. 
For the default BPP the verify result should be true, but for the NodeGroup BPP 
it shall be false.

You can reach this method by:

{code:java}
  BlockPlacementPolicy replicator = cluster.getNameNode(0).getNamesystem()
  .getBlockManager().getBlockPlacementPolicy();
  replicator.verifyBlockPlacement(locs, numOfReplicas, blockSize, 
storagePolicy)
{code}

Check whether you can achieve the result this way; if not, I will try to find 
some other solution for you.


> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15012) NN fails to parse Edit logs after applying HDFS-13101

2019-11-27 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-15012:
--

Assignee: Shashikant Banerjee

> NN fails to parse Edit logs after applying HDFS-13101
> -
>
> Key: HDFS-15012
> URL: https://issues.apache.org/jira/browse/HDFS-15012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Eric Lin
>Assignee: Shashikant Banerjee
>Priority: Critical
>
> After applying HDFS-13101, and deleting and creating a large number of 
> snapshots, the SNN exited with the error below:
>   
> {code:sh}
> 2019-11-18 08:28:06,528 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation DeleteSnapshotOp [snapshotRoot=/path/to/hdfs/file, 
> snapshotName=distcp-3479-31-old, 
> RpcClientId=b16a6cb5-bdbb-45ae-9f9a-f7dc57931f37, Rpc
> CallId=1]
> java.lang.AssertionError: Element already exists: 
> element=partition_isactive=true, DELETED=[partition_isactive=true]
> at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
> at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
> at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:240)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference.cleanSubtree(INodeReference.java:332)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.cleanSubtree(INodeReference.java:583)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:760)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:301)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:141)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:903)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:756)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:324)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
> {code}
> We confirmed that fsimage and edit files were NOT corrupted, as reverting 
> HDFS-13101 fixed the issue. So the logic introduced in HDFS-13101 is broken 
> and fails to parse edit log files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15020) Add a test case of storage type quota to TestHdfsAdmin.

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984138#comment-16984138
 ] 

Ayush Saxena edited comment on HDFS-15020 at 11/28/19 4:24 AM:
---

Isn't this already covered in {{TestQuota}}?
If not, check whether you can use the method {{checkContentSummary}} there 
instead of the new one you are creating.
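For reference, a minimal sketch of how such a storage-type-quota assertion could
look (assumptions: it runs inside the existing test where a DistributedFileSystem
{{dfs}} and a {{conf}} are already set up; the path, quota value, and assertions
are illustrative only):

{code:java}
// Assumed to run inside the existing test, where `dfs` and `conf` are set up.
final Path dir = new Path("/test/quota");
dfs.mkdirs(dir);

HdfsAdmin admin = new HdfsAdmin(dfs.getUri(), conf);
// Set a DISK quota on the directory, then verify it via the content summary.
admin.setQuotaByStorageType(dir, StorageType.DISK, 8 * 1024 * 1024);

ContentSummary summary = dfs.getContentSummary(dir);
assertEquals(8 * 1024 * 1024, summary.getTypeQuota(StorageType.DISK));
{code}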


was (Author: ayushtkn):
Isn't this already covered?

> Add a test case of storage type quota to TestHdfsAdmin.
> ---
>
> Key: HDFS-15020
> URL: https://issues.apache.org/jira/browse/HDFS-15020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15020.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15020) Add a test case of storage type quota to TestHdfsAdmin.

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984138#comment-16984138
 ] 

Ayush Saxena commented on HDFS-15020:
-

Isn't this already covered?

> Add a test case of storage type quota to TestHdfsAdmin.
> ---
>
> Key: HDFS-15020
> URL: https://issues.apache.org/jira/browse/HDFS-15020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15020.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15020) Add a test case of storage type quota to TestHdfsAdmin.

2019-11-27 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15020:
---
Attachment: HDFS-15020.001.patch
Status: Patch Available  (was: Open)

> Add a test case of storage type quota to TestHdfsAdmin.
> ---
>
> Key: HDFS-15020
> URL: https://issues.apache.org/jira/browse/HDFS-15020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-15020.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15020) Add a test case of storage type quota to TestHdfsAdmin.

2019-11-27 Thread Jinglun (Jira)
Jinglun created HDFS-15020:
--

 Summary: Add a test case of storage type quota to TestHdfsAdmin.
 Key: HDFS-15020
 URL: https://issues.apache.org/jira/browse/HDFS-15020
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jinglun
Assignee: Jinglun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984126#comment-16984126
 ] 

hemanthboyina commented on HDFS-14960:
--

If possible, we can assert the cluster map from DFSNetworkTopology; it is a 
way of confirming that the topology is set to NetworkTopologyWithNodeGroup.
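For illustration, such an assertion could look roughly like this (assumption:
the usual NameNode accessors are reachable from the test's MiniDFSCluster
{{cluster}}; double-check the exact getters in the target branch):

{code:java}
// Assumed to run inside the test, where `cluster` is the MiniDFSCluster.
NetworkTopology topology = cluster.getNamesystem(0).getBlockManager()
    .getDatanodeManager().getNetworkTopology();
assertTrue(topology instanceof NetworkTopologyWithNodeGroup);
{code}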

> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984125#comment-16984125
 ] 

Fei Hui commented on HDFS-14998:


[~csun] [~ayushtkn] Thanks for your comments.
{quote}
ZKFC does not need to be turned on the Observer NameNode unless transition 
between between Observer and Standby is required in your cluster
{quote}

{quote}
the only benefit for running ZKFC on Observer NameNode is to enable dynamic 
transition from Observer to Standby role (which will then automatically join 
the Zookeeper controlled failover group) and vise versa.
{quote}

I'm confused about these comments; right now ZKFC is for dynamic transition 
between Active and Standby.
HDFS-14130 aims to prevent Observers from being commanded to transition to 
SBN and from participating in the ANN election.

I think maybe we should change it to the following:
{quote}
ZKFC does not need to be enabled on the Observer NameNode unless dynamic 
transition between Active and Standby is required in your cluster after you 
transition the Observer to Standby.
{quote}

{quote}
the only benefit of running ZKFC on the Observer NameNode is to enable dynamic 
transition between Active and Standby after you transition the Observer to the 
Standby role (which will then automatically join the ZooKeeper controlled 
failover group).
{quote}

[~csun] [~ayushtkn] Please let me know whether this is correct. Thanks.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc; the Observer 
> NameNode can run with ZKFC running



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984118#comment-16984118
 ] 

Hadoop QA commented on HDFS-15013:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987015/HDFS-15013.002.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 6768cc90b692 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82ad9b5 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28418/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, HDFS-15013.002.patch, 
> image-2019-11-26-10-05-39-640.png, image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984105#comment-16984105
 ] 

Íñigo Goiri commented on HDFS-14960:


The whole point of NetworkTopologyWithNodeGroup is that it places some nodes in 
the same category.
So if the topology is not set to NetworkTopologyWithNodeGroup, we should fail.
The proper way would be to have a test that would fail because it is not getting 
the expected functionality from DFSNetworkTopology.

In addition to that, we can add the pre-condition.
However, the priority is to have a test that would only pass with 
NetworkTopologyWithNodeGroup.

> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984102#comment-16984102
 ] 

Hudson commented on HDFS-14986:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17705 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17705/])
HDFS-14986. ReplicaCachingGetSpaceUsed throws (yqlin: rev 
2b452b4e6063072b2bec491edd3f412eb7ac21f3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaCachingGetSpaceUsed.java


> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Affects Versions: 2.10.0
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-14986:
-
Affects Version/s: 2.10.0

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Affects Versions: 2.10.0
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-14986:
-
Fix Version/s: 2.11.0
   2.10.1
   3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.10.
Thanks [~Aiphag0] for the contribution and thanks [~jianliang.wu] for reporting 
this.

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984096#comment-16984096
 ] 

Yiqun Lin commented on HDFS-14986:
--

LGTM, +1. Committing this.

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984093#comment-16984093
 ] 

HuangTao commented on HDFS-15013:
-

Uploaded the v002 patch to fix non-HA mode.

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, HDFS-15013.002.patch, 
> image-2019-11-26-10-05-39-640.png, image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984072#comment-16984072
 ] 

HuangTao edited comment on HDFS-15013 at 11/28/19 2:32 AM:
---

[~surendrasingh] Sorry, I didn't think of non-HA mode; I will fix it right away.

[~ayushtkn] I just moved render() into setInterval(), which checks the HTTP 
request to /conf every 5 ms. Once the request finishes, it calls render().


was (Author: marvelrock):
[~surendrasingh]  sorry, I didn't think of non-HA mode, I will fix it right 
away.

[~ayushtkn] I just moved render() into setInterval()

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, HDFS-15013.002.patch, 
> image-2019-11-26-10-05-39-640.png, image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-15013:

Attachment: HDFS-15013.002.patch

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, HDFS-15013.002.patch, 
> image-2019-11-26-10-05-39-640.png, image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984089#comment-16984089
 ] 

Lisheng Sun commented on HDFS-15019:


Thanks  [~linyiqun] for your review.

The v001 patch refactors the unit test of TestDeadNodeDetection.

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15019:
---
Attachment: HDFS-15019.001.patch
Status: Patch Available  (was: In Progress)

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15019 started by Lisheng Sun.
--
> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-15013:

Fix Version/s: 3.3.0

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, image-2019-11-26-10-05-39-640.png, 
> image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984072#comment-16984072
 ] 

HuangTao edited comment on HDFS-15013 at 11/28/19 1:41 AM:
---

[~surendrasingh]  sorry, I didn't think of non-HA mode, I will fix it right 
away.

[~ayushtkn] I just moved render() into setInterval()


was (Author: marvelrock):
[~surendrasingh] [~ayushtkn] sorry, I didn't think of non-HA mode, I will fix 
it right away

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Attachments: HDFS-15013.001.patch, image-2019-11-26-10-05-39-640.png, 
> image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984072#comment-16984072
 ] 

HuangTao commented on HDFS-15013:
-

[~surendrasingh] [~ayushtkn] sorry, I didn't think of non-HA mode, I will fix 
it right away

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Attachments: HDFS-15013.001.patch, image-2019-11-26-10-05-39-640.png, 
> image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984032#comment-16984032
 ] 

Ayush Saxena commented on HDFS-14998:
-

Fair enough. Thanx
 [~ferhui] Please update accordingly.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984031#comment-16984031
 ] 

Chao Sun commented on HDFS-14998:
-

Good point. I think we are on the same page then. I have a few comments on the 
latest patch:

1. 
{quote}
In general ZKFC shouldn't be turned on the Observer NameNode since it doesn't 
add any such value, though now it is not mandatory to do so
{quote}

How about:
{quote}
ZKFC does not need to be turned on for the Observer NameNode unless transition 
between Observer and Standby is required in your cluster
{quote}

2. 
{quote}
If **dfs.ha.automatic-failover.enabled** is turned on, you could run ZKFC on 
the namenode for observer, but it is not recommended because the Observer 
NameNode will not participate in  failover. In addition to that, you'll also 
need to add **forcemanual** flag to the **transitionToObserver** command:
{quote}

How about:
{quote}
If **dfs.ha.automatic-failover.enabled** is turned on, the only benefit for 
running ZKFC on Observer NameNode is to enable dynamic transition from Observer 
to Standby role (which will then automatically join the Zookeeper controlled 
failover group) and vice versa. If this is not desired, you can disable ZKFC on 
the Observer NameNode.
In addition, currently a **forcemanual** flag is required with the 
**transitionToObserver** command:
{quote}

3.
{quote}
After the namenode is transitioned to observer state, you could run ZKFC on 
this namenode if you want, but ZKFC doesn't do anything right now. When you 
transition the namenode to standby state, ZKFC running on this namenode will 
participate in automatic failover.
{quote}

This paragraph may not be necessary with the above change.

Let me know what you think. Thanks.



> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15015) Backport HDFS-5040 to branch-2

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983998#comment-16983998
 ] 

Hadoop QA commented on HDFS-15015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 259 unchanged - 5 fixed = 264 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:f555aa740b5 |
| JIRA Issue | HDFS-15015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986997/HDFS-15015-branch-2.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 63fd4c858e13 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| 

[jira] [Commented] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983980#comment-16983980
 ] 

Íñigo Goiri commented on HDFS-15010:


Not the fastest test but all this package is fairly expensive.
+1 on  [^HDFS-15010.05.patch].

> BlockPoolSlice#addReplicaThreadPool static pool should be initialized by 
> static method
> --
>
> Key: HDFS-15010
> URL: https://issues.apache.org/jira/browse/HDFS-15010
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15010.001.patch, HDFS-15010.02.patch, 
> HDFS-15010.03.patch, HDFS-15010.04.patch, HDFS-15010.05.patch
>
>
> The {{BlockPoolSlice#initializeAddReplicaPool()}} method currently initializes the 
> static thread pool instance. But when two {{BPServiceActor}} actors try to 
> load block pools in parallel, they may create different instances. 
> So the {{BlockPoolSlice#initializeAddReplicaPool()}} method should be a static 
> method.
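
For illustration only (this is not the {{BlockPoolSlice}} code): the race described above is the classic unsynchronized check-then-initialize on a shared static field. A minimal sketch, using a hypothetical {{addReplicaPool}} field, of the racy pattern and of the static synchronized initializer that closes the window:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LazyPoolInit {
  // Shared across all "block pool slices"; must be created exactly once.
  private static volatile ExecutorService addReplicaPool = null;

  // Racy version: two threads can both observe null and create two pools;
  // the later assignment silently replaces (and leaks) the earlier pool.
  static void initRacy() {
    if (addReplicaPool == null) {
      addReplicaPool = Executors.newFixedThreadPool(4);
    }
  }

  // Safe version: the class-level lock makes check-then-create atomic, which
  // is the effect of moving initialization into a static synchronized method.
  static synchronized void initSafe() {
    if (addReplicaPool == null) {
      addReplicaPool = Executors.newFixedThreadPool(4);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Two actors loading block pools in parallel, as in the report above.
    Thread a = new Thread(LazyPoolInit::initSafe);
    Thread b = new Thread(LazyPoolInit::initSafe);
    a.start();
    b.start();
    a.join();
    b.join();
    System.out.println("pool initialized once: " + addReplicaPool);
    addReplicaPool.shutdown();
  }
}
{code}

Whether the actual patch uses a static synchronized method or some other guard is up to the patch itself; the sketch only shows why the unsynchronized variant can produce two pools.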



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983971#comment-16983971
 ] 

Hadoop QA commented on HDFS-15010:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15010 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986989/HDFS-15010.05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 85901c5b2908 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e69628 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28413/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28413/testReport/ |
| Max. process+thread count | 2730 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Commented] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983957#comment-16983957
 ] 

Hadoop QA commented on HDFS-9695:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 454 unchanged - 0 fixed = 455 total (was 454) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 15s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
|  |  Call to equals(null) in new 
org.apache.hadoop.fs.http.server.HttpFSParametersProvider$FsActionParam(String) 
 At 
HttpFSParametersProvider.java:org.apache.hadoop.fs.http.server.HttpFSParametersProvider$FsActionParam(String)
  At HttpFSParametersProvider.java:[line 694] |
| Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-9695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986996/HDFS-9695.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88009d08b54c 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e69628 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (HDFS-15009) FSCK "-list-corruptfileblocks" return Invalid Entries

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983927#comment-16983927
 ] 

Ayush Saxena commented on HDFS-15009:
-

You could have used static import rather than using DFSUtil.isParentEntry(..)

> FSCK "-list-corruptfileblocks" return Invalid Entries
> -
>
> Key: HDFS-15009
> URL: https://issues.apache.org/jira/browse/HDFS-15009
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15009.001.patch, HDFS-15009.002.patch, 
> HDFS-15009.003.patch
>
>
> Scenario: if we have two directories dir1 and dir10, and only dir10 has 
> corrupt files, 
> then if we run -list-corruptfileblocks for dir1, the corrupt file count 
> shown for dir1 is actually that of dir10.
> {code:java}
>   while (blkIterator.hasNext()) {
> BlockInfo blk = blkIterator.next();
> final INodeFile inode = getBlockCollection(blk);
> skip++;
> if (inode != null) {
>   String src = inode.getFullPathName();
>   if (src.startsWith(path)){
> corruptFiles.add(new CorruptFileBlockInfo(src, blk));
> count++;
> if (count >= DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED)
>   break;
>   }
> }
>   } {code}
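
To make the scenario concrete: the quoted loop filters corrupt files with a plain {{startsWith(path)}}, so a query for /dir1 also matches files under /dir10. A parent-path check of the kind mentioned in the comment above ({{DFSUtil.isParentEntry}}) avoids that. The sketch below is a stand-alone illustration of the two checks and is not the actual HDFS implementation:

{code:java}
public class ParentPathCheck {
  // Illustrative stand-in for a parent-path check such as DFSUtil.isParentEntry;
  // this is not the HDFS source.
  static boolean isParentEntry(String path, String parent) {
    if (!path.startsWith(parent)) {
      return false;
    }
    // Either the paths are identical, or the character right after the parent
    // prefix must be a '/', so "/dir10" is not treated as being under "/dir1".
    return path.length() == parent.length()
        || parent.equals("/")
        || path.charAt(parent.length()) == '/';
  }

  public static void main(String[] args) {
    String corruptFile = "/dir10/file1";
    System.out.println(corruptFile.startsWith("/dir1"));       // true  - wrongly counted under /dir1
    System.out.println(isParentEntry(corruptFile, "/dir1"));   // false - correctly excluded
    System.out.println(isParentEntry(corruptFile, "/dir10"));  // true  - correctly included
  }
}
{code}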



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983926#comment-16983926
 ] 

Ayush Saxena commented on HDFS-15013:
-

Thanx Surendra for catching this. It seems to be an issue with the overview page only?
[~marvelrock] Why have you removed this line? Reverting this change makes the 
Overview page work.
{code:java}
-render();
{code}

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Attachments: HDFS-15013.001.patch, image-2019-11-26-10-05-39-640.png, 
> image-2019-11-26-10-09-07-952.png
>
>
> Now, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to an asynchronous method. The effect diagram is as 
> follows.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983923#comment-16983923
 ] 

Ayush Saxena edited comment on HDFS-14998 at 11/27/19 8:56 PM:
---

I don't object to, or say we should remove, the support for having ZKFC running on ONN. 
It's there. I am just saying that for a normal deployment, plain "Apache Hadoop" where 
an Observer cannot participate in ZKFC failover and there isn't any external tooling out 
there, the user doesn't need to keep ZKFC running most of the time and carry the 
burden of an extra process.
If someone has the scripts or alarms you suggest, they may turn ZKFC on; 
that's why we have supported the feature. My point is that in a basic 
deployment (where not everyone has scripts transitioning ONN to standby 
dynamically), users don't need to turn ZKFC on as a compulsion the way they would 
in a basic HA setup. 
We are just giving a suggestion that the ONN will not participate in automatic 
failover, so you may not need to turn it on in general. If someone has a 
use case like yours they are free to do that; suggestions are in general not for 
specific cases. We also don't keep switching between ONN and SNN in any such way.



was (Author: ayushtkn):
I don't object or say remove the support for having ZKFC running on ONN. Its 
there, I am just saying for a normal deployment, Just the "Apache Hadoop" where 
an Observer can not participate in ZKFC and there isn't any external help 
there, he doesn't need to have to get ZKFC running for the most of the time and 
let a burden of a process running.
if someone has the scripts or alarms as you suggest, he may turn the ZKFC on, 
thats why we have supported the featggure in. My point is in a basic 
deployment(Where not all people have the scripts transitioning ONN to standby 
dynamically), they don't need to turn the ZKFC on as a compulsion as they would 
do in a basic HA setup. 
We are just giving a suggestion that ONN will not participate in automatic 
failover so, you may not need to turn it up in general. If someone has a 
usecase as yours he is free to do that, Suggestions are in genreal not for 
specific cases. Even we don't keep on switching from ONN to SNN in any such way.


> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983923#comment-16983923
 ] 

Ayush Saxena commented on HDFS-14998:
-

I don't object or say remove the support for having ZKFC running on ONN. Its 
there, I am just saying for a normal deployment, Just the "Apache Hadoop" where 
an Observer can not participate in ZKFC and there isn't any external help 
there, he doesn't need to have to get ZKFC running for the most of the time and 
let a burden of a process running.
if someone has the scripts or alarms as you suggest, he may turn the ZKFC on, 
thats why we have supported the featggure in. My point is in a basic 
deployment(Where not all people have the scripts transitioning ONN to standby 
dynamically), they don't need to turn the ZKFC on as a compulsion as they would 
do in a basic HA setup. 
We are just giving a suggestion that ONN will not participate in automatic 
failover so, you may not need to turn it up in general. If someone has a 
usecase as yours he is free to do that, Suggestions are in genreal not for 
specific cases. Even we don't keep on switching from ONN to SNN in any such way.


> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983910#comment-16983910
 ] 

Chao Sun commented on HDFS-14998:
-

Thanks [~ayushtkn] for sharing your thoughts. 

By disabling ZKFC on ObserverNameNodes, we are adding more operational overhead 
on users. They need to write extra scripts for the failover, potentially modify 
the NameNode startup script, etc. Also, in our environment we have ZKFC alerts 
on NameNode hosts, which we have to disable for observers. If we want to enable 
dynamic transitions between observer and SBN, we'll also need to enable/disable 
this alert dynamically on the hosts, which is not straightforward.

If we allow ZKFC running on Observer, users can pretty much just use the same 
operational workflow today for managing NameNodes. Therefore, IMO we should 
think whether it is necessary to ask users to do extra work if they want to use 
this feature.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983893#comment-16983893
 ] 

Hadoop QA commented on HDFS-9695:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 454 unchanged - 0 fixed = 455 total (was 454) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 25s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
|  |  Call to equals(null) in new 
org.apache.hadoop.fs.http.server.HttpFSParametersProvider$FsActionParam(String) 
 At 
HttpFSParametersProvider.java:org.apache.hadoop.fs.http.server.HttpFSParametersProvider$FsActionParam(String)
  At HttpFSParametersProvider.java:[line 694] |
| Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-9695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986994/HDFS-9695.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d635a2ef0a2d 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e69628 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983892#comment-16983892
 ] 

Ayush Saxena commented on HDFS-14960:
-

Seems like I got confused, let me confirm. 
[~elgoiri] What exactly do you expect here: a test which should pass only with 
{{NetworkTopologyWithNodeGroup}}, or that this test class's tests shouldn't pass if the 
topology of the cluster isn't {{NetworkTopologyWithNodeGroup}}?



> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983887#comment-16983887
 ] 

Jim Brennan commented on HDFS-14960:


[~ayushtkn] I think the intention was to add a test case that will succeed for 
NetworkTopologyWithNodeGroup but would fail for DFSNetworkTopology.

 

> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983876#comment-16983876
 ] 

Ayush Saxena commented on HDFS-14960:
-

Will just asserting the cluster topology in @BeforeClass not work?
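
If that route is taken, the guard could look roughly like the sketch below. This is a hypothetical fragment: it assumes the test keeps a static {{MiniDFSCluster}} reference that is already initialized when the guard runs, and that the active topology is reachable through the namesystem's {{DatanodeManager}}; the exact accessor chain is an assumption and has not been verified against the current test.

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.NetworkTopologyWithNodeGroup;
import org.junit.BeforeClass;

public class TopologyGuardSketch {
  // Assumed to be created by the existing test setup before the guard runs.
  private static MiniDFSCluster cluster;

  @BeforeClass
  public static void checkTopology() {
    // Accessor chain (namesystem -> block manager -> datanode manager -> topology)
    // is an assumption; adjust to however the test exposes its topology.
    NetworkTopology topology = cluster.getNamesystem()
        .getBlockManager().getDatanodeManager().getNetworkTopology();
    assertTrue("expected NetworkTopologyWithNodeGroup but got "
            + topology.getClass().getName(),
        topology instanceof NetworkTopologyWithNodeGroup);
  }
}
{code}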

> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14901) RBF: Add Encryption Zone related ClientProtocol APIs

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983875#comment-16983875
 ] 

Ayush Saxena commented on HDFS-14901:
-

Thanx [~hemanthboyina] for the patch.

* {{createEncryption}} is using invokeSequential. Though it was there already and 
you just moved it, I feel it should check the order and then, if it is PathAll, 
it should use invokeConcurrent to support multiple destinations, as we are 
doing in {{reencryptEncryptionZone}}.
* For the test :

{code:java}
+routerDFS.mkdirs(ezPath1);
+routerDFS.mkdirs(ezPath);
+routerProtocol.createEncryptionZone("/ez", TEST_KEY);
+routerProtocol.createEncryptionZone("/ez1", TEST_KEY);
+EncryptionZone ez = routerProtocol.getEZForPath("/ez/file");
{code}
For mkdir you are using routerDFS, but for the Encryption API you are using 
routerProtocol. If there is no specific reason, you can use routerDFS for both and 
drop {{routerProtocol}} from the test.
* The number of Datanodes for the test is 2 as of now (the default, if you don't 
specify it explicitly). If I see it correctly, you are using a replication factor of 
1 for the file, so I guess you can reduce the number to 1.
* Why do you need this:

{code:java}
+cluster.deleteAllFiles();
{code}
if that was @Before it would still make sense, but in @BeforeClass why?


> RBF: Add Encryption Zone related ClientProtocol APIs
> 
>
> Key: HDFS-14901
> URL: https://issues.apache.org/jira/browse/HDFS-14901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14901.001.patch, HDFS-14901.002.patch
>
>
> Currently the listEncryptionZones, reencryptEncryptionZone, and listReencryptionStatus 
> APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15015) Backport HDFS-5040 to branch-2

2019-11-27 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-15015:

Attachment: HDFS-15015-branch-2.000.patch

> Backport HDFS-5040 to branch-2
> --
>
> Key: HDFS-15015
> URL: https://issues.apache.org/jira/browse/HDFS-15015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-15015-branch-2.000.patch
>
>
> HDFS-5040 added audit logging for several admin commands which are useful for 
> diagnosing and debugging. For instance, {{getDatanodeReport}} is an expensive 
> call and can be invoked by components such as RBF for metrics and others. 
> It's better to track them in audit log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-11-27 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983853#comment-16983853
 ] 

hemanthboyina commented on HDFS-9695:
-

Thanks for the review [~elgoiri] [~tasanuma].

Updated the patch [^HDFS-9695.003.patch] with the comments fixed. Please review.

> HTTPFS - CHECKACCESS operation missing
> --
>
> Key: HDFS-9695
> URL: https://issues.apache.org/jira/browse/HDFS-9695
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bert Hekman
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-9695.001.patch, HDFS-9695.002.patch, 
> HDFS-9695.003.patch
>
>
> Hi,
> The CHECKACCESS operation seems to be missing in HTTPFS. I'm getting the 
> following error:
> {code}
> QueryParamException: java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS
> {code}
> A quick look into the org.apache.hadoop.fs.http.client.HttpFSFileSystem class 
> reveals that CHECKACCESS is not defined at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-11-27 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-9695:

Attachment: HDFS-9695.003.patch

> HTTPFS - CHECKACCESS operation missing
> --
>
> Key: HDFS-9695
> URL: https://issues.apache.org/jira/browse/HDFS-9695
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bert Hekman
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-9695.001.patch, HDFS-9695.002.patch, 
> HDFS-9695.003.patch
>
>
> Hi,
> The CHECKACCESS operation seems to be missing in HTTPFS. I'm getting the 
> following error:
> {code}
> QueryParamException: java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS
> {code}
> A quick look into the org.apache.hadoop.fs.http.client.HttpFSFileSystem class 
> reveals that CHECKACCESS is not defined at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-27 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983852#comment-16983852
 ] 

Surendra Singh Lilhore commented on HDFS-15013:
---

[~marvelrock], after this change I am not able to open the NameNode UI.

Can you please check it again?

> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Attachments: HDFS-15013.001.patch, image-2019-11-26-10-05-39-640.png, 
> image-2019-11-26-10-09-07-952.png
>
>
> Currently, the overview tab loads /conf synchronously, as shown in the following picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue will change it to load asynchronously. The effect is shown in the 
> following diagram.
>  !image-2019-11-26-10-09-07-952.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983850#comment-16983850
 ] 

Hadoop QA commented on HDFS-12102:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-12102 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878635/HDFS-12102-003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28414/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HDFS-12102-001.patch, HDFS-12102-002.patch, 
> HDFS-12102-003.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-11-27 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-9695:

Attachment: HDFS-9695.002.patch

> HTTPFS - CHECKACCESS operation missing
> --
>
> Key: HDFS-9695
> URL: https://issues.apache.org/jira/browse/HDFS-9695
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bert Hekman
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-9695.001.patch, HDFS-9695.002.patch
>
>
> Hi,
> The CHECKACCESS operation seems to be missing in HTTPFS. I'm getting the 
> following error:
> {code}
> QueryParamException: java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS
> {code}
> A quick look into the org.apache.hadoop.fs.http.client.HttpFSFileSystem class 
> reveals that CHECKACCESS is not defined at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15014) RBF: WebHdfs chooseDatanode shouldn't call getDatanodeReport

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983837#comment-16983837
 ] 

Ayush Saxena commented on HDFS-15014:
-

Seems fair to improve. Do you propose any alternative?

> RBF: WebHdfs chooseDatanode shouldn't call getDatanodeReport 
> -
>
> Key: HDFS-15014
> URL: https://issues.apache.org/jira/browse/HDFS-15014
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Chao Sun
>Priority: Major
>
> Currently the {{chooseDatanode}} call (which is shared by {{open}}, 
> {{create}}, {{append}} and {{getFileChecksum}}) in RBF WebHDFS calls 
> {{getDatanodeReport}} from ALL downstream namenodes:
> {code}
>   private DatanodeInfo chooseDatanode(final Router router,
>   final String path, final HttpOpParam.Op op, final long openOffset,
>   final String excludeDatanodes) throws IOException {
> // We need to get the DNs as a privileged user
> final RouterRpcServer rpcServer = getRPCServer(router);
> UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
> RouterRpcServer.setCurrentUser(loginUser);
> DatanodeInfo[] dns = null;
> try {
>   dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
> } catch (IOException e) {
>   LOG.error("Cannot get the datanodes from the RPC server", e);
> } finally {
>   // Reset ugi to remote user for remaining operations.
>   RouterRpcServer.resetCurrentUser();
> }
> HashSet<DatanodeInfo> excludes = new HashSet<>();
> if (excludeDatanodes != null) {
>   Collection<String> collection =
>   getTrimmedStringCollection(excludeDatanodes);
>   for (DatanodeInfo dn : dns) {
> if (collection.contains(dn.getName())) {
>   excludes.add(dn);
> }
>   }
> }
> ...
> {code}
> The {{getDatanodeReport}} is very expensive (particularly in a large cluster) 
> as it needs to lock the {{DatanodeManager}}, which is also shared by calls such 
> as processing heartbeats. Check HDFS-14366 for a similar issue.
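
One illustrative direction (not a committed design for this JIRA) is to memoize the expensive report inside the router for a short interval, so repeated WebHDFS redirects stop fanning out {{getDatanodeReport}} to every downstream namenode. A minimal sketch; the class, its names and the TTL are hypothetical, and the privileged-user handling shown in the code above is elided:

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
import org.apache.hadoop.util.Time;

/** Hypothetical helper: caches the LIVE datanode report for a short TTL. */
public class CachedDatanodeReport {
  private final long ttlMs;
  private DatanodeInfo[] cached = null;
  private long lastFetchMs = 0;

  public CachedDatanodeReport(long ttlMs) {
    this.ttlMs = ttlMs;
  }

  public synchronized DatanodeInfo[] get(RouterRpcServer rpcServer) throws IOException {
    long now = Time.monotonicNow();
    if (cached == null || now - lastFetchMs > ttlMs) {
      // Same expensive call as in chooseDatanode(), but issued at most once per TTL.
      cached = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
      lastFetchMs = now;
    }
    return cached;
  }
}
{code}

A TTL of a few seconds would keep the exclusion logic approximately correct while removing the per-request fan-out; whether that staleness is acceptable is exactly the trade-off the question above is asking about.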



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14960) TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14960:

Summary: TestBalancerWithNodeGroup should not succeed with 
DFSNetworkTopology  (was: TesteBalancerWithNodeGroup should not succeed with 
DFSNetworkTopology)

> TestBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> 
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983831#comment-16983831
 ] 

Ayush Saxena commented on HDFS-14960:
-

Ideally we should change the test only: improve the test in a way that it 
fails if it isn't using {{NetworkTopologyWithNodeGroup}}, and refrain from 
making changes in the non-test code for this.

> TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> -
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14901) RBF: Add Encryption Zone related ClientProtocol APIs

2019-11-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983832#comment-16983832
 ] 

Íñigo Goiri commented on HDFS-14901:


Let's remove the sleep or wait until a particular cluster property is ready.
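
A minimal sketch of the polling alternative; {{GenericTestUtils.waitFor}} is an existing Hadoop test helper, while the condition method below is a placeholder for whatever state the sleep was waiting on:

{code:java}
// isEncryptionZoneVisible() is hypothetical; the enclosing test method declares the
// checked TimeoutException/InterruptedException that waitFor can throw.
GenericTestUtils.waitFor(() -> isEncryptionZoneVisible(), 100, 10000);  // poll every 100 ms, up to 10 s
{code}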

> RBF: Add Encryption Zone related ClientProtocol APIs
> 
>
> Key: HDFS-14901
> URL: https://issues.apache.org/jira/browse/HDFS-14901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14901.001.patch, HDFS-14901.002.patch
>
>
> Currently the listEncryptionZones, reencryptEncryptionZone and listReencryptionStatus 
> APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983826#comment-16983826
 ] 

Ayush Saxena commented on HDFS-14998:
-

bq.  if we should still recommend people to disable ZKFC on Observer NameNodes
Though running ZKFC on an Observer node is now supported, in a normal 
deployment I don't think there is a need to run ZKFC on the Observer node: 
while the namenode is in the Observer state it won't be doing much, and there 
is no point in running a service unnecessarily. If somebody has a script to 
dynamically change Observers to Standby, it can start the ZKFC too as part of 
that transition, rather than keeping the ZKFC running all the time alongside 
the Observer.
I feel it is fair enough to put up a recommendation not to keep the ZKFC 
running on ONN.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: an Observer 
> NameNode can run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2019-11-27 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HDFS-12102:


Assignee: Ahmed Hussein  (was: Ashwin Ramesh)

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HDFS-12102-001.patch, HDFS-12102-002.patch, 
> HDFS-12102-003.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so that 
> it doesn't take 3 weeks to report blocks since a corrupt block means 
> increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983809#comment-16983809
 ] 

Íñigo Goiri commented on HDFS-14960:


Let's add the check and make the test more specific so it would fail anyway 
without the check.

> TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> -
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983808#comment-16983808
 ] 

Hadoop QA commented on HDFS-14986:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 33s{color} | {color:orange} root: The patch generated 1 new + 117 unchanged 
- 0 fixed = 118 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986950/HDFS-14986.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7e4938caed7c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f2ea2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-15005) Backport HDFS-12300 to branch-2

2019-11-27 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983798#comment-16983798
 ] 

Chao Sun commented on HDFS-15005:
-

[~weichiu] could you take another look at this? The test failures seem 
unrelated. Thanks.

> Backport HDFS-12300 to branch-2
> ---
>
> Key: HDFS-15005
> URL: https://issues.apache.org/jira/browse/HDFS-15005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-15005-branch-2.000.patch, 
> HDFS-15005-branch-2.001.patch
>
>
> Having DT related information is very useful in audit log. This tracks effort 
> to backport HDFS-12300 to branch-2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-27 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983790#comment-16983790
 ] 

Surendra Singh Lilhore commented on HDFS-15010:
---

Fixed checkstyle.

> BlockPoolSlice#addReplicaThreadPool static pool should be initialized by 
> static method
> --
>
> Key: HDFS-15010
> URL: https://issues.apache.org/jira/browse/HDFS-15010
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15010.001.patch, HDFS-15010.02.patch, 
> HDFS-15010.03.patch, HDFS-15010.04.patch, HDFS-15010.05.patch
>
>
> The {{BlockPoolSlice#initializeAddReplicaPool()}} method currently initializes the 
> static thread pool instance. But when two {{BPServiceActor}} actors try to 
> load block pools in parallel, they may create different instances. 
> So {{BlockPoolSlice#initializeAddReplicaPool()}} should be a static 
> method.
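
A minimal sketch of the idea, with illustrative field names and an illustrative configuration key (not the actual patch):

{code:java}
// Guard the shared pool with a static synchronized initializer so that two
// BPServiceActor threads loading block pools in parallel cannot create two pools.
private static ForkJoinPool addReplicaThreadPool = null;

private static synchronized void initializeAddReplicaPool(Configuration conf) {
  if (addReplicaThreadPool == null) {
    // Configuration key name below is illustrative only.
    int parallelism = conf.getInt("dfs.datanode.volumes.replica-add.threadpool.size",
        Runtime.getRuntime().availableProcessors());
    addReplicaThreadPool = new ForkJoinPool(parallelism);
  }
}
{code}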



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-27 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15010:
--
Attachment: HDFS-15010.05.patch

> BlockPoolSlice#addReplicaThreadPool static pool should be initialized by 
> static method
> --
>
> Key: HDFS-15010
> URL: https://issues.apache.org/jira/browse/HDFS-15010
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15010.001.patch, HDFS-15010.02.patch, 
> HDFS-15010.03.patch, HDFS-15010.04.patch, HDFS-15010.05.patch
>
>
> The {{BlockPoolSlice#initializeAddReplicaPool()}} method currently initializes the 
> static thread pool instance. But when two {{BPServiceActor}} actors try to 
> load block pools in parallel, they may create different instances. 
> So {{BlockPoolSlice#initializeAddReplicaPool()}} should be a static 
> method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983742#comment-16983742
 ] 

Chao Sun commented on HDFS-14998:
-

With HDFS-14130 and HDFS-14961 both resolved, I'm not sure if we should still 
recommend people to disable ZKFC on Observer NameNodes. Instead, shall we 
remove the ZKFC part from the README and just state that you cannot have 
Observers participate in the auto-failover but have to manually transition them 
to SBN first?

cc [~shv], [~xkrogen] and [~vagarychen] also.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: an Observer 
> NameNode can run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-27 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983729#comment-16983729
 ] 

Chao Sun commented on HDFS-14998:
-

Thanks [~ferhui]. I'm taking a look now.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch, 
> HDFS-14998.003.patch
>
>
> After HDFS-14130, we should update the Observer NameNode doc: an Observer 
> NameNode can run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983711#comment-16983711
 ] 

hemanthboyina commented on HDFS-14960:
--

Got the intention of this Jira, [~Jim_Brennan]. We may need to improve the 
tests.
{quote}
NetworkTopology's clusterMap should be instance of NetworkTopologyWithNodeGroup 
{quote}
If this check had been present, +HDFS-14958+ wouldn't have occurred.
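
A minimal sketch of such a check, assuming the test's {{MiniDFSCluster}} field is named {{cluster}} (the accessor chain in the actual test may differ):

{code:java}
// Fail fast when the balancer test is not actually running on the node-group topology.
NetworkTopology topology = cluster.getNamesystem().getBlockManager()
    .getDatanodeManager().getNetworkTopology();
assertTrue("Expected NetworkTopologyWithNodeGroup but got "
    + topology.getClass().getSimpleName(),
    topology instanceof NetworkTopologyWithNodeGroup);
{code}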

> TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> -
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15009) FSCK "-list-corruptfileblocks" return Invalid Entries

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983698#comment-16983698
 ] 

Hadoop QA commented on HDFS-15009:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
16s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15009 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986922/HDFS-15009.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3332d9f42eb1 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk 

[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983682#comment-16983682
 ] 

Hadoop QA commented on HDFS-12733:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFSImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-12733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986941/HDFS-12733.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5ac9adabe021 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f2ea2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28408/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28408/testReport/ |
| Max. process+thread count | 2728 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Comment Edited] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983619#comment-16983619
 ] 

Yiqun Lin edited comment on HDFS-15019 at 11/27/19 3:20 PM:


We can put the common settings in the @Before method and leave the specific 
settings in each test method. Here io.bytes.per.checksum is a deprecated key; 
use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
cluster = null;
conf = new HdfsConfiguration();
conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
1000);

conf.setLong(
DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
// We'll be using a 512 bytes block size just for tests
// so making sure the checksum bytes match it too.
conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}

It would be better to add an additional check for the dfsClient obtained in 
{{testDeadNodeDetectionInMultipleDFSInputStream}}: the dfsClient obtained from 
dfsinputstream1/2 should be the same one.
{code}
  assertEquals(dfsClient1.toString(), dfsClient2.toString());  <===
  assertEquals(1, dfsClient1.getDeadNodes(din1).size());
  assertEquals(1, dfsClient2.getDeadNodes(din2).size());
{code}

cc [~leosun08]


was (Author: linyiqun):
We can put the common settings in the @Before method and leave the specific 
settings in each test method. Here io.bytes.per.checksum is a deprecated key; 
use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
cluster = null;
conf = new HdfsConfiguration();
conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
1000);

conf.setLong(
DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
// We'll be using a 512 bytes block size just for tests
// so making sure the checksum bytes match it too.
conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}

It would be better to add an additional check for the dfsClient obtained in 
{{testDeadNodeDetectionInMultipleDFSInputStream}}: the dfsClient obtained from 
dfsinputstream1/2 should be the same one.
{code}
  assertEquals(dfsClient1.toString(), dfsClient2.toString());  <===
  assertEquals(1, dfsClient1.getDeadNodes(din1).size());
  assertEquals(1, dfsClient2.getDeadNodes(din2).size());
{code}

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983619#comment-16983619
 ] 

Yiqun Lin edited comment on HDFS-15019 at 11/27/19 3:20 PM:


We can put the common settings in the @Before method and leave the specific 
settings in each test method. Here io.bytes.per.checksum is a deprecated key; 
use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
cluster = null;
conf = new HdfsConfiguration();
conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
1000);

conf.setLong(
DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
// We'll be using a 512 bytes block size just for tests
// so making sure the checksum bytes match it too.
conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}

It would be better to add an additional check for the dfsClient obtained in 
{{testDeadNodeDetectionInMultipleDFSInputStream}}: the dfsClient obtained from 
dfsinputstream1/2 should be the same one.
{code}
  assertEquals(dfsClient1.toString(), dfsClient2.toString());  <===
  assertEquals(1, dfsClient1.getDeadNodes(din1).size());
  assertEquals(1, dfsClient2.getDeadNodes(din2).size());
{code}


was (Author: linyiqun):
We can put the common settings in the @Before method and leave the specific 
settings in each test method. Here io.bytes.per.checksum is a deprecated key; 
use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
cluster = null;
conf = new HdfsConfiguration();
conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
1000);

conf.setLong(
DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
// We'll be using a 512 bytes block size just for tests
// so making sure the checksum bytes match it too.
conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983619#comment-16983619
 ] 

Yiqun Lin commented on HDFS-15019:
--

We can put the common settings in the @Before method and leave the specific 
settings in each test method. Here io.bytes.per.checksum is a deprecated key; 
use {{HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY}} instead.

{code}
  @Before
  public void setUp() {
cluster = null;
conf = new HdfsConfiguration();
conf.setBoolean(DFS_CLIENT_DEAD_NODE_DETECTION_ENABLED_KEY, true);
conf.setLong(DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_DEAD_NODE_INTERVAL_MS_KEY,
1000);

conf.setLong(
DFS_CLIENT_DEAD_NODE_DETECTION_PROBE_SUSPECT_NODE_INTERVAL_MS_KEY, 100);
// We'll be using a 512 bytes block size just for tests
// so making sure the checksum bytes match it too.
conf.setInt(HdfsClientConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 512);
  }
{code}

> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We 
> can simplify that.
> In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> Should be 
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Yiqun Lin (Jira)
Yiqun Lin created HDFS-15019:


 Summary: Refactor the unit test of TestDeadNodeDetection 
 Key: HDFS-15019
 URL: https://issues.apache.org/jira/browse/HDFS-15019
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yiqun Lin
Assignee: Lisheng Sun


There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. We can 
simplify that.

In addition, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
DFSInputStream is passed incorrectly in the assert operation.

{code}
din2 = (DFSInputStream) in1.getWrappedStream();
{code}
Should be 
{code}
din2 = (DFSInputStream) in2.getWrappedStream();
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14960) TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology

2019-11-27 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983598#comment-16983598
 ] 

Jim Brennan commented on HDFS-14960:


[~hemanthboyina] that does seem like a reasonable check to me, and likely would 
have caught the problem reported in HDFS-14958.   I think the intent of this 
Jira is to improve the test so that it includes some test cases that are unique 
to NetworkTopologyWithNodeGroup.   The fact that it was succeeding when it 
wasn't using the right class suggests that it could be improved.

 

> TesteBalancerWithNodeGroup should not succeed with DFSNetworkTopology
> -
>
> Key: HDFS-14960
> URL: https://issues.apache.org/jira/browse/HDFS-14960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Priority: Minor
>
> As reported in HDFS-14958, TestBalancerWithNodeGroup was succeeding even 
> though it was using DFSNetworkTopology instead of 
> NetworkTopologyWithNodeGroup.
> [~inigoiri] rightly suggested that this indicates the test is not very good - 
> it should fail when run without NetworkTopologyWithNodeGroup.
> We should improve this test.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-27 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983557#comment-16983557
 ] 

Xiaoqiao He commented on HDFS-14997:


Checked and reran the failed unit tests locally; the failures do not appear to 
be related to these changes.

> BPServiceActor process command from NameNode asynchronously
> ---
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14997.001.patch, HDFS-14997.002.patch, 
> HDFS-14997.003.patch, HDFS-14997.004.patch, HDFS-14997.005.patch
>
>
> There are two core functions in the #BPServiceActor main process flow: reporting 
> (#sendHeartbeat, #blockReport, #cacheReport) and #processCommand. If 
> #processCommand takes a long time, it blocks the report flow. #processCommand 
> can take a long time (over 1000s in the worst case I have seen) when the IO 
> load of the DataNode is very high: since some IO operations are under 
> #datasetLock, it may have to wait a long time to acquire #datasetLock while 
> processing some commands (such as #DNA_INVALIDATE). In such a case, #heartbeat 
> will not be sent to the NameNode in time, which triggers other disasters.
> I propose to process commands asynchronously so that #BPServiceActor is not 
> blocked from sending heartbeats back to the NameNode under high IO load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should solve 
> it in another JIRA.
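
A minimal sketch of the asynchronous direction, with illustrative names (not the attached patch): hand commands from the heartbeat loop to an executor so a slow #DNA_INVALIDATE no longer delays the next heartbeat.

{code:java}
// Illustrative fragment inside BPServiceActor; commandExecutor, processCommandAsync
// and the LOG usage are sketch names, and ordering/shutdown concerns are elided.
private final ExecutorService commandExecutor = Executors.newSingleThreadExecutor();

private void processCommandAsync(final DatanodeCommand[] cmds) {
  commandExecutor.submit(() -> {
    try {
      processCommand(cmds);   // the existing, possibly slow, handler
    } catch (Exception e) {
      LOG.warn("Error processing DataNode command asynchronously", e);
    }
  });
}
{code}

A single-threaded executor preserves the relative order of commands while still decoupling them from the heartbeat thread.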



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15003) RBF: Make Router support storage type quota.

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983542#comment-16983542
 ] 

Hadoop QA commented on HDFS-15003:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  8s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
|  |  Nullcheck of oldEntry at line 293 of value previously dereferenced in 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(UpdateMountTableEntryRequest)
  At RouterAdminServer.java:293 of value previously dereferenced in 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(UpdateMountTableEntryRequest)
  At RouterAdminServer.java:[line 291] |
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
|   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986940/HDFS-15003.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70137680f761 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Aiphago (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiphago updated HDFS-14986:
---
Attachment: HDFS-14986.006.patch

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the following exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983513#comment-16983513
 ] 

Aiphago commented on HDFS-14986:


Good advice. I changed the retry times to 10 and closed the stream inside the 
while loop. [^HDFS-14986.006.patch]

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the following exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983480#comment-16983480
 ] 

Xiaoqiao He commented on HDFS-12733:


v008 tries to fix the checkstyle issues reported by Jenkins.
{quote}I met many problems where edits in the JN are missing, so the NN fails to 
start. Every time, I copy the good edits from the NN to the JN to solve the 
problem.{quote}
Hi [~lindongdong], I don't think this patch can resolve your case. Anyway, we 
could offer some suggestions if you can provide more information or reproduce 
the case. Thanks.

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch, 
> HDFS-12733.006.patch, HDFS-12733.007.patch, HDFS-12733.008.patch
>
>
> As of now, edits are written to both the local and shared locations, which is 
> redundant since the local edits are never used in an HA setup.
> Disabling the local edits gives a small performance improvement.
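
The exact property introduced by the patch is not quoted in this thread, so the 
snippet below only sketches, under that assumption, how an HA NameNode typically 
ends up with both a local edits directory and the shared JournalNode edits URI 
today; this is the redundancy the option would let operators drop. The directory 
path and journal URI are illustrative.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LocalEditsConfigSketch {
  public static Configuration haEditsConf() {
    Configuration conf = new Configuration();
    // Local (per-NameNode) edits directory, redundant with the shared journal
    // in an HA setup.
    conf.set("dfs.namenode.edits.dir", "file:///data/hadoop/hdfs/namenode");
    // Shared edits written to the JournalNodes; this is what HA actually
    // relies on for fail-over.
    conf.set("dfs.namenode.shared.edits.dir",
        "qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster");
    return conf;
  }
}
{code}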



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-12733:
---
Attachment: HDFS-12733.008.patch

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch, 
> HDFS-12733.006.patch, HDFS-12733.007.patch, HDFS-12733.008.patch
>
>
> As of now, edits are written to both the local and shared locations, which is 
> redundant since the local edits are never used in an HA setup.
> Disabling the local edits gives a small performance improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983476#comment-16983476
 ] 

Hadoop QA commented on HDFS-12733:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 188 unchanged - 0 fixed = 190 total (was 188) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile 
|
|   | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-12733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986911/HDFS-12733.007.patch |
| Optional 

[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983466#comment-16983466
 ] 

Hadoop QA commented on HDFS-14986:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 17s{color} | {color:orange} root: The patch generated 1 new + 117 unchanged 
- 0 fixed = 118 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}275m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986884/HDFS-14986.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1b4d8f61cf6c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c8bef4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Updated] (HDFS-15003) RBF: Make Router support storage type quota.

2019-11-27 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15003:
---
Attachment: HDFS-15003.001.patch
Status: Patch Available  (was: Open)

> RBF: Make Router support storage type quota.
> 
>
> Key: HDFS-15003
> URL: https://issues.apache.org/jira/browse/HDFS-15003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15003.001.patch
>
>
> Make Router support storage type quota.
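
The Router-side admin command added by this patch is not shown in this thread. 
As context, the sketch below uses the existing per-namespace storage type quota 
API that a Router would have to forward or aggregate for its mount points; the 
path and quota value are made-up examples.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StorageTypeQuotaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      // Assumes fs.defaultFS points at an HDFS namespace.
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Limit /user/data to 10 GB of DISK storage in this namespace.
      dfs.setQuotaByStorageType(new Path("/user/data"), StorageType.DISK,
          10L * 1024 * 1024 * 1024);
    }
  }
}
{code}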



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983458#comment-16983458
 ] 

Hadoop QA commented on HDFS-12733:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 31m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestFSDirectory |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-12733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974367/HDFS-12733.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  

[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983455#comment-16983455
 ] 

Yiqun Lin commented on HDFS-14986:
--

[~Aiphag0], the change looks good to me. I reran the unit test locally and found 
it hard to reproduce the CME with only 3 retry attempts. Can you increase the 
retry count from 3 to 10? With 10 attempts I can see the CME error when the 
lock change is not applied.
{code}
int retryTimes = 10;
{code}

In addition, there is a resource leak that I missed before. Can you move the 
close operation into the while loop?
{code:java}
public void run() {
  FSDataOutputStream os = null;
  while (shouldRun) {
    try {
      int id = RandomUtils.nextInt();
      os = fs.create(new Path("/testFsDatasetImplDeepCopyReplica/" + id));
      byte[] bytes = new byte[2048];
      InputStream is = new ByteArrayInputStream(bytes);
      IOUtils.copyBytes(is, os, bytes.length);
      os.hsync();
      // <=== move the close() here: each iteration creates a new stream, so
      // closing only after the loop would leak every stream but the last.
      os.close();
    } catch (IOException e) {
      // Ignored in this test writer thread.
    }
  }

  try {
    fs.delete(new Path("/testFsDatasetImplDeepCopyReplica"), true);
  } catch (IOException e) {
    // Ignored in this test writer thread.
  }
}
{code}
 
Everything else looks good to me. :)

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the following exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983448#comment-16983448
 ] 

Ayush Saxena commented on HDFS-12733:
-

[~lindongdong] I didn't check the patch, but by default we won't be changing 
anything. This change takes effect only if a user explicitly disables the local 
edits, so it should be safe. If you want to keep the local edits, this change 
won't affect you.

bq. I met many problems where edits in the JN are missing, so the NN fails to start

It would be great if you could report those issues, or even contribute a fix, if 
they aren't fixed yet. Ideally, the JN losing edits would be a critical bug.

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch, 
> HDFS-12733.006.patch, HDFS-12733.007.patch
>
>
> As of now, edits are written to both the local and shared locations, which is 
> redundant since the local edits are never used in an HA setup.
> Disabling the local edits gives a small performance improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-11-27 Thread lindongdong (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983435#comment-16983435
 ] 

lindongdong commented on HDFS-12733:


Hi, [~hexiaoqiao]  [~brahmareddy] [~ayushtkn]

I met many problems where edits in the JN are missing, so the NN fails to start. 
Every time, I copy the good edits from the NN to the JN to solve the problem.

Disabling the local edits in the NN would not save much space, and keeping them 
gives us a way to fix the cluster when that happens.

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch, 
> HDFS-12733.006.patch, HDFS-12733.007.patch
>
>
> As of now, edits are written to both the local and shared locations, which is 
> redundant since the local edits are never used in an HA setup.
> Disabling the local edits gives a small performance improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983431#comment-16983431
 ] 

Hadoop QA commented on HDFS-14997:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 112 unchanged - 5 fixed = 112 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14997 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986892/HDFS-14997.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 046cf746398b 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c8bef4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Amithsha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983430#comment-16983430
 ] 

Amithsha commented on HDFS-14546:
-

sure [~ayushtkn] 

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their 
> particularities and probably how to set up each one of them.
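
As a sketch only (not part of the documentation patch itself), a non-default 
policy from the list above is typically selected on the NameNode through the 
dfs.block.replicator.classname key; the value below picks the rack fault 
tolerant policy as one example.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class BlockPlacementPolicyConfigSketch {
  public static Configuration rackFaultTolerantConf() {
    Configuration conf = new Configuration();
    // Replace the default block placement policy with the
    // rack-fault-tolerant one (HDFS-7891).
    conf.set("dfs.block.replicator.classname",
        "org.apache.hadoop.hdfs.server.blockmanagement."
            + "BlockPlacementPolicyRackFaultTolerant");
    return conf;
  }
}
{code}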



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983417#comment-16983417
 ] 

Hadoop QA commented on HDFS-15010:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15010 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986882/HDFS-15010.04.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f87c2c7109da 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c8bef4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28403/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28403/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983418#comment-16983418
 ] 

Ayush Saxena commented on HDFS-14546:
-

It's OK, you can go ahead with the patch. No issues, but you need to address the 
comments there.

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their 
> particularities and probably how to set up each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Amithsha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983415#comment-16983415
 ] 

Amithsha commented on HDFS-14546:
-

I haven't updated the Git PR; I will update it and check his comment. Also, I 
think the Git PR is not required, because I believe I created that PR by 
mistake.

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their 
> particularities and probably how to set up each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983377#comment-16983377
 ] 

Ayush Saxena commented on HDFS-14546:
-

Yes

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their 
> particularities and probably how to set up each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14546) Document block placement policies

2019-11-27 Thread Amithsha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983362#comment-16983362
 ] 

Amithsha commented on HDFS-14546:
-

[~ayushtkn] you mean in the Git PR?

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their 
> particularities and probably how to set up each one of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15009) FSCK "-list-corruptfileblocks" return Invalid Entries

2019-11-27 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15009:
-
Attachment: HDFS-15009.003.patch

> FSCK "-list-corruptfileblocks" return Invalid Entries
> -
>
> Key: HDFS-15009
> URL: https://issues.apache.org/jira/browse/HDFS-15009
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15009.001.patch, HDFS-15009.002.patch, 
> HDFS-15009.003.patch
>
>
> Scenario: if we have two directories, dir1 and dir10, and only dir10 has 
> corrupt files, then running -list-corruptfileblocks for dir1 reports the 
> corrupt files of dir10 in the count for dir1.
> {code:java}
>   while (blkIterator.hasNext()) {
> BlockInfo blk = blkIterator.next();
> final INodeFile inode = getBlockCollection(blk);
> skip++;
> if (inode != null) {
>   String src = inode.getFullPathName();
>   if (src.startsWith(path)){
> corruptFiles.add(new CorruptFileBlockInfo(src, blk));
> count++;
> if (count >= DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED)
>   break;
>   }
> }
>   } {code}
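
To make the prefix-matching pitfall in the quoted code concrete, the sketch 
below contrasts the plain startsWith check with a stricter check that requires 
either an exact match or a match on the path plus a separator; it is an 
illustration of one possible fix, not necessarily the change in the attached 
patch.

{code:java}
public class CorruptFilePrefixSketch {

  // Naive prefix check from the quoted code: "/dir10/f" matches "/dir1".
  static boolean matchesNaive(String src, String path) {
    return src.startsWith(path);
  }

  // Stricter check: exact match, or a child under "path/".
  static boolean matchesStrict(String src, String path) {
    return src.equals(path)
        || src.startsWith(path.endsWith("/") ? path : path + "/");
  }

  public static void main(String[] args) {
    System.out.println(matchesNaive("/dir10/file1", "/dir1"));   // true (bug)
    System.out.println(matchesStrict("/dir10/file1", "/dir1"));  // false
    System.out.println(matchesStrict("/dir1/file1", "/dir1"));   // true
  }
}
{code}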



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


