[jira] [Commented] (HDFS-14825) [Dynamometer] Workload doesn't start unless an absolute path of Mapper class given

2019-12-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16987462#comment-16987462
 ] 

Hudson commented on HDFS-14825:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17716 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17716/])
HDFS-14825. [Dynamometer] Workload doesn't start unless an absolute path 
(aajisaka: rev 54e760511a2e2f8e5ecf1eb8762434fcd041f4d6)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/WorkloadDriver.java


> [Dynamometer] Workload doesn't start unless an absolute path of Mapper class 
> given
> --
>
> Key: HDFS-14825
> URL: https://issues.apache.org/jira/browse/HDFS-14825
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Soya Miyoshi
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
>
> When starting a workload via start-workload.sh, the workload doesn't start 
> unless the Mapper class is given by its absolute (fully qualified) name.
>  
> {code:java}
> $ hadoop/tools/dynamometer/dynamometer-workload/bin/start-workload.sh \
> -Dauditreplay.input-path=hdfs:///user/souya/input/audit \
> -Dauditreplay.output-path=hdfs:///user/souya/results/ \
> -Dauditreplay.num-threads=50 -Dauditreplay.log-start-time.ms=5 \
> -nn_uri hdfs://namenode_address:port/ \
> -mapper_class_name AuditReplayMapper
> {code}
> results in
> {code:java}
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> Exception in thread "main" java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.AuditReplayMapper not 
> found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2572)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.getMapperClass(WorkloadDriver.java:183)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.run(WorkloadDriver.java:127)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.main(WorkloadDriver.java:172)
> {code}
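> One way to address this is to fall back to the workload generator's package 
> when a short class name is given. A minimal sketch of that resolution logic 
> (the package constant and method name here are illustrative; see the 
> WorkloadDriver edit in the commit above for the real change):
> {code:java}
> // Sketch: accept either a fully qualified mapper class name or a short name
> // such as "AuditReplayMapper". The package constant is an assumption.
> private static final String MAPPER_PACKAGE =
>     "org.apache.hadoop.tools.dynamometer.workloadgenerator";
>
> static Class<?> resolveMapperClass(String name) throws ClassNotFoundException {
>   try {
>     return Class.forName(name);                        // fully qualified name
>   } catch (ClassNotFoundException e) {
>     return Class.forName(MAPPER_PACKAGE + "." + name); // short-name fallback
>   }
> }
> {code}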






[jira] [Commented] (HDFS-15026) TestPendingReconstruction#testPendingReconstruction() fail in trunk

2019-12-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986592#comment-16986592
 ] 

Hudson commented on HDFS-15026:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17714 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17714/])
HDFS-15026. TestPendingReconstruction#testPendingReconstruction() fail 
(ayushsaxena: rev 0c217feed8acf98234d3367b24a475b620027bce)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java


> TestPendingReconstruction#testPendingReconstruction() fail in trunk
> ---
>
> Key: HDFS-15026
> URL: https://issues.apache.org/jira/browse/HDFS-15026
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15026.001.patch, HDFS-15026.002.patch
>
>
> When running only the single unit test 
> TestPendingReconstruction#testPendingReconstruction(), it fails with a 
> NullPointerException.
> {code:java}
> 2019-12-02 16:02:54,887 [org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@57baeedf] WARN  blockmanagement.BlockManager (PendingReconstructionBlocks.java:pendingReconstructionCheck(273)) - PendingReconstructionMonitor timed out blk_0_0
> 2019-12-02 16:02:54,887 [org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@57baeedf] WARN  blockmanagement.BlockManager (PendingReconstructionBlocks.java:pendingReconstructionCheck(273)) - PendingReconstructionMonitor timed out blk_0_0
> Exception in thread "org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@57baeedf" java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.pendingReconstructionCheck(PendingReconstructionBlocks.java:274)
>   at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:248)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
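> The commit above only touches the test, so the following is purely a 
> hypothetical illustration of the dereference point reported in the stack 
> trace, not the committed fix:
> {code:java}
> // Hypothetical sketch only: guard the dependency dereferenced at
> // PendingReconstructionBlocks.java:274 so a monitor tick against a
> // partially initialized test fixture does not throw. The actual patch,
> // per the commit above, adjusts the unit test instead.
> void onTimedOutBlock(BlockManager blockManager, Block block) {
>   if (blockManager == null) {
>     return; // fixture not fully wired yet; skip this tick
>   }
>   // ... hand the timed-out block back to the BlockManager ...
> }
> {code}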






[jira] [Commented] (HDFS-9695) HTTPFS - CHECKACCESS operation missing

2019-12-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986481#comment-16986481
 ] 

Hudson commented on HDFS-9695:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17713 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17713/])
HDFS-9695. HTTPFS - CHECKACCESS operation missing. Contributed by (tasanuma: 
rev 4ede8bce28aadc62007ad65dc6d44be550632b5f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java


> HTTPFS - CHECKACCESS operation missing
> --
>
> Key: HDFS-9695
> URL: https://issues.apache.org/jira/browse/HDFS-9695
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bert Hekman
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-9695.001.patch, HDFS-9695.002.patch, 
> HDFS-9695.003.patch, HDFS-9695.004.patch, HDFS-9695.005.patch
>
>
> Hi,
> The CHECKACCESS operation seems to be missing in HTTPFS. I'm getting the 
> following error:
> {code}
> QueryParamException: java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS
> {code}
> A quick look into the org.apache.hadoop.fs.http.client.HttpFSFileSystem class 
> reveals that CHECKACCESS is not defined at all.
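> A minimal sketch of the shape of the fix: add the missing enum constant and 
> delegate to FileSystem#access, which throws AccessControlException when 
> access is denied. The handler class and method below are illustrative, not 
> the committed code:
> {code:java}
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsAction;
>
> enum Operation { /* ... existing operations ..., */ CHECKACCESS }
>
> class CheckAccessHandler {
>   // fsAction arrives as a symbolic string such as "r--", "rw-" or "rwx".
>   void checkAccess(FileSystem fs, Path path, String fsAction)
>       throws java.io.IOException {
>     fs.access(path, FsAction.getFsAction(fsAction));
>   }
> }
> {code}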






[jira] [Commented] (HDFS-15009) FSCK "-list-corruptfileblocks" return Invalid Entries

2019-11-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985149#comment-16985149
 ] 

Hudson commented on HDFS-15009:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17712 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17712/])
HDFS-15009. FSCK -list-corruptfileblocks return Invalid Entries. (ayushsaxena: 
rev 6b2d6d4aafb110bef1b77d4ccbba4350e624b57d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


> FSCK "-list-corruptfileblocks" return Invalid Entries
> -
>
> Key: HDFS-15009
> URL: https://issues.apache.org/jira/browse/HDFS-15009
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15009.001.patch, HDFS-15009.002.patch, 
> HDFS-15009.003.patch, HDFS-15009.004.patch
>
>
> Scenario: given two directories dir1 and dir10 where only dir10 has corrupt 
> files, running -list-corruptfileblocks for dir1 reports the corrupt file 
> count of dir10.
> {code:java}
>   while (blkIterator.hasNext()) {
> BlockInfo blk = blkIterator.next();
> final INodeFile inode = getBlockCollection(blk);
> skip++;
> if (inode != null) {
>   String src = inode.getFullPathName();
>   if (src.startsWith(path)){
> corruptFiles.add(new CorruptFileBlockInfo(src, blk));
> count++;
> if (count >= DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED)
>   break;
>   }
> }
>   } {code}
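> The root cause is that {{src.startsWith(path)}} also matches siblings that 
> share the prefix, e.g. "/dir1" matches files under "/dir10". A sketch of a 
> boundary-aware check (the helper name is illustrative):
> {code:java}
> // Sketch: match the directory itself or true descendants only, so that
> // "/dir1" no longer matches "/dir10/...".
> static boolean isParentEntry(String src, String path) {
>   if (src.equals(path)) {
>     return true;
>   }
>   String prefix = path.endsWith("/") ? path : path + "/";
>   return src.startsWith(prefix);
> }
> {code}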






[jira] [Commented] (HDFS-15013) Reduce NameNode overview tab response time

2019-11-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984618#comment-16984618
 ] 

Hudson commented on HDFS-15013:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17710 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17710/])
HDFS-15013. Reduce NameNode overview tab response time. Contributed by 
(surendralilhore: rev 44f7b9159d8eec151f199231bafe0677f9383dc3)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js


> Reduce NameNode overview tab response time
> --
>
> Key: HDFS-15013
> URL: https://issues.apache.org/jira/browse/HDFS-15013
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15013.001.patch, HDFS-15013.002.patch, 
> image-2019-11-26-10-05-39-640.png, image-2019-11-26-10-09-07-952.png
>
>
> Currently, the overview tab loads /conf synchronously, as shown in the first 
> picture.
>  !image-2019-11-26-10-05-39-640.png! 
> This issue changes it to an asynchronous load. The effect is shown in the 
> second picture.
>  !image-2019-11-26-10-09-07-952.png! 






[jira] [Commented] (HDFS-15010) BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method

2019-11-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984600#comment-16984600
 ] 

Hudson commented on HDFS-15010:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17709 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17709/])
HDFS-15010. BlockPoolSlice#addReplicaThreadPool static pool should be 
(surendralilhore: rev 0384687811446a52009b96cc85bf961a3e83afc4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java


> BlockPoolSlice#addReplicaThreadPool static pool should be initialized by 
> static method
> --
>
> Key: HDFS-15010
> URL: https://issues.apache.org/jira/browse/HDFS-15010
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15010.001.patch, HDFS-15010.02.patch, 
> HDFS-15010.03.patch, HDFS-15010.04.patch, HDFS-15010.05.patch
>
>
> The {{BlockPoolSlice#initializeAddReplicaPool()}} method currently 
> initializes the static thread pool instance from instance context. When two 
> {{BPServiceActor}} actors try to load block pools in parallel, this may 
> create different instances. So {{BlockPoolSlice#initializeAddReplicaPool()}} 
> should be a static method.
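> A minimal sketch of the race-free shape (field, method, and sizing are 
> simplified from BlockPoolSlice):
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> // Sketch: a static, synchronized initializer guarantees a single shared
> // pool even when two BPServiceActor threads load block pools in parallel.
> class AddReplicaPool {
>   private static volatile ExecutorService addReplicaThreadPool;
>
>   static void initializeAddReplicaPool(int threads) {
>     if (addReplicaThreadPool == null) {
>       synchronized (AddReplicaPool.class) {
>         if (addReplicaThreadPool == null) {      // double-checked locking
>           addReplicaThreadPool = Executors.newFixedThreadPool(threads);
>         }
>       }
>     }
>   }
> }
> {code}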






[jira] [Commented] (HDFS-14961) [SBN read] Prevent ZKFC changing Observer Namenode state

2019-11-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984407#comment-16984407
 ] 

Hudson commented on HDFS-14961:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17708 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17708/])
HDFS-14961. [SBN read] Prevent ZKFC changing Observer Namenode state. 
(ayushsaxena: rev 46166bd8d1be6f25bd38703fb9b0a417e3ef750b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


> [SBN read] Prevent ZKFC changing Observer Namenode state
> 
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch, 
> HDFS-14961-03.patch, HDFS-14961-04.patch, ZKFC-TEST-14961.patch
>
>
> HDFS-14130 made ZKFC aware of the Observer Namenode and hence allows ZKFC to 
> run along with the Observer node.
> The Observer Namenode isn't supposed to be part of the ZKFC election 
> process. But if the Namenode was part of the election before being turned 
> into an Observer by the transitionToObserver command, the ZKFC still sends 
> instructions to the Namenode as a result of its previous participation, and 
> sometimes tends to change the state of the Observer to Standby.
> This is also the reason for the failure in TestDFSZKFailoverController.
> TestDFSZKFailoverController has been consistently failing with a timeout 
> waiting in testManualFailoverWithDFSHAAdmin(), in particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.
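> A sketch of the kind of guard this implies on the NameNode RPC side (the 
> method shape and message are assumptions, not the committed code):
> {code:java}
> import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
> import org.apache.hadoop.security.AccessControlException;
>
> // Sketch: refuse ZKFC-driven HA transitions while serving as Observer;
> // only the explicit transitionToObserver/transitionToStandby admin path
> // should move a NameNode in or out of the Observer state.
> class HaStateGuard {
>   void checkStateChangeAllowed(HAServiceState current)
>       throws AccessControlException {
>     if (current == HAServiceState.OBSERVER) {
>       throw new AccessControlException(
>           "HA state change not allowed on an Observer NameNode");
>     }
>   }
> }
> {code}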






[jira] [Commented] (HDFS-15019) Refactor the unit test of TestDeadNodeDetection

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984183#comment-16984183
 ] 

Hudson commented on HDFS-15019:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17707/])
HDFS-15019. Refactor the unit test of TestDeadNodeDetection. Contributed 
(yqlin: rev c3659f8f94bef7cfad0c3fb04391a7ffd4221679)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java


> Refactor the unit test of TestDeadNodeDetection 
> 
>
> Key: HDFS-15019
> URL: https://issues.apache.org/jira/browse/HDFS-15019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yiqun Lin
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-15019.001.patch
>
>
> There are many duplicated lines in the unit test {{TestDeadNodeDetection}}. 
> We can simplify that.
> Additionally, in {{testDeadNodeDetectionInMultipleDFSInputStream}}, the 
> DFSInputStream is passed incorrectly in the assert operation.
> {code}
> din2 = (DFSInputStream) in1.getWrappedStream();
> {code}
> should be
> {code}
> din2 = (DFSInputStream) in2.getWrappedStream();
> {code}






[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984102#comment-16984102
 ] 

Hudson commented on HDFS-14986:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17705 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17705/])
HDFS-14986. ReplicaCachingGetSpaceUsed throws (yqlin: rev 
2b452b4e6063072b2bec491edd3f412eb7ac21f3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaCachingGetSpaceUsed.java


> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Affects Versions: 2.10.0
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch, 
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get used space from ReplicaInfo in memory. However, the new du 
> threads throw the following exception:
> {code:java}
> 2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of iterator
>   at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
>   at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
>   at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
>   at java.util.HashSet.<init>(HashSet.java:120)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
>   at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
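> The refresh thread iterates the replica map while heartbeat handlers mutate 
> it. A sketch of the usual remedy, snapshotting under the dataset lock so the 
> long-running refresh iterates a private copy (lock and field names are 
> simplified assumptions):
> {code:java}
> // Sketch with simplified names: copy the replica set while holding the
> // dataset lock; the slow du/refresh pass then works on the HashSet copy
> // instead of the live FoldedTreeSet that other threads keep modifying.
> Set<ReplicaInfo> deepCopyReplica(String bpid) {
>   synchronized (datasetLock) {           // assumed lock object
>     return new HashSet<>(volumeMap.replicas(bpid));
>   }
> }
> {code}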






[jira] [Commented] (HDFS-14649) Add suspect probe for DeadNodeDetector

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983127#comment-16983127
 ] 

Hudson commented on HDFS-14649:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17701 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17701/])
HDFS-14649. Add suspect probe for DeadNodeDetector. Contributed by (yqlin: rev 
c8bef4d6a6d7d5affd00cff6ea4a2e2ef778050e)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java


> Add suspect probe for DeadNodeDetector
> --
>
> Key: HDFS-14649
> URL: https://issues.apache.org/jira/browse/HDFS-14649
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14649.001.patch, HDFS-14649.002.patch, 
> HDFS-14649.003.patch, HDFS-14649.004.patch, HDFS-14649.005.patch
>
>
> Add a suspect probe for DeadNodeDetector.
> When some DataNode of the block is found to be inaccessible, it is placed in 
> the suspicious node list. When a DataNode is not accessible, it is likely 
> that the replica has been removed from it, so this needs to be confirmed by 
> re-probing and requires higher priority processing. While a DataNode is in 
> the suspicious node list, it can still be accessed by other DFSInputStreams.
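> A sketch of the suspect-queue idea (class and method names are simplified; 
> the probe itself is left as a placeholder):
> {code:java}
> import java.util.Set;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.LinkedBlockingQueue;
> import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
>
> // Sketch: nodes that fail a read are queued as suspects and re-probed with
> // priority; only nodes that also fail the re-probe join the shared dead set.
> class SuspectProbe implements Runnable {
>   private final BlockingQueue<DatanodeInfo> suspects =
>       new LinkedBlockingQueue<>();
>   private final Set<DatanodeInfo> deadNodes = ConcurrentHashMap.newKeySet();
>
>   void reportSuspect(DatanodeInfo dn) {
>     suspects.offer(dn);                 // confirm before declaring dead
>   }
>
>   @Override
>   public void run() {
>     try {
>       while (!Thread.currentThread().isInterrupted()) {
>         DatanodeInfo dn = suspects.take();
>         if (!probe(dn)) {               // placeholder connectivity check
>           deadNodes.add(dn);
>         }
>       }
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();
>     }
>   }
>
>   private boolean probe(DatanodeInfo dn) {
>     return false; // placeholder: a real probe would try to contact dn
>   }
> }
> {code}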






[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16980529#comment-16980529
 ] 

Hudson commented on HDFS-14519:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17689 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17689/])
HDFS-14519. NameQuota is not update after concat operation, so namequota 
(ayushsaxena: rev 049940e77b055518e1de44b1be60d2ee2e1e9143)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java


> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch, 
> HDFS-14519.003.patch
>
>







[jira] [Commented] (HDFS-13842) RBF: Exceptions are conflicting when creating the same mount entry twice

2019-11-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979896#comment-16979896
 ] 

Hudson commented on HDFS-13842:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17686 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17686/])
HDFS-13842. RBF: Exceptions are conflicting when creating the same mount 
(ayushsaxena: rev c422e36397920311bd2823deb0206a97cf288bf0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java


> RBF: Exceptions are conflicting when creating the same mount entry twice
> 
>
> Key: HDFS-13842
> URL: https://issues.apache.org/jira/browse/HDFS-13842
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0
>
> Attachments: HDFS-13842-01.patch, HDFS-13842.patch
>
>
> Test Steps:
>  # Execute the command: hdfs dfsrouteradmin -add /apps7 hacluster /tmp7
>  # Execute the same command once again.
> Expected Result:
> The user should get a message saying the mount entry is already present.
> Actual Result:
> A conflicting console message is displayed:
> "Cannot add destination at hacluster /tmp7
> Successfully added mount point /apps7"
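> A sketch of the expected shape of the check (the mount-table API names here 
> are assumptions for illustration, not the RouterAdmin code):
> {code:java}
> // Sketch: detect the duplicate up front and report a single, consistent
> // message instead of a failure line followed by a success line.
> boolean addMount(String mount, String nsId, String dest) {
>   if (mountTable.getMountPoint(mount) != null) {   // assumed lookup method
>     System.err.println("Cannot add mount point " + mount
>         + ": it already exists.");
>     return false;
>   }
>   return mountTable.addEntry(mount, nsId, dest);   // assumed add method
> }
> {code}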






[jira] [Commented] (HDFS-14924) RenameSnapshot not updating new modification time

2019-11-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979850#comment-16979850
 ] 

Hudson commented on HDFS-14924:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17685 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17685/])
HDFS-14924. RenameSnapshot not updating new modification time. (tasanuma: rev 
b25e94ce29b311a37334317d72e46373b256c111)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java


> RenameSnapshot not updating new modification time
> -
>
> Key: HDFS-14924
> URL: https://issues.apache.org/jira/browse/HDFS-14924
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14924.001.patch, HDFS-14924.002.patch, 
> HDFS-14924.003.patch, HDFS-14924.004.patch
>
>
> RenameSnapshot doesn't update the modification time.
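> A sketch of the expected behavior (names are simplified; the commit above 
> also threads the rename through the edit log):
> {code:java}
> // Sketch: stamp the snapshot root with the rename time so the rename is
> // visible as a metadata change, mirroring other mutating operations.
> void renameSnapshot(INodeDirectory snapshotRoot, String oldName,
>     String newName, long now) {
>   // ... rename the snapshot in the snapshottable directory feature ...
>   snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID);
> }
> {code}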






[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979802#comment-16979802
 ] 

Hudson commented on HDFS-14651:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17684 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17684/])
HDFS-14651. DeadNodeDetector checks dead node periodically. Contributed (yqlin: 
rev 9b6906fe914829f50076c2291dba59d425475d7b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java


> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14651.001.patch, HDFS-14651.002.patch, 
> HDFS-14651.003.patch, HDFS-14651.004.patch, HDFS-14651.005.patch, 
> HDFS-14651.006.patch, HDFS-14651.007.patch, HDFS-14651.008.patch
>
>







[jira] [Commented] (HDFS-15002) RBF: Fix annotation in RouterAdmin

2019-11-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979577#comment-16979577
 ] 

Hudson commented on HDFS-15002:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17683 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17683/])
HDFS-15002. RBF: Fix annotation in RouterAdmin. Contributed by Jinglun. 
(ayushsaxena: rev b89fd4dfe95a1e0a18a346ec1c1321c366ff97a0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java


>  RBF: Fix annotation in RouterAdmin
> ---
>
> Key: HDFS-15002
> URL: https://issues.apache.org/jira/browse/HDFS-15002
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-15002.001.patch
>
>
> Fix annotation in RouterAdmin.java.






[jira] [Commented] (HDFS-14940) HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB

2019-11-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979531#comment-16979531
 ] 

Hudson commented on HDFS-14940:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17682 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17682/])
HDFS-14940. HDFS Balancer : Do not allow to set balancer maximum network 
(surendralilhore: rev 26270196a2d2cfddcf24ee7bbd2b84a2e93accef)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBalancerBandwidth.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java


> HDFS Balancer : Do not allow to set balancer maximum network bandwidth more 
> than 1TB
> 
>
> Key: HDFS-14940
> URL: https://issues.apache.org/jira/browse/HDFS-14940
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1
> Environment: 3 Node HA Setup
>Reporter: Souryakanta Dwivedy
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: BalancerBW.PNG, HDFS-14940.001.patch, 
> HDFS-14940.002.patch, HDFS-14940.003.patch, HDFS-14940.004.patch, 
> HDFS-14940.005.patch
>
>
> HDFS Balancer: getBalancerBandwidth displays wrong values for the maximum 
> network bandwidth used by the DataNode when the network bandwidth is set to 
> values such as 1048576000g/1048p/1e.
> Steps:
>  * Set the balancer bandwidth with the setBalancerBandwidth command and 
> values such as 1048576000g/1048p/1e.
>  * Check the bandwidth used by the DataNode during HDFS block balancing with 
> the command "hdfs dfsadmin -getBalancerBandwidth"; it will display a 
> different value, not the value that was set.
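> A sketch of the bounds check this issue proposes (the constant and method 
> names are illustrative):
> {code:java}
> // Sketch: reject bandwidth settings above 1 TB/s up front instead of
> // letting oversized values like 1048576000g wrap into misleading numbers.
> static final long MAX_BALANCER_BANDWIDTH = 1024L * 1024 * 1024 * 1024; // 1 TB/s
>
> static void validateBalancerBandwidth(long bytesPerSec) {
>   if (bytesPerSec > MAX_BALANCER_BANDWIDTH) {
>     throw new IllegalArgumentException(
>         "Balancer bandwidth should not exceed 1TB/s, got: " + bytesPerSec);
>   }
> }
> {code}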






[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979033#comment-16979033
 ] 

Hudson commented on HDFS-14996:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17681 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17681/])
HDFS-14996. RBF: GetFileStatus fails for directory with EC policy set in 
(ayushsaxena: rev 98d249dcdabb664ca82083a323afb1a8ed13c062)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In the case of multiple destinations for one mount entry following a 
> PathAll-type order, getting the FileStatus fails if the directory has an EC 
> policy set on it.






[jira] [Commented] (HDFS-14949) Add getServerDefaults() support to HttpFS

2019-11-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978680#comment-16978680
 ] 

Hudson commented on HDFS-14949:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17676 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17676/])
HDFS-14949. Add getServerDefaults() support to HttpFS. Contributed by 
(inigoiri: rev 3037762b2ca2bee0a281b16455c8592173f92315)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java


> Add getServerDefaults() support to HttpFS
> -
>
> Key: HDFS-14949
> URL: https://issues.apache.org/jira/browse/HDFS-14949
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14949.001.patch, HDFS-14949.002.patch, 
> HDFS-14949.003.patch, HDFS-14949.004.patch
>
>
> For the HttpFS server to function as a fully webhdfs-compatible service, 
> getServerDefaults() support is needed. It is increasingly used in new 
> features and improvements.






[jira] [Commented] (HDFS-14995) Use log variable directly instead of passing as argument in InvalidateBlocks#printBlockDeletionTime()

2019-11-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978669#comment-16978669
 ] 

Hudson commented on HDFS-14995:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17675 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17675/])
HDFS-14995. Use log variable directly instead of passing as argument in 
(surendralilhore: rev fd264b826576b67adb04586002c3f94b7ea5a2f1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java


> Use log variable directly instead of passing as argument in 
> InvalidateBlocks#printBlockDeletionTime()
> -
>
> Key: HDFS-14995
> URL: https://issues.apache.org/jira/browse/HDFS-14995
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14995.001.patch, HDFS-14995.002.patch
>
>
> Refactor {{InvalidateBlocks#printBlockDeletionTime()}}.
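> The refactor simply drops the Logger parameter in favor of the class's own 
> static field, e.g. (simplified fragment, with a placeholder message):
> {code:java}
> // Before (simplified): the logger is threaded through as an argument.
> private void printBlockDeletionTime(Logger log) {
>   log.info("..."); // placeholder message
> }
>
> // After (simplified): the class's own static LOG field is used directly.
> private void printBlockDeletionTime() {
>   LOG.info("..."); // placeholder message
> }
> {code}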






[jira] [Commented] (HDFS-14952) Skip safemode if blockTotal is 0 in new NN

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977823#comment-16977823
 ] 

Hudson commented on HDFS-14952:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17671 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17671/])
HDFS-14952. Skip safemode if blockTotal is 0 in new NN. Contributed by 
(weichiu: rev 0b50aa29fd5dc114b3e0fc54b5c393bbc9f3102e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java


> Skip safemode if blockTotal is 0 in new NN
> --
>
> Key: HDFS-14952
> URL: https://issues.apache.org/jira/browse/HDFS-14952
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Rajesh Balamohan
>Assignee: Xiaoqiao He
>Priority: Trivial
>  Labels: performance
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.11.0
>
> Attachments: HDFS-14952.001.patch, HDFS-14952.002.patch, 
> HDFS-14952.003.patch
>
>
> When a new NN is installed, it spends 30-45 seconds in safe mode. When 
> {{blockTotal}} is 0, it should be possible to short-circuit the safemode 
> check in {{BlockManagerSafeMode::areThresholdsMet}}.
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java#L571
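> A minimal sketch of the short circuit (simplified from 
> BlockManagerSafeMode#areThresholdsMet; parameters stand in for the real 
> fields):
> {code:java}
> // Sketch: with zero blocks (a freshly installed NameNode) the block
> // threshold is vacuously satisfied, so safe mode need not wait for block
> // reports that will never arrive.
> boolean areThresholdsMet(long blockTotal, long blockSafe, double threshold,
>     int liveDataNodes, int datanodeThreshold) {
>   boolean blocksMet = blockTotal == 0
>       || blockSafe >= (long) (blockTotal * threshold);
>   return blocksMet && liveDataNodes >= datanodeThreshold;
> }
> {code}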






[jira] [Commented] (HDFS-14992) TestOfflineEditsViewer is failing in Trunk

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977707#comment-16977707
 ] 

Hudson commented on HDFS-14992:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17668 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17668/])
HDFS-14992. TestOfflineEditsViewer is failing in Trunk. Contributed by 
(surendralilhore: rev c8705147408c302159ced1a00ca95137536f1459)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


> TestOfflineEditsViewer is failing in Trunk
> --
>
> Key: HDFS-14992
> URL: https://issues.apache.org/jira/browse/HDFS-14992
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14992.001.patch, HDFS-14992.002.patch, editsStored
>
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/28310/testReport/]
>  






[jira] [Commented] (HDFS-14967) TestWebHDFS fails in Windows

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976539#comment-16976539
 ] 

Hudson commented on HDFS-14967:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17655 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17655/])
HDFS-14967. TestWebHDFS fails in Windows. Contributed by Renukaprasad C. 
(ayushsaxena: rev 34cb595d2ebb0585b0c19ef8fdbf051e4eb69d19)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> TestWebHDFS  fails in Windows 
> --
>
> Key: HDFS-14967
> URL: https://issues.apache.org/jira/browse/HDFS-14967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14967.001.patch, HDFS-14967.002.patch
>
>
> In the TestWebHDFS test class, a few test cases do not close the 
> MiniDFSCluster, which causes the remaining tests to fail on Windows. While 
> the cluster is left open, all subsequent test cases fail to get the lock on 
> the data dir, which results in test case failures.
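> The usual remedy is to guarantee teardown regardless of how the test body 
> exits, e.g. (a generic sketch, not the committed test code):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> import org.apache.hadoop.hdfs.MiniDFSCluster;
> import org.junit.Test;
>
> // Sketch: always shut the MiniDFSCluster down so the next test can acquire
> // the data-dir lock on Windows, even if an assertion in the body fails.
> public class TestExample {
>   @Test
>   public void testExample() throws Exception {
>     final Configuration conf = new HdfsConfiguration();
>     MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
>     try {
>       cluster.waitActive();
>       // ... test body ...
>     } finally {
>       cluster.shutdown();
>     }
>   }
> }
> {code}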






[jira] [Commented] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976465#comment-16976465
 ] 

Hudson commented on HDFS-14955:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17653 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17653/])
HDFS-14955. RBF: getQuotaUsage() on mount point should return global 
(ayushsaxena: rev 12617fad2eb32108412dac9ecee286de6641d060)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java


> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch, 
> HDFS-14955.003.patch
>
>
> When calling getQuotaUsage() on a mount point path, the quota part should be 
> the global quota.






[jira] [Commented] (HDFS-14974) RBF: Make tests use free ports

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976427#comment-16976427
 ] 

Hudson commented on HDFS-14974:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17652 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17652/])
HDFS-14974. RBF: Make tests use free ports. Contributed by Inigo Goiri. 
(ayushsaxena: rev 3b5a0e86c100c41162b145884a147b91bcb50cad)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java


> RBF: Make tests use free ports
> --
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.
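> Binding to port 0 asks the OS for any free ephemeral port; the generic Java 
> pattern looks like:
> {code:java}
> import java.io.IOException;
> import java.net.ServerSocket;
>
> // Sketch: port 0 means "assign me any free port", which is what the test
> // configuration should request instead of the Router's fixed defaults.
> static int bindToFreePort() throws IOException {
>   try (ServerSocket socket = new ServerSocket(0)) {
>     return socket.getLocalPort();   // the dynamically assigned port
>   }
> }
> {code}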






[jira] [Commented] (HDFS-14648) Implement DeadNodeDetector basic model

2019-11-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16975566#comment-16975566
 ] 

Hudson commented on HDFS-14648:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17649 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17649/])
HDFS-14648. Implement DeadNodeDetector basic model. Contributed by (yqlin: rev 
b3119b9ab60a19d624db476c4e1c53410870c7a6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDeadNodeDetection.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


> Implement DeadNodeDetector basic model
> --
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14648.001.patch, HDFS-14648.002.patch, 
> HDFS-14648.003.patch, HDFS-14648.004.patch, HDFS-14648.005.patch, 
> HDFS-14648.006.patch, HDFS-14648.007.patch, HDFS-14648.008.patch, 
> HDFS-14648.009.patch, HDFS-14648.010.patch, HDFS-14648.011.patch
>
>
> This Jira constructs the DeadNodeDetector state machine model. The functions 
> it implements are as follows:
>  # When a DFSInputStream is opened, a BlockReader is opened. If some DataNode 
> of the block is found to be inaccessible, the DataNode is put into 
> DeadNodeDetector#deadnode. HDFS-14649 will optimize this part: when a 
> DataNode is not accessible, it is likely that the replica has been removed 
> from the DataNode, so this needs to be confirmed by re-probing and requires 
> higher priority processing.
>  # DeadNodeDetector periodically probes the nodes in 
> DeadNodeDetector#deadnode; if the access is successful, the node is removed 
> from DeadNodeDetector#deadnode. Continuous detection of the dead nodes is 
> necessary, because a DataNode may need to rejoin the cluster after a service 
> restart or machine repair, and it would be permanently excluded if there 
> were no such probe mechanism.
>  # DeadNodeDetector#dfsInputStreamNodes records the DataNodes used by each 
> DFSInputStream. When a DFSInputStream is closed, its entries are removed 
> from DeadNodeDetector#dfsInputStreamNodes.
>  # Every time the global deadnode set is fetched, DeadNodeDetector#deadnode 
> is updated: the new DeadNodeDetector#deadnode equals the intersection of the 
> old DeadNodeDetector#deadnode and the DataNodes referenced by 
> DeadNodeDetector#dfsInputStreamNodes.
>  # DeadNodeDetector has a switch that is turned off by default. When it is 
> off, each DFSInputStream still uses its own local deadnode list.
>  # This feature has been used in the XIAOMI production environment for a 
> long time and has reduced HBase read stalls caused by node hangs.
>  # Just turn on the DeadNodeDetector switch and you can use it directly; 
> there are no other restrictions. If you don't want to use DeadNodeDetector, 
> just turn it off.
> {code:java}
> if (sharedDeadNodesEnabled && deadNodeDetector == null) {
>   deadNodeDetector = new DeadNodeDetector(name);
>   deadNodeDetectorThr = new Daemon(deadNodeDetector);
>   deadNodeDetectorThr.start();
> }
> {code}






[jira] [Commented] (HDFS-14802) The feature of protect directories should be used in RenameOp

2019-11-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16975399#comment-16975399
 ] 

Hudson commented on HDFS-14802:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17648 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17648/])
HDFS-14802. The feature of protect directories should be used in (weichiu: rev 
67f2c491fe3cd400605fb6082fd3504bc5e97037)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProtectedDirectories.java


> The feature of protect directories should be used in RenameOp
> -
>
> Key: HDFS-14802
> URL: https://issues.apache.org/jira/browse/HDFS-14802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14802.001.patch, HDFS-14802.002.patch, 
> HDFS-14802.003.patch, HDFS-14802.004.patch
>
>
> Now we can set fs.protected.directories to prevent users from deleting 
> important directories, but users can still delete those directories by 
> working around the limitation:
> 1. Rename the directories and then delete them.
> 2. Move the directories to the trash, and the NameNode will delete them.
> So I think we should apply the protected-directories feature in RenameOp.
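> A sketch of extending the delete-side validation to the rename source (the 
> helper shape is an assumption; see the FSDirRenameOp edit in the commit 
> above for the real change):
> {code:java}
> import java.io.IOException;
> import java.util.Collection;
> import org.apache.hadoop.security.AccessControlException;
>
> // Sketch: refuse a rename whose source path is, or contains, a protected
> // directory, closing the rename-then-delete loophole described above.
> class ProtectedDirsCheck {
>   static void checkRenameSource(Collection<String> protectedDirs, String src)
>       throws IOException {
>     for (String dir : protectedDirs) {
>       if (dir.equals(src) || dir.startsWith(src + "/")) {
>         throw new AccessControlException(
>             "Cannot rename " + src + ": it covers protected directory " + dir);
>       }
>     }
>   }
> }
> {code}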






[jira] [Commented] (HDFS-14882) Consider DataNode load when #getBlockLocation

2019-11-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16975387#comment-16975387
 ] 

Hudson commented on HDFS-14882:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17647 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17647/])
HDFS-14882. Consider DataNode load when #getBlockLocation. Contributed 
(weichiu: rev c892a879ddce3abfd51c8609c81148bf6e4f9daa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Consider DataNode load when #getBlockLocation
> -
>
> Key: HDFS-14882
> URL: https://issues.apache.org/jira/browse/HDFS-14882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14882.001.patch, HDFS-14882.002.patch, 
> HDFS-14882.003.patch, HDFS-14882.004.patch, HDFS-14882.005.patch, 
> HDFS-14882.006.patch, HDFS-14882.007.patch, HDFS-14882.008.patch, 
> HDFS-14882.009.patch, HDFS-14882.010.patch, HDFS-14882.suggestion
>
>
> Currently, we consider the load of a DataNode in #chooseTarget for writers, 
> but not for readers. Thus, the process slots of a DataNode can be occupied 
> by #BlockSender instances serving readers, the disk/network becomes busy, 
> and we then hit slow-node exceptions. IIRC the same case has been reported 
> several times. Based on that, I propose to consider load for readers the 
> same way #chooseTarget does for writers.
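> A sketch of the idea: when ordering the replicas returned by 
> #getBlockLocation, push DataNodes whose active transceiver count is well 
> above the average to the back, just as slow or stale nodes are 
> deprioritized (the 2x threshold and class names are illustrative):
> {code:java}
> import java.util.Comparator;
> import java.util.List;
> import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
>
> // Sketch: a stable sort that keeps lightly loaded replicas first; "load"
> // here is the DataNode's active transceiver count.
> class LoadAwareSorter {
>   boolean isOverloaded(int xceiverCount, double avgLoad) {
>     return xceiverCount > 2.0 * avgLoad;            // illustrative threshold
>   }
>
>   void sortForReader(List<DatanodeDescriptor> replicas, double avgLoad) {
>     replicas.sort(Comparator.comparingInt(
>         dn -> isOverloaded(dn.getXceiverCount(), avgLoad) ? 1 : 0));
>   }
> }
> {code}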






[jira] [Commented] (HDFS-14973) Balancer getBlocks RPC dispersal does not function properly

2019-11-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16975301#comment-16975301
 ] 

Hudson commented on HDFS-14973:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17645 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17645/])
HDFS-14973. More strictly enforce Balancer/Mover/SPS throttling of (xkrogen: 
rev b2cc8b6b4a78f31cdd937dc4d1a2255f80c5881e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerRPCDelay.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java


> Balancer getBlocks RPC dispersal does not function properly
> ---
>
> Key: HDFS-14973
> URL: https://issues.apache.org/jira/browse/HDFS-14973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.9.0, 2.7.4, 2.8.2, 3.0.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14973.000.patch, HDFS-14973.001.patch, 
> HDFS-14973.002.patch, HDFS-14973.003.patch, HDFS-14973.test.patch
>
>
> In HDFS-11384, a mechanism was added to make the {{getBlocks}} RPC calls 
> issued by the Balancer/Mover more dispersed, to alleviate load on the 
> NameNode, since {{getBlocks}} can be very expensive and the Balancer should 
> not impact normal cluster operation.
> Unfortunately, this functionality does not function as expected, especially 
> when the dispatcher thread count is low. The primary issue is that the delay 
> is applied only to the first N threads that are submitted to the dispatcher's 
> executor, where N is the size of the dispatcher's threadpool, but *not* to 
> the first R threads, where R is the number of allowed {{getBlocks}} QPS 
> (currently hardcoded to 20). For example, if the threadpool size is 100 (the 
> default), threads 0-19 have no delay, 20-99 have increased levels of delay, 
> and 100+ have no delay. As I understand it, the intent of the logic was that 
> the delay applied to the first 100 threads would force the dispatcher 
> executor's threads to all be consumed, thus blocking subsequent (non-delayed) 
> threads until the delay period has expired. However, threads 0-19 can finish 
> very quickly (their work can often be fulfilled in the time it takes to 
> execute a single {{getBlocks}} RPC, on the order of tens of milliseconds), 
> thus opening up 20 new slots in the executor, which are then consumed by 
> non-delayed threads 100-119, and so on. So, although 80 threads have had a 
> delay applied, the non-delay threads rush through in the 20 non-delay slots.
> This problem gets even worse when the dispatcher threadpool size is less than 
> the max {{getBlocks}} QPS. For example, if the threadpool size is 10, _no 
> threads ever have a delay applied_, and the feature is not enabled at all.
> This problem wasn't surfaced in the original JIRA because the test 
> incorrectly measured the period across which {{getBlocks}} RPCs were 
> distributed. The variables {{startGetBlocksTime}} and {{endGetBlocksTime}} 
> were used to track the time over which the {{getBlocks}} calls were made. 
> However, {{startGetBlocksTime}} was initialized at the time of creation of 
> the {{FSNamesystem}} spy, which is before the mock DataNodes are started. Even 
> worse, the Balancer in this test takes 2 iterations to complete balancing the 
> cluster, so the time period {{endGetBlocksTime - startGetBlocksTime}} 
> actually represents:
> {code}
> (time to submit getBlocks RPCs) + (DataNode startup time) + (time for the 
> Dispatcher to complete an iteration of moving blocks)
> {code}
> Thus, the RPC QPS reported by the test is much lower than the RPC QPS seen 
> during the period of initial block fetching.
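> The flawed schedule is easy to see with a small, self-contained simulation 
> (the numbers are illustrative; this is not the Dispatcher code):
> {code:java}
> public class DelayScheduleSketch {
>   public static void main(String[] args) {
>     int poolSize = 100;  // dispatcher threadpool size
>     int maxQps = 20;     // allowed getBlocks calls per second
>     for (int task = 0; task <= 120; task += 20) {
>       long delayMs;
>       if (task < maxQps || task >= poolSize) {
>         delayMs = 0;  // first R tasks, and every task beyond the pool, get no delay
>       } else {
>         delayMs = (task / maxQps) * 1000L;  // tasks R..N-1 get increasing delay
>       }
>       System.out.println("task " + task + " -> delay " + delayMs + " ms");
>     }
>   }
> }
> {code}
> As soon as the first 20 tasks finish, the undelayed tasks at index >= 100 
> take their slots, so the intended throttle never engages.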



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14979) [Observer Node] Balancer should submit getBlocks to Observer Node when possible

2019-11-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973648#comment-16973648
 ] 

Hudson commented on HDFS-14979:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17638 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17638/])
HDFS-14979 Allow Balancer to submit getBlocks calls to Observer Nodes (xkrogen: 
rev 586defe7113ed246ed0275bb3833882a3d873d70)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java


> [Observer Node] Balancer should submit getBlocks to Observer Node when 
> possible
> ---
>
> Key: HDFS-14979
> URL: https://issues.apache.org/jira/browse/HDFS-14979
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, hdfs
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14979.000.patch
>
>
> In HDFS-14162, we made it so that the Balancer could function when 
> {{ObserverReadProxyProvider}} was in use. However, the Balancer would still 
> read from the active NameNode, because {{getBlocks}} wasn't annotated as 
> {{@ReadOnly}}. This task is to enable the Balancer to actually read from the 
> Observer Node to alleviate load from the active NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14959) [SBNN read] access time should be turned off

2019-11-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972736#comment-16972736
 ] 

Hudson commented on HDFS-14959:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17636 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17636/])
HDFS-14959: [SBNN read] access time should be turned off (#1706) (weichiu: rev 
97ec34e117af71e1a9950b8002131c45754009c7)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md


> [SBNN read] access time should be turned off
> 
>
> Key: HDFS-14959
> URL: https://issues.apache.org/jira/browse/HDFS-14959
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Reporter: Wei-Chiu Chuang
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> Both Uber and Didi shared that access time has to be switched off to avoid 
> spiky NameNode RPC processing time. If access time is not off entirely, 
> getBlockLocations RPCs have to update the access time and must go to the 
> active NameNode. (That's my understanding; I haven't checked the code.)
> We should record this as a best practice in our doc.
> (If you are on the ASF slack, check out this thread
> https://the-asf.slack.com/archives/CAD7C52Q3/p1572033324008600)
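> A minimal sketch of the recommended setting (assuming hadoop-common on the 
> classpath; a precision of 0 disables access-time updates entirely):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class DisableAtimeSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // 0 switches access times off; the default precision is one hour.
>     conf.setLong("dfs.namenode.accesstime.precision", 0L);
>     System.out.println(conf.getLong("dfs.namenode.accesstime.precision", -1L));
>   }
> }
> {code}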



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14922) Prevent snapshot modification time got change on startup

2019-11-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972707#comment-16972707
 ] 

Hudson commented on HDFS-14922:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17634 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17634/])
HDFS-14922. Prevent snapshot modification time got change on startup. 
(inigoiri: rev 40150da1e12a41c2e774fe2a277ddc3988bed239)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java


> Prevent snapshot modification time got change on startup
> 
>
> Key: HDFS-14922
> URL: https://issues.apache.org/jira/browse/HDFS-14922
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14922.001.patch, HDFS-14922.002.patch, 
> HDFS-14922.003.patch, HDFS-14922.004.patch, HDFS-14922.005.patch
>
>
> Snapshot modification time got changed on namenode restart



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971327#comment-16971327
 ] 

Hudson commented on HDFS-14975:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17625 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17625/])
HDFS-14975. Add CR for SetECPolicyCommand usage. Contributed by Fei Hui. 
(ayushsaxena: rev 77934bc07b9bef3e129826e93e1c1e8a47c00c95)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java


> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14975.001.patch
>
>
> *bin/hdfs ec -help* outputs the following message
> {quote}
> [-listPolicies]
> Get the list of all erasure coding policies.
> [-addPolicies -policyFile ]
> Add a list of user defined erasure coding policies.
>   The path of the xml file which defines the EC policies to add 
> [-getPolicy -path ]
> Get the erasure coding policy of a file/directory.
>   The path of the file/directory for getting the erasure coding policy 
> [-removePolicy -policy ]
> Remove an user defined erasure coding policy.
>   The name of the erasure coding policy 
> [-setPolicy -path  [-policy ] [-replicate]]
> Set the erasure coding policy for a file/directory.
>   The path of the file/directory to set the erasure coding policy 
> The name of the erasure coding policy   
> -replicate  force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the 
> same time
> [-unsetPolicy -path ]
> Unset the erasure coding policy for a directory.
>   The path of the directory from which the erasure coding policy will 
> be 
> unset.
>  
> [-listCodecs]
> Get the list of supported erasure coding codecs and coders.
> A coder is an implementation of a codec. A codec can have different 
> implementations, thus different coders.
> The coders for a codec are listed in a fall back order.
> [-enablePolicy -policy ]
> Enable the erasure coding policy.
>   The name of the erasure coding policy 
> [-disablePolicy -policy ]
> Disable the erasure coding policy.
>   The name of the erasure coding policy 
> [-verifyClusterSetup [-policy ...]]
> Verify if the cluster setup can support all enabled erasure coding policies. 
> If optional parameter -policy is specified, verify if the cluster setup can 
> support the given policy.
> {quote}
> The output format is not friendly to users.
> We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, like 
> other commands:
> {quote}
> [-setPolicy -path  [-policy ] [-replicate]]
> Set the erasure coding policy for a file/directory.
>  The path of the file/directory to set the erasure coding policy
>  The name of the erasure coding policy
> -replicate force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the 
> same time
> -here-
> [-unsetPolicy -path ]
> Unset the erasure coding policy for a directory.
>  The path of the directory from which the erasure coding policy will be
> unset.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14962) RBF: ConnectionPool#newConnection() error log wrong protocol class

2019-11-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971321#comment-16971321
 ] 

Hudson commented on HDFS-14962:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17624 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17624/])
HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol 
(ayushsaxena: rev b25a37c3229e1a66699d649f6caf80ffc71db5b8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java


> RBF: ConnectionPool#newConnection() error log wrong protocol class
> --
>
> Key: HDFS-14962
> URL: https://issues.apache.org/jira/browse/HDFS-14962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Minor
>  Labels: RBF
> Fix For: 3.3.0
>
>
> ConnectionPool#newConnection() has following code:
> {code:java}
> String msg = "Unsupported protocol for connection to NameNode: "
> + ((proto != null) ? proto.getClass().getName() : "null");
> {code}
> *proto.getClass().getName()* should be *proto.getName()*
> My IDE can figure out the issue.
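> The difference is plain Java semantics and can be seen standalone:
> {code:java}
> public class ProtoLogSketch {
>   public static void main(String[] args) {
>     Class<?> proto = Runnable.class;  // stand-in for the NameNode protocol class
>     System.out.println(proto.getClass().getName());  // java.lang.Class -- the buggy log
>     System.out.println(proto.getName());             // java.lang.Runnable -- the intended log
>   }
> }
> {code}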



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14928) UI: unifying the WebUI across different components.

2019-11-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971318#comment-16971318
 ] 

Hudson commented on HDFS-14928:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17623 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17623/])
HDFS-14928. UI: unifying the WebUI across different components. (tasanuma: rev 
6663d6a5c2d0e01523c81e0719fd305ee279ea55)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css


> UI: unifying the WebUI across different components.
> ---
>
> Key: HDFS-14928
> URL: https://issues.apache.org/jira/browse/HDFS-14928
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: DN_orig.png, DN_with_legend.png.png, DN_wo_legend.png, 
> HDFS-14892-2.jpg, HDFS-14928.001.patch, HDFS-14928.002.patch, 
> HDFS-14928.003.patch, HDFS-14928.004.patch, HDFS-14928.jpg, NN_orig.png, 
> NN_with_legend.png, NN_wo_legend.png, RBF_orig.png, RBF_with_legend.png, 
> RBF_wo_legend.png
>
>
> The WebUI of different components could be unified.
> *Router:*
> |Current|  !RBF_orig.png|width=500! | 
> |Proposed 1 (With Icon) |  !RBF_wo_legend.png|width=500! | 
> |Proposed 2 (With Icon and Legend)|!RBF_with_legend.png|width=500!  | 
> *NameNode:*
> |Current| !NN_orig.png|width=500! |
> |Proposed 1 (With Icon) | !NN_wo_legend.png|width=500! |
> |Proposed 2 (With Icon and Legend)| !NN_with_legend.png|width=500! |
> *DataNode:*
> |Current| !DN_orig.png|width=500! |
> |Proposed 1 (With Icon) | !DN_wo_legend.png|width=500! |
> |Proposed 2 (With Icon and Legend)| !DN_with_legend.png.png|width=500! |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-11-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970878#comment-16970878
 ] 

Hudson commented on HDFS-14720:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17622 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17622/])
HDFS-14720. DataNode shouldn't report block as bad block if the block 
(surendralilhore: rev 320008bb7cc558b1300398178bd2f48cbf0b6c80)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java


> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14720.001.patch, HDFS-14720.002.patch, 
> HDFS-14720.003.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to has already been deleted from the namenode, and the DN got the command 
> after the file's deletion. In this case the command should be ignored.
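> The general shape of such a guard, as a self-contained sketch (the real fix 
> is in ReplicationWork; the names here are illustrative):
> {code:java}
> public class DeletedFileGuardSketch {
>   // The NameNode-recorded length is Long.MAX_VALUE once the file is gone.
>   static boolean shouldIgnore(long nnRecordedLength) {
>     return nnRecordedLength == Long.MAX_VALUE;
>   }
> 
>   public static void main(String[] args) {
>     System.out.println(shouldIgnore(Long.MAX_VALUE));  // true  -> ignore the command
>     System.out.println(shouldIgnore(175085L));         // false -> replicate as usual
>   }
> }
> {code}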



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14815) RBF: Update the quota in MountTable when calling setQuota on a MountTable src.

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969812#comment-16969812
 ] 

Hudson commented on HDFS-14815:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17618 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17618/])
HDFS-14815. RBF: Update the quota in MountTable when calling setQuota on 
(ayushsaxena: rev 42fc8884ab9763e8778670f301896bf473ecf1d2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java


> RBF: Update the quota in MountTable when calling setQuota on a MountTable src.
> --
>
> Key: HDFS-14815
> URL: https://issues.apache.org/jira/browse/HDFS-14815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14815.001.patch, HDFS-14815.002.patch, 
> HDFS-14815.003.patch, HDFS-14815.004.patch, HDFS-14815.005.patch
>
>
> The method setQuota() can make the remote quota (the quota on the real 
> clusters) inconsistent with the MountTable. I think we have 3 ways to fix it:
>  # Reject all setQuota() RPCs that try to change the quota of a mount 
> table.
>  # Let setQuota() change the mount table quota: update the quota on ZK 
> first and then update the remote quotas.
>  # Do nothing. The RouterQuotaUpdateService will eventually make all the 
> remote quotas right. We can tolerate short-term inconsistencies.
> I'd like option 1 because I want the RouterAdmin to be the only entrance for 
> updating the MountTable.
> With option 3 we don't need to change anything, but the quota will be 
> inconsistent for a short term. The remote quota will take effect immediately 
> and be auto-changed back after a while. Users might be confused by this 
> behavior.
>  
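> A hedged sketch of option 1 (the names are hypothetical, not the actual 
> Router code): reject a setQuota RPC when the path is managed by the mount 
> table.
> {code:java}
> import java.io.IOException;
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.Set;
> 
> public class MountQuotaGuardSketch {
>   static final Set<String> MOUNT_POINTS = new HashSet<>(Arrays.asList("/user", "/data"));
> 
>   static void setQuota(String path, long nsQuota, long ssQuota) throws IOException {
>     if (MOUNT_POINTS.contains(path)) {
>       throw new IOException("setQuota is not allowed on mount point " + path
>           + "; update the quota through the Router admin instead");
>     }
>     // ... otherwise forward the call to the underlying namespace.
>   }
> 
>   public static void main(String[] args) {
>     try {
>       setQuota("/user", 1000, -1);
>     } catch (IOException e) {
>       System.out.println(e.getMessage());
>     }
>   }
> }
> {code}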



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14958) TestBalancerWithNodeGroup is not using NetworkTopologyWithNodeGroup

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969451#comment-16969451
 ] 

Hudson commented on HDFS-14958:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17617 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17617/])
HDFS-14958. TestBalancerWithNodeGroup is not using (ayushsaxena: rev 
247584eb63db3a49705f330da96f37fd9c7dee70)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java


> TestBalancerWithNodeGroup is not using NetworkTopologyWithNodeGroup
> ---
>
> Key: HDFS-14958
> URL: https://issues.apache.org/jira/browse/HDFS-14958
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.3
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1, 2.11.0
>
> Attachments: HDFS-14958.001.patch
>
>
> TestBalancerWithNodeGroup is intended to test with 
> {{NetworkTopologyWithNodeGroup}}, but it is not configured correctly.  
> Because {{DFSConfigKeys.DFS_USE_DFS_NETWORK_TOPOLOGY_KEY}} defaults to true, 
> {{CommonConfigurationKeysPublic.NET_TOPOLOGY_IMPL_KEY}} is ignored and the 
> test actually uses the default {{DFSNetworkTopology}}.
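> A sketch of the corrected test configuration (key strings taken from my 
> reading of DFSConfigKeys/CommonConfigurationKeysPublic; assuming 
> hadoop-common on the classpath):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class NodeGroupTopologySketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // Without this, NET_TOPOLOGY_IMPL_KEY is ignored by HDFS.
>     conf.setBoolean("dfs.use.dfs.network.topology", false);
>     conf.set("net.topology.impl",
>         "org.apache.hadoop.net.NetworkTopologyWithNodeGroup");
>     System.out.println(conf.get("net.topology.impl"));
>   }
> }
> {code}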



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14941) Potential editlog race condition can cause corrupted file

2019-11-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968569#comment-16968569
 ] 

Hudson commented on HDFS-14941:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17616 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17616/])
HDFS-14941. Potential editlog race condition can cause corrupted file. (cliang: 
rev dd900259c421d6edd0b89a535a1fe08ada91735f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SequentialNumber.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestAddBlockTailing.java


> Potential editlog race condition can cause corrupted file
> -
>
> Key: HDFS-14941
> URL: https://issues.apache.org/jira/browse/HDFS-14941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: ha
> Fix For: 3.3.0
>
> Attachments: HDFS-14941.001.patch, HDFS-14941.002.patch, 
> HDFS-14941.003.patch, HDFS-14941.004.patch, HDFS-14941.005.patch, 
> HDFS-14941.006.patch
>
>
> Recently we encountered an issue where, after a failover, the NameNode 
> complains about corrupted files/missing blocks. The blocks did recover after 
> full block reports, so the blocks are not actually missing. After further 
> investigation, we believe this is what happened:
> First of all, it is possible that the SbN receives block reports before the 
> corresponding edit tailing has happened, in which case the SbN postpones 
> processing the DN block report, handled by the guarding logic below:
> {code:java}
>   if (shouldPostponeBlocksFromFuture &&
>   namesystem.isGenStampInFuture(iblk)) {
> queueReportedBlock(storageInfo, iblk, reportedState,
> QUEUE_REASON_FUTURE_GENSTAMP);
> continue;
>   }
> {code}
> Basically if reported block has a future generation stamp, the DN report gets 
> requeued.
> However, in {{FSNamesystem#storeAllocatedBlock}}, we have the following code:
> {code:java}
>   // allocate new block, record block locations in INode.
>   newBlock = createNewBlock();
>   INodesInPath inodesInPath = INodesInPath.fromINode(pendingFile);
>   saveAllocatedBlock(src, inodesInPath, newBlock, targets);
>   persistNewBlock(src, pendingFile);
>   offset = pendingFile.computeFileSize();
> {code}
> The line
>  {{newBlock = createNewBlock();}}
>  Would log an edit entry {{OP_SET_GENSTAMP_V2}} to bump generation stamp on 
> Standby
>  while the following line
>  {{persistNewBlock(src, pendingFile);}}
>  would log another edit entry {{OP_ADD_BLOCK}} to actually add the block on 
> Standby.
> Then the race condition is that, imagine Standby has just processed 
> {{OP_SET_GENSTAMP_V2}}, but not yet {{OP_ADD_BLOCK}} (if they just happen to 
> be in different segments). Now a block report with a new generation stamp comes 
> in.
> Since the genstamp bump has already been processed, the reported block may 
> not be considered as future block. So the guarding logic passes. But 
> actually, the block hasn't been added to blockmap, because the second edit is 
> yet to be tailed. So, the block then gets added to invalidate block list and 
> we saw messages like:
> {code:java}
> BLOCK* addBlock: block XXX on node XXX size XXX does not belong to any file
> {code}
> Even worse, since this IBR is effectively lost, the NameNode has no 
> information about this block, until the next full block report. So after a 
> failover, the NN marks it as corrupt.
> This issue won't happen though, if both of the edit entries get tailed all 
> together, so no IBR processing can happen in between. But in our case, we set 
> edit tailing interval to super low (to allow Standby read), so when under 
> high workload, there is a much much higher chance that the two entries are 
> tailed separately, causing the issue.
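> The race can be condensed into a self-contained sketch of the guard 
> (illustration only; the real logic lives in BlockManager):
> {code:java}
> import java.util.HashSet;
> import java.util.Set;
> 
> public class GenstampGuardSketch {
>   static long standbyGenstamp = 100;            // bumped to 101 by OP_SET_GENSTAMP_V2
>   static Set<Long> blockMap = new HashSet<>();  // OP_ADD_BLOCK not tailed yet
> 
>   static String process(long blockId, long reportedGs) {
>     if (reportedGs > standbyGenstamp) {
>       return "queued (future genstamp)";        // the safe path
>     }
>     if (!blockMap.contains(blockId)) {
>       return "invalidated: does not belong to any file";  // the lost IBR
>     }
>     return "added";
>   }
> 
>   public static void main(String[] args) {
>     standbyGenstamp = 101;                  // OP_SET_GENSTAMP_V2 tailed first
>     System.out.println(process(42L, 101));  // guard passes, block unknown -> IBR lost
>   }
> }
> {code}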



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-14806) Bootstrap standby may fail if used in-progress tailing

2019-11-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968544#comment-16968544
 ] 

Hudson commented on HDFS-14806:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17615 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17615/])
HDFS-14806. Bootstrap standby may fail if with in-progress tailing. (cliang: 
rev 9d0d580031006ca6db9b4150f17ab678ce68a257)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandbyWithInProgressTailing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandbyWithQJM.java


> Bootstrap standby may fail if used in-progress tailing
> --
>
> Key: HDFS-14806
> URL: https://issues.apache.org/jira/browse/HDFS-14806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14806.001.patch, HDFS-14806.002.patch, 
> HDFS-14806.003.patch, HDFS-14806.004.patch
>
>
> One issue we came across was that if in-progress tailing is enabled, 
> bootstrap standby could fail.
> When in-progress tailing is enabled, Bootstrap uses the RPC mechanism to get 
> edits. There is a config {{dfs.ha.tail-edits.qjm.rpc.max-txns}} that sets an 
> upper bound on how many txnids can be included in one RPC call. The default 
> is 5000, meaning the bootstrapping NN (say NN1) can only pull at most 5000 
> edits from the JNs. However, as part of bootstrap, NN1 queries another NN 
> (say NN2) for NN2's current transaction ID, and NN2 may return a state that 
> is > 5000 txnids ahead of NN1's current image. But NN1 can only see 5000 
> more txnids from the JNs. At this point NN1 panics, because the txnid 
> returned by the JNs is behind NN2's returned state, and bootstrap then 
> fails.
> Essentially, bootstrap standby can fail if both of the two following 
> conditions are met:
>  # in-progress tailing is enabled AND
>  # the bootstrapping NN is too far (>5000 txids) behind
> Increasing the value of {{dfs.ha.tail-edits.qjm.rpc.max-txns}} to some super 
> large value allowed bootstrap to continue. But this is hardly the ideal 
> solution.
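> For reference, the workaround looks like this (assuming hadoop-common on the 
> classpath; 100000 is an arbitrary example value):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class BootstrapWorkaroundSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     conf.setInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 100000);
>     System.out.println(conf.getInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 5000));
>   }
> }
> {code}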



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14942) Change Log Level to debug in JournalNodeSyncer#syncWithJournalAtIndex

2019-11-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968507#comment-16968507
 ] 

Hudson commented on HDFS-14942:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17614 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17614/])
HDFS-14942. Change Log Level to debug in (ayushsaxena: rev 
9e287054a8aa0725643bc5c90601645302fffade)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java


> Change Log Level to debug in JournalNodeSyncer#syncWithJournalAtIndex
> -
>
> Key: HDFS-14942
> URL: https://issues.apache.org/jira/browse/HDFS-14942
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14942.001.patch, HDFS-14942.002.patch, 
> HDFS-14942.003.patch, HDFS-14942.004.patch
>
>
> When Hadoop 2.x is upgraded to Hadoop 3.x, InterQJournalProtocol is newly 
> added, so an "Unknown protocol" error is thrown.
> The new InterQJournalProtocol is used to synchronize past log segments to 
> JNs that missed them, and an error occurring here does not affect normal 
> service. I think it should not be an ERROR log; a WARN log is more 
> reasonable.
> {code:java}
>  private void syncWithJournalAtIndex(int index) {
>   ...
> GetEditLogManifestResponseProto editLogManifest;
> try {
>   editLogManifest = jnProxy.getEditLogManifestFromJournal(jid,
>   nameServiceId, 0, false);
> } catch (IOException e) {
>   LOG.error("Could not sync with Journal at " +
>   otherJNProxies.get(journalNodeIndexForSync), e);
>   return;
> }
> {code}
> {code:java}
> 2019-10-30,15:11:17,388 ERROR 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not sync with 
> Journal at mos1-hadoop-prc-ct17.ksru/10.85.3.59:111002019-10-30,15:11:17,388 
> ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not 
> sync with Journal at 
> mos1-hadoop-prc-ct17.ksru/10.85.3.59:11100org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unknown protocol: 
> org.apache.hadoop.hdfs.qjournal.protocol.InterQJournalProtocol
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1565)
> at org.apache.hadoop.ipc.Client.call(Client.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1421)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy16.getEditLogManifestFromJournal(Unknown Source)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.InterQJournalProtocolTranslatorPB.getEditLogManifestFromJournal(InterQJournalProtocolTranslatorPB.java:75)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:250)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:226)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:186)
> at java.lang.Thread.run(Thread.java:748)
> {code}
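> A sketch of the proposed change (assuming the slf4j Logger already used by 
> JournalNodeSyncer): demote the level and use parameterized logging while 
> keeping the stack trace.
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> public class SyncLogLevelSketch {
>   private static final Logger LOG = LoggerFactory.getLogger(SyncLogLevelSketch.class);
> 
>   public static void main(String[] args) {
>     Exception e = new java.io.IOException("Unknown protocol: InterQJournalProtocol");
>     // Before: LOG.error("Could not sync with Journal at " + proxy, e);
>     // After: slf4j treats the trailing Throwable as the exception to print.
>     LOG.debug("Could not sync with Journal at {}", "jn1.example.com:8485", e);
>   }
> }
> {code}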



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14880) Correct the sequence of statistics & exit message in balencer

2019-11-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968494#comment-16968494
 ] 

Hudson commented on HDFS-14880:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17613 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17613/])
HDFS-14880. Correct the sequence of statistics & exit message in (ayushsaxena: 
rev dcf55838ae41aecfa8a7b37d9b95478ce6acf0a7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


> Correct the sequence of statistics & exit message in balencer
> -
>
> Key: HDFS-14880
> URL: https://issues.apache.org/jira/browse/HDFS-14880
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1, 3.2.1
> Environment: Run the balancer tool in cluster.
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14880.0001.patch, HDFS-14880.0002.patch, 
> HDFS-14880.0003.patch
>
>
> Actual:
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM   0   0 B  0 B
>   0 B
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
> Expected: The exit message should come after logging all the balancer 
> movement statistics data.
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> Sep 27, 2019 5:13:15 PM   0   0 B  0 B
>   0 B
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
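> The fix is purely an ordering change; a trivial sketch of the intended 
> sequence (strings abridged):
> {code:java}
> public class BalancerExitOrderSketch {
>   public static void main(String[] args) {
>     // Statistics lines first ...
>     System.out.println("Time Stamp  Iteration#  Bytes Already Moved  ...");
>     System.out.println("Sep 27, 2019 5:13:15 PM  0  0 B  0 B  0 B");
>     // ... then the exit message.
>     System.out.println("The cluster is balanced. Exiting...");
>   }
> }
> {code}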



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-11-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968386#comment-16968386
 ] 

Hudson commented on HDFS-14384:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17612 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17612/])
HDFS-14384. When lastLocatedBlock token expire, it will take 1~3s second 
(surendralilhore: rev c36014165c212b26d75268ee3659aa2cadcff349)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java


> When lastLocatedBlock token expire, it will take 1~3s second to refetch it.
> ---
>
> Key: HDFS-14384
> URL: https://issues.apache.org/jira/browse/HDFS-14384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14384.001.patch, HDFS-14384.002.patch, 
> HDFS-14384.003.patch
>
>
> Scenario:
>  1. Write a file with one block which is in-progress.
>  2. Open an input stream and close the output stream.
>  3. Wait for the block token to expire and read the data.
>  4. The last block takes 1~3 seconds to read.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14775) Add Timestamp for longest FSN write/read lock held log

2019-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967713#comment-16967713
 ] 

Hudson commented on HDFS-14775:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17609 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17609/])
HDFS-14775. Add Timestamp for longest FSN write/read lock held log. (inigoiri: 
rev bfb8f28cc995241e7387ceba8e14791b8c121956)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemLock.java


> Add Timestamp for longest FSN write/read lock held log
> --
>
> Key: HDFS-14775
> URL: https://issues.apache.org/jira/browse/HDFS-14775
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14775.001.patch, HDFS-14775.002.patch, 
> HDFS-14775.003.patch, HDFS-14775.004.patch, HDFS-14775.005.patch
>
>
> HDFS-13946 improved the log for the longest read/write lock held time; it's 
> a very useful improvement.
> In some conditions, we need to locate the detailed call information (user, 
> ip, path, etc.) for the longest lock holder, but the default throttle 
> interval (10s) is too long to find the corresponding audit log. I think we 
> should add the timestamp to the {{longestWriteLockHeldStackTrace}}.
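> A minimal sketch of the proposal (names hypothetical): report the wall-clock 
> time at which the longest lock was held, so it can be matched against the 
> audit log.
> {code:java}
> import java.time.Instant;
> 
> public class LockReportSketch {
>   public static void main(String[] args) {
>     long longestHeldMs = 1234;  // longest write-lock interval in this report window
>     long heldAtMs = System.currentTimeMillis() - longestHeldMs;
>     System.out.println("Longest write-lock held at " + Instant.ofEpochMilli(heldAtMs)
>         + " for " + longestHeldMs + " ms");
>   }
> }
> {code}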



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14891) RBF: namenode links in NameFederation Health page (federationhealth.html) cannot use https scheme

2019-11-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967269#comment-16967269
 ] 

Hudson commented on HDFS-14891:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17607 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17607/])
HDFS-14891. RBF: namenode links in NameFederation Health page (tasanuma: rev 
79010627074c4b830008444f92d8410aa1717006)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNamenodeWebScheme.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestMetricsBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatePBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestRBFMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FederationNamenodeContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMembershipState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockNamenode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/FederationStateStoreTestUtils.java


> RBF: namenode links in NameFederation Health page (federationhealth.html)  
> cannot use https scheme
> --
>
> Key: HDFS-14891
> URL: https://issues.apache.org/jira/browse/HDFS-14891
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf, ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14891.001.patch, HDFS-14891.002.patch, 
> HDFS-14891.003.patch, HDFS-14891.004.patch, HDFS-14891.005.patch, 
> HDFS-14891.006.patch, HDFS-14891.007.patch, HDFS-14891.008.patch, 
> HDFS-14891.patch
>
>
> The scheme of the links in federationhealth.html is hard-coded to 'http'.
> It should be set to 'https' when dfs.http.policy is HTTPS_ONLY (and maybe 
> HTTP_AND_HTTPS as well).
>  
> [https://github.com/apache/hadoop/blob/c99a12167ff9566012ef32104a3964887d62c899/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html#L168-L169]
> [https://github.com/apache/hadoop/blob/c99a12167ff9566012ef32104a3964887d62c899/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html#L236]
>  
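> A hedged sketch of the idea (the actual patch touches federationhealth.html 
> and the namenode resolver): derive the scheme from dfs.http.policy instead 
> of hard-coding it.
> {code:java}
> public class WebSchemeSketch {
>   static String schemeFor(String httpPolicy) {
>     return "HTTPS_ONLY".equals(httpPolicy) ? "https" : "http";
>   }
> 
>   public static void main(String[] args) {
>     String webAddress = "nn1.example.com:9871";  // hypothetical NameNode web address
>     System.out.println(schemeFor("HTTPS_ONLY") + "://" + webAddress);
>   }
> }
> {code}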



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14938) Add check if excludedNodes contain scope in DFSNetworkTopology#chooseRandomWithStorageType()

2019-11-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967228#comment-16967228
 ] 

Hudson commented on HDFS-14938:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17606 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17606/])
HDFS-14938. Add check if excludedNodes contain scope in (ayushsaxena: rev 
b643a1cbe8a82ca331ffcd14fccc1dc0d90da5c7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java


> Add check if excludedNodes contain scope in 
> DFSNetworkTopology#chooseRandomWithStorageType() 
> -
>
> Key: HDFS-14938
> URL: https://issues.apache.org/jira/browse/HDFS-14938
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14938.001.patch, HDFS-14938.002.patch, 
> HDFS-14938.003.patch, HDFS-14938.004.patch, HDFS-14938.005.patch, 
> HDFS-14938.006.patch, HDFS-14938.007.patch
>
>
> Add check if excludedNodes contain scope in 
> DFSNetworkTopology#chooseRandomWithStorageType().
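> A self-contained sketch of the added guard (names hypothetical): if the 
> scope itself is in excludedNodes, nothing under it can be chosen, so return 
> null early.
> {code:java}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
> 
> public class ExcludedScopeSketch {
>   static String chooseRandom(String scope, Set<String> excluded, List<String> nodes) {
>     if (excluded != null && excluded.contains(scope)) {
>       return null;  // the whole scope is excluded; no node can be returned
>     }
>     for (String n : nodes) {
>       if (!excluded.contains(n)) {
>         return n;
>       }
>     }
>     return null;
>   }
> 
>   public static void main(String[] args) {
>     Set<String> excluded = new HashSet<>(Arrays.asList("/rack1"));
>     System.out.println(chooseRandom("/rack1", excluded,
>         Arrays.asList("/rack1/dn1", "/rack1/dn2")));  // null
>   }
> }
> {code}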



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14946) Erasure Coding: Block recovery failed during decommissioning

2019-11-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16966979#comment-16966979
 ] 

Hudson commented on HDFS-14946:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17603 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17603/])
HDFS-14946. Erasure Coding: Block recovery failed during (ayushsaxena: rev 
2ffec347eb4303ad78643431cd2e517d54bc3282)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Erasure Coding: Block recovery failed during decommissioning
> 
>
> Key: HDFS-14946
> URL: https://issues.apache.org/jira/browse/HDFS-14946
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.3, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14946.001.patch, HDFS-14946.002.patch, 
> HDFS-14946.003.patch, HDFS-14946.004.patch
>
>
> DataNode logs as follows:
> {quote}
> org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are 
> provided, not recoverable
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
>   at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
>   at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.reconstructTargets(StripedBlockReconstructor.java:126)
>   at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.reconstruct(StripedBlockReconstructor.java:97)
>   at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.run(StripedBlockReconstructor.java:60)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {quote}
> Block recovery always fails because the srcNodes are in the wrong order.
> Reproduce steps are:
> # ec block (b0, b1, b2, b3, b4, b5, b6, b7, b8), b[0-8] are on dn[0-8], 
> dn[0-3] are decommissioning
> # dn[1-3] are decommissioned, dn0 is still decommissioning, and the ec block 
> is [b0(decommissioning), b[1-3](decommissioned), b[4-8](live), b[0-3](live)]
> # dn4 crashes, and b4 must be recovered; the ec block is 
> [b0(decommissioning), b[1-3](decommissioned), null, b[5-8](live), 
> b[0-3](live)]
> We can see the error log above, and b4 is not recovered successfully, 
> because the srcNodes transferred to the recovering datanode contain blocks 
> [b0, b[5-8], b[0-3]], and the datanode uses [b0, b[5-8], b0] 
> (minRequiredSources readers to reconstruct, minRequiredSources = 
> Math.min(cellsNum, dataBlkNum)) to recover the missing block.
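> The ordering problem can be reproduced with the numbers above (illustration 
> only, not the BlockManager code):
> {code:java}
> import java.util.Arrays;
> 
> public class SrcNodeOrderSketch {
>   public static void main(String[] args) {
>     // Indices as transferred: b0(decommissioning), b[5-8](live), b[0-3](live).
>     int[] srcIndices = {0, 5, 6, 7, 8, 0, 1, 2, 3};
>     int minRequiredSources = Math.min(9, 6);  // Math.min(cellsNum, dataBlkNum)
>     System.out.println(Arrays.toString(
>         Arrays.copyOf(srcIndices, minRequiredSources)));  // [0, 5, 6, 7, 8, 0]
>     // Index 0 appears twice, so only 5 distinct inputs remain,
>     // hence "No enough valid inputs are provided, not recoverable".
>   }
> }
> {code}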



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14945) Revise PacketResponder's log.

2019-11-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16966874#comment-16966874
 ] 

Hudson commented on HDFS-14945:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17601 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17601/])
HDFS-14945. Revise PacketResponder's log. Contributed by Xudong Cao. (weichiu: 
rev eb73ba6ed5f7c5500cc0ef36ca22aae4e71046fa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Revise PacketResponder's log.
> -
>
> Key: HDFS-14945
> URL: https://issues.apache.org/jira/browse/HDFS-14945
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.3
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14945.000.patch
>
>
> For a datanode in a pipeline, when its PacketResponder thread encounters an 
> exception, it prints logs like below:
> 2019-10-24 09:22:58,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in *BlockReceiver*.run():
>  
> But this log is incorrect and misleading; the right print should be:
> 2019-10-24 09:22:58,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in *PacketResponder*.run():



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14925) rename operation should check nest snapshot

2019-11-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16965158#comment-16965158
 ] 

Hudson commented on HDFS-14925:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17598 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17598/])
HDFS-14925. Rename operation should check nest snapshot (#1670) (weichiu: rev 
de6b8b0c0b1933aab2af3e8adc50a2091d428238)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java


> rename operation should check nest snapshot
> ---
>
> Key: HDFS-14925
> URL: https://issues.apache.org/jira/browse/HDFS-14925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Junwang Zhao
>Assignee: Junwang Zhao
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> When we do a rename operation, if the src directory or any of its 
> descendants is snapshottable and the dst directory or any of its ancestors 
> is snapshottable, we consider this a nested snapshot, which should be 
> denied.
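> A hedged sketch of the check (helper names hypothetical; the real change is 
> in FSDirRenameOp/SnapshotManager):
> {code:java}
> public class NestedSnapshotCheckSketch {
>   // Stubs standing in for real inode/snapshot lookups.
>   static boolean hasSnapshottableDescendant(String src) { return true; }
>   static boolean hasSnapshottableAncestor(String dst) { return true; }
> 
>   static void checkNestedSnapshot(String src, String dst) {
>     if (hasSnapshottableDescendant(src) && hasSnapshottableAncestor(dst)) {
>       throw new IllegalArgumentException(
>           "Nested snapshottable directories not allowed: rename " + src + " -> " + dst);
>     }
>   }
> 
>   public static void main(String[] args) {
>     checkNestedSnapshot("/a/snapdir/child", "/b/snaproot/dst");  // throws
>   }
> }
> {code}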



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13736) BlockPlacementPolicyDefault can not choose favored nodes when 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false

2019-11-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16965017#comment-16965017
 ] 

Hudson commented on HDFS-13736:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17597 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17597/])
HDFS-13736. BlockPlacementPolicyDefault can not choose favored nodes 
(ayushsaxena: rev 7d7acb004af5095983e99c86deedfc60a0355ff7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false
> --
>
> Key: HDFS-13736
> URL: https://issues.apache.org/jira/browse/HDFS-13736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-13736.001.patch, HDFS-13736.002.patch, 
> HDFS-13736.003.patch, HDFS-13736.004.patch, HDFS-13736.005.patch, 
> HDFS-13736.006.patch, HDFS-13736.007.patch
>
>
> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14927) RBF: Add metrics for async callers thread pool

2019-11-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16965003#comment-16965003
 ] 

Hudson commented on HDFS-14927:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17596 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17596/])
HDFS-14927. RBF: Add metrics for async callers thread pool. Contributed 
(inigoiri: rev f18bbdd9d84cc1a23d33524f5cb61321cdb1b926)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java


> RBF: Add metrics for async callers thread pool
> --
>
> Key: HDFS-14927
> URL: https://issues.apache.org/jira/browse/HDFS-14927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14927.001.patch, HDFS-14927.002.patch, 
> HDFS-14927.003.patch, HDFS-14927.004.patch, HDFS-14927.005.patch, 
> HDFS-14927.006.patch, HDFS-14927.007.patch, HDFS-14927.008.patch, 
> HDFS-14927.009.patch
>
>
> It would be good to add some monitoring of the async caller thread pool that 
> handles fan-out RPC client requests, so we know its utilization and when to 
> bump up dfs.federation.router.client.thread-size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) EC : Busy DN replica should be consider in live replica check.

2019-11-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964987#comment-16964987
 ] 

Hudson commented on HDFS-14768:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17594 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17594/])
HDFS-14768. EC : Busy DN replica should be consider in live replica 
(surendralilhore: rev 02009c3bb762393540cdf92cfd9c840807272903)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> EC : Busy DN replica should be consider in live replica check.
> --
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.003.patch, HDFS-14768.004.patch, HDFS-14768.005.patch, 
> HDFS-14768.006.patch, HDFS-14768.007.patch, HDFS-14768.008.patch, 
> HDFS-14768.009.patch, HDFS-14768.010.patch, HDFS-14768.011.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K, the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8], we decommission 
> indices [3,4], and we increase the index-6 datanode's
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8] and the block counter is Live: 7, Decommission: 2. 
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it will assign 
> an erasure-coding task to the target datanode.
> When the datanode gets the task, it will build targetIndices from 
> liveBlockIndices and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> 
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0]=6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build target indices [6,0] will trigger 
> the ISA-L bug: the data of block index 6 is corrupted (all data is zero).
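> A minimal illustration of that initialization flaw (a sketch, not the patched
> code):
> {code:java}
> // Java zero-initializes short arrays, so an unset slot silently points at
> // block index 0 instead of meaning "no target".
> short[] targetIndices = new short[2];   // {0, 0}
> targetIndices[0] = 6;                   // only one missing index was found
> // targetIndices[1] remains 0, so index 0 is wrongly treated as a target
> {code}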
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesys

[jira] [Commented] (HDFS-14824) [Dynamometer] Dynamometer in org.apache.hadoop.tools does not output the benchmark results.

2019-11-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964972#comment-16964972
 ] 

Hudson commented on HDFS-14824:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17593 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17593/])
HDFS-14824. [Dynamometer] Dynamometer in org.apache.hadoop.tools does (xkrogen: 
rev 477505ccfc480f2605a7b65de95ea6f6ff5ce090)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditReplayThread.java
* (add) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/CountTimeWritable.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/WorkloadDriver.java
* (edit) hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/test/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/TestWorkloadGenerator.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/CreateFileMapper.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/test/java/org/apache/hadoop/tools/dynamometer/TestDynamometerInfra.java
* (add) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/UserCommandKey.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/WorkloadMapper.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java
* (add) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditReplayReducer.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditReplayMapper.java


> [Dynamometer] Dynamometer in org.apache.hadoop.tools does not output the 
> benchmark results.
> ---
>
> Key: HDFS-14824
> URL: https://issues.apache.org/jira/browse/HDFS-14824
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Soya Miyoshi
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
>
> According to the latest 
> [document|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html
>  ], the benchmark results will be written to the path given by 
> `-Dauditreplay.output-path`. However, the current org.apache.hadoop.tools 
> version hasn't merged [this pull 
> request|https://github.com/linkedin/dynamometer/pull/76 ], so it does not 
> output the benchmark results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2019-10-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964579#comment-16964579
 ] 

Hudson commented on HDFS-12943:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17592 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17592/])
Add 2.10.0 release notes for HDFS-12943 (jhung: rev 
ef9d12df24c0db76fd37a95551db7920d27d740c)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/release/2.10.0/RELEASENOTES.2.10.0.md


> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> HDFS-12943-002.patch, HDFS-12943-003.patch, HDFS-12943-004.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system, 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14920) Erasure Coding: Decommission may hang If one or more datanodes are out of service during decommission

2019-10-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964315#comment-16964315
 ] 

Hudson commented on HDFS-14920:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17590 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17590/])
HDFS-14920. Erasure Coding: Decommission may hang If one or more (ayushsaxena: 
rev 9d25ae7669eed1a047578b574f42bd121b445a3c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java


> Erasure Coding: Decommission may hang If one or more datanodes are out of 
> service during decommission  
> ---
>
> Key: HDFS-14920
> URL: https://issues.apache.org/jira/browse/HDFS-14920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.3, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14920.001.patch, HDFS-14920.002.patch, 
> HDFS-14920.003.patch, HDFS-14920.004.patch, HDFS-14920.005.patch
>
>
> Decommission test hangs in our clusters.
> Have seen the messages as follow
> {quote}
> 2019-10-22 15:58:51,514 TRACE 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager: Block 
> blk_-9223372035600425840_372987973 numExpected=9, numLive=5
> 2019-10-22 15:58:51,514 INFO BlockStateChange: Block: 
> blk_-9223372035600425840_372987973, Expected Replicas: 9, live replicas: 5, 
> corrupt replicas: 0, decommissioned replicas: 0, decommissioning replicas: 4, 
> maintenance replicas: 0, live entering maintenance replicas: 0, excess 
> replicas: 0, Is Open File: false, Datanodes having this block: 
> 10.255.43.57:50010 10.255.53.12:50010 10.255.63.12:50010 10.255.62.39:50010 
> 10.255.37.36:50010 10.255.33.15:50010 10.255.69.29:50010 10.255.51.13:50010 
> 10.255.64.15:50010 , Current Datanode: 10.255.69.29:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> 2019-10-22 15:58:51,514 DEBUG 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager: Node 
> 10.255.69.29:50010 still has 1 blocks to replicate before it is a candidate 
> to finish Decommission In Progress
> {quote}
> After digging into the source code and cluster logs, we guess it happens in 
> the following steps.
> # The storage strategy is RS-6-3-1024k.
> # EC block b consists of b0, b1, b2, b3, b4, b5, b6, b7, b8; b0 is from 
> datanode dn0, b1 is from datanode dn1, etc.
> # At the beginning dn0 is in decommission progress; b0 is replicated 
> successfully, and dn0 is still in decommission progress.
> # Later b1, b2, b3 are in decommission progress, and dn4 containing b4 is out 
> of service, so reconstruction is needed and an ErasureCodingWork is created 
> to do it; in the ErasureCodingWork, additionalReplRequired is 4.
> # Because hasAllInternalBlocks is false, it will call 
> ErasureCodingWork#addTaskToDatanode -> 
> DatanodeDescriptor#addBlockToBeErasureCoded, and send a 
> BlockECReconstructionInfo task to the Datanode.
> # The DataNode cannot reconstruct the block because targets is 4, greater 
> than 3 (the parity number).
> There is a problem, as follows, in BlockManager.java#scheduleReconstruction:
> {code}
>   // should reconstruct all the internal blocks before scheduling
>   // replication task for decommissioning node(s).
>   if (additionalReplRequired - numReplicas.decommissioning() -
>       numReplicas.liveEnteringMaintenanceReplicas() > 0) {
>     additionalReplRequired = additionalReplRequired -
>         numReplicas.decommissioning() -
>         numReplicas.liveEnteringMaintenanceReplicas();
>   }
> {code}
> Reconstruction should happen first, and then replication for decommissioning. 
> Because numReplicas.decommissioning() is 4 and additionalReplRequired is 4, 
> that's wrong:
> numReplicas.decommissioning() should be 3, since it should exclude the live 
> replica. If so, additionalReplRequired will be 1 and reconstruction will be 
> scheduled as expected. After that, decommission goes on.
>  
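> To spell out the arithmetic from the description (a sketch using only the
> numbers quoted above, not code from the patch):
> {code:java}
> int additionalReplRequired = 4;    // replicas the EC work asks for
> int decommissioningCounted = 4;    // busy live replica wrongly included
> int decommissioningExpected = 3;   // live replica excluded
> 
> // Current logic:  4 - 4 = 0, so the guard never shrinks the requirement
> // and 4 targets reach the DataNode -- more than the 3 parity blocks.
> // Expected logic: 4 - 3 = 1, so one reconstruction is scheduled first
> // and the decommission can proceed.
> {code}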



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hado

[jira] [Commented] (HDFS-14936) Add getNumOfChildren() for interface InnerNode

2019-10-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964301#comment-16964301
 ] 

Hudson commented on HDFS-14936:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17589/])
HDFS-14936. Add getNumOfChildren() for interface InnerNode. Contributed 
(ayushsaxena: rev d9fbedc4ae41d3dc688cf6b697f0fb46a28b47c5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/InnerNodeImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/InnerNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSTopologyNodeImpl.java


> Add getNumOfChildren() for interface InnerNode
> --
>
> Key: HDFS-14936
> URL: https://issues.apache.org/jira/browse/HDFS-14936
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14936.001.patch, HDFS-14936.002.patch, 
> HDFS-14936.003.patch
>
>
> In the current code, the InnerNode implementations InnerNodeImpl and 
> DFSTopologyNodeImpl both have getNumOfChildren(), 
> so add getNumOfChildren() to the InnerNode interface and remove the 
> unnecessary getNumOfChildren() from DFSTopologyNodeImpl.
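> A hedged sketch of the refactoring idea (signatures are simplified and
> hypothetical, not the committed patch):
> {code:java}
> // Declare the accessor once on the interface...
> interface InnerNode {
>   int getNumOfChildren();
> }
> 
> // ...let the base implementation satisfy it...
> class InnerNodeImpl implements InnerNode {
>   protected final java.util.List<InnerNode> children =
>       new java.util.ArrayList<>();
>   @Override
>   public int getNumOfChildren() {
>     return children.size();
>   }
> }
> 
> // ...and the subclass inherits it instead of duplicating the override.
> class DFSTopologyNodeImpl extends InnerNodeImpl {
> }
> {code}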



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14907) [Dynamometer] DataNode can't find junit jar when using Hadoop-3 binary

2019-10-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16962125#comment-16962125
 ] 

Hudson commented on HDFS-14907:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17582 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17582/])
HDFS-14907. [Dynamometer] Add JUnit JAR to classpath for (xkrogen: rev 
e32ab5e179bd32f8c18107536c15e577cf93d435)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/resources/start-component.sh


> [Dynamometer] DataNode can't find junit jar when using Hadoop-3 binary
> --
>
> Key: HDFS-14907
> URL: https://issues.apache.org/jira/browse/HDFS-14907
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
>
> When executing {{start-dynamometer-cluster.sh}} with a Hadoop 3 binary, 
> datanodes fail to run with the following log, and 
> {{start-dynamometer-cluster.sh}} fails.
> {noformat}
> LogType:stderr
> LogLastModifiedTime:Wed Oct 09 15:03:09 +0900 2019
> LogLength:1386
> LogContents:
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
> at 
> org.apache.hadoop.test.GenericTestUtils.assertExists(GenericTestUtils.java:299)
> at 
> org.apache.hadoop.test.GenericTestUtils.getTestDir(GenericTestUtils.java:243)
> at 
> org.apache.hadoop.test.GenericTestUtils.getTestDir(GenericTestUtils.java:252)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.getBaseDirectory(MiniDFSCluster.java:2982)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.determineDfsBaseDir(MiniDFSCluster.java:2972)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.formatDataNodeDirs(MiniDFSCluster.java:2834)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(SimulatedDataNodes.java:123)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.main(SimulatedDataNodes.java:88)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 9 more
> ./start-component.sh: line 317: kill: (2261) - No such process
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961750#comment-16961750
 ] 

Hudson commented on HDFS-14935:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17580 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17580/])
HDFS-14935. Refactor DFSNetworkTopology#isNodeInScope. Contributed by 
(ayushsaxena: rev fa4904cdcaaa294149a1c92465c71359407de93f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java


> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch, 
> HDFS-14935.003.patch
>
>
> Replace "/" with constant {{NodeBase.PATH_SEPARATOR_STR}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14730) Remove unused configuration dfs.web.authentication.filter

2019-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961555#comment-16961555
 ] 

Hudson commented on HDFS-14730:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17579 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17579/])
HDFS-14730.  Removed unused configuration dfs.web.authentication.filter. 
(eyang: rev 30ed24a42112b3225ab2486ed24bd6a5011a7a7f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java


> Remove unused configuration dfs.web.authentication.filter 
> --
>
> Key: HDFS-14730
> URL: https://issues.apache.org/jira/browse/HDFS-14730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14730.001.patch, HDFS-14730.002.patch
>
>
> After HADOOP-16314, this configuration is not used anywhere, so I propose to 
> deprecate it to avoid misuse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14931) hdfs crypto commands limit column width

2019-10-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961244#comment-16961244
 ] 

Hudson commented on HDFS-14931:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17578 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17578/])
HDFS-14931. hdfs crypto commands limit column width. Contributed by Eric 
(ebadger: rev 9ef6ed9c1c83b9752e772ece7a716a33045752bf)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java


> hdfs crypto commands limit column width
> ---
>
> Key: HDFS-14931
> URL: https://issues.apache.org/jira/browse/HDFS-14931
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14931.001.patch
>
>
> {noformat}
> foo@bar$ hdfs crypto -listZones
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool1  encr
>   
> yptio
>   nzon
>   e1
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool2  encr
>   
> yptio
>   nzon
>   e2
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool3  encr
>   
> yptio
>   nzon
>   e3
> {noformat}
> The command's output ends up looking really ugly, as shown above, when the 
> path is long. This also makes it very difficult to pipe the output into 
> other utilities, such as awk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14923) Remove dead code from HealthMonitor

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960176#comment-16960176
 ] 

Hudson commented on HDFS-14923:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17576 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17576/])
HDFS-14923. Remove dead code from HealthMonitor. Contributed by Fei Hui. 
(weichiu: rev 7be5508d9b35892f483ba6022b6aced7648b8fa3)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthMonitor.java


> Remove dead code from HealthMonitor
> ---
>
> Key: HDFS-14923
> URL: https://issues.apache.org/jira/browse/HDFS-14923
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14923.001.patch
>
>
> Digging into the ZKFC source code, we find the following dead code:
> {code}
> public void removeCallback(Callback cb) {
>   callbacks.remove(cb);
> }
> 
> public synchronized void removeServiceStateCallback(ServiceStateCallback cb) {
>   serviceStateCallbacks.remove(cb);
> }
> 
> synchronized HAServiceStatus getLastServiceStatus() {
>   return lastServiceState;
> }
> {code}
> It's useless, and should be deleted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960045#comment-16960045
 ] 

Hudson commented on HDFS-14308:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17575 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17575/])
HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by (weichiu: rev 
30db895b59d250788d029cb2013bb4712ef9b546)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java


> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Assignee: Zhao Yi Ming
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().
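> A hedged sketch of the caching pattern those callers use (the path is
> hypothetical; unbuffer() is the standard FSDataInputStream call):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class CachedHandleExample {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     FSDataInputStream in = fs.open(new Path("/tmp/ec-file"));
>     byte[] buf = new byte[4096];
>     in.read(buf);   // reading allocates (direct) buffers under the hood
>     in.unbuffer();  // handle goes into a cache; buffers should be released
>     // ... the handle may now sit idle for a long time ...
>     in.close();
>   }
> }
> {code}
> The report above is that for DFSStripedInputStream the curStripeBuf survives
> the unbuffer() call, so each idle cached handle pins ~3MB of direct memory.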



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14933) Fixing a typo in documentation of Observer NameNode

2019-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959470#comment-16959470
 ] 

Hudson commented on HDFS-14933:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17573 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17573/])
HDFS-14933. Fixing a typo in documentation of Observer NameNode. (tasanuma: rev 
862526530a376524551805b8e32cc7f66ba6f03e)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md


> Fixing a typo in documentation of Observer NameNode
> ---
>
> Key: HDFS-14933
> URL: https://issues.apache.org/jira/browse/HDFS-14933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14933.001.patch
>
>
> Fix a typo in the documentation of the Observer NameNode:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html
> This 
> {code}
> <property>
>   <name>dfs.ha.tail-edits.period</name>
>   <value>10s</value>
> </property>
> {code}
> should be changed to 
> {code}
> <property>
>   <name>dfs.ha.tail-edits.period.backoff-max</name>
>   <value>10s</value>
> </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14917) Change the ICON of "Decommissioned & dead" datanode on "dfshealth.html"

2019-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959356#comment-16959356
 ] 

Hudson commented on HDFS-14917:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17572 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17572/])
HDFS-14917. Change the ICON of "Decommissioned & dead" datanode on (tasanuma: 
rev 0db0f1e3990c4bf93ca8db41858860da6537a9bf)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css


> Change the ICON of "Decommissioned & dead" datanode on "dfshealth.html"
> ---
>
> Key: HDFS-14917
> URL: https://issues.apache.org/jira/browse/HDFS-14917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14917.patch, image-2019-10-21-17-49-10-635.png, 
> image-2019-10-21-17-49-58-759.png, image-2019-10-21-18-03-53-914.png, 
> image-2019-10-21-18-04-52-405.png, image-2019-10-21-18-05-19-160.png, 
> image-2019-10-21-18-13-01-884.png, image-2019-10-21-18-13-54-427.png
>
>
> This is a really simple UI change proposal:
>  The icon of "Decommissioned & dead" datanode could be improved. It can be 
> changed from    !image-2019-10-21-18-05-19-160.png|width=31,height=28! to   
> !image-2019-10-21-18-04-52-405.png|width=32,height=29! so that,
>  #  icon "  !image-2019-10-21-18-13-01-884.png|width=26,height=25! " can be 
> used for all statuses starting with "decommission" on dfshealth.html, and
>  #  icon "  !image-2019-10-21-18-13-01-884.png|width=26,height=25! " can be 
> differentiated from icon "  !image-2019-10-21-18-13-54-427.png! " on 
> federationhealth.html
> |*DataNode Information Legend (now)*
>  dfshealth.html#tab-datanode 
> |!image-2019-10-21-17-49-10-635.png|width=516,height=55!|
> |*DataNode* *Information* *Legend (proposed)*
>   dfshealth.html#tab-datanode 
> |!image-2019-10-21-18-03-53-914.png|width=589,height=60!|
> |*NameService Legend*
>  
> federationhealth.html#tab-namenode|!image-2019-10-21-17-49-58-759.png|width=445,height=43!|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14638) [Dynamometer] Fix scripts to refer to current build structure

2019-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959303#comment-16959303
 ] 

Hudson commented on HDFS-14638:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17571 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17571/])
HDFS-14638. [Dynamometer] Fix scripts to refer to current build (weichiu: rev 
b41394eec8552f419aefe452b3fdb8ff2506b9d1)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/src/main/bash/generate-block-lists.sh
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/bash/start-workload.sh
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/bash/start-dynamometer-cluster.sh


> [Dynamometer] Fix scripts to refer to current build structure
> -
>
> Key: HDFS-14638
> URL: https://issues.apache.org/jira/browse/HDFS-14638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Erik Krogen
>Assignee: Takanobu Asanuma
>Priority: Major
>
> The scripts within the Dynamometer build dirs all refer to the old 
> distribution structure with a single {{bin}} directory and a single {{lib}} 
> directory. We need to update them to refer to the Hadoop-standard layout.
> Also as pointed out by [~pingsutw]:
> {quote}
> Due to the rename of dynamometer to hadoop-dynamometer in hadoop-tools,
> we still use the old name of the jar inside the scripts:
> {code}
> "$hadoop_cmd" jar "${script_pwd}"/lib/dynamometer-infra-*.jar 
> org.apache.hadoop.tools.dynamometer.Client "$@"
> {code}
> We should rename these jars inside the scripts.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14910) Rename Snapshot with Pre Descendants Fail With IllegalArgumentException.

2019-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959196#comment-16959196
 ] 

Hudson commented on HDFS-14910:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17570 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17570/])
HDFS-14910. Rename Snapshot with Pre Descendants Fail With (github: rev 
a1b4eebcc92976a9fb78ad5d3ab70c52cc0a5fa7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java


> Rename Snapshot with Pre Descendants Fail With IllegalArgumentException.
> 
>
> Key: HDFS-14910
> URL: https://issues.apache.org/jira/browse/HDFS-14910
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Íñigo Goiri
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> TestRenameWithSnapshots#testRename2PreDescendant has been failing 
> consistently.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14921) Remove SuperUser Check in Setting Storage Policy in FileStatus During Listing

2019-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958619#comment-16958619
 ] 

Hudson commented on HDFS-14921:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17566 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17566/])
HDFS-14921. Remove SuperUser Check in Setting Storage Policy in (vinayakumarb: 
rev ee699dc26c7b660a5222a30782f3bf5cb1e55085)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java


> Remove SuperUser Check in Setting Storage Policy in FileStatus During Listing
> -
>
> Key: HDFS-14921
> URL: https://issues.apache.org/jira/browse/HDFS-14921
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14921-01.patch
>
>
> Earlier, StoragePolicy was part of DFSAdmin and StoragePolicy operations 
> required a superuser check. That check was removed long back, but the check 
> in getListing was left.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14884) Add sanity check that zone key equals feinfo key while setting Xattrs

2019-10-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957424#comment-16957424
 ] 

Hudson commented on HDFS-14884:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17563 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17563/])
HDFS-14884. Add sanity check that zone key equals feinfo key while (weichiu: 
rev a901405ad80b4efee020e1ddd06104121f26e31f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java


> Add sanity check that zone key equals feinfo key while setting Xattrs
> -
>
> Key: HDFS-14884
> URL: https://issues.apache.org/jira/browse/HDFS-14884
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDFS-14884.001.patch, HDFS-14884.002.patch, 
> HDFS-14884.003.patch, hdfs_distcp.patch
>
>
> Currently, it is possible to set an extended attribute where the zone key is 
> not the same as the feinfo key. This jira will add a precondition before 
> setting it.
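> A hedged sketch of what such a precondition could look like (variable names
> are hypothetical; Guava's Preconditions is widely used in HDFS):
> {code:java}
> import com.google.common.base.Preconditions;
> 
> // zoneKeyName: key of the enclosing encryption zone
> // feInfoKeyName: key recorded in the file's encryption info xattr
> static void checkZoneKeyMatches(String zoneKeyName, String feInfoKeyName) {
>   Preconditions.checkArgument(zoneKeyName.equals(feInfoKeyName),
>       "Zone key %s does not match file encryption info key %s",
>       zoneKeyName, feInfoKeyName);
> }
> {code}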



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14915) Move Superuser Check Before Taking Lock For Encryption API

2019-10-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957097#comment-16957097
 ] 

Hudson commented on HDFS-14915:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17562 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17562/])
HDFS-14915. Move Superuser Check Before Taking Lock For Encryption API. 
(ayushsaxena: rev 6020505943fbb6133f7c2747e6d85d79cde788ea)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Move Superuser Check Before Taking Lock For Encryption API
> --
>
> Key: HDFS-14915
> URL: https://issues.apache.org/jira/browse/HDFS-14915
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14915-01.patch, HDFS-14915-02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14918) Remove useless getRedundancyThread from BlockManagerTestUtil

2019-10-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957055#comment-16957055
 ] 

Hudson commented on HDFS-14918:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17561 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17561/])
HDFS-14918. Remove useless getRedundancyThread from (ayushsaxena: rev 
19f35cfd5707eb1f2df2b99734acae64935c2148)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java


> Remove useless getRedundancyThread from BlockManagerTestUtil
> 
>
> Key: HDFS-14918
> URL: https://issues.apache.org/jira/browse/HDFS-14918
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14918.001.patch
>
>
> Remove the dead code; it's useless.
> {code}
>  /**
>   * @return redundancy monitor thread instance from block manager.
>   */
>  public static Daemon getRedundancyThread(final BlockManager blockManager) {
>return blockManager.getRedundancyThread();
>  }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-10-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956566#comment-16956566
 ] 

Hudson commented on HDFS-13901:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17560 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17560/])
HDFS-13901. INode access time is ignored because of race between open (weichiu: 
rev 72003b19bf4c652b53625984d109542abd0cf20e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch, HDFS-13901.003.patch, HDFS-13901.004.patch, 
> HDFS-13901.005.patch, HDFS-13901.006.patch, HDFS-13901.007.patch
>
>
> That's because in getBlockLocations there is a gap between the read unlock 
> and re-acquiring the write lock (to update the access time). If a rename 
> occurs in the gap, the access time update will be ignored. We can calculate 
> the new path from the inode and use the new path to update the access time.
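> A hedged sketch of that idea (not the committed patch; the helper call is
> hypothetical):
> {code:java}
> // After re-acquiring the write lock, the file may have been renamed in the
> // lock gap, so do not trust the path string captured earlier -- recompute
> // it from the resolved inode, which survives the rename.
> private void updateAccessTime(INode inode, long now) {
>   String latestPath = inode.getFullPathName();
>   dir.setTimes(latestPath, now, -1);  // hypothetical helper call
> }
> {code}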



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2019-10-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956471#comment-16956471
 ] 

Hudson commented on HDFS-12749:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17559 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17559/])
HDFS-12749. DN may not send block report to NN after NN restart. (kihwal: rev 
c4e27ef7735acd6f91b73d2ecb0227f8dd75a2e4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749-trunk.004.patch, 
> HDFS-12749-trunk.005.patch, HDFS-12749-trunk.006.patch, HDFS-12749.001.patch
>
>
> Our cluster has thousands of DNs and millions of files and blocks. When the 
> NN restarts, the NN's load is very high.
> After the NN restart, the DN will call the BPServiceActor#reRegister method 
> to register, but the register RPC will get an IOException since the NN is 
> busy dealing with block reports. The exception is caught at 
> BPServiceActor#processCommand.
> Next is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the block 
> report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLo

[jira] [Commented] (HDFS-14913) Correct the value of available count in DFSNetworkTopology#chooseRandomWithStorageType()

2019-10-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956098#comment-16956098
 ] 

Hudson commented on HDFS-14913:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17556 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17556/])
HDFS-14913. Correct the value of available count in (ayushsaxena: rev 
74c2329fc36e0878555342085defb4e474ef1aad)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java


> Correct the value of available count in 
> DFSNetworkTopology#chooseRandomWithStorageType() 
> -
>
> Key: HDFS-14913
> URL: https://issues.apache.org/jira/browse/HDFS-14913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14913-01.patch, HDFS-14913-02.patch, 
> HDFS-14913-03.patch
>
>
> Presently, if the excluded scope is /default/rack1 and the excluded node is 
> /default/rack10/node, the available count is not deducted.
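> A hedged sketch of the intended accounting (all names are hypothetical; the
> real logic lives in DFSNetworkTopology#chooseRandomWithStorageType):
> {code:java}
> // If the excluded node is NOT inside the excluded scope, its storages were
> // previously never subtracted, leaving availableCount too high.
> if (excludedNode != null && !isDescendant(excludedScope, excludedNode)) {
>   availableCount -= storageCount(excludedNode, storageType);
> }
> {code}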



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14916) RBF: line breaks are missing from the output of 'hdfs dfsrouteradmin -ls'

2019-10-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956065#comment-16956065
 ] 

Hudson commented on HDFS-14916:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17555 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17555/])
HDFS-14916. RBF: line breaks are missing from the output of 'hdfs (ayushsaxena: 
rev ff6a492dc9f8e15bde5d1bce3f8841298146)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java


> RBF: line breaks are missing from the output of 'hdfs dfsrouteradmin -ls'
> -
>
> Key: HDFS-14916
> URL: https://issues.apache.org/jira/browse/HDFS-14916
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf, ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14916.001.patch, HDFS-14916.patch
>
>
> Line breaks seem to be missing from the output of "hdfs dfsrouteradmin -ls".
> For example:
> The output of "hdfs dfsrouteradmin -ls" now:
> {code:java}
> [sri@nn00070 ~]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode   Quota/Usage
> /testDir1 subCluster1->/user/user1   hdfs 
>  hdfs  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]  /user2
>  subCluster1->/tmp/user2user2  users  
>rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]  /user/user3   
> subCluster1->/user/user3  user3 hadoop
> rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]  [sri@nn00070 ~]$
>  {code}
> This should be:
> {code:java}
> [sri@nn00070 ~]$ hdfs dfsrouteradmin -ls -d
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode   Quota/Usage
> /testDir1subCluster1->/user/user1   hdfs  
>  hdfs  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> /user2subCluster1->/tmp/user2user2
> users rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> /user/user3   subCluster1->/user/user3   user3 
> hadoop rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [sri@nn00070 ~]$
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14847) Erasure Coding: Blocks are over-replicated while EC decommissioning

2019-10-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955346#comment-16955346
 ] 

Hudson commented on HDFS-14847:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17554 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17554/])
HDFS-14847. Erasure Coding: Blocks are over-replicated while EC (ayushsaxena: 
rev 447f46d9628db54e77f88e2d109587cc7dfd6154)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommissionWithStriped.java


> Erasure Coding: Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch, 
> HDFS-14847.003.patch, HDFS-14847.004.patch, HDFS-14847.005.patch
>
>
> We found that some blocks are over-replicated during EC decommissioning. 
> The log messages are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 1

[jira] [Commented] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955284#comment-16955284
 ] 

Hudson commented on HDFS-14887:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17553 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17553/])
HDFS-14887. RBF: In Router Web UI, Observer Namenode Information (inigoiri: rev 
e6f95eb0f7ce2c560bf0e72516fa709730b518c6)
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestMetricsBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FederationNamenodeServiceState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java


> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: 14887.008.png, 14887.after.png, 14887.before.png, 
> HDFS-14887.001.patch, HDFS-14887.002.patch, HDFS-14887.003.patch, 
> HDFS-14887.004.patch, HDFS-14887.005.patch, HDFS-14887.006.patch, 
> HDFS-14887.007.patch, HDFS-14887.008.patch, HDFS-14887.009.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for Observer Namenodes instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14909) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage count for excluded node which is already part of excluded scope

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953943#comment-16953943
 ] 

Hudson commented on HDFS-14909:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17546 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17546/])
HDFS-14909. DFSNetworkTopology#chooseRandomWithStorageType() should not 
(surendralilhore: rev 54dc6b7d720851eb6017906d664aa0fda2698225)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java


> DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage 
> count for excluded node which is already part of excluded scope 
> -
>
> Key: HDFS-14909
> URL: https://issues.apache.org/jira/browse/HDFS-14909
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14909.001.patch, HDFS-14909.002.patch, 
> HDFS-14909.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14810) Review FSNameSystem editlog sync

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953914#comment-16953914
 ] 

Hudson commented on HDFS-14810:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17545 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17545/])
HDFS-14810. Review FSNameSystem editlog sync. Contributed by Xiaoqiao 
(ayushsaxena: rev 5527d79adb9b1e2f2779c283f81d6a3d5447babc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Review FSNameSystem editlog sync
> 
>
> Key: HDFS-14810
> URL: https://issues.apache.org/jira/browse/HDFS-14810
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14810.001.patch, HDFS-14810.002.patch, 
> HDFS-14810.003.patch, HDFS-14810.004.patch
>
>
> Refactor and unify the edit log sync calls in FSNamesystem, as mentioned in 
> HDFS-11246.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-10-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952877#comment-16952877
 ] 

Hudson commented on HDFS-14739:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17541 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17541/])
HDFS-14739. RBF: LS command for mount point shows wrong owner and (ayushsaxena: 
rev 375224edebb1c937afe4bbea8fe884499ca8ece5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/RouterResolveException.java


> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14739-trunk-001.patch, HDFS-14739-trunk-002.patch, 
> HDFS-14739-trunk-003.patch, HDFS-14739-trunk-004.patch, 
> HDFS-14739-trunk-005.patch, HDFS-14739-trunk-006.patch, 
> HDFS-14739-trunk-007.patch, HDFS-14739-trunk-008.patch, 
> HDFS-14739-trunk-009.patch, HDFS-14739-trunk-010.patch, 
> HDFS-14739-trunk-011.patch, image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When getListing("/mnt") is called, the owner of */mnt/test1* in the result 
> should be *mnt_test1* instead of *test1*.
>  
> And with the mount table below, getListing("/mnt") should succeed instead of 
> throwing an IOException when dfs.federation.router.default.nameservice.enable is 
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951227#comment-16951227
 ] 

Hudson commented on HDFS-14886:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17534 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17534/])
HDFS-14886. In NameNode Web UI's Startup Progress page, Loading edits 
(surendralilhore: rev 336abbd8737f3dff38f7bdad9721511c711c522b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec
> 
>
> Key: HDFS-14886
> URL: https://issues.apache.org/jira/browse/HDFS-14886
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14886.001.patch, HDFS-14886.002.patch, 
> HDFS-14886.003.patch, HDFS-14886_After.png, HDFS-14886_before.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14856) Add ability to import file ACLs from remote store

2019-10-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951157#comment-16951157
 ] 

Hudson commented on HDFS-14856:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17533 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17533/])
HDFS-14856. Fetch file ACLs while mounting external store. (#1478) (virajith: 
rev fabd41fa480303f86bfe7b6ae0277bc0b6015f80)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
* (add) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSTreeWalk.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
* (edit) 
hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java


> Add ability to import file ACLs from remote store
> -
>
> Key: HDFS-14856
> URL: https://issues.apache.org/jira/browse/HDFS-14856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, the 
> external store scanner, {{FsTreeWalk,}} ignores any ACLs on the data. In a 
> secure HDFS setup where external storage system and HDFS belong to the same 
> security domain, uniform enforcement of the authorization policies may be 
> desired. This task aims to extend the ability of the external store scanner 
> to support this use case. When configured, the scanner should attempt to 
> fetch ACLs and provide it to the consumer.
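> A minimal sketch of what an opt-in ACL fetch could look like, assuming a 
> Configuration {{conf}}, a remote FileSystem {{fs}}, and a Path {{path}} in 
> scope (the config key name here is illustrative, not necessarily the one the 
> patch adds):
> {code:java}
> // Illustrative only: the key name and wiring are assumptions, not the patch.
> boolean importAcls = conf.getBoolean("dfs.provided.acl.import.enabled", false);
> AclStatus acl = null;
> if (importAcls) {
>   try {
>     // FileSystem#getAclStatus is the standard Hadoop API for reading ACLs.
>     acl = fs.getAclStatus(path);
>   } catch (UnsupportedOperationException e) {
>     // The remote store does not expose ACLs; fall back to permissions only.
>   }
> }
> {code}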



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950076#comment-16950076
 ] 

Hudson commented on HDFS-14238:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17529/])
HDFS-14238. A log in NNThroughputBenchmark should change log level to 
(ayushsaxena: rev 5f4641a120331d049a55c519a0d15da18c820fed)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> A log in NNThroughputBenchmark should  change log level to "INFO" instead of 
> "ERROR"
> 
>
> Key: HDFS-14238
> URL: https://issues.apache.org/jira/browse/HDFS-14238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14238.patch
>
>
> In NNThroughputBenchmark line 150, LOG.error("Log level = " + logLevel.toString());
> should be changed to LOG.info(), since no error occurs here; the message only 
> tells us that the namenode log level has changed.
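> With SLF4J parameterized logging, one possible form of the fix is simply:
> {code:java}
> // INFO is the right level: the message is informational, not an error.
> LOG.info("Log level = {}", logLevel);
> {code}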



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949960#comment-16949960
 ] 

Hudson commented on HDFS-14899:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17528 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17528/])
HDFS-14899. Use Relative URLS in Hadoop HDFS RBF. Contributed by David 
(ayushsaxena: rev 6e5cd5273f1107635867ee863cb0e17ef7cc4afa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js


> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2213) Reduce key provider loading log level in OzoneFileSystem#getAdditionalTokenIssuers

2019-10-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949672#comment-16949672
 ] 

Hudson commented on HDDS-2213:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17526 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17526/])
HDDS-2213.Reduce key provider loading log level in (arp7: rev 
c561a70c49dd62d8ca563182af17ac21479a87de)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java


> Reduce key provider loading log level in 
> OzoneFileSystem#getAdditionalTokenIssuers
> --
>
> Key: HDDS-2213
> URL: https://issues.apache.org/jira/browse/HDDS-2213
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> OzoneFileSystem#getAdditionalTokenIssuers logs an error when a secure client 
> tries to collect an Ozone delegation token to run MR/Spark jobs but the Ozone 
> file system does not have a KMS provider configured. In this case we simply 
> return a null provider, as in the code below. Since this is a benign condition, 
> the log level should be reduced to debug.
> {code:java}
> KeyProvider keyProvider;
> try {
>   keyProvider = getKeyProvider();
> } catch (IOException ioe) {
>   LOG.error("Error retrieving KeyProvider.", ioe);
>   return null;
> }
> {code}
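> A sketch of the reduced-level variant (same control flow, only the level 
> changes):
> {code:java}
> KeyProvider keyProvider;
> try {
>   keyProvider = getKeyProvider();
> } catch (IOException ioe) {
>   // Benign when no KMS provider is configured; debug avoids alarming noise.
>   LOG.debug("Error retrieving KeyProvider.", ioe);
>   return null;
> }
> {code}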



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2282) scmcli pipeline list command throws NullPointerException

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949166#comment-16949166
 ] 

Hudson commented on HDDS-2282:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17523 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17523/])
HDDS-2282. scmcli pipeline list command throws NullPointerException. (bharat: 
rev f267917ce3cf282b32166e39af871a8d1231d090)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh
* (add) hadoop-ozone/dist/src/main/smoketest/scmcli/pipeline.robot


> scmcli pipeline list command throws NullPointerException
> 
>
> Key: HDDS-2282
> URL: https://issues.apache.org/jira/browse/HDDS-2282
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> ozone scmcli pipeline list
> {noformat}
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.<init>(XceiverClientManager.java:98)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.<init>(XceiverClientManager.java:83)
>   at 
> org.apache.hadoop.hdds.scm.cli.SCMCLI.createScmClient(SCMCLI.java:139)
>   at 
> org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:55)
>   at 
> org.apache.hadoop.hdds.scm.cli.pipeline.ListPipelinesSubcommand.call(ListPipelinesSubcommand.java:30)
>   at picocli.CommandLine.execute(CommandLine.java:1173)
>   at picocli.CommandLine.access$800(CommandLine.java:141)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
>   at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>   at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
>   at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
>   at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
>   at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
>   at org.apache.hadoop.hdds.scm.cli.SCMCLI.main(SCMCLI.java:101){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1986) Fix listkeys API

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949032#comment-16949032
 ] 

Hudson commented on HDDS-1986:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17522 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17522/])
HDDS-1986. Fix listkeys API. (#1588) (github: rev 
9c72bf462196e1d71a243903b74e3c4673f29efb)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java


> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listKeys API in the HA code path.
> In HA we have an in-memory cache: we put the result into the cache and return 
> the response, and the double buffer thread later picks it up and flushes it to 
> disk. So listKeys must consult both the in-memory cache and the RocksDB key 
> table to list the keys in a bucket.
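> A hedged sketch of the merge rule (names are illustrative, not the committed 
> code), assuming the bucket's key table rows have been read into a sorted map 
> {{dbRows}} and the cache snapshot into {{cacheSnapshot}}, where a null cache 
> value marks a delete that has not been flushed yet:
> {code:java}
> // Overlay cache entries on top of the RocksDB rows: the cache wins.
> TreeMap<String, OmKeyInfo> merged = new TreeMap<>(dbRows);
> for (Map.Entry<String, OmKeyInfo> e : cacheSnapshot.entrySet()) {
>   if (e.getValue() == null) {
>     merged.remove(e.getKey());            // deleted in cache, not yet flushed
>   } else {
>     merged.put(e.getKey(), e.getValue()); // created/updated in cache
>   }
> }
> // 'merged' now reflects both flushed and not-yet-flushed state.
> {code}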



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1984) Fix listBucket API

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949003#comment-16949003
 ] 

Hudson commented on HDDS-1984:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17521 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17521/])
HDDS-1984. Fix listBucket API. (#1555) (github: rev 
957253fea682b6389b02b0191b71b9e12087bd72)
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java


> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listBuckets API in the HA code path.
> In HA we have an in-memory cache: we put the result into the cache and return 
> the response, and the double buffer thread later picks it up and flushes it to 
> disk. So listBuckets must consult both the in-memory cache and the RocksDB 
> bucket table to list the buckets in a volume.
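> The same cache-overlay rule as in HDDS-1986 applies; the subtle case is that a 
> cached delete must hide a row that still exists in RocksDB. A hedged sketch of 
> the per-bucket lookup order (names are illustrative, not the committed code):
> {code:java}
> // Check the in-memory cache first; only fall back to RocksDB on a miss.
> CacheValue<OmBucketInfo> cached = bucketCache.get(bucketKey);
> OmBucketInfo bucket = (cached != null)
>     ? cached.getCacheValue()   // may be null: deleted but not yet flushed
>     : bucketTable.get(bucketKey);
> if (bucket != null) {
>   results.add(bucket);
> }
> {code}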



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2269) Provide config for fair/non-fair for OM RW Lock

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948841#comment-16948841
 ] 

Hudson commented on HDDS-2269:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17520 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17520/])
HDDS-2269. Provide config for fair/non-fair for OM RW Lock. (#1623) (nanda: rev 
4850b3aa86970f7af8f528564f2573becbd8e434)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java


> Provide config for fair/non-fair for OM RW Lock
> ---
>
> Key: HDDS-2269
> URL: https://issues.apache.org/jira/browse/HDDS-2269
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Provide a config in OzoneManagerLock to choose fair or non-fair mode for the 
> OM RW lock.
> Created based on review comments during HDDS-2244.
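> A minimal sketch of the wiring (the config key name below is an assumption, 
> not necessarily the one the patch adds):
> {code:java}
> // ReentrantReadWriteLock takes fairness at construction time:
> // fair=true gives FIFO hand-off, fair=false favors throughput.
> boolean fair = conf.getBoolean("ozone.om.lock.fair", false); // assumed key
> ReadWriteLock lock = new ReentrantReadWriteLock(fair);
> {code}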



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948752#comment-16948752
 ] 

Hudson commented on HDFS-14900:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17518 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17518/])
HDFS-14900. Fix build failure of hadoop-hdfs-native-client. Contributed 
(ayushsaxena: rev 104ccca916997bbf3c37d87adbae673f4dd42036)
* (edit) dev-support/docker/Dockerfile
* (edit) BUILDING.txt


>  Fix build failure of hadoop-hdfs-native-client
> ---
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14900.001.patch, HDFS-14900.002.patch, 
> HDFS-14900.003.patch
>
>
> HADOOP-16558 removed protocol buffers from the build requirements, but 
> libhdfspp requires libprotobuf and libprotoc. The {{-Pnative}} build fails if 
> Protocol Buffers is not installed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2266) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (Ozone)

2019-10-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948383#comment-16948383
 ] 

Hudson commented on HDDS-2266:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17517 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17517/])
HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug statement in the 
(shashikant: rev a031388a2e8b7ac60ebca5a08216e2dd19ea6933)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java


> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (Ozone)
> 
>
> Key: HDDS-2266
> URL: https://issues.apache.org/jira/browse/HDDS-2266
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI, Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The arguments of LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This Jira proposes to wrap all trace/debug 
> logging in
> LOG.isDebugEnabled and LOG.isTraceEnabled checks to prevent that evaluation.
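> The pattern is mechanical; a representative before/after (variable names are 
> illustrative):
> {code:java}
> // Before: the argument string is concatenated even when DEBUG is off.
> LOG.debug("Allocated block " + blockId + " for key " + keyName);
>
> // After: parameterized logging avoids the concatenation, and the guard
> // also skips evaluating any expensive arguments.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Allocated block {} for key {}", blockId, keyName);
> }
> {code}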



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948181#comment-16948181
 ] 

Hudson commented on HDFS-14898:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17516 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17516/])
HDFS-14898. Use Relative URLS in Hadoop HDFS HTTP FS. Contributed by 
(ayushsaxena: rev eeb58a07e24e6a1abdf32e1c198a5a1e9c2a8f1a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html


> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14898.1.patch, HDFS-14898.2.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947912#comment-16947912
 ] 

Hudson commented on HDFS-14754:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17515 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17515/])
HDFS-14754. Erasure Coding : The number of Under-Replicated Blocks never 
(surendralilhore: rev d76e2655ace56490a92da70bde9e651ce515f80c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java


> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, 
> HDFS-14754-addendum.002.patch, HDFS-14754-addendum.003.patch, 
> HDFS-14754.001.patch, HDFS-14754.002.patch, HDFS-14754.003.patch, 
> HDFS-14754.004.patch, HDFS-14754.005.patch, HDFS-14754.006.patch, 
> HDFS-14754.007.patch, HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2 with 6 DNs,
> we came across a scenario where, among the block group's 5 EC blocks, the same 
> block was replicated three times and two blocks went missing.
> The redundant replicas were not being deleted, and the missing blocks could not 
> be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2265) integration.sh may report false negative

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947755#comment-16947755
 ] 

Hudson commented on HDDS-2265:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17513 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17513/])
HDDS-2265. integration.sh may report false negative (elek: rev 
2d81abce5ecfec555eda4819a6e2f5b22e1cd9b8)
* (edit) hadoop-ozone/dev-support/checks/_mvn_unit_report.sh


> integration.sh may report false negative
> 
>
> Key: HDDS-2265
> URL: https://issues.apache.org/jira/browse/HDDS-2265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Sometimes the integration test run gets killed, and {{integration.sh}} 
> incorrectly reports "success". Example:
> {noformat:title=https://github.com/elek/ozone-ci-q4/tree/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/result}
> success
> {noformat}
> {noformat:title=https://github.com/elek/ozone-ci-q4/blob/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/output.log#L2457}
> /workdir/hadoop-ozone/dev-support/checks/integration.sh: line 22:   369 
> Killed  mvn -B -fn test -f pom.ozone.xml -pl 
> :hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools 
> -Dtest=\!TestMiniChaosOzoneCluster "$@"
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947716#comment-16947716
 ] 

Hudson commented on HDDS-2217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17512 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17512/])
HDDS-2217. Remove log4j and audit configuration from the docker-config (elek: 
rev 4b0a5bca465c84265b8305e001809fd1f986e8da)
* (edit) hadoop-ozone/dev-support/checks/_mvn_unit_report.sh


> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config files under 
> hadoop-ozone/dist/src/main/compose/...
> mainly to make it easier to reconfigure the log level of any component.
> As we already have an "ozone insight" tool which can modify the log level at 
> runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal the clusters should be tested: the Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the default etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947700#comment-16947700
 ] 

Hudson commented on HDDS-2217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17511 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17511/])
HDDS-2217. Remove log4j and audit configuration from the docker-config (elek: 
rev 1f954e679895f68d6ce9e822498daa2b142e7e46)
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-mr/common-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config


> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config files under 
> hadoop-ozone/dist/src/main/compose/...
> mainly to make it easier to reconfigure the log level of any component.
> As we already have an "ozone insight" tool which can modify the log level at 
> runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal the clusters should be tested: the Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the default etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2233) Remove ByteStringHelper and refactor the code to the place where it used

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947383#comment-16947383
 ] 

Hudson commented on HDDS-2233:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17507 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17507/])
HDDS-2233 - Remove ByteStringHelper and refactor the code to the place 
(shashikant: rev 1d279304079cb898e84c8f37ec40fb0e5cfb92ae)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/interfaces/ChunkManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerDummyImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringConversion.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestChunkManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ByteStringHelper.java


> Remove ByteStringHelper and refactor the code to the place where it used
> 
>
> Key: HDDS-2233
> URL: https://issues.apache.org/jira/browse/HDDS-2233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> See HDDS-2203, where I reported a race condition.
> Later in the discussion we agreed that it is better to refactor the code and 
> remove the class completely for now, which would also resolve the race 
> condition.
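> A hedged sketch of the direction (the class name matches the new 
> ByteStringConversion file in the commit, but its exact API is an assumption): 
> each caller creates its own conversion function from its own configuration, so 
> there is no lazily initialized static state left to race on:
> {code:java}
> // Illustrative: a per-client converter instead of a shared static helper.
> Function<ByteBuffer, ByteString> toByteString =
>     ByteStringConversion.createByteBufferConversion(conf); // assumed signature
> ByteString data = toByteString.apply(chunkBuffer);
> {code}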



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947166#comment-16947166
 ] 

Hudson commented on HDDS-2244:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17506 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17506/])
HDDS-2244. Use new ReadWrite lock in OzoneManager. (#1589) (github: rev 
87d9f3668ce00171d7c2dfbbaf84acb482317b67)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/S3GetSecretRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java


> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947133#comment-16947133
 ] 

Hudson commented on HDFS-14509:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17505 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17505/])
HDFS-14509. DN throws InvalidToken due to inequality of password when (cliang: 
rev 72ae371e7a6695f45f0d9cea5ae9aae83941d360)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java


> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch, 
> HDFS-14509-003.patch
>
>
> According to the docs, to upgrade a cluster from 2.x to 3.x we need to upgrade 
> the NN first, so there is an intermediate state where the NN is 3.x and the 
> DNs are 2.x. In that state, when a client reads (or writes) a block, it gets a 
> block token from the NN and delivers the token to a DN, which verifies the 
> token. But the verification in the current code is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
>   ...
>   id.readFields(new DataInputStream(
>       new ByteArrayInputStream(token.getIdentifier())));
>   ...
>   if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>     throw new InvalidToken("Block token with " + id.toString()
>         + " doesn't have the correct token password");
>   }
> }
> {code}
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So if the NN's identifier adds new fields, the DN loses those fields when it 
> re-serializes the identifier and computes the wrong password.
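> One compatible direction (a sketch, not necessarily the committed patch) is to 
> compute the expected password from the raw identifier bytes the client 
> presented, instead of re-serializing the parsed identifier, so fields unknown 
> to an older DN cannot change the result:
> {code:java}
> // token.getIdentifier() holds the exact bytes the NN signed, so the DN can
> // verify them even if it cannot parse every field in the identifier.
> byte[] expected = createPassword(token.getIdentifier(), key.getKey());
> if (!Arrays.equals(expected, token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>       + " doesn't have the correct token password");
> }
> {code}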



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2260) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS)

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947109#comment-16947109
 ] 

Hudson commented on HDDS-2260:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17503 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17503/])
HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the 
(bharat: rev 15a9beed1b0a14e8e1f0537294bdac13c9340465)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/AbstractContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/ThrottledAsyncChecker.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerCommandRequestPBHelper.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolumeChecker.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineReportHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueBlockIterator.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerAttribute.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/EndpointStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/RandomContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/LevelDBStore.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/TopNOrderedContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/DefaultProfile.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/u

[jira] [Commented] (HDFS-14859) Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946649#comment-16946649
 ] 

Hudson commented on HDFS-14859:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17502/])
HDFS-14859. Prevent unnecessary evaluation of costly operation (ayushsaxena: 
rev 91320b446171013ad47783d7400d646d2d71ca3d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java


> Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> ---
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch, HDFS-14859.007.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. So far, 
> however, the improvement only covers the case where the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>    return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would still cause unnecessary evaluations of 
> getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter 
> is set > 0, even though "blockSafe >= blockThreshold" is false most of the 
> time during NameNode startup safe mode. We could do something like below to 
> avoid this:
> {code}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   synchronized (this) {
>     // Evaluate the costly getNumLiveDataNodes() only after the cheap
>     // block check has passed, and only when a threshold is configured.
>     // The ternary must be parenthesized, or it would bind looser than
>     // && and return true whenever blockSafe < blockThreshold.
>     return blockSafe >= blockThreshold
>         && (datanodeThreshold > 0
>             ? blockManager.getDatanodeManager().getNumLiveDataNodes()
>                 >= datanodeThreshold
>             : true);
>   }
> }
> {code}
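
To make the short-circuit behaviour concrete, here is a minimal, 
self-contained sketch. It is not the committed patch: the class name, the 
call counter, and the stubbed getNumLiveDataNodes() are hypothetical 
stand-ins for DatanodeManager, and the `||` form is just the equivalent of 
the parenthesized ternary above. It shows that the costly call never runs 
while the block threshold is still unmet:

{code:java}
public class SafeModeThresholdSketch {
  static int liveDataNodeCalls = 0;

  // Hypothetical stand-in for the expensive
  // DatanodeManager#getNumLiveDataNodes(); counts its invocations.
  static int getNumLiveDataNodes() {
    liveDataNodeCalls++;
    return 3;
  }

  static boolean areThresholdsMet(long blockSafe, long blockThreshold,
      int datanodeThreshold) {
    // && short-circuits: the costly call is reached only when the cheap
    // block check passes and a datanode threshold is configured.
    return blockSafe >= blockThreshold
        && (datanodeThreshold <= 0
            || getNumLiveDataNodes() >= datanodeThreshold);
  }

  public static void main(String[] args) {
    // Typical NN startup: blocks still below threshold, so the costly
    // call is skipped even though datanodeThreshold > 0.
    areThresholdsMet(10, 100, 1);
    System.out.println("costly calls while blocks below threshold: "
        + liveDataNodeCalls);  // prints 0
    areThresholdsMet(100, 100, 1);
    System.out.println("costly calls once block threshold met: "
        + liveDataNodeCalls);  // prints 1
  }
}
{code}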



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-10-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946640#comment-16946640
 ] 

Hudson commented on HDFS-14814:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17501 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17501/])
HDFS-14814. RBF: RouterQuotaUpdateService supports inherited rule. 
(ayushsaxena: rev 761594549ec0c6bab50a28a7eb6c741aec7239d7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java


> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch, HDFS-14814.010.patch, HDFS-14814.011.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of each remote location is set to match its corresponding 
> MountTable entry; if that entry has no quota, the quota is inherited from 
> the nearest parent MountTable entry that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable entry, 
> then check and fix every entry whose quota doesn't match the rule above.
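
For illustration, a minimal, self-contained sketch of the nearest-parent 
resolution described above. This is not the committed patch: MountEntry and 
the plain Map are hypothetical stand-ins for the real MountTable store, and 
only the name quota is resolved here (the space quota would be resolved 
independently the same way):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class InheritedQuotaSketch {
  static class MountEntry {
    final long nsQuota; // name quota; -1 means unset
    final long ssQuota; // space quota; -1 means unset
    MountEntry(long nsQuota, long ssQuota) {
      this.nsQuota = nsQuota;
      this.ssQuota = ssQuota;
    }
  }

  // Walk up the mount path; the first entry (the path itself or the
  // nearest ancestor) that sets a name quota wins.
  static long effectiveNsQuota(Map<String, MountEntry> table, String path) {
    for (String p = path; p != null; p = parent(p)) {
      MountEntry e = table.get(p);
      if (e != null && e.nsQuota != -1) {
        return e.nsQuota;
      }
    }
    return -1; // no entry on the path sets a name quota
  }

  static String parent(String path) {
    int idx = path.lastIndexOf('/');
    return idx > 0 ? path.substring(0, idx) : null;
  }

  public static void main(String[] args) {
    Map<String, MountEntry> table = new HashMap<>();
    table.put("/dir-a", new MountEntry(10, 20));
    table.put("/dir-a/dir-b", new MountEntry(-1, 30));
    table.put("/dir-a/dir-b/dir-c", new MountEntry(-1, -1));
    // Neither /dir-a/dir-b/dir-c nor /dir-a/dir-b sets a name quota, so
    // both inherit nquota=10 from /dir-a, matching M3 in the example.
    System.out.println(effectiveNsQuota(table, "/dir-a/dir-b/dir-c")); // 10
  }
}
{code}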



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


