[jira] [Work logged] (HDFS-15795) EC: Returned wrong checksum when reconstruction was failed by exception

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15795?focusedWorklogId=544104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544104
 ]

ASF GitHub Bot logged work on HDFS-15795:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 08:17
Start Date: 29/Jan/21 08:17
Worklog Time Spent: 10m 
  Work Description: crossfire commented on pull request #2657:
URL: https://github.com/apache/hadoop/pull/2657#issuecomment-769652598


   @sodonnel Thanks for taking a look! It sounds good. 
   Let me fix this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544104)
Time Spent: 1h 10m  (was: 1h)

> EC: Returned wrong checksum when reconstruction was failed by exception
> ---
>
> Key: HDFS-15795
> URL: https://issues.apache.org/jira/browse/HDFS-15795
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ec, erasure-coding
>Reporter: Yushi Hayasaka
>Assignee: Yushi Hayasaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If the reconstruction task fails on StripedBlockChecksumReconstructor with an 
> exception, the resulting checksum is wrong because it is calculated from all 
> blocks except the failed one.
> This is caused by catching the exception in an inappropriate way; as a 
> result, the failed block is not fetched again.
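
For illustration, a minimal sketch of the failure mode described above, with 
hypothetical names (ChecksumSketch, fetch); it is not the actual 
StripedBlockChecksumReconstructor code:

{code:java}
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Hypothetical sketch of the bug pattern: swallowing the per-block
// exception lets the digest be computed over the remaining blocks only,
// so the caller receives a "successful" but wrong checksum.
class ChecksumSketch {
  byte[] wrongChecksum(List<byte[]> blocks) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    for (byte[] block : blocks) {
      try {
        md.update(fetch(block));  // may fail for one block
      } catch (IOException e) {
        // BUG: the failure is swallowed here, so the failed block is
        // silently left out of the digest and never fetched again.
      }
    }
    return md.digest();  // checksum of the wrong data
  }

  byte[] fetch(byte[] block) throws IOException { return block; }
}
{code}

Letting the exception propagate (or aborting the reconstruction) makes the 
caller refetch the failed block instead of accepting the partial digest.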



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15801) Backport HDFS-14582 to branch-2.10 (Failed to start DN with ArithmeticException when NULL checksum used)

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15801?focusedWorklogId=544168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544168
 ]

ASF GitHub Bot logged work on HDFS-15801:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 09:45
Start Date: 29/Jan/21 09:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2659:
URL: https://github.com/apache/hadoop/pull/2659#issuecomment-769697983


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 14s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  87m 38s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 125m 30s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogRace |
   |   | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2659/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2659 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3fed3b50a229 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / c1a3e81 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2659/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2659/1/testReport/ |
   | Max. process+thread count | 2230 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2659/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544168)
Time Spent: 20m  (was: 10m)

> Backport HDFS-14582 to branch-2.10 (Failed to start DN with 
> ArithmeticException when NULL checksum used)
> 
>
> Key: HDFS-15801
> URL: https://issues.apache.org/jira/browse/HDFS-15801
> Project: Hadoop HDFS
> 

[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-01-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274273#comment-17274273
 ] 

Xiaoqiao He commented on HDFS-15764:


Thanks [~hadoop_yangyun], [~ayushtkn] and [~elgoiri] for discussing this 
improvement here. I agree that relying on the FBR postpones the NN's awareness 
of deleted or corrupt blocks. However, I am concerned that there could be an 
RPC request flood (blockReceivedAndDeleted) to the NN when one disk fails 
before it is found by the FailVolumeScanner:
A. Send one RPC request to the NN (blockReport).
B. Send many RPC requests to the NN (blockReceivedAndDeleted); the amount is 
related to the number of blocks located on the failed volume.
Please correct me if I have misunderstood. Thanks.

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the period of the 
> full report is set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again, so that the 
> incremental block report can send the change to the namenode in the next 
> heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-01-29 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274302#comment-17274302
 ] 

Yang Yun commented on HDFS-15764:
-

Thanks [~hexiaoqiao] for your comments,

From my understanding, when notifyNamenodeDeletedBlock is called, it adds the 
block info to the 'pendingIBRs' map, and a batch of block info will be sent at 
the next heartbeat, as mentioned by [~elgoiri]. So the number of RPC requests 
to the NN is not related to the number of blocks located on the failed volume.
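
As an illustration of that batching, a simplified sketch with hypothetical 
names (IbrBatchSketch, NamenodeStub); it is not the real 
IncrementalBlockReportManager code. Each notification only buffers an entry, 
and one RPC per heartbeat flushes the whole batch, so the RPC count does not 
grow with the number of blocks on the failed volume:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of IBR batching: notifications are buffered in a
// pending map and a single RPC per heartbeat sends the whole batch.
class IbrBatchSketch {
  private final Map<Long, String> pendingIBRs = new HashMap<>();

  synchronized void notifyDeletedBlock(long blockId) {
    pendingIBRs.put(blockId, "DELETED");    // no RPC here, just buffering
  }

  synchronized void notifyReceivedBlock(long blockId) {
    pendingIBRs.put(blockId, "RECEIVED");
  }

  // Called once per heartbeat: one RPC carries every buffered block.
  synchronized void sendIBRs(NamenodeStub nn) {
    if (!pendingIBRs.isEmpty()) {
      nn.blockReceivedAndDeleted(new HashMap<>(pendingIBRs));
      pendingIBRs.clear();
    }
  }

  interface NamenodeStub {
    void blockReceivedAndDeleted(Map<Long, String> batch);
  }
}
{code}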

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the period of the 
> full report is set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again, so that the 
> incremental block report can send the change to the namenode in the next 
> heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-01-29 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274302#comment-17274302
 ] 

Yang Yun edited comment on HDFS-15764 at 1/29/21, 10:26 AM:


Thanks [~hexiaoqiao] for your comments,

From my understanding, when notifyNamenodeDeletedBlock is called, it adds the 
block info to the 'pendingIBRs' map, and a batch of block info will be sent at 
the next heartbeat, as mentioned by [~elgoiri]. So the number of RPC requests 
to the NN is not related to the number of blocks located on the failed volume.


was (Author: hadoop_yangyun):
Thanks [~hexiaoqiao] for your comments,

>From I understanding, When call notifyNamenodeDeletedBlock, it add the block 
>info to the map 'pendingIBRs',  a batch of blocks info will be sent at next 
>hearbeat as mentioned by [~elgoiri]. So the RPC request  to NN is not related 
>to the number of blocks located at the failed volume.

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the period of the 
> full report is set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again, so that the 
> incremental block report can send the change to the namenode in the next 
> heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-01-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274316#comment-17274316
 ] 

Xiaoqiao He commented on HDFS-15764:


It is actually a good choice to rely on pendingIBR. If we do that, we should 
tune the IBR interval (IIRC, the default value is currently 0) or add another 
separate configuration to avoid an RPC flood, IMO.
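
If tuning the existing knob is the route taken, something like the following 
would do it (a sketch; I believe the property is 
dfs.blockreport.incremental.intervalMsec with a default of 0, but please 
verify it against the target Hadoop version):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: raising the IBR interval so that a burst of
// blockReceivedAndDeleted notifications is coalesced into fewer RPCs.
public class TuneIbrInterval {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("dfs.blockreport.incremental.intervalMsec", 500L);
    System.out.println(
        conf.getLong("dfs.blockreport.incremental.intervalMsec", 0L));
  }
}
{code}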

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the period of the 
> full report is set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again, so that the 
> incremental block report can send the change to the namenode in the next 
> heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-01-29 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274332#comment-17274332
 ] 

Renukaprasad C commented on HDFS-15792:
---

The above test failures are not related to the code changes.

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-15792.001.patch, HDFS-15792.002.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading failed with a ClassCastException - 
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap (which is not thread-safe) in concurrent 
> scenarios.
> The same issue has been reported against Java and closed as a usage issue - 
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading 
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:926)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1665)
>   at 
> org.apache.hadoop.hdfs.ser

[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=544223&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544223
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 11:31
Start Date: 29/Jan/21 11:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#issuecomment-769750535


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 11s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 592 unchanged - 1 
fixed = 592 total (was 593)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 219m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 307m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.TestAclsEndToEnd |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2625 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 3918481bb8c9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa15594ae60 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK v

[jira] [Commented] (HDFS-14343) RBF: Fix renaming folders spread across multiple subclusters

2021-01-29 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274357#comment-17274357
 ] 

Ayush Saxena commented on HDFS-14343:
-

Hi [~zhengchenyu], 
For your case-1:
bq. If /user/userA is mountable which mounts two nameservice: ns1, ns2. But if 
both hdfs://ns1/user/userA/a.log and hdfs://ns2/user/userA/a.log exists. I want 
to remove hdfs://ns-fed/user/userA/a.log (Note: a.log is file) to trash, then 
only one nameservice take effect.

In multiple-destination scenarios, the same directories can exist in both 
namespaces, but not files. Try creating a file through the router: it won't 
allow you to create it in another namespace if the same path already exists in 
one namespace. We have a check during the create call, in case there are 
multiple destinations, to verify whether the file already exists.
Such a scenario can only happen if someone goes directly to the namenode and 
creates the file instead of going through the Router, and that isn't a valid 
use case. If you create through the router, you will be able to delete through 
it without any issues. A condensed sketch of this check follows below.

Case-2
bq. In other way, if we hdfs://ns-fed/user/userA/dirA (Note: dirA is 
directroy.) If hdfs://ns1/user/userA/dirA's permission is not same with 
hdfs://ns2/user/userA/dirA's permission.

Again, if you set permissions through the router, the permissions will be the 
same in both namespaces.

Case-3
In case one nameservice is down, the call will fail unless fault tolerance is 
turned on. That is another broad subject...
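
The condensed sketch of the case-1 create check (hypothetical names such as 
CreateCheckSketch and existsInNamespace; the real logic lives in the Router's 
create path):

{code:java}
import java.io.IOException;
import java.util.List;

// Condensed sketch of the case-1 check: with multiple destinations, refuse
// to create a file whose path already exists in another namespace.
class CreateCheckSketch {
  void create(String path, List<String> destinations) throws IOException {
    if (destinations.size() > 1) {
      for (String ns : destinations) {
        if (existsInNamespace(ns, path)) {
          throw new IOException(path + " already exists in " + ns);
        }
      }
    }
    createInNamespace(destinations.get(0), path);
  }

  boolean existsInNamespace(String ns, String path) { return false; }  // stub
  void createInNamespace(String ns, String path) { }                   // stub
}
{code}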

> RBF: Fix renaming folders spread across multiple subclusters
> 
>
> Key: HDFS-14343
> URL: https://issues.apache.org/jira/browse/HDFS-14343
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14343-HDFS-13891-01.patch, 
> HDFS-14343-HDFS-13891-02.patch, HDFS-14343-HDFS-13891-03.patch, 
> HDFS-14343-HDFS-13891-04.patch, HDFS-14343-HDFS-13891-05.patch
>
>
> The {{RouterClientProtocol#rename()}} function assumes that we are renaming 
> files and only renames one of them (i.e., {{invokeSequential()}}). In the 
> case of folders which are in all subclusters (e.g., HASH_ALL) we should 
> rename all locations (i.e., {{invokeAll()}}).
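
A rough sketch of the dispatch described above (hypothetical types and helper 
names; the actual change is in {{RouterClientProtocol}} using 
{{invokeSequential()}} and {{invokeAll()}}):

{code:java}
import java.util.List;

// Rough sketch: a file rename needs only one subcluster, but a folder that
// exists in every subcluster (e.g. HASH_ALL) must be renamed in all of them.
class RenameDispatchSketch {
  boolean rename(List<String> srcLocations, boolean isDirectory) {
    if (isDirectory && srcLocations.size() > 1) {
      boolean ok = true;                  // invokeAll-style: all locations
      for (String loc : srcLocations) {
        ok &= renameInSubcluster(loc);
      }
      return ok;
    }
    for (String loc : srcLocations) {     // invokeSequential-style
      if (renameInSubcluster(loc)) {
        return true;
      }
    }
    return false;
  }

  boolean renameInSubcluster(String location) { return true; }  // stub
}
{code}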



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible

2021-01-29 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274370#comment-17274370
 ] 

Yang Yun commented on HDFS-15764:
-

If the interval is 0 (the default), it will use the remaining heartbeat time.
{code:java}
// Wait the configured IBR interval if one is set and shorter than the
// remaining heartbeat time; otherwise wait out the heartbeat itself.
synchronized void waitTillNextIBR(long waitTime) {
  if (waitTime > 0 && !sendImmediately()) {
    try {
      wait(ibrInterval > 0 && ibrInterval < waitTime ? ibrInterval : waitTime);
    } catch (InterruptedException ie) {
      LOG.warn(getClass().getSimpleName() + " interrupted");
    }
  }
}{code}

> Notify Namenode missing or new block on disk as soon as possible
> 
>
> Key: HDFS-15764
> URL: https://issues.apache.org/jira/browse/HDFS-15764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch
>
>
> When a block file is deleted on disk or copied back to the disk, the 
> DirectoryScanner can find the change, but the namenode knows about the change 
> only at the next full block report. And in a big cluster the period of the 
> full report is set to a long time interval.
> Call notifyNamenodeDeletedBlock if block files are deleted and call 
> notifyNamenodeReceivedBlock if the block files are found again, so that the 
> incremental block report can send the change to the namenode in the next 
> heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-01-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274382#comment-17274382
 ] 

Xiaoqiao He commented on HDFS-15792:


Thanks [~prasad-acit] for your report. Would you mind sharing which version 
you have deployed?

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-15792.001.patch, HDFS-15792.002.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading failed with a ClassCastException - 
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap (which is not thread-safe) in concurrent 
> scenarios.
> The same issue has been reported against Java and closed as a usage issue - 
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading 
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:926)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1665)
>   at 
> o

[jira] [Commented] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-01-29 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274396#comment-17274396
 ] 

Vinayakumar B commented on HDFS-15790:
--

Adding a new RpcKind makes it difficult to maintain multiple implementations of 
the server-side protocol supporting the same functionality, because it is 
equally important to serve requests from older clients which still send 
requests with RpcKind.PROTOCOL_BUFFERS.

Instead, I have an approach where ProtobufRpcEngine and ProtobufRpcEngine2 can 
co-exist.

ProtobufRpcEngine: supports existing implementations based on protobuf 2.5.0 on 
both the client side and the server side. No code changes are required in 
downstreams that use this.
ProtobufRpcEngine2: uses the shaded protobuf 3.7.1 and supports client-side and 
server-side implementations based on the shaded protobuf 3.7.1.

In the change below, ProtobufRpcEngine2 itself will handle both versions of 
requests for RpcKind.PROTOCOL_BUFFERS. ProtobufRpcEngine2 will hand over the 
processing to ProtobufRpcEngine if the implementation is found to be using the 
older version of protobuf (2.5.0).

So no conflict arises from the co-existence.
Please verify this change if possible.

[https://github.com/vinayakumarb/hadoop/tree/bugs/HDFS-15790]
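
A schematic of the proposed handover (illustrative only, with hypothetical 
names such as RpcDispatchSketch and usesProtobuf25; the real change is in the 
branch linked above):

{code:java}
// Illustrative sketch: one RpcKind (PROTOCOL_BUFFERS), two engines. The new
// engine dispatches, delegating to the old engine for protobuf-2.5.0-based
// protocol implementations.
class RpcDispatchSketch {
  Object call(Object protocolImpl, byte[] requestBytes) throws Exception {
    if (usesProtobuf25(protocolImpl)) {
      // Old engine: non-shaded protobuf 2.5.0 messages.
      return legacyEngineCall(protocolImpl, requestBytes);
    }
    // New engine: shaded protobuf 3.7.1 messages.
    return shadedEngineCall(protocolImpl, requestBytes);
  }

  boolean usesProtobuf25(Object impl) {
    // Placeholder check; a real engine would inspect the generated
    // message/service base classes of the implementation.
    return impl.getClass().getName().contains("Legacy");
  }

  Object legacyEngineCall(Object impl, byte[] req) { return null; }  // stub
  Object shadedEngineCall(Object impl, byte[] req) { return null; }  // stub
}
{code}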

 

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive 
> project.  This was not an awesome thing to do between minor versions with 
> regard to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated or anything, so let us 
> have both protocols available at the same time.  In Hadoop 4.x, Protobuf 2 
> support can be dropped.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?focusedWorklogId=544417&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544417
 ]

ASF GitHub Bot logged work on HDFS-15740:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 18:11
Start Date: 29/Jan/21 18:11
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #2567:
URL: https://github.com/apache/hadoop/pull/2567


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544417)
Remaining Estimate: 15h 50m  (was: 16h)
Time Spent: 8h 10m  (was: 8h)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 24h
>  Time Spent: 8h 10m
>  Remaining Estimate: 15h 50m
>
> The *basename* function isn't available in the Visual Studio 2019 compiler. 
> We need to make it cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?focusedWorklogId=544416&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544416
 ]

ASF GitHub Bot logged work on HDFS-15740:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 18:11
Start Date: 29/Jan/21 18:11
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2567:
URL: https://github.com/apache/hadoop/pull/2567#issuecomment-769963982


   I think this is safe enough right now.
   Going ahead with the merge.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544416)
Remaining Estimate: 16h  (was: 16h 10m)
Time Spent: 8h  (was: 7h 50m)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 24h
>  Time Spent: 8h
>  Remaining Estimate: 16h
>
> The *basename* function isn't available in the Visual Studio 2019 compiler. 
> We need to make it cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15740:
---
   Fix Version/s: 3.3.1
Hadoop Flags: Reviewed
Target Version/s:   (was: 3.4.0)
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>   Original Estimate: 24h
>  Time Spent: 8h 10m
>  Remaining Estimate: 15h 50m
>
> The *basename* function isn't available in the Visual Studio 2019 compiler. 
> We need to make it cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15740:
---
Status: Patch Available  (was: Open)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 24h
>  Time Spent: 8h 10m
>  Remaining Estimate: 15h 50m
>
> The *basename* function isn't available in the Visual Studio 2019 compiler. 
> We need to make it cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=544438&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544438
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 19:05
Start Date: 29/Jan/21 19:05
Worklog Time Spent: 10m 
  Work Description: Jing9 commented on a change in pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#discussion_r567032636



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -102,9 +116,28 @@ boolean addVolume(FsVolumeImpl volume) {
     return true;
   }
 
-
   void removeVolume(FsVolumeImpl target) {
     storageTypeVolumeMap.remove(target.getStorageType());
+    capacityRatioMap.remove(target.getStorageType());
+  }
+
+  /**
+   * Set customized capacity ratio for a storage type.
+   * Return false if the value is too big.
+   */
+  boolean setCapacityRatio(StorageType storageType,
+      double capacityRatio) {
+    double leftover = 1;
+    for (Map.Entry<StorageType, Double> e : capacityRatioMap.entrySet()) {
+      if (e.getKey() != storageType) {
+        leftover -= e.getValue();
+      }
+    }
+    if (leftover < capacityRatio) {
+      return false;
+    }
+    capacityRatioMap.put(storageType, capacityRatio);

Review comment:
Is it possible that this setCapacityRatio call is triggered by a refreshVolumes 
op? In that case, if we do not reload the capacity ratio configuration for 
refreshVolumes, we can have an inconsistency here. So I think we need to make 
sure this new feature works well along with refreshVolumes. What do you think?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544438)
Time Spent: 1h 50m  (was: 1h 40m)

> Allow configuring DISK/ARCHIVE capacity for individual volumes
> --
>
> Key: HDFS-15683
> URL: https://issues.apache.org/jira/browse/HDFS-15683
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This is a follow-up task for https://issues.apache.org/jira/browse/HDFS-15548
> In case that the datanode disks are not unified, we should allow admins to 
> configure capacity for individual volumes on top of the default one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15795) EC: Returned wrong checksum when reconstruction was failed by exception

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15795?focusedWorklogId=544477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544477
 ]

ASF GitHub Bot logged work on HDFS-15795:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 20:32
Start Date: 29/Jan/21 20:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2657:
URL: https://github.com/apache/hadoop/pull/2657#issuecomment-770034361


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  |  trunk passed  |
   | -0 :warning: |  patch  |   3m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  12m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 189m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2657/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 274m 21s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2657/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2657 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 158449faff31 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa15594ae60 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~

[jira] [Commented] (HDFS-15798) EC: Reconstruct task failed, and It would be XmitsInProgress of DN has negative number

2021-01-29 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275345#comment-17275345
 ] 

Stephen O'Donnell commented on HDFS-15798:
--

The 002 patch LGTM, +1. I will commit on Monday if nobody objects.

> EC: Reconstruct task failed, and It would be XmitsInProgress of DN has 
> negative number
> --
>
> Key: HDFS-15798
> URL: https://issues.apache.org/jira/browse/HDFS-15798
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: huhaiyang
>Assignee: huhaiyang
>Priority: Major
> Attachments: HDFS-15798.001.patch, HDFS-15798.002.patch
>
>
> When an EC reconstruct task fails, the decrementXmitsInProgress call in the 
> processErasureCodingTasks operation uses an abnormal value;
> as a result, the DN's XmitsInProgress can become a negative number, which 
> affects how the NN chooses pending tasks based on the ratio between the 
> lengths of the replication and erasure-coded block queues.
> {code:java}
> // 1.ErasureCodingWorker.java
> public void processErasureCodingTasks(
> Collection<BlockECReconstructionInfo> ecTasks) {
>   for (BlockECReconstructionInfo reconInfo : ecTasks) {
> int xmitsSubmitted = 0;
> try {
>   ...
>   // It may throw IllegalArgumentException from task#stripedReader
>   // constructor.
>   final StripedBlockReconstructor task =
>   new StripedBlockReconstructor(this, stripedReconInfo);
>   if (task.hasValidTargets()) {
> // See HDFS-12044. We increase xmitsInProgress even the task is only
> // enqueued, so that
> //   1) NN will not send more tasks than what DN can execute and
> //   2) DN will not throw away reconstruction tasks, and instead keeps
> //  an unbounded number of tasks in the executor's task queue.
> xmitsSubmitted = Math.max((int)(task.getXmits() * xmitWeight), 1);
> getDatanode().incrementXmitsInProcess(xmitsSubmitted); //  task start 
> increment
> stripedReconstructionPool.submit(task);
>   } else {
> LOG.warn("No missing internal block. Skip reconstruction for task:{}",
> reconInfo);
>   }
> } catch (Throwable e) {
>   getDatanode().decrementXmitsInProgress(xmitsSubmitted); //  task failed 
> decrement,  XmitsInProgress is decremented by the previous value
>   LOG.warn("Failed to reconstruct striped block {}",
>   reconInfo.getExtendedBlock().getLocalBlock(), e);
> }
>   }
> }
> // 2.StripedBlockReconstructor.java
> public void run() {
>   try {
> initDecoderIfNecessary();
>...
>   } catch (Throwable e) {
> LOG.warn("Failed to reconstruct striped block: {}", getBlockGroup(), e);
> getDatanode().getMetrics().incrECFailedReconstructionTasks();
>   } finally {
> float xmitWeight = getErasureCodingWorker().getXmitWeight();
> // if the xmits is smaller than 1, the xmitsSubmitted should be set to 1
> // because if it set to zero, we cannot to measure the xmits submitted
> int xmitsSubmitted = Math.max((int) (getXmits() * xmitWeight), 1);
> getDatanode().decrementXmitsInProgress(xmitsSubmitted); // task complete 
> decrement
> ...
>   }
> }{code}
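
One possible shape of the fix, sketched below with hypothetical names 
(XmitsGuardSketch); it is not necessarily what the attached 002 patch does. 
The idea is to subtract on the failure path exactly the amount that was 
added, so the counter can never go negative:

{code:java}
// Guarded decrement sketch: remember how much was added before submission
// and undo only that amount if the submit fails.
class XmitsGuardSketch {
  private int xmitsInProgress = 0;

  void process(Runnable task, int weight) {
    int added = 0;
    try {
      added = Math.max(weight, 1);
      xmitsInProgress += added;      // increment before submitting
      submit(task);                  // may throw
    } catch (Throwable t) {
      // 'added' is still 0 if the failure happened before the increment,
      // so we undo exactly what was added and nothing more.
      xmitsInProgress -= added;
    }
  }

  void submit(Runnable task) { task.run(); }
}
{code}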



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=544527&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544527
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 22:28
Start Date: 29/Jan/21 22:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#issuecomment-770084009


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 592 unchanged - 1 
fixed = 592 total (was 593)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 220m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 306m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStream |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2625 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 823be6eb706c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh

[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=544529&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544529
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 29/Jan/21 22:30
Start Date: 29/Jan/21 22:30
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#discussion_r567129097



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeInfo.java
##
@@ -102,9 +116,28 @@ boolean addVolume(FsVolumeImpl volume) {
 return true;
   }
 
-
   void removeVolume(FsVolumeImpl target) {
 storageTypeVolumeMap.remove(target.getStorageType());
+capacityRatioMap.remove(target.getStorageType());
+  }
+
+  /**
+   * Set customize capacity ratio for a storage type.
+   * Return false if the value is too big.
+   */
+  boolean setCapacityRatio(StorageType storageType,
+  double capacityRatio) {
+double leftover = 1;
+for (Map.Entry<StorageType, Double> e : capacityRatioMap.entrySet()) {
+  if (e.getKey() != storageType) {
+leftover -= e.getValue();
+  }
+}
+if (leftover < capacityRatio) {
+  return false;
+}
+capacityRatioMap.put(storageType, capacityRatio);

Review comment:
   Yes, I agree. This is a good point. We need to refresh the capacity 
ratio as well when calling refreshVolumes to make this a complete feature. Let 
me spend some time on it.
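
   For readers following along, a self-contained sketch of the leftover check in 
the diff above (StorageType is simplified to a two-value stand-in; the real enum 
is org.apache.hadoop.fs.StorageType). It shows why a new ratio is rejected once 
the ratios already set for the other storage types on the mount leave too 
little room:

{code:java}
import java.util.EnumMap;
import java.util.Map;

public class CapacityRatioCheck {
  // Simplified stand-in for org.apache.hadoop.fs.StorageType.
  enum StorageType { DISK, ARCHIVE }

  private final Map<StorageType, Double> capacityRatioMap =
      new EnumMap<>(StorageType.class);

  // Mirrors the patch: accept the new ratio only if the ratios already
  // configured for the other storage types leave enough room for it.
  boolean setCapacityRatio(StorageType storageType, double capacityRatio) {
    double leftover = 1;
    for (Map.Entry<StorageType, Double> e : capacityRatioMap.entrySet()) {
      if (e.getKey() != storageType) {
        leftover -= e.getValue();
      }
    }
    if (leftover < capacityRatio) {
      return false;
    }
    capacityRatioMap.put(storageType, capacityRatio);
    return true;
  }

  public static void main(String[] args) {
    CapacityRatioCheck mount = new CapacityRatioCheck();
    System.out.println(mount.setCapacityRatio(StorageType.DISK, 0.7));    // true
    System.out.println(mount.setCapacityRatio(StorageType.ARCHIVE, 0.3)); // true
    System.out.println(mount.setCapacityRatio(StorageType.ARCHIVE, 0.4)); // false: DISK already claims 0.7
  }
}
{code}

   Note that re-setting a ratio for a storage type ignores that type's previous 
value, which is why the loop skips the entry being updated.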





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544529)
Time Spent: 2h 10m  (was: 2h)

> Allow configuring DISK/ARCHIVE capacity for individual volumes
> --
>
> Key: HDFS-15683
> URL: https://issues.apache.org/jira/browse/HDFS-15683
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This is a follow-up task for https://issues.apache.org/jira/browse/HDFS-15548
> In case that the datanode disks are not unified, we should allow admins to 
> configure capacity for individual volumes on top of the default one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15757) RBF: Improving Router Connection Management

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15757?focusedWorklogId=544617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544617
 ]

ASF GitHub Bot logged work on HDFS-15757:
-

Author: ASF GitHub Bot
Created on: 30/Jan/21 01:25
Start Date: 30/Jan/21 01:25
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2651:
URL: https://github.com/apache/hadoop/pull/2651#issuecomment-770130994


   @goiri Added tests and addressed some comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544617)
Time Spent: 1h  (was: 50m)

> RBF: Improving Router Connection Management
> ---
>
> Key: HDFS-15757
> URL: https://issues.apache.org/jira/browse/HDFS-15757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ 
> Router Connection Management.pdf
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We have seen high number of connections from Router to namenodes, leaving 
> namenodes unstable.
> This ticket is trying to reduce connections through some changes. Please take 
> a look at the design and leave comments. 
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15757) RBF: Improving Router Connection Management

2021-01-29 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275455#comment-17275455
 ] 

Fengnan Li commented on HDFS-15757:
---

Updated the latest patch on GitHub.
We saw ~50% fewer connections with the min ratio set to 50%, and some improvement 
in ProxyTime, since it includes getConnection. I will update with more data.
Please try it with your setup.
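
As a purely hypothetical sketch (the names below are invented and this is not 
the actual Router ConnectionPool code in hadoop-hdfs-rbf), one plausible reading 
of the "min ratio" is a floor below which idle connections are not reclaimed 
during pool cleanup:

{code:java}
public class MinRatioCleanup {
  // Hypothetical helper: how many idle connections cleanup may close while
  // keeping at least minActiveRatio * poolSize connections alive, so bursts
  // of RPC traffic do not pay reconnection cost against the NameNode.
  static int connectionsToClose(int poolSize, int activeConnections,
                                double minActiveRatio) {
    int floor = (int) Math.ceil(poolSize * minActiveRatio);
    int idle = poolSize - activeConnections;
    return Math.max(0, Math.min(idle, poolSize - floor));
  }

  public static void main(String[] args) {
    // Pool of 32 connections, 4 active, min ratio 0.5: at most 16 may be
    // closed, leaving 16 pooled connections in place.
    System.out.println(connectionsToClose(32, 4, 0.5)); // 16
  }
}
{code}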

> RBF: Improving Router Connection Management
> ---
>
> Key: HDFS-15757
> URL: https://issues.apache.org/jira/browse/HDFS-15757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ 
> Router Connection Management.pdf
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We have seen high number of connections from Router to namenodes, leaving 
> namenodes unstable.
> This ticket is trying to reduce connections through some changes. Please take 
> a look at the design and leave comments. 
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15757) RBF: Improving Router Connection Management

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15757?focusedWorklogId=544639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544639
 ]

ASF GitHub Bot logged work on HDFS-15757:
-

Author: ASF GitHub Bot
Created on: 30/Jan/21 02:37
Start Date: 30/Jan/21 02:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2651:
URL: https://github.com/apache/hadoop/pull/2651#issuecomment-770142137


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  1s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 16s |  |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 16s |  |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 12s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 109m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2651 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e92d69011a59 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad483fd66e8 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/4/testReport/ |
   | Max. process+thread count | 2609 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/4/console |
   | versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https:

[jira] [Created] (HDFS-15802) Address build failure of hadoop-hdfs-native-client on RHEL/CentOS 8

2021-01-29 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HDFS-15802:
---

 Summary: Address build failure of hadoop-hdfs-native-client on 
RHEL/CentOS 8
 Key: HDFS-15802
 URL: https://issues.apache.org/jira/browse/HDFS-15802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


The building environment described in BUILDING.txt does not work for RHEL/CentOS 8 
due to HDFS-15740.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15802) Address build failure of hadoop-hdfs-native-client on RHEL/CentOS 8

2021-01-29 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275474#comment-17275474
 ] 

Masatake Iwasaki commented on HDFS-15802:
-

For gcc, using gcc-devtoolset-9-toolchain of AppStream or adding {{-lstdc++fs}} 
on linking might be an option. For cmake, installing CMake 3.19 or above from 
source might be needed.

> Address build failure of hadoop-hdfs-native-client on RHEL/CentOS 8
> ---
>
> Key: HDFS-15802
> URL: https://issues.apache.org/jira/browse/HDFS-15802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The building environment described in BUILDING.txt does not work for 
> RHEL/CentOS 8 due to HDFS-15740.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15802) Address build failure of hadoop-hdfs-native-client on RHEL/CentOS 8

2021-01-29 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275474#comment-17275474
 ] 

Masatake Iwasaki edited comment on HDFS-15802 at 1/30/21, 5:07 AM:
---

For gcc, using gcc-toolset-9 of AppStream or adding {{-lstdc++fs}} on linking 
might be an option. For cmake, installing CMake 3.19 or above from source might 
be needed.


was (Author: iwasakims):
For gcc, using gcc-devtoolset-9-toolchain of AppStream or adding {{-lstdc++fs}} 
on linking might be an option. For cmake, installing CMake 3.19 or above from 
source might be needed.

> Address build failure of hadoop-hdfs-native-client on RHEL/CentOS 8
> ---
>
> Key: HDFS-15802
> URL: https://issues.apache.org/jira/browse/HDFS-15802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The building environment described in BUILDING.txt does not work for 
> RHEL/CentOS 8 due to HDFS-15740.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15792) ClassCastException while loading FSImage

2021-01-29 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275491#comment-17275491
 ] 

Renukaprasad C commented on HDFS-15792:
---

Thanks [~hexiaoqiao],
We found this issue with 3.1.1; the same applies to trunk as well. We can push 
the fix to all of these branches: 3.1.5, 3.2.3, 3.3.1, 3.4.0.


> ClassCastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-15792.001.patch, HDFS-15792.002.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading has failed with a ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios: HashMap is not 
> thread-safe, so concurrent writes can corrupt its treeified bins.
> The same issue has been reported against Java and closed as a usage error: 
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading 
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:926)
>   at 
> org.apache.hadoop.hdfs.server.n
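
The failure mode quoted above can be reproduced outside Hadoop. Below is a 
minimal sketch (not the HDFS-15792 fix itself) that races two writers on a 
plain HashMap with colliding hash codes, next to the thread-safe 
ConcurrentHashMap alternative:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HashMapTreeifyRace {
  // A constant hashCode forces every entry into one bin, which HashMap
  // converts from a linked list to a red-black tree past 8 entries; that
  // treeification is where the Node -> TreeNode cast happens.
  static final class CollidingKey {
    final int id;
    CollidingKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; }
    @Override public boolean equals(Object o) {
      return o instanceof CollidingKey && ((CollidingKey) o).id == id;
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Unsafe under concurrency: racy puts may interleave mid-treeification
    // and can surface as the ClassCastException in the stack trace above.
    Map<CollidingKey, Integer> unsafe = new HashMap<>();
    // Safe: ConcurrentHashMap treeifies under a per-bin lock.
    Map<CollidingKey, Integer> safe = new ConcurrentHashMap<>();

    Runnable writer = () -> {
      for (int i = 0; i < 100_000; i++) {
        unsafe.put(new CollidingKey(i), i); // racy, may corrupt the bin
        safe.put(new CollidingKey(i), i);   // always well-defined
      }
    };
    Thread t1 = new Thread(writer);
    Thread t2 = new Thread(writer);
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println("unsafe size (may be wrong): " + unsafe.size());
    System.out.println("safe size: " + safe.size());
  }
}
{code}

The race is nondeterministic, so the exception may take several runs to appear; 
the point is that any concurrent mutation of a plain HashMap is undefined, 
which is why the fsimage loader needs either external synchronization or a 
concurrent map.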

[jira] [Work logged] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?focusedWorklogId=544683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544683
 ]

ASF GitHub Bot logged work on HDFS-15740:
-

Author: ASF GitHub Bot
Created on: 30/Jan/21 06:13
Start Date: 30/Jan/21 06:13
Worklog Time Spent: 10m 
  Work Description: iwasakims commented on pull request #2567:
URL: https://github.com/apache/hadoop/pull/2567#issuecomment-770165145


   @GauthamBanasandra @goiri Sorry for coming here late. This complicates many 
working Linux environments just for the basename function. Can you make this an 
optional profile for Windows only?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544683)
Remaining Estimate: 15h 40m  (was: 15h 50m)
Time Spent: 8h 20m  (was: 8h 10m)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>   Original Estimate: 24h
>  Time Spent: 8h 20m
>  Remaining Estimate: 15h 40m
>
> The *basename* function isn't available on Visual Studio 2019 compiler. We 
> need to make it cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15740) Make basename cross-platform

2021-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15740?focusedWorklogId=544689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-544689
 ]

ASF GitHub Bot logged work on HDFS-15740:
-

Author: ASF GitHub Bot
Created on: 30/Jan/21 06:34
Start Date: 30/Jan/21 06:34
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #2567:
URL: https://github.com/apache/hadoop/pull/2567#issuecomment-770167068


   @iwasakims the crux of my PR was to replace Linux-specific code with a 
cross-platform equivalent API so that Hadoop can run natively on all platforms. 
I've implemented the functionality of `basename` using a cross-platform 
implementation supported by the C++17 standard in `std::filesystem`. So, it 
shouldn't really cause any platform-specific issues.
   
   May I know what Linux-specific complications you're referring to that my PR 
doesn't address? I would be happy to address them in a way that's 
cross-platform friendly. 😊



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 544689)
Remaining Estimate: 15.5h  (was: 15h 40m)
Time Spent: 8.5h  (was: 8h 20m)

> Make basename cross-platform
> 
>
> Key: HDFS-15740
> URL: https://issues.apache.org/jira/browse/HDFS-15740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>   Original Estimate: 24h
>  Time Spent: 8.5h
>  Remaining Estimate: 15.5h
>
> The *basename* function isn't available on Visual Studio 2019 compiler. We 
> need to make it cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)
huhaiyang created HDFS-15803:


 Summary: Remove unnecessary method (getWeight) in 
StripedReconstructionInfo 
 Key: HDFS-15803
 URL: https://issues.apache.org/jira/browse/HDFS-15803
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: huhaiyang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huhaiyang updated HDFS-15803:
-
Attachment: HDFS-15803_001.patch

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huhaiyang updated HDFS-15803:
-
Description:  Removing the unused method from StripedReconstructionInfo

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275504#comment-17275504
 ] 

huhaiyang commented on HDFS-15803:
--

Here is the patch to remove it. No need for a new test case.

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huhaiyang updated HDFS-15803:
-
Description: 
 Removing the unused method from StripedReconstructionInfo
{code:java}
// StripedReconstructionInfo.java
/**
 * Return the weight of this EC reconstruction task.
 *
 * DN uses it to coordinate with NN to adjust the speed of scheduling the
 * reconstructions tasks to this DN.
 *
 * @return the weight of this reconstruction task.
 * @see HDFS-12044
 */
int getWeight() {
  // See HDFS-12044. The weight of a RS(n, k) is calculated by the network
  // connections it opens.
  return sources.length + targets.length;
}
{code}

  was: Removing the unused method from StripedReconstructionInfo


> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo
> {code:java}
> // StripedReconstructionInfo.java
> /**
>  * Return the weight of this EC reconstruction task.
>  *
>  * DN uses it to coordinate with NN to adjust the speed of scheduling the
>  * reconstructions tasks to this DN.
>  *
>  * @return the weight of this reconstruction task.
>  * @see HDFS-12044
>  */
> int getWeight() {
>   // See HDFS-12044. The weight of a RS(n, k) is calculated by the network
>   // connections it opens.
>   return sources.length + targets.length;
> }
> {code}
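
A quick worked example of the weight formula above, assuming an RS(6,3) group 
reconstructing a single lost block (six source DNs read, one target DN written; 
the figures are illustrative only):

{code:java}
public class ReconstructionWeightExample {
  public static void main(String[] args) {
    int sourcesLength = 6; // blocks read to decode one missing block
    int targetsLength = 1; // reconstructed block written out
    int weight = sourcesLength + targetsLength;
    System.out.println("weight = " + weight); // 7 network connections
  }
}
{code}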



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275504#comment-17275504
 ] 

huhaiyang edited comment on HDFS-15803 at 1/30/21, 7:28 AM:


Uploaded the simple patch.

Here is the patch to remove it. No need for a new test case.

 


was (Author: haiyang hu):
Upload the simple patch ,  Here is the patch to remove it. No need for new test 
case.

 

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo
> {code:java}
> // StripedReconstructionInfo.java
> /**
>  * Return the weight of this EC reconstruction task.
>  *
>  * DN uses it to coordinate with NN to adjust the speed of scheduling the
>  * reconstructions tasks to this DN.
>  *
>  * @return the weight of this reconstruction task.
>  * @see HDFS-12044
>  */
> int getWeight() {
>   // See HDFS-12044. The weight of a RS(n, k) is calculated by the network
>   // connections it opens.
>   return sources.length + targets.length;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huhaiyang reassigned HDFS-15803:


Assignee: huhaiyang

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Assignee: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo
> {code:java}
> // StripedReconstructionInfo.java
> /**
>  * Return the weight of this EC reconstruction task.
>  *
>  * DN uses it to coordinate with NN to adjust the speed of scheduling the
>  * reconstructions tasks to this DN.
>  *
>  * @return the weight of this reconstruction task.
>  * @see HDFS-12044
>  */
> int getWeight() {
>   // See HDFS-12044. The weight of a RS(n, k) is calculated by the network
>   // connections it opens.
>   return sources.length + targets.length;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15803) Remove unnecessary method (getWeight) in StripedReconstructionInfo

2021-01-29 Thread huhaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275504#comment-17275504
 ] 

huhaiyang edited comment on HDFS-15803 at 1/30/21, 7:28 AM:


Uploaded the simple patch. Here is the patch to remove it. No need for a new 
test case.

 


was (Author: haiyang hu):
Here is the patch to remove it. No need for new test case.

> Remove unnecessary method (getWeight) in StripedReconstructionInfo 
> ---
>
> Key: HDFS-15803
> URL: https://issues.apache.org/jira/browse/HDFS-15803
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: huhaiyang
>Priority: Trivial
> Attachments: HDFS-15803_001.patch
>
>
>  Removing the unused method from StripedReconstructionInfo
> {code:java}
> // StripedReconstructionInfo.java
> /**
>  * Return the weight of this EC reconstruction task.
>  *
>  * DN uses it to coordinate with NN to adjust the speed of scheduling the
>  * reconstructions tasks to this DN.
>  *
>  * @return the weight of this reconstruction task.
>  * @see HDFS-12044
>  */
> int getWeight() {
>   // See HDFS-12044. The weight of a RS(n, k) is calculated by the network
>   // connections it opens.
>   return sources.length + targets.length;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org