[jira] [Commented] (HDFS-15714) HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-06-04 Thread Feilong He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17357118#comment-17357118
 ] 

Feilong He commented on HDFS-15714:
---

Hi [~bpatel], sorry for this late reply.

The relevant code path is shown below.
{code:java}
ReadMountManager: FSMountAttrOp.addRemotePaths -> FSMountAttrOp: w.addToEdits 
-> MountEditLogWriter: createFile{code}
In {{MountEditLogWriter#createFile}}, we can see that a {{HdfsFileStatus}} is 
created based on the {{remoteStatus}} obtained from remote storage, which is like 
creating a normal HDFS file except that the data is stored outside HDFS. 
*Actually, the remote file's modification time is neither used nor kept in HDFS*. My 
previous reply may have been ambiguous.
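
To illustrate, here is a minimal, hypothetical sketch of the behaviour described above (simplified names, not the actual patch code; it assumes the {{HdfsFileStatus.Builder}} API and {{org.apache.hadoop.util.Time}}):
{code:java}
// Hypothetical sketch -- not the real MountEditLogWriter#createFile.
// The file length comes from the remote status, but the modification time
// recorded in HDFS is the time the mount request is handled.
HdfsFileStatus createMountedFileStatus(FileStatus remoteStatus, byte[] localPath) {
  long mountTime = Time.now();               // time of the mount operation
  return new HdfsFileStatus.Builder()
      .length(remoteStatus.getLen())         // data size taken from remote storage
      .mtime(mountTime)                      // NOT remoteStatus.getModificationTime()
      .atime(mountTime)
      .path(localPath)
      .build();
}
{code}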

I just did a simple test to verify it: compare a file's (object's) modification 
time in S3 with that in HDFS after the S3 bucket containing that file is mounted to 
HDFS. The two times are different, which is consistent with the code 
analysis. The modification time of that file in HDFS is the time when the above 
{{#createFile}} is triggered in response to the user's mount request.

For the {{readOnly}} mount mode, mounted data cannot be changed from the HDFS side, 
so its modification time stays unchanged in HDFS.

I think, generally, upper-layer HDFS applications don't care about data modification 
time, so the inconsistency of modification time may not cause issues. If you 
have any thoughts or a case I have overlooked, please kindly point it out.

Thanks a lot for your comment! And as always, any discussion is welcome! 

> HDFS Provided Storage Read/Write Mount Support On-the-fly
> -
>
> Key: HDFS-15714
> URL: https://issues.apache.org/jira/browse/HDFS-15714
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 3.4.0
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15714-01.patch, 
> HDFS_Provided_Storage_Design-V1.pdf, HDFS_Provided_Storage_Performance-V1.pdf
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> HDFS Provided Storage (PS) is a feature to tier HDFS over other file systems. 
> In HDFS-9806, PROVIDED storage type was introduced to HDFS. Through 
> configuring external storage with the PROVIDED tag for a DataNode, users can enable 
> applications to access data stored externally from the HDFS side. However, there 
> are two issues that need to be addressed. Firstly, mounting external storage 
> on-the-fly, namely dynamic mount, is lacking. It needs to be 
> supported so that HDFS can be flexibly combined with an external storage at runtime. 
> Secondly, PS write is not supported by current HDFS, but in real 
> applications, it is common to transfer data bi-directionally for read/write 
> between HDFS and external storage.
> Through this JIRA, we are presenting our work for PS write support and 
> dynamic mount support for both read & write. Please note in the community 
> several JIRAs have been filed for these topics. Our work builds on this 
> previous community work, with a new design & implementation to support the so-called 
> writeBack mount and to enable the admin to add any mount on-the-fly. We appreciate 
> those folks in the community for their great contribution! See their pending 
> JIRAs: HDFS-14805 & HDFS-12090.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16054) Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16054?focusedWorklogId=606430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606430
 ]

ASF GitHub Bot logged work on HDFS-16054:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 07:42
Start Date: 04/Jun/21 07:42
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606430)
Time Spent: 0.5h  (was: 20m)

> Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
> --
>
> Key: HDFS-16054
> URL: https://issues.apache.org/jira/browse/HDFS-16054
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
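
For context, the change this task calls for is essentially a mechanical import swap per file; a minimal sketch, assuming Hadoop's own utility lives at org.apache.hadoop.util.Lists and provides the same helpers used at the existing call sites:
{code:java}
// import com.google.common.collect.Lists;   // before: Guava
import org.apache.hadoop.util.Lists;          // after: Hadoop's own Lists (assumed location)

import java.util.List;

public class ListsUsageExample {
  public static void main(String[] args) {
    // Call sites stay unchanged because the Hadoop utility offers the same helper.
    List<String> names = Lists.newArrayList("nn", "dn", "jn");
    System.out.println(names);
  }
}
{code}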




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15714) HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-06-04 Thread Feilong He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17357118#comment-17357118
 ] 

Feilong He edited comment on HDFS-15714 at 6/4/21, 7:57 AM:


Hi [~bpatel], sorry for this late reply.

The relevant code path is shown below.
{code:java}
ReadMountManager: FSMountAttrOp.addRemotePaths -> FSMountAttrOp: w.addToEdits 
-> MountEditLogWriter: createFile{code}
In {{MountEditLogWriter#createFile}}, we can see that a {{HdfsFileStatus}} is 
created based on the {{remoteStatus}} obtained from remote storage, which is like 
creating a normal HDFS file except that the data is stored outside HDFS. 
*Actually, the remote file's own modification time is neither used nor kept in HDFS*. 
My previous reply may have been ambiguous.

I just did a simple test to verify it: compare a file's (object's) modification 
time in S3 with that in HDFS after the S3 bucket containing that file is mounted to 
HDFS. The two times are different, which is consistent with the code 
analysis. The modification time of that file in HDFS is the time HDFS generates 
when responding to the user's mount request.

For the {{readOnly}} mount mode, mounted data cannot be changed from the HDFS side, 
so its modification time stays unchanged in HDFS. It is the same as the create time.

I think, generally, many upper-layer HDFS applications don't care about data 
modification time, so the inconsistency of modification time may not cause 
issues. If you have any thoughts or a case I have overlooked, please kindly point it out.

Thanks a lot for your comment! And as always, any discussion is welcome! 


was (Author: philohe):
Hi [~bpatel], sorry for this late reply.

The relevant code path is shown below.
{code:java}
ReadMountManager: FSMountAttrOp.addRemotePaths -> FSMountAttrOp: w.addToEdits 
-> MountEditLogWriter: createFile{code}
In {{MountEditLogWriter#createFile}}, we can see that a {{HdfsFileStatus}} is 
created based on the {{remoteStatus}} obtained from remote storage, which is like 
creating a normal HDFS file except that the data is stored outside HDFS. 
*Actually, the remote file's modification time is neither used nor kept in HDFS*. My 
previous reply may have been ambiguous.

I just did a simple test to verify it: compare a file's (object's) modification 
time in S3 with that in HDFS after the S3 bucket containing that file is mounted to 
HDFS. The two times are different, which is consistent with the code 
analysis. The modification time of that file in HDFS is the time when the above 
{{#createFile}} is triggered in response to the user's mount request.

For the {{readOnly}} mount mode, mounted data cannot be changed from the HDFS side, 
so its modification time stays unchanged in HDFS.

I think, generally, upper-layer HDFS applications don't care about data modification 
time, so the inconsistency of modification time may not cause issues. If you 
have any thoughts or a case I have overlooked, please kindly point it out.

Thanks a lot for your comment! And as always, any discussion is welcome! 

> HDFS Provided Storage Read/Write Mount Support On-the-fly
> -
>
> Key: HDFS-15714
> URL: https://issues.apache.org/jira/browse/HDFS-15714
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 3.4.0
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15714-01.patch, 
> HDFS_Provided_Storage_Design-V1.pdf, HDFS_Provided_Storage_Performance-V1.pdf
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> HDFS Provided Storage (PS) is a feature to tier HDFS over other file systems. 
> In HDFS-9806, PROVIDED storage type was introduced to HDFS. Through 
> configuring external storage with the PROVIDED tag for a DataNode, users can enable 
> applications to access data stored externally from the HDFS side. However, there 
> are two issues that need to be addressed. Firstly, mounting external storage 
> on-the-fly, namely dynamic mount, is lacking. It needs to be 
> supported so that HDFS can be flexibly combined with an external storage at runtime. 
> Secondly, PS write is not supported by current HDFS, but in real 
> applications, it is common to transfer data bi-directionally for read/write 
> between HDFS and external storage.
> Through this JIRA, we are presenting our work for PS write support and 
> dynamic mount support for both read & write. Please note in the community 
> several JIRAs have been filed for these topics. Our work builds on this 
> previous community work, with a new design & implementation to support the so-called 
> writeBack mount and to enable the admin to add any mount on-the-fly. We appreciate 
> those folks in the community for their great contribution! See their pending 
> JIRAs: HDFS-14805 & HDFS-12090.



--

[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606608&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606608
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:03
Start Date: 04/Jun/21 08:03
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#discussion_r644487404



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -171,7 +171,7 @@ private void printTextFormat(PrintStream stream, Map leaves,
+  private void printTopology(PrintStream stream, List leaves,

Review comment:
   Why not just override?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -171,7 +171,7 @@ private void printTextFormat(PrintStream stream, Map
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.namenode.NetworkTopologyServlet;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeSet;
+
+/**
+ * A servlet to print out the network topology from router.
+ */
+public class RouterNetworkTopologyServlet extends NetworkTopologyServlet {
+
+  public static final String SERVLET_NAME = "topology";
+  public static final String PATH_SPEC = "/topology";
+
+  protected static final String FORMAT_JSON = "json";
+  protected static final String FORMAT_TEXT = "text";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws IOException {
+final ServletContext context = getServletContext();
+
+String format = parseAcceptHeader(request);

Review comment:
   Makes sense.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606608)
Time Spent: 3h 20m  (was: 3h 10m)

> RBF: Print network topology on the router web
> -
>
> Key: HDFS-16048
> URL: https://issues.apache.org/jira/browse/HDFS-16048
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: topology-json.jpg, topology-text.jpg
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In order to query the network topology information conveniently, we can print 
> it on the router web. It's related to HDFS-15970.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13729) Fix broken links to RBF documentation

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13729?focusedWorklogId=606637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606637
 ]

ASF GitHub Bot logged work on HDFS-13729:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:07
Start Date: 04/Jun/21 08:07
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #3059:
URL: https://github.com/apache/hadoop/pull/3059#issuecomment-853655866


   -1, there is no need to modify the license header.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606637)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=606641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606641
 ]

ASF GitHub Bot logged work on HDFS-16042:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:07
Start Date: 04/Jun/21 08:07
Worklog Time Spent: 10m 
  Work Description: kihwal commented on pull request #3058:
URL: https://github.com/apache/hadoop/pull/3058#issuecomment-854148519


   TestDecommissioningStatus and TestDecommissioningStatusWithBackoffMonitor
   Please check whether the failures are related to the change.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606641)
Time Spent: 40m  (was: 0.5h)

> DatanodeAdminMonitor scan should be delay based
> ---
>
> Key: HDFS-16042
> URL: https://issues.apache.org/jira/browse/HDFS-16042
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled with a 
> fixed rate, ie. the period is from start1 -> start2.  
> {code:java}
> executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs,
>TimeUnit.SECONDS);
> {code}
> According to Java API docs for {{scheduleAtFixedRate}},
> {quote}If any execution of this task takes longer than its period, then 
> subsequent executions may start late, but will not concurrently 
> execute.{quote}
> It should use a fixed delay, so the interval is measured from end1 -> start2.
>  
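
A minimal sketch of the change the summary suggests, assuming the executor setup shown above (a standard java.util.concurrent ScheduledExecutorService):
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayExample {
  public static void main(String[] args) {
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    Runnable monitor = () -> System.out.println("scan at " + System.currentTimeMillis());
    long intervalSecs = 30;
    // scheduleWithFixedDelay measures the interval from the END of one execution to
    // the START of the next, so a long scan can no longer cause back-to-back runs.
    executor.scheduleWithFixedDelay(monitor, intervalSecs, intervalSecs, TimeUnit.SECONDS);
  }
}
{code}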



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13729) Fix broken links to RBF documentation

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13729?focusedWorklogId=606650&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606650
 ]

ASF GitHub Bot logged work on HDFS-13729:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:08
Start Date: 04/Jun/21 08:08
Worklog Time Spent: 10m 
  Work Description: oojas commented on pull request #3059:
URL: https://github.com/apache/hadoop/pull/3059#issuecomment-853504454


   > That space is in the comment, right? What problem it is creating?
   
   No, it's in the readme documentation


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606650)
Time Spent: 1.5h  (was: 1h 20m)

> Fix broken links to RBF documentation
> -
>
> Key: HDFS-13729
> URL: https://issues.apache.org/jira/browse/HDFS-13729
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: jwhitter
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, 
> hadoop_broken_link.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> A broken link on the page [http://hadoop.apache.org/docs/current/]
>  * HDFS
>  ** HDFS Router based federation. See the [user 
> documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  for more details.
> The link for user documentation 
> [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html]
>  is not found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16054) Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16054?focusedWorklogId=606654&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606654
 ]

ASF GitHub Bot logged work on HDFS-16054:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:09
Start Date: 04/Jun/21 08:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073#issuecomment-854330904


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 69 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 15s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   4m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  7s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 408m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   9m  9s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 26s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  26m 30s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 585m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-m

[jira] [Work logged] (HDFS-15960) Router NamenodeHeartbeatService fails to authenticate with namenode in a kerberized envi

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15960?focusedWorklogId=606673&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606673
 ]

ASF GitHub Bot logged work on HDFS-15960:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:11
Start Date: 04/Jun/21 08:11
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2887:
URL: https://github.com/apache/hadoop/pull/2887#discussion_r644478595



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##
@@ -170,7 +172,20 @@ protected void serviceInit(Configuration configuration) 
throws Exception {
 
   @Override
   public void periodicInvoke() {
-updateState();
+try {
+  SecurityUtil.doAsCurrentUser(
+  new PrivilegedExceptionAction() {
+@Override
+public Object run() {
+  updateState();
+  return null;
+}
+  });
+} catch (IOException e) {
+  // Generic error that we don't know about
+  LOG.error("Unexpected exception while communicating with {}: {}",

Review comment:
   #3 sounds reasonable, do you mind giving it a try in this PR?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606673)
Time Spent: 1.5h  (was: 1h 20m)

> Router NamenodeHeartbeatService fails to authenticate with namenode in a 
> kerberized envi
> 
>
> Key: HDFS-15960
> URL: https://issues.apache.org/jira/browse/HDFS-15960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We use hadoop.http.authentication.type = "kerberos", and when the 
> NamenodeHeartbeatService calls the namenode via JMX, it does not provide a 
> user security context, so the authentication token is not transmitted and the 
> call fails.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606738&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606738
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:18
Start Date: 04/Jun/21 08:18
Worklog Time Spent: 10m 
  Work Description: tomscut commented on a change in pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#discussion_r644513026



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -90,7 +90,7 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response)
* @param leaves leaves nodes under base scope
* @param format the response format
*/
-  public void printTopology(PrintStream stream, List leaves,
+  private void printTopology(PrintStream stream, List leaves,

Review comment:
   > Why not just override?
   
   Because the arguments of the two methods are a little different, I'll fix 
it. Thank you for pointing that out.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -171,7 +171,7 @@ private void printTextFormat(PrintStream stream, Map

Review comment:
   > Package visible is not enough for using it in the other one?
   
   This is calling the superclass 
method (org.apache.hadoop.hdfs.server.namenode.parseAcceptHeader()) directly, 
not on an instance of the superclass, so I made it protected. Do you think it 
is necessary to change it to public?
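
For reference, a minimal sketch of the Java access rule being discussed (illustrative class and method names, not the actual servlet code): a protected member of a superclass in another package is accessible from a subclass through inheritance, but not on an arbitrary instance of the superclass.
{code:java}
// file a/Base.java
package a;
public class Base {
  protected String parseHeader(String accept) {      // illustrative name
    return accept == null ? "application/json" : accept;
  }
}

// file b/Derived.java
package b;
public class Derived extends a.Base {
  public String handle(String accept) {
    return parseHeader(accept);                       // OK: inherited protected member
    // new a.Base().parseHeader(accept) would NOT compile from package b
  }
}
{code}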

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNetworkTopologyServlet.java
##
@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.namenode.NetworkTopologyServlet;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeSet;
+
+/**
+ * A servlet to print out the network topology from router.
+ */
+public class RouterNetworkTopologyServlet extends NetworkTopologyServlet {
+
+  public static final String SERVLET_NAME = "topology";
+  public static final String PATH_SPEC = "/topology";
+
+  protected static final String FORMAT_JSON = "json";
+  protected static final String FORMAT_TEXT = "text";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws IOException {
+final ServletContext context = getServletContext();
+
+String format = parseAcceptHeader(request);

Review comment:
   Hi @goiri . This is calling the superclass 
method (org.apache.hadoop.hdfs.server.namenode.parseAcceptHeader()) directly, 
not on an instance of the superclass, so protected access on the superclass 
is sufficient. What do you think?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -152,8 +153,8 @@ private void printJsonFormat(PrintStream stream, Map> tree, ArrayList racks) {
+  protected void printTextFormat(PrintStream stream, Map> tree, ArrayList racks) {

Review comment:
   Thanks @tasanuma for your review, I fixed it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606738)
Time Spent: 3.5h  (was: 3h

[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606773&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606773
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:22
Start Date: 04/Jun/21 08:22
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on a change in pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#discussion_r644967639



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -152,8 +153,8 @@ private void printJsonFormat(PrintStream stream, Map> tree, ArrayList racks) {
+  protected void printTextFormat(PrintStream stream, Map> tree, ArrayList racks) {

Review comment:
   One very minor comment, there are unnecessary spaces.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606773)
Time Spent: 3h 40m  (was: 3.5h)

> RBF: Print network topology on the router web
> -
>
> Key: HDFS-16048
> URL: https://issues.apache.org/jira/browse/HDFS-16048
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: topology-json.jpg, topology-text.jpg
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In order to query the network topology information conveniently, we can print 
> it on the router web. It's related to HDFS-15970.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606835&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606835
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:28
Start Date: 04/Jun/21 08:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#issuecomment-853993168


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   5m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   5m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 356m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3062/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  23m 49s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 503m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3062/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3062 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 49ae01131d22 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Lin

[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606840&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606840
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:29
Start Date: 04/Jun/21 08:29
Worklog Time Spent: 10m 
  Work Description: hemanthboyina commented on pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#issuecomment-854351287


   LGTM


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606840)
Time Spent: 4h  (was: 3h 50m)

> RBF: Print network topology on the router web
> -
>
> Key: HDFS-16048
> URL: https://issues.apache.org/jira/browse/HDFS-16048
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: topology-json.jpg, topology-text.jpg
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In order to query the network topology information conveniently, we can print 
> it on the router web. It's related to HDFS-15970.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606876&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606876
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:34
Start Date: 04/Jun/21 08:34
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#issuecomment-854387644


   > LGTM
   
   Thanks @hemanthboyina for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606876)
Time Spent: 4h 10m  (was: 4h)

> RBF: Print network topology on the router web
> -
>
> Key: HDFS-16048
> URL: https://issues.apache.org/jira/browse/HDFS-16048
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: topology-json.jpg, topology-text.jpg
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> In order to query the network topology information conveniently, we can print 
> it on the router web. It's related to HDFS-15970.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13671?focusedWorklogId=606884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606884
 ]

ASF GitHub Bot logged work on HDFS-13671:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:35
Start Date: 04/Jun/21 08:35
Worklog Time Spent: 10m 
  Work Description: AlphaGouGe commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r645327101



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
##
@@ -188,8 +188,7 @@ public HeartbeatResponse sendHeartbeat(DatanodeRegistration 
registration,
 
   @Override
   public DatanodeCommand blockReport(DatanodeRegistration registration,
-  String poolId, StorageBlockReport[] reports,
-  BlockReportContext context)
+  String poolId, StorageBlockReport[] reports, BlockReportContext context)

Review comment:
   already fixed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606884)
Time Spent: 1h 20m  (was: 1h 10m)

> Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
> --
>
> Key: HDFS-13671
> URL: https://issues.apache.org/jira/browse/HDFS-13671
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Yiqun Lin
>Assignee: Haibin Huang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13671-001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> NameNode hung when deleting large files/blocks. The stack info:
> {code}
> "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 
> tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849)
>   at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> {code}
> In the current deletion logic in NameNode, there are mainly two steps:
> * Collect INodes and all blocks to be deleted, then delete INodes.
> * Remove blocks chunk by chunk in a loop.
> Actually the first step should be the more expensive operation and should take 
> more time. However, we now always see the NN hang during the remove-block 
> operation. 
> Looking into this, we introduced a new structure, {{FoldedTreeSet}}, to get 
> better performance in handling FBRs/IBRs. But compared with the earlier 
> implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, 
> since it takes additional time to balance tree nodes. When there are many 
> blocks to be removed/deleted, it looks bad.
> For the get type operations in {{DatanodeStorageInfo}
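
The "remove blocks chunk by chunk" step described above can be illustrated with a simplified sketch (names and the chunk size are illustrative, not the actual FSNamesystem/BlockManager code):
{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class ChunkedBlockRemoval {
  private static final int BLOCK_DELETION_INCREMENT = 1000;   // illustrative chunk size
  private final ReentrantLock namesystemLock = new ReentrantLock();

  void removeBlocks(List<Long> collectedBlockIds) {
    int start = 0;
    while (start < collectedBlockIds.size()) {
      int end = Math.min(start + BLOCK_DELETION_INCREMENT, collectedBlockIds.size());
      namesystemLock.lock();
      try {
        for (long blockId : collectedBlockIds.subList(start, end)) {
          removeBlockFromMap(blockId);   // per-block removal that hits the block map / tree set
        }
      } finally {
        namesystemLock.unlock();         // release the lock between chunks so other ops can run
      }
      start = end;
    }
  }

  private void removeBlockFromMap(long blockId) {
    // placeholder for the data-structure removal whose cost is discussed in this issue
  }
}
{code}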

[jira] [Work logged] (HDFS-15814) Make some parameters configurable for DataNodeDiskMetrics

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15814?focusedWorklogId=606883&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606883
 ]

ASF GitHub Bot logged work on HDFS-15814:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 08:35
Start Date: 04/Jun/21 08:35
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3021:
URL: https://github.com/apache/hadoop/pull/3021#issuecomment-854456159


   Hi @jojochuang @dineshchitlangia , I created a new PR for branch-3.3, could 
you please help to review the code? The failed tests are not related.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 606883)
Time Spent: 2h 20m  (was: 2h 10m)

> Make some parameters configurable for DataNodeDiskMetrics
> -
>
> Key: HDFS-15814
> URL: https://issues.apache.org/jira/browse/HDFS-15814
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> For ease of use, especially for small clusters, we can make some 
> parameters (MIN_OUTLIER_DETECTION_DISKS, SLOW_DISK_LOW_THRESHOLD_MS) 
> configurable.
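
A minimal sketch of what "make them configurable" could look like, reading the values from a Hadoop Configuration (the key names and defaults here are hypothetical, not necessarily the ones added by this JIRA):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class DiskMetricsConfigExample {
  // Hypothetical keys for illustration only.
  static final String MIN_OUTLIER_DETECTION_DISKS_KEY =
      "dfs.datanode.min.outlier.detection.disks";
  static final String SLOW_DISK_LOW_THRESHOLD_MS_KEY =
      "dfs.datanode.slowdisk.low.threshold.ms";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Previously hard-coded constants become configurable with sensible defaults.
    long minOutlierDetectionDisks = conf.getLong(MIN_OUTLIER_DETECTION_DISKS_KEY, 5);
    long slowDiskLowThresholdMs = conf.getLong(SLOW_DISK_LOW_THRESHOLD_MS_KEY, 20);
    System.out.println(minOutlierDetectionDisks + " " + slowDiskLowThresholdMs);
  }
}
{code}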



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16048) RBF: Print network topology on the router web

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?focusedWorklogId=606997&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-606997
 ]

ASF GitHub Bot logged work on HDFS-16048:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 10:48
Start Date: 04/Jun/21 10:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3062:
URL: https://github.com/apache/hadoop/pull/3062#issuecomment-854611834


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  2s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   5m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   5m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 394m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3062/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  27m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3062/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 548m 21s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3062/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3062 |
   | Optional Te

[jira] [Work logged] (HDFS-15976) Make mkdtemp cross platform

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15976?focusedWorklogId=607077&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607077
 ]

ASF GitHub Bot logged work on HDFS-15976:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 13:37
Start Date: 04/Jun/21 13:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2908:
URL: https://github.com/apache/hadoop/pull/2908#issuecomment-854730486


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  24m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  57m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 45s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2908/3/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  92m 46s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2908/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 198m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed CTEST tests | configuration |
   |   | memcheck_configuration |
   |   | hdfs_configuration |
   |   | memcheck_hdfs_configuration |
   |   | hdfs_builder_test |
   |   | memcheck_hdfs_builder_test |
   |   | hdfs_config_connect_bugs |
   |   | memcheck_hdfs_config_connect_bugs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2908/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2908 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux e21dc1fee9ad 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5977b9fee13a5f4669029b3ed3492c49ed152b07 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | CTEST | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2908/3/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2908/3/testReport/ |
   | Max. process+thread count | 517 (vs. uli

[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=607107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607107
 ]

ASF GitHub Bot logged work on HDFS-16042:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 14:21
Start Date: 04/Jun/21 14:21
Worklog Time Spent: 10m 
  Work Description: jbrennan333 commented on pull request #3058:
URL: https://github.com/apache/hadoop/pull/3058#issuecomment-854764625


   @amahussein code looks good.  As @kihwal said, please take a look at the 
unit test failures to see if they are related.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607107)
Time Spent: 50m  (was: 40m)

> DatanodeAdminMonitor scan should be delay based
> ---
>
> Key: HDFS-16042
> URL: https://issues.apache.org/jira/browse/HDFS-16042
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled at a 
> fixed rate, i.e., the period runs from start1 -> start2.
> {code:java}
> executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs,
>     TimeUnit.SECONDS);
> {code}
> According to the Java API docs for {{scheduleAtFixedRate}}:
> {quote}If any execution of this task takes longer than its period, then 
> subsequent executions may start late, but will not concurrently 
> execute.{quote}
> It should be scheduled with a fixed delay instead, so the interval runs from 
> end1 -> start2 (see the sketch below).
>  
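
For reference, a minimal standalone sketch of the delay-based scheduling this 
issue proposes. The executor, monitor, and intervalSecs names mirror the 
snippet above but are plain stand-ins here, not the actual 
DatanodeAdminManager fields:
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayBasedSchedulingSketch {
  public static void main(String[] args) {
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    long intervalSecs = 30;

    // Stand-in for the DatanodeAdminMonitor scan; a real scan can run longer
    // than intervalSecs on a busy cluster.
    Runnable monitor = () -> System.out.println("scan at " + System.nanoTime());

    // scheduleWithFixedDelay measures the interval from the end of one run to
    // the start of the next (end1 -> start2), so a slow scan never causes
    // back-to-back executions the way scheduleAtFixedRate can.
    executor.scheduleWithFixedDelay(monitor, intervalSecs, intervalSecs,
        TimeUnit.SECONDS);
  }
}
{code}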



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13671?focusedWorklogId=607116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607116
 ]

ASF GitHub Bot logged work on HDFS-13671:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 14:44
Start Date: 04/Jun/21 14:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#issuecomment-854783064


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 18 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  6s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3065/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1338 unchanged 
- 13 fixed = 1339 total (was 1351)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 381m 20s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3065/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 41s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3065/4/artifact/out/results-asflicense.txt)
 |  The patch generated 2 ASF License warnings.  |
   |  |   | 485m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41

[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=607328&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607328
 ]

ASF GitHub Bot logged work on HDFS-16042:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 21:09
Start Date: 04/Jun/21 21:09
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3058:
URL: https://github.com/apache/hadoop/pull/3058#issuecomment-854998769


   I looked at the unit test failures. Those tests fail intermittently because 
the number of under-replicated blocks differs from the expected value.
   The last failure of those tests in the qbt report was back on May 13th.
   I will keep an eye on the unit tests to see if their failure frequency has 
increased.
   
   I am rebasing to see if the Yetus failures are persistent.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607328)
Time Spent: 1h  (was: 50m)

> DatanodeAdminMonitor scan should be delay based
> ---
>
> Key: HDFS-16042
> URL: https://issues.apache.org/jira/browse/HDFS-16042
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled at a 
> fixed rate, i.e., the period runs from start1 -> start2.
> {code:java}
> executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs,
>     TimeUnit.SECONDS);
> {code}
> According to the Java API docs for {{scheduleAtFixedRate}}:
> {quote}If any execution of this task takes longer than its period, then 
> subsequent executions may start late, but will not concurrently 
> execute.{quote}
> It should be scheduled with a fixed delay instead, so the interval runs from 
> end1 -> start2.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16055) Quota is not preserved in snapshot INode

2021-06-04 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-16055:
-

 Summary: Quota is not preserved in snapshot INode
 Key: HDFS-16055
 URL: https://issues.apache.org/jira/browse/HDFS-16055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.3.0
Reporter: Siyao Meng
Assignee: Siyao Meng


The quota feature is not preserved in the snapshot copy of the INode during 
snapshot creation. This causes {{INodeDirectory#metadataEquals}} to always 
report a metadata difference for the snapshot root, so {{snapshotDiff}} will 
always list the snapshot root as modified, even if the quota was set before 
the snapshot was created (a client-side repro sketch follows the shell output 
below):

{code:bash}
$ hdfs snapshotDiff /diffTest s0 .
Difference between snapshot s0 and current directory under directory /diffTest:
M   .
{code}
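
For context, a hedged client-side repro sketch of the scenario above using the 
DistributedFileSystem API. It assumes a running HDFS cluster reachable through 
the default configuration; the path, quota values, and snapshot name are 
arbitrary choices for illustration:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

public class QuotaSnapshotDiffRepro {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at a running HDFS cluster.
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // allowSnapshot and setQuota are admin operations.
    Path dir = new Path("/diffTest");
    dfs.mkdirs(dir);
    dfs.allowSnapshot(dir);

    // Set a namespace and storage-space quota *before* taking the snapshot.
    dfs.setQuota(dir, 100, 1024L * 1024L * 1024L);
    dfs.createSnapshot(dir, "s0");

    // Nothing is modified after the snapshot, yet the report still lists the
    // snapshot root as modified ("M ."). The empty string is assumed to denote
    // the current directory state, mirroring "." on the CLI.
    SnapshotDiffReport report = dfs.getSnapshotDiffReport(dir, "s0", "");
    System.out.println(report);
  }
}
{code}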



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16055) Quota is not preserved in snapshot INode

2021-06-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-16055:
--
Status: Patch Available  (was: Open)

> Quota is not preserved in snapshot INode
> 
>
> Key: HDFS-16055
> URL: https://issues.apache.org/jira/browse/HDFS-16055
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> The quota feature is not preserved in the snapshot copy of the INode during 
> snapshot creation. This causes {{INodeDirectory#metadataEquals}} to always 
> report a metadata difference for the snapshot root, so {{snapshotDiff}} will 
> always list the snapshot root as modified, even if the quota was set before 
> the snapshot was created:
> {code:bash}
> $ hdfs snapshotDiff /diffTest s0 .
> Difference between snapshot s0 and current directory under directory 
> /diffTest:
> M .
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16055) Quota is not preserved in snapshot INode

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16055?focusedWorklogId=607365&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607365
 ]

ASF GitHub Bot logged work on HDFS-16055:
-

Author: ASF GitHub Bot
Created on: 04/Jun/21 22:24
Start Date: 04/Jun/21 22:24
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #3078:
URL: https://github.com/apache/hadoop/pull/3078


   https://issues.apache.org/jira/browse/HDFS-16055
   
   The quota feature is not preserved in the snapshot copy of the INode during 
snapshot creation. This causes `INodeDirectory#metadataEquals` to always 
report a metadata difference for the snapshot root, so `hdfs snapshotDiff` 
will always list the snapshot root as modified, even if the quota was set 
before the snapshot was created:
   
   ```bash
   $ hdfs snapshotDiff /diffTest s0 .
   Difference between snapshot s0 and current directory under directory 
/diffTest:
   M   .
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 607365)
Remaining Estimate: 0h
Time Spent: 10m

> Quota is not preserved in snapshot INode
> 
>
> Key: HDFS-16055
> URL: https://issues.apache.org/jira/browse/HDFS-16055
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The quota feature is not preserved in the snapshot copy of the INode during 
> snapshot creation. This causes {{INodeDirectory#metadataEquals}} to always 
> report a metadata difference for the snapshot root, so {{snapshotDiff}} will 
> always list the snapshot root as modified, even if the quota was set before 
> the snapshot was created:
> {code:bash}
> $ hdfs snapshotDiff /diffTest s0 .
> Difference between snapshot s0 and current directory under directory 
> /diffTest:
> M .
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16055) Quota is not preserved in snapshot INode

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16055:
--
Labels: pull-request-available  (was: )

> Quota is not preserved in snapshot INode
> 
>
> Key: HDFS-16055
> URL: https://issues.apache.org/jira/browse/HDFS-16055
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The quota feature is not preserved in the snapshot copy of the INode during 
> snapshot creation. This causes {{INodeDirectory#metadataEquals}} to always 
> report a metadata difference for the snapshot root, so {{snapshotDiff}} will 
> always list the snapshot root as modified, even if the quota was set before 
> the snapshot was created:
> {code:bash}
> $ hdfs snapshotDiff /diffTest s0 .
> Difference between snapshot s0 and current directory under directory 
> /diffTest:
> M .
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=607397&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607397
 ]

ASF GitHub Bot logged work on HDFS-16042:
-

Author: ASF GitHub Bot
Created on: 05/Jun/21 03:45
Start Date: 05/Jun/21 03:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3058:
URL: https://github.com/apache/hadoop/pull/3058#issuecomment-855177353


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |   4m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 314m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3058/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 392m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestDFSShell |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3058/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3058 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 63104db60265 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 64c066fad5b79a4568d35bb99ab268afa75c1ecc |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Priv

[jira] [Work logged] (HDFS-16055) Quota is not preserved in snapshot INode

2021-06-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16055?focusedWorklogId=607401&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-607401
 ]

ASF GitHub Bot logged work on HDFS-16055:
-

Author: ASF GitHub Bot
Created on: 05/Jun/21 05:30
Start Date: 05/Jun/21 05:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3078:
URL: https://github.com/apache/hadoop/pull/3078#issuecomment-855186959


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 20 unchanged - 
0 fixed = 21 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 332m 42s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 424m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3078 |
   | Optional Tests | dupname asflicense compile javac javadoc mvni