[GitHub] [hadoop] jojochuang merged pull request #2882: HDFS-15815. if required storageType are unavailable, log the failed reason during choosing Datanode. Contributed by Yang Yun.

2021-04-12 Thread GitBox


jojochuang merged pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2896: HDFS-15970. Print network topology on the web

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818480937


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  33m 53s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/branch-mvninstall-root.txt) |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 12s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 230m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 316m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2896 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 78a824358c12 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 30daa9a064af200ebd3e79df6893273b517cb308 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job

[GitHub] [hadoop] GauthamBanasandra edited a comment on pull request #2898: HDFS-15971. Make mkstemp cross platform

2021-04-12 Thread GitBox


GauthamBanasandra edited a comment on pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818472914


   I've refactored the TempFile class to implement the `Rule of 5` C++ idiom for efficient and correct management of the temporary file resource: https://cpppatterns.com/patterns/rule-of-five.html.
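   For context, a hypothetical sketch of what a Rule-of-5 `TempFile` might look like. The class name matches the PR, but the member, cleanup logic, and method names below are assumptions for illustration, not the actual HDFS-15971 patch:

   ```cpp
   #include <cstdio>
   #include <string>
   #include <utility>

   // Hypothetical sketch of a Rule-of-5 temporary-file owner.
   // Assumed members/behavior; the real TempFile may differ.
   class TempFile {
    public:
     explicit TempFile(std::string path) : path_(std::move(path)) {}

     // 1. Destructor: release the owned resource exactly once.
     ~TempFile() {
       if (!path_.empty()) {
         std::remove(path_.c_str());  // best-effort cleanup
       }
     }

     // 2 & 3. Copying is deleted: two owners would double-delete the file.
     TempFile(const TempFile&) = delete;
     TempFile& operator=(const TempFile&) = delete;

     // 4. Move constructor: steal ownership, leave the source empty.
     TempFile(TempFile&& other) noexcept : path_(std::move(other.path_)) {
       other.path_.clear();
     }

     // 5. Move assignment: release our own file, then take the other's.
     TempFile& operator=(TempFile&& other) noexcept {
       if (this != &other) {
         if (!path_.empty()) {
           std::remove(path_.c_str());
         }
         path_ = std::move(other.path_);
         other.path_.clear();
       }
       return *this;
     }

     const std::string& GetPath() const { return path_; }

    private:
     std::string path_;  // empty string means "owns nothing"
   };
   ```

   Defining all five special members (or deleting the copy pair, as here) is what makes ownership transfer safe: a moved-from `TempFile` holds an empty path, so only one destructor ever removes the file.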





[GitHub] [hadoop] GauthamBanasandra commented on pull request #2898: HDFS-15971. Make mkstemp cross platform

2021-04-12 Thread GitBox


GauthamBanasandra commented on pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818472914


   I've refactored the TempFile class to implement the `Rule of 5` C++ idiom: https://cpppatterns.com/patterns/rule-of-five.html.





[GitHub] [hadoop] ayushtkn commented on pull request #2882: HDFS-15815. if required storageType are unavailable, log the failed reason during choosing Datanode. Contributed by Yang Yun.

2021-04-12 Thread GitBox


ayushtkn commented on pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882#issuecomment-818456966


   Sure @jojochuang, Thanx 





[GitHub] [hadoop] hchaverri opened a new pull request #2901: HDFS-15912. Allow ProtobufRpcEngine to be extensible

2021-04-12 Thread GitBox


hchaverri opened a new pull request #2901:
URL: https://github.com/apache/hadoop/pull/2901


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   





[jira] [Created] (HADOOP-17634) Fedbalance only copies data partially when there's existing opened file

2021-04-12 Thread Felix N (Jira)
Felix N created HADOOP-17634:


 Summary: Fedbalance only copies data partially when there's 
existing opened file
 Key: HADOOP-17634
 URL: https://issues.apache.org/jira/browse/HADOOP-17634
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Felix N


If there are opened files when fedbalance is run and data is being written to these files, fedbalance might skip the newly written data.

Steps to recreate the issue:
 1. Create a dummy file /test/file with some data: {{echo "start" | hdfs dfs -appendToFile /test/file}}
 2. Start writing to the file: {{hdfs dfs -appendToFile /test/file}}, but do not stop writing
 3. Run fedbalance: {{hadoop fedbalance submit hdfs://ns1/test hdfs://ns2/test}}
 4. Write something to the file while fedbalance is running ("end", for example), then stop writing
 5. After fedbalance is done, {{hdfs://ns2/test/file}} should only contain "start" while {{hdfs://ns1/user/hadoop/.Trash/Current/test/file}} contains "start\nend"

Fedbalance is run with default configs and arguments, so no diff should happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] tomscut commented on a change in pull request #2896: HDFS-15970. Print network topology on the web

2021-04-12 Thread GitBox


tomscut commented on a change in pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#discussion_r612107351



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.net.NodeBase;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.TreeSet;
+
+/**
+ * A servlet to print out the network topology.
+ */
+@InterfaceAudience.Private
+public class NetworkTopologyServlet extends DfsServlet {
+
+  public static final String PATH_SPEC = "/topology";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)

Review comment:
   I'm sorry, I don't quite understand what you mean. Could you please give me some specific suggestions? Thank you very much.







[GitHub] [hadoop] tomscut commented on a change in pull request #2896: HDFS-15970. Print network topology on the web

2021-04-12 Thread GitBox


tomscut commented on a change in pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#discussion_r612106334



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.net.NodeBase;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.TreeSet;
+
+/**
+ * A servlet to print out the network topology.
+ */
+@InterfaceAudience.Private
+public class NetworkTopologyServlet extends DfsServlet {
+
+  public static final String PATH_SPEC = "/topology";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws IOException {
+final ServletContext context = getServletContext();
+NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
+BlockManager bm = nn.getNamesystem().getBlockManager();
+List<Node> leaves = bm.getDatanodeManager().getNetworkTopology()
+.getLeaves(NodeBase.ROOT);
+
+response.setContentType("text/plain; charset=UTF-8");
+try (PrintStream out = new PrintStream(
+response.getOutputStream(), false, "UTF-8")) {
+  printTopology(out, leaves);
+} catch (Throwable t) {
+  String errMsg = "Print network topology failed. "
+  + StringUtils.stringifyException(t);
+  response.sendError(HttpServletResponse.SC_GONE, errMsg);
+  throw new IOException(errMsg);
+} finally {
+  response.getOutputStream().close();
+}
+  }
+
+  /**
+   * Display each rack and the nodes assigned to that rack, as determined
+   * by the NameNode, in a hierarchical manner.  The nodes and racks are
+   * sorted alphabetically.
+   *
+   * @param stream print stream
+   * @param leaves leaves nodes under base scope
+   */
+  public void printTopology(PrintStream stream, List<Node> leaves) {
+if (leaves.size() == 0) {
+  stream.print("No DataNodes");
+  return;
+}
+
+// Build a map of rack -> nodes from the datanode report
+HashMap<String, TreeSet<String>> tree = new HashMap<String, TreeSet<String>>();
+for(Node dni : leaves) {
+  String location = dni.getNetworkLocation();
+  String name = dni.getName();
+
+  if(!tree.containsKey(location)) {

Review comment:
   Thanks @goiri for your careful review; I will fix these problems quickly.







[GitHub] [hadoop] jojochuang commented on pull request #2849: HDFS-15621. Datanode DirectoryScanner uses excessive memory

2021-04-12 Thread GitBox


jojochuang commented on pull request #2849:
URL: https://github.com/apache/hadoop/pull/2849#issuecomment-818409032


   The spotbugs warning looks like a false positive to me.
   `Redundant nullcheck of file, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.compileReport(File,
 File, Collection, DirectoryScanner$ReportCompiler) Redundant null check at 
FsVolumeImpl.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.compileReport(File,
 File, Collection, DirectoryScanner$ReportCompiler) Redundant null check at 
FsVolumeImpl.java:[line 1477]`





[GitHub] [hadoop] goiri merged pull request #2900: Revert "HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters"

2021-04-12 Thread GitBox


goiri merged pull request #2900:
URL: https://github.com/apache/hadoop/pull/2900


   





[GitHub] [hadoop] goiri opened a new pull request #2900: Revert "HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters"

2021-04-12 Thread GitBox


goiri opened a new pull request #2900:
URL: https://github.com/apache/hadoop/pull/2900


   Reverts apache/hadoop#2605





[GitHub] [hadoop] goiri commented on a change in pull request #2896: HDFS-15970. Print network topology on the web

2021-04-12 Thread GitBox


goiri commented on a change in pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#discussion_r611908007



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.net.NodeBase;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.TreeSet;
+
+/**
+ * A servlet to print out the network topology.
+ */
+@InterfaceAudience.Private
+public class NetworkTopologyServlet extends DfsServlet {
+
+  public static final String PATH_SPEC = "/topology";
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws IOException {
+final ServletContext context = getServletContext();
+NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
+BlockManager bm = nn.getNamesystem().getBlockManager();
+List<Node> leaves = bm.getDatanodeManager().getNetworkTopology()
+.getLeaves(NodeBase.ROOT);
+
+response.setContentType("text/plain; charset=UTF-8");
+try (PrintStream out = new PrintStream(
+response.getOutputStream(), false, "UTF-8")) {
+  printTopology(out, leaves);
+} catch (Throwable t) {
+  String errMsg = "Print network topology failed. "
+  + StringUtils.stringifyException(t);
+  response.sendError(HttpServletResponse.SC_GONE, errMsg);
+  throw new IOException(errMsg);
+} finally {
+  response.getOutputStream().close();
+}
+  }
+
+  /**
+   * Display each rack and the nodes assigned to that rack, as determined
+   * by the NameNode, in a hierarchical manner.  The nodes and racks are
+   * sorted alphabetically.
+   *
+   * @param stream print stream
+   * @param leaves leaves nodes under base scope
+   */
+  public void printTopology(PrintStream stream, List<Node> leaves) {
+if (leaves.size() == 0) {
+  stream.print("No DataNodes");
+  return;
+}
+
+// Build a map of rack -> nodes from the datanode report
+HashMap<String, TreeSet<String>> tree = new HashMap<String, TreeSet<String>>();

Review comment:
   Can we do:
   Map<String, TreeSet<String>> tree = new HashMap<>();

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NetworkTopologyServlet.java
##
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.Node;
+import org.apache.hadoop.net.NodeBase;
+import org.apache.hadoop.util.StringUtils;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IO

[GitHub] [hadoop] jojochuang commented on pull request #2882: HDFS-15815. if required storageType are unavailable, log the failed reason during choosing Datanode. Contributed by Yang Yun.

2021-04-12 Thread GitBox


jojochuang commented on pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882#issuecomment-818406777


   @ayushtkn fyi will merge later if no objections.





[GitHub] [hadoop] PHILO-HE commented on pull request #2655: HDFS-15714: HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-04-12 Thread GitBox


PHILO-HE commented on pull request #2655:
URL: https://github.com/apache/hadoop/pull/2655#issuecomment-818398537


   1) Yes, the LevelDB-based AliasMap is recommended for users, and the text-based AliasMap is just for unit tests. This patch made few code changes to the AliasMap; note that it was introduced by the community a few years ago.
   
   2) We have not tested this feature with NameNode HA. I think there are two main considerations. First, mount operations should be recovered on the new NameNode so that the mounted remote storages are "visible" to the new active NN; since we log mount info in the edit log for each mount request, this should not be a problem. Second, key info currently kept in memory should be available to the other NNs, e.g., the key tracking info used when syncing data to remote storage, to guarantee data consistency even when the active NN is switched.
   
   Frankly speaking, provided storage is still an experimental feature, so there may still be a large gap to productization.





[GitHub] [hadoop] ferhui commented on pull request #2868: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode (#2585)

2021-04-12 Thread GitBox


ferhui commented on pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868#issuecomment-818398223


   HDFS-15940 has fixed TestBlockRecovery





[GitHub] [hadoop] jojochuang commented on pull request #2868: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode (#2585)

2021-04-12 Thread GitBox


jojochuang commented on pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868#issuecomment-818391251


   Thanks! @ferhui 





[GitHub] [hadoop] jojochuang merged pull request #2868: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode (#2585)

2021-04-12 Thread GitBox


jojochuang merged pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868


   





[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=581496&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581496
 ]

ASF GitHub Bot logged work on HADOOP-11245:
---

Author: ASF GitHub Bot
Created on: 13/Apr/21 02:28
Start Date: 13/Apr/21 02:28
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2832:
URL: https://github.com/apache/hadoop/pull/2832#issuecomment-818384841


   As of the last commit, the code was deployed on a small cluster and verified to contain no memory leak via -Dio.netty.leakDetectionLevel=paranoid.
   
   Performance saw no noticeable change: before the change the throughput was 39.3 MB/s; after the change, 38.2 MB/s (writing a 1 GB file).




Issue Time Tracking
---

Worklog Id: (was: 581496)
Time Spent: 1.5h  (was: 1h 20m)

> Update NFS gateway to use Netty4
> 
>
> Key: HADOOP-11245
> URL: https://issues.apache.org/jira/browse/HADOOP-11245
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] jojochuang commented on pull request #2832: HADOOP-11245. Update NFS gateway to use Netty4

2021-04-12 Thread GitBox


jojochuang commented on pull request #2832:
URL: https://github.com/apache/hadoop/pull/2832#issuecomment-818384841


   As of the last commit, the code was deployed on a small cluster and verified to contain no memory leak via -Dio.netty.leakDetectionLevel=paranoid.
   
   Performance saw no noticeable change: before the change the throughput was 39.3 MB/s; after the change, 38.2 MB/s (writing a 1 GB file).





[GitHub] [hadoop] ferhui commented on pull request #2868: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode (#2585)

2021-04-12 Thread GitBox


ferhui commented on pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868#issuecomment-818375830


   @jojochuang Thanks.
   The failed tests are unrelated; they passed locally except 
TestBlockRecovery, which also fails without this PR, so I don't think it is 
related to this PR. I will check it on trunk.
   +1 for this PR





[GitHub] [hadoop] Zhangshunyu edited a comment on pull request #2655: HDFS-15714: HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-04-12 Thread GitBox


Zhangshunyu edited a comment on pull request #2655:
URL: https://github.com/apache/hadoop/pull/2655#issuecomment-818374335


   Currently, the alias map is based on LevelDB, and it does not support 
NameNode HA, right?





[GitHub] [hadoop] Zhangshunyu commented on pull request #2655: HDFS-15714: HDFS Provided Storage Read/Write Mount Support On-the-fly

2021-04-12 Thread GitBox


Zhangshunyu commented on pull request #2655:
URL: https://github.com/apache/hadoop/pull/2655#issuecomment-818374335


   Currently, the alias map is based on LevelDB, and it does not support 
NameNode HA, right?





[jira] [Updated] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17601:
-
Fix Version/s: 2.10.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
> Attachments: HADOOP-17601.branch-2.10.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity
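> As a concrete illustration of the fix, pinning the patched version in a
> Maven POM would look roughly like the following fragment (a sketch: the
> coordinates are the standard jackson-databind ones and the version is the
> one named in this issue; where the pin actually lives in Hadoop's build is
> not shown here).

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin jackson-databind to 2.9.10.7, which patches CVE-2021-20190
         and CVE-2020-25649 on the 2.9.x line. -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.7</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```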






[jira] [Work logged] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?focusedWorklogId=581463&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581463
 ]

ASF GitHub Bot logged work on HADOOP-17601:
---

Author: ASF GitHub Bot
Created on: 13/Apr/21 00:43
Start Date: 13/Apr/21 00:43
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2835:
URL: https://github.com/apache/hadoop/pull/2835#issuecomment-818341722


   Sorry my bad. Merged.




Issue Time Tracking
---

Worklog Id: (was: 581463)
Time Spent: 50m  (was: 40m)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17601.branch-2.10.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[jira] [Work logged] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?focusedWorklogId=581462&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581462
 ]

ASF GitHub Bot logged work on HADOOP-17601:
---

Author: ASF GitHub Bot
Created on: 13/Apr/21 00:42
Start Date: 13/Apr/21 00:42
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2835:
URL: https://github.com/apache/hadoop/pull/2835


   




Issue Time Tracking
---

Worklog Id: (was: 581462)
Time Spent: 40m  (was: 0.5h)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17601.branch-2.10.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[GitHub] [hadoop] jojochuang merged pull request #2835: HADOOP-17601. Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread GitBox


jojochuang merged pull request #2835:
URL: https://github.com/apache/hadoop/pull/2835


   





[GitHub] [hadoop] jojochuang commented on pull request #2835: HADOOP-17601. Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread GitBox


jojochuang commented on pull request #2835:
URL: https://github.com/apache/hadoop/pull/2835#issuecomment-818341722


   Sorry my bad. Merged.





[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=581448&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581448
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 23:48
Start Date: 12/Apr/21 23:48
Worklog Time Spent: 10m 
  Work Description: billierinaldi merged pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925


   




Issue Time Tracking
---

Worklog Id: (was: 581448)
Time Spent: 10h 50m  (was: 10h 40m)

> ABFS: Support single writer dirs
> 
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single-writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing, automatically renew it, and release it when closing the 
> file.
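The acquire/renew/release lifecycle described above can be sketched with a 
scheduled renewer. This is an illustrative stand-in, not the actual ABFS 
driver API: the class name, the renewal counter, and the timings are all 
hypothetical, and the real driver would make HTTP lease calls where the 
comments indicate.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a single-writer lease: acquired when the file is
// opened for writing, renewed on a timer, released on close.
public class SingleWriterLease implements AutoCloseable {
    private final ScheduledExecutorService renewer =
        Executors.newSingleThreadScheduledExecutor();
    private final ScheduledFuture<?> renewTask;
    private final AtomicInteger renewCount = new AtomicInteger();
    private final CountDownLatch firstRenewal = new CountDownLatch(1);

    public SingleWriterLease(long renewPeriodMillis) {
        // An "acquire lease" call against the store would go here.
        renewTask = renewer.scheduleAtFixedRate(() -> {
            renewCount.incrementAndGet();  // stand-in for a "renew lease" call
            firstRenewal.countDown();
        }, renewPeriodMillis, renewPeriodMillis, TimeUnit.MILLISECONDS);
    }

    /** Waits up to two seconds for the first renewal to fire. */
    public boolean awaitFirstRenewal() {
        try {
            return firstRenewal.await(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public int getRenewCount() { return renewCount.get(); }

    @Override
    public void close() {
        renewTask.cancel(false);  // stop renewing
        renewer.shutdown();       // a "release lease" call would go here
    }

    public static void main(String[] args) {
        try (SingleWriterLease lease = new SingleWriterLease(10)) {
            System.out.println("renewed: " + lease.awaitFirstRenewal());
        }
    }
}
```

The try-with-resources usage in `main` mirrors the described behavior: the 
lease is held for exactly the lifetime of the open file.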






[GitHub] [hadoop] billierinaldi merged pull request #1925: HADOOP-16948. Support single writer dirs.

2021-04-12 Thread GitBox


billierinaldi merged pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925


   





[GitHub] [hadoop] hadoop-yetus commented on pull request #2899: YARN-10733. TimelineService Hbase tests failing with timeouts

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2899:
URL: https://github.com/apache/hadoop/pull/2899#issuecomment-818284213


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  11m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.10 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  13m 33s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  13m  5s |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |  10m 47s |  branch-2.10 passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 52s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  branch-2.10 passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08  |
   | +0 :ok: |  spotbugs  |   7m 53s |  Both FindBugs and SpotBugs are enabled, 
using SpotBugs.  |
   | +0 :ok: |  spotbugs  |   0m 34s |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  12m 26s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |  12m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 42s |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08  |
   | +1 :green_heart: |  javac  |  10m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 28s |  hadoop-project has no data from 
spotbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   6m 24s |  
hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 104m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2899/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2899 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml spotbugs checkstyle |
   | uname | Linux e231c4fa59d8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / cb5e41a |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2899/1/testReport/ |
   | Max. process+thread count | 752 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2899/1/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=581389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581389
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 21:30
Start Date: 12/Apr/21 21:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818255439


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 49s | 
[/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1937 unchanged - 1 
fixed = 1938 total (was 1938)  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 55s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 
unchanged - 1 fixed = 1833 total (was 1833)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 
189)  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 
with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 
80 unchanged - 8 fixed = 86 total (was 88)  |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818255439


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 49s | 
[/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1937 unchanged - 1 
fixed = 1938 total (was 1938)  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  17m 55s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 
unchanged - 1 fixed = 1833 total (was 1833)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 
189)  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 
with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 
80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/11/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 44s |  |  patch has no errors 
when building and t

[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=581358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581358
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 20:55
Start Date: 12/Apr/21 20:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818230736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 48s | 
[/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1936 unchanged - 1 
fixed = 1937 total (was 1937)  |
   | +1 :green_heart: |  compile  |  18m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  1s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1831 
unchanged - 1 fixed = 1832 total (was 1832)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 49s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 
189)  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 
with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 
80 unc

[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818230736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  19m 48s | 
[/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1936 unchanged - 1 
fixed = 1937 total (was 1937)  |
   | +1 :green_heart: |  compile  |  18m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  1s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1831 
unchanged - 1 fixed = 1832 total (was 1832)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 49s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 
189)  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 
with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 
80 unchanged - 8 fixed = 86 total (was 88)  |
   | -1 :x: |  spotbugs  |   1m 32s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/10/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |

[GitHub] [hadoop] amahussein opened a new pull request #2899: YARN-10733. TimelineService Hbase tests failing with timeouts

2021-04-12 Thread GitBox


amahussein opened a new pull request #2899:
URL: https://github.com/apache/hadoop/pull/2899


   [YARN-10733:](https://issues.apache.org/jira/browse/YARN-10733) 
TimelineService Hbase tests are failing with timeout error on branch-2.10
   
   Timeout running `hadoop-yarn-server-timelineservice-hbase-tests`. The 
failure of the tests is due to test unit `TestHBaseStorageFlowRunCompaction` 
getting stuck.
   Upon checking the surefire reports, I found several Class no Found 
Exceptions.
   
   ```bash
   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/CanUnbuffer
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:66)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:698)
at 
org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1895)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:1009)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2523)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2638)
... 33 more
   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanUnbuffer
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 51 more
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #2605: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters

2021-04-12 Thread GitBox


goiri merged pull request #2605:
URL: https://github.com/apache/hadoop/pull/2605


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2898: HDFS-15971. Make mkstemp cross platform

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818083196


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   2m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  53m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  31m 48s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2898 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 939cc4c9ed63 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b1da158404546225a694691400c5271d4f631ac |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/1/testReport/ |
   | Max. process+thread count | 713 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17597) Add option to downgrade S3A rejection of Syncable to warning

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17597?focusedWorklogId=581260&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581260
 ]

ASF GitHub Bot logged work on HADOOP-17597:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 19:04
Start Date: 12/Apr/21 19:04
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2801:
URL: https://github.com/apache/hadoop/pull/2801#issuecomment-818060004


   thanks, will update the PR with the comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 581260)
Time Spent: 1h 10m  (was: 1h)

> Add option to downgrade S3A rejection of Syncable to warning
> 
>
> Key: HADOOP-17597
> URL: https://issues.apache.org/jira/browse/HADOOP-17597
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The Hadoop Filesystem Syncable API is intended to meet the requirements laid 
> out in [StoneBraker81] _Operating System Support for Database Management_
> bq.  The service required from an OS buffer manager is a selected force out 
> which would push the intentions list and the commit flag to disk in the 
> proper order. Such a service is not present in any buffer manager known to us.
> It's an expensive operation - so expensive that {{Syncable.hsync()}} isn't 
> even called on {{DFSOutputStream.close()}}.
> Even though S3A does not manifest any data until close() is called, 
> applications coming from HDFS may call Syncable methods and expect to them to 
> persist data with the durability guarantees offered by HDFS.
> Since the output stream hardening of HADOOP-13327, S3A throws 
> UnsupportedOperationException to indicate that the synchronization semantics 
> of Syncable absolutely cannot be met. 
> As a result, applications which have been calling the Syncable APIs are 
> finding the call failing. In the absence of exception handling to recognise 
> that the durability semantics are being met, they fail.
> If the user and the application actually expects data to be persisted, this 
> is the correct behaviour. The data cannot be persisted this way.
> If, however, they were calling this on HDFS more as a {{flush()}} than the 
> full and expensive DBMS-class persistence call, then this failure is 
> unwelcome. The applications really needs to catch the 
> UnsupportedOperationException raised by S3A _or any other FS strictly 
> reporting failures_, report the problem and perform some other means of safe 
> data storage
> Even better, they can use hasPathCapability on the FS or hasCapability() on 
> the stream to probe before even opening a file or trying to sync it. The 
> hasCapability() on a stream was actually implemented in Hadoop 2.x precisely 
> to allow applications to identify when a stream could not meet the guarantees 
> (e.g some of the encrypted streams, file:// before HADOOP-13...)
> Until they can correct their code, I propose adding the option for s3a to 
> downgrade
> fs.s3a.downgrade.syncable.exceptions 
> This will
> * Log once per process at WARN
> * downgrade the calls to noop() 
> * increment counters in S3A stats and IO stats of invocations of the Syncable 
> methods. This will allow for stats gathering to let us identify which 
> applications need fixing in cloud deployments
> Testing: copy the hsync tests but expect exceptions to be swallowed and stats 
> to be collected
> Also: UnsupportedException text will link to this JIRA
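The probe-before-sync pattern described above can be sketched as follows. This is a minimal stand-alone model: `StreamCapabilities` and the `"hsync"` capability key mirror the real Hadoop API (`org.apache.hadoop.fs.StreamCapabilities`), but the classes here are simplified stand-ins so the sketch runs without hadoop-common on the classpath.

```java
// Minimal model of probing a stream's capabilities before relying on hsync().
interface StreamCapabilities {
    boolean hasCapability(String capability);
}

// Stand-in for an object-store output stream such as S3A's, which cannot
// honour HDFS-style hsync() durability semantics.
class S3aLikeStream implements StreamCapabilities {
    public boolean hasCapability(String capability) {
        return false;
    }
}

public class SyncableProbeDemo {
    static final String HSYNC = "hsync";  // matches the real capability key

    static void persist(StreamCapabilities out) {
        if (out.hasCapability(HSYNC)) {
            // Safe to call hsync() and rely on durable-flush semantics.
            System.out.println("hsync supported; syncing");
        } else {
            // Fall back to some other safe storage strategy and report it,
            // instead of discovering the gap via an exception at runtime.
            System.out.println("stream cannot hsync; using fallback");
        }
    }

    public static void main(String[] args) {
        persist(new S3aLikeStream());
    }
}
```

Applications that do this probe need no exception handling around the Syncable calls at all, which is the behaviour the JIRA recommends over relying on the downgrade option.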



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2801: HADOOP-17597. Optionally downgrade on S3A Syncable calls

2021-04-12 Thread GitBox


steveloughran commented on pull request #2801:
URL: https://github.com/apache/hadoop/pull/2801#issuecomment-818060004


   thanks, will update the PR with the comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17577) Fix TestLogLevel

2021-04-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319669#comment-17319669
 ] 

Steve Loughran commented on HADOOP-17577:
-

yes. looks like different JREs report differently. 

options: 
# slightly relax the error check (lower case or different substring)
# just expect any exception and don't look at the message.

option 2 is less brittle to future changes
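Option 2 in miniature: assert only the exception type and ignore the message. The helper below is a hypothetical stand-in for the SSL probe in TestLogLevel; the point is that both JRE message variants ("Unrecognized SSL message" vs "Unsupported or unrecognized SSL message") pass the same check.

```java
import javax.net.ssl.SSLException;

public class ExceptionTypeCheckDemo {
    // Simulates a call that fails identically in type but differently in
    // message text across JREs (hypothetical stand-in for the real probe).
    static void connectPlainTextToTlsPort(boolean newJre) throws SSLException {
        throw new SSLException(newJre
            ? "Unsupported or unrecognized SSL message"
            : "Unrecognized SSL message");
    }

    public static void main(String[] args) throws Exception {
        for (boolean newJre : new boolean[] {false, true}) {
            try {
                connectPlainTextToTlsPort(newJre);
                throw new AssertionError("expected SSLException");
            } catch (SSLException expected) {
                // Option 2: only the type is asserted; the message, which
                // varies between JREs, is deliberately not inspected.
            }
        }
        System.out.println("both JRE variants handled");
    }
}
```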

> Fix TestLogLevel
> 
>
> Key: HADOOP-17577
> URL: https://issues.apache.org/jira/browse/HADOOP-17577
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: branch-2.10 and Java 8
>Reporter: Akira Ajisaka
>Priority: Major
>
> Found when fixing HADOOP-17572: 
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2755/2/testReport/org.apache.hadoop.log/TestLogLevel/testLogLevelByHttp/
> {noformat}
> Expected to find 'Unrecognized SSL message' but got unexpected 
> exception:javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>  at 
> sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:448)
>  at 
> sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:184)
>  at sun.security.ssl.SSLTransport.decode(SSLTransport.java:108)
>  at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1143)
>  at 
> sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1054)
>  at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394)
>  at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>  at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>  at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:167)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:347)
>  at org.apache.hadoop.log.LogLevel$CLI.connect(LogLevel.java:271)
>  at org.apache.hadoop.log.LogLevel$CLI.process(LogLevel.java:293)
>  at org.apache.hadoop.log.LogLevel$CLI.doGetLevel(LogLevel.java:234)
>  at org.apache.hadoop.log.LogLevel$CLI.sendLogLevelRequest(LogLevel.java:127)
>  at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:110)
>  at org.apache.hadoop.log.TestLogLevel.getLevel(TestLogLevel.java:301)
>  at org.apache.hadoop.log.TestLogLevel.access$000(TestLogLevel.java:63)
>  at org.apache.hadoop.log.TestLogLevel$1.call(TestLogLevel.java:279)
>  at org.apache.hadoop.log.TestLogLevel$1.call(TestLogLevel.java:275)
>  at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:102)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:99)
>  at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:115)
>  at 
> org.apache.hadoop.log.TestLogLevel.testDynamicLogLevel(TestLogLevel.java:275)
>  at 
> org.apache.hadoop.log.TestLogLevel.testDynamicLogLevel(TestLogLevel.java:234)
>  at 
> org.apache.hadoop.log.TestLogLevel.testLogLevelByHttp(TestLogLevel.java:354)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "--delete …

2021-04-12 Thread GitBox


steveloughran commented on pull request #2852:
URL: https://github.com/apache/hadoop/pull/2852#issuecomment-818046355


   OK, If @ayushtkn is happy, I'm happy.
   
   One thought: could we add a test for a file copy to 
AbstractContractDistCpTest ? This is the one we do for the object stores, and 
it might be good to have a file-file distcp test involving them?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=581243&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581243
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 18:43
Start Date: 12/Apr/21 18:43
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-818043015


   @billierinaldi -I'm happy with this. There may be some surprises once you go 
live, but there's nothing obvious to me right now.
   
   
   +1. 
   
   merge when ready either from the button or the terminal. If you plan to 
backport to the 3.3.x line, cherry pick in to branch-3.3 and do a new test run. 
thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 581243)
Time Spent: 10h 40m  (was: 10.5h)

> ABFS: Support single writer dirs
> 
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing and would automatically renew the lease and release the 
> lease when closing the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #1925: HADOOP-16948. Support single writer dirs.

2021-04-12 Thread GitBox


steveloughran commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-818043015


   @billierinaldi -I'm happy with this. There may be some surprises once you go 
live, but there's nothing obvious to me right now.
   
   
   +1. 
   
   merge when ready either from the button or the terminal. If you plan to 
backport to the 3.3.x line, cherry pick in to branch-3.3 and do a new test run. 
thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=581241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581241
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 18:40
Start Date: 12/Apr/21 18:40
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#1925:
URL: https://github.com/apache/hadoop/pull/1925#discussion_r611867287



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsLease.java
##
@@ -88,6 +88,12 @@ public AbfsLease(AbfsClient client, String path) throws 
AzureBlobFileSystemExcep
 acquireLease(retryPolicy, 0, 0);
 
 while (leaseID == null && exception == null) {
+  try {
+future.get();
+  } catch (Exception e) {
+LOG.debug("Got exception waiting for acquire lease future. Checking if 
lease ID or "
++ "exception have been set", e);
+  }

Review comment:
   understood




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 581241)
Time Spent: 10.5h  (was: 10h 20m)

> ABFS: Support single writer dirs
> 
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing and would automatically renew the lease and release the 
> lease when closing the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1925: HADOOP-16948. Support single writer dirs.

2021-04-12 Thread GitBox


steveloughran commented on a change in pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#discussion_r611867287



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsLease.java
##
@@ -88,6 +88,12 @@ public AbfsLease(AbfsClient client, String path) throws 
AzureBlobFileSystemExcep
 acquireLease(retryPolicy, 0, 0);
 
 while (leaseID == null && exception == null) {
+  try {
+future.get();
+  } catch (Exception e) {
+LOG.debug("Got exception waiting for acquire lease future. Checking if 
lease ID or "
++ "exception have been set", e);
+  }

Review comment:
   understood




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=581236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581236
 ]

ASF GitHub Bot logged work on HADOOP-15566:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 18:38
Start Date: 12/Apr/21 18:38
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2816:
URL: https://github.com/apache/hadoop/pull/2816#discussion_r601307384



##
File path: hadoop-common-project/hadoop-common/pom.xml
##
@@ -371,6 +371,31 @@
      <artifactId>lz4-java</artifactId>
      <scope>provided</scope>
    </dependency>
+    <dependency>
+      <groupId>io.opentelemetry</groupId>
+      <artifactId>opentelemetry-api</artifactId>
+      <version>1.0.0</version>

Review comment:
   all these imports 
   1. need to be declared in hadoop-project/pom.xml
   2. and if the api and sdk are to be kept in sync, with the version defined 
in a property
   3. The hadoop-common import should declare as <scope>test</scope> all those 
only needed for build and test.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;
   public Span() {
   }
 
+  public Span(io.opentelemetry.api.trace.Span span){
+this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+if(span != null){
+  span.setAttribute(key, value);
+}
 return this;
   }
 
   public Span addTimelineAnnotation(String msg) {
+if(span != null){
+  span.addEvent(msg);
+}
 return this;
   }
 
   public SpanContext getContext() {
-return null;
+return  new SpanContext(span.getSpanContext());
   }
 
   public void finish() {
+close();
   }
 
   public void close() {
+if(span != null){
+  span.end();

Review comment:
   set span to null, presumably

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;
   public Span() {
   }
 
+  public Span(io.opentelemetry.api.trace.Span span){
+this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+if(span != null){
+  span.setAttribute(key, value);
+}
 return this;
   }
 
   public Span addTimelineAnnotation(String msg) {
+if(span != null){
+  span.addEvent(msg);
+}
 return this;
   }
 
   public SpanContext getContext() {
-return null;
+return  new SpanContext(span.getSpanContext());

Review comment:
   what if span == null
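One way to address both review comments ("what if span == null" and "set span to null") is shown below. This is a stand-alone sketch, not the actual fix: `OtelSpan` and `SpanContext` are simplified stand-ins for the `io.opentelemetry` and `org.apache.hadoop.tracing` types, and returning null from `getContext()` is one of several possible contracts.

```java
public class NullSafeSpanDemo {
    static class SpanContext { }
    static class OtelSpan {
        SpanContext getSpanContext() { return new SpanContext(); }
    }

    static class Span {
        private OtelSpan span;              // may legitimately be null

        Span(OtelSpan s) { span = s; }

        SpanContext getContext() {
            // Guard against span == null instead of risking an NPE;
            // null here means "no active tracing context".
            return span == null ? null : span.getSpanContext();
        }

        void close() {
            if (span != null) {
                // span.end() in the real code, then drop the reference so
                // a second close() is a no-op.
                span = null;
            }
        }
    }

    public static void main(String[] args) {
        Span noop = new Span(null);
        if (noop.getContext() != null) {
            throw new AssertionError("expected null context for null span");
        }
        noop.close();                       // safe even with no wrapped span
        System.out.println("null-safe getContext ok");
    }
}
```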

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;

Review comment:
   private




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 581236)
Time Spent: 1h  (was: 50m)

> Support OpenTelemetry
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available, security
> Attachments: HADOOP-15566.000.WIP.patch, OpenTelemetry Support Scope 
> Doc v2.pdf, OpenTracing Support Scope Doc.pdf, Screen Shot 2018-06-29 at 
> 11.59.16 AM.png, ss-trace-s3a.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

--

[GitHub] [hadoop] steveloughran commented on a change in pull request #2816: HADOOP-15566 initial changes for opentelemetry - WIP

2021-04-12 Thread GitBox


steveloughran commented on a change in pull request #2816:
URL: https://github.com/apache/hadoop/pull/2816#discussion_r601307384



##
File path: hadoop-common-project/hadoop-common/pom.xml
##
@@ -371,6 +371,31 @@
   lz4-java
   provided
 
+
+  io.opentelemetry
+  opentelemetry-api
+  1.0.0

Review comment:
   all these imports 
   1. need to be declared in hadoop-project/pom.xml
   2. and if the api and sdk are to be kept in sync, with the version defined 
in a property
   3. The hadoop-common import should declare as  all those only 
needed for build and test.
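
   A hypothetical sketch of what the first two points could look like in hadoop-project/pom.xml — the property name and the `test` scope here are illustrative assumptions, not the actual patch:

   ```xml
   <!-- Illustrative only: single version property shared by api and sdk,
        so the two artifacts cannot drift apart. Property name is assumed. -->
   <properties>
     <opentelemetry.version>1.0.0</opentelemetry.version>
   </properties>
   ...
   <dependency>
     <groupId>io.opentelemetry</groupId>
     <artifactId>opentelemetry-api</artifactId>
     <version>${opentelemetry.version}</version>
   </dependency>
   <dependency>
     <groupId>io.opentelemetry</groupId>
     <artifactId>opentelemetry-sdk</artifactId>
     <version>${opentelemetry.version}</version>
     <scope>test</scope><!-- assumed: sdk only needed for build/test -->
   </dependency>
   ```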

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;
   public Span() {
   }
 
+  public Span(io.opentelemetry.api.trace.Span span){
+this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+if(span != null){
+  span.setAttribute(key, value);
+}
 return this;
   }
 
   public Span addTimelineAnnotation(String msg) {
+if(span != null){
+  span.addEvent(msg);
+}
 return this;
   }
 
   public SpanContext getContext() {
-return null;
+return  new SpanContext(span.getSpanContext());
   }
 
   public void finish() {
+close();
   }
 
   public void close() {
+if(span != null){
+  span.end();

Review comment:
   set span to null, presumably

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;
   public Span() {
   }
 
+  public Span(io.opentelemetry.api.trace.Span span){
+this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+if(span != null){
+  span.setAttribute(key, value);
+}
 return this;
   }
 
   public Span addTimelineAnnotation(String msg) {
+if(span != null){
+  span.addEvent(msg);
+}
 return this;
   }
 
   public SpanContext getContext() {
-return null;
+return  new SpanContext(span.getSpanContext());

Review comment:
   what if span == null

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
##
@@ -17,28 +17,51 @@
  */
 package org.apache.hadoop.tracing;
 
+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;
 
 public class Span implements Closeable {
-
+  io.opentelemetry.api.trace.Span span = null;

Review comment:
   private
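
   Pulling the three review points together, a minimal self-contained sketch of the suggested shape — `OtelSpan` is a hypothetical stand-in interface for `io.opentelemetry.api.trace.Span`, used only so the sketch compiles without the OpenTelemetry jars:

   ```java
   import java.io.Closeable;

   // Sketch of the review suggestions: private delegate field, a null guard
   // in getContext(), and clearing the delegate on close(). OtelSpan is a
   // hypothetical stand-in for io.opentelemetry.api.trace.Span.
   public class SpanSketch implements Closeable {
     interface OtelSpan {
       void end();
       String context();
     }

     private OtelSpan span;  // reviewer: make this private

     public SpanSketch() {}
     public SpanSketch(OtelSpan span) { this.span = span; }

     public String getContext() {
       // reviewer: "what if span == null" -- guard instead of an NPE
       return span == null ? null : span.context();
     }

     @Override
     public void close() {
       if (span != null) {
         span.end();
         span = null;  // reviewer: set span to null, so close() is idempotent
       }
     }

     public static void main(String[] args) {
       SpanSketch empty = new SpanSketch();
       System.out.println(empty.getContext());  // prints "null", no NPE
       SpanSketch s = new SpanSketch(new OtelSpan() {
         public void end() { System.out.println("ended"); }
         public String context() { return "ctx"; }
       });
       System.out.println(s.getContext());
       s.close();
       s.close();  // second close is a no-op
     }
   }
   ```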




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2896: HDFS-15970. Print network topology on web

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818033898


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 53s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 
1 fixed = 6 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 233m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 319m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2896 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3e68afa520ab 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c0639d4de195c9c01e2683dc205819e69bfc451a |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/

[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581229&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581229
 ]

ASF GitHub Bot logged work on HADOOP-17633:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 18:30
Start Date: 12/Apr/21 18:30
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2895:
URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818032692


   I worry about the lines on the json-smart import
   ```xml
   
   ```
   The assumption here is: nimbus-jose-jwt needs to be updated in sync, and 
kerby. Are there any related JIRAS/issues we could reference there?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 581229)
Time Spent: 20m  (was: 10m)

> Please upgrade json-smart dependency to the latest version
> --
>
> Key: HADOOP-17633
> URL: https://issues.apache.org/jira/browse/HADOOP-17633
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth, build
>Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0
>Reporter: helen huang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Please upgrade the json-smart dependency to the latest version available.
> Currently hadoop-auth is using version 2.3. Fortify scan picked up a security 
> issue with this version. Please upgrade to the latest version. 
> Thanks!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs

2021-04-12 Thread GitBox


steveloughran commented on pull request #2895:
URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818032692


   I worry about the lines on the json-smart import
   ```xml
   
   ```
   The assumption here is: nimbus-jose-jwt needs to be updated in sync, and 
kerby. Are there any related JIRAS/issues we could reference there?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=581220&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581220
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 18:17
Start Date: 12/Apr/21 18:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2897:
URL: https://github.com/apache/hadoop/pull/2897#issuecomment-818024160


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 5 new + 28 unchanged - 0 
fixed = 33 total (was 28)  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  43m  3s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2897 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 33e0664fca46 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c933537be33a9444161efeb53507970818a74850 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/testReport/

[GitHub] [hadoop] hadoop-yetus commented on pull request #2897: HADOOP-17611 Restore modification and access times after concat

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2897:
URL: https://github.com/apache/hadoop/pull/2897#issuecomment-818024160


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 5 new + 28 unchanged - 0 
fixed = 33 total (was 28)  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  43m  3s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2897 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 33e0664fca46 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c933537be33a9444161efeb53507970818a74850 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/testReport/ |
   | Max. process+thread count | 599 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2897/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-

[GitHub] [hadoop] GauthamBanasandra opened a new pull request #2898: HDFS-15971. Make mkstemp cross platform

2021-04-12 Thread GitBox


GauthamBanasandra opened a new pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898


   * mkstemp isn't available in Visual C++. This PR adds the necessary cross-platform implementation of mkstemp.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17620) DistCp: Use Iterator for listing target directory as well

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17620?focusedWorklogId=581173&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581173
 ]

ASF GitHub Bot logged work on HADOOP-17620:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 17:32
Start Date: 12/Apr/21 17:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#issuecomment-817993990


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  
hadoop-tools/hadoop-distcp: The patch generated 0 new + 40 unchanged - 1 fixed 
= 40 total (was 41)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m  5s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e5c03cfce65e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e2fabb9d26d3d3055e9ac5dcc021ef9ef2d5b568 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/testReport/ |
   | Max. process+thread count | 677 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2861: HADOOP-17620. DistCp: Use Iterator for listing target directory as well.

2021-04-12 Thread GitBox


hadoop-yetus commented on pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#issuecomment-817993990


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  
hadoop-tools/hadoop-distcp: The patch generated 0 new + 40 unchanged - 1 fixed 
= 40 total (was 41)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m  5s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e5c03cfce65e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e2fabb9d26d3d3055e9ac5dcc021ef9ef2d5b568 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/testReport/ |
   | Max. process+thread count | 677 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2861/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





--

[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread Adam Maroti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319615#comment-17319615
 ] 

Adam Maroti commented on HADOOP-17611:
--

When TIMES is set, the preserve() function is called from the copy mapper
after the file/file chunk creation. The CopyCommitter, which runs after that
and does the concat, doesn't call preserve() because it no longer has the
source file statuses. So the concat happens inside the CopyCommitter, which
runs after the copy mapper, causing the concat to run after the preserve().

Viraj Jasani (Jira) wrote (on 2021 Apr 12, Mon



> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).
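The suggested fix can be sketched as follows. This is a minimal, hypothetical illustration using java.nio on a local filesystem rather than the real HDFS `FileSystem.concat` API; `concatPreservingMtime` and the temp paths are invented for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.FileTime;

public class ConcatMtimeSketch {

  // Append each chunk to the target (standing in for FileSystem.concat),
  // then restore the target's pre-concat modification time.
  static void concatPreservingMtime(Path target, Path... chunks) throws IOException {
    FileTime before = Files.getLastModifiedTime(target); // capture before concat
    for (Path chunk : chunks) {
      Files.write(target, Files.readAllBytes(chunk), StandardOpenOption.APPEND);
    }
    Files.setLastModifiedTime(target, before); // apply the captured time again
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("concat-sketch");
    Path target = Files.write(dir.resolve("part0"), "a".getBytes());
    Path chunk = Files.write(dir.resolve("part1"), "b".getBytes());
    FileTime original = FileTime.fromMillis(1_600_000_000_000L);
    Files.setLastModifiedTime(target, original);

    concatPreservingMtime(target, chunk);

    System.out.println(Files.getLastModifiedTime(target).equals(original)); // true
    System.out.println(new String(Files.readAllBytes(target)));             // ab
  }
}
```

The real fix would capture the FileStatus of the first chunk before FileSystem.concat and call setTimes afterwards, before the rename.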



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319605#comment-17319605
 ] 

Viraj Jasani commented on HADOOP-17611:
---

Thanks [~amaroti]. Have you also tested with the TIMES option in DistCp? It
seems to already retain the mTime of the target file.

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).






[jira] [Assigned] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17611:
-

Assignee: (was: Viraj Jasani)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17603:

Fix Version/s: 2.10.2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Work logged] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?focusedWorklogId=581157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581157
 ]

ASF GitHub Bot logged work on HADOOP-17603:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 17:11
Start Date: 12/Apr/21 17:11
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #2834:
URL: https://github.com/apache/hadoop/pull/2834#issuecomment-817980640


   @amahussein Done. Thanks for the contribution! :)




Issue Time Tracking
---

Worklog Id: (was: 581157)
Time Spent: 1h  (was: 50m)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[GitHub] [hadoop] smengcl commented on pull request #2834: HADOOP-17603. Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread GitBox


smengcl commented on pull request #2834:
URL: https://github.com/apache/hadoop/pull/2834#issuecomment-817980640


   @amahussein Done. Thanks for the contribution! :)





[GitHub] [hadoop] smengcl merged pull request #2834: HADOOP-17603. Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread GitBox


smengcl merged pull request #2834:
URL: https://github.com/apache/hadoop/pull/2834


   





[jira] [Work logged] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?focusedWorklogId=581156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581156
 ]

ASF GitHub Bot logged work on HADOOP-17603:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 17:11
Start Date: 12/Apr/21 17:11
Worklog Time Spent: 10m 
  Work Description: smengcl merged pull request #2834:
URL: https://github.com/apache/hadoop/pull/2834


   




Issue Time Tracking
---

Worklog Id: (was: 581156)
Time Spent: 50m  (was: 40m)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[GitHub] [hadoop] fengnanli commented on a change in pull request #2639: HDFS-15785. Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes.

2021-04-12 Thread GitBox


fengnanli commented on a change in pull request #2639:
URL: https://github.com/apache/hadoop/pull/2639#discussion_r574117112



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
##
@@ -647,6 +634,58 @@ public static String addKeySuffixes(String key, String... 
suffixes) {
   getNNLifelineRpcAddressesForCluster(Configuration conf)
   throws IOException {
 
+Collection<String> parentNameServices = getParentNameServices(conf);
+
+return getAddressesForNsIds(conf, parentNameServices, null,
+DFS_NAMENODE_LIFELINE_RPC_ADDRESS_KEY);
+  }
+
+  //
+  /**
+   * Returns the configured address for all NameNodes in the cluster.
+   * This is similar with DFSUtilClient.getAddressesForNsIds()
+   * but can access DFSConfigKeys.
+   *
+   * @param conf configuration
+   * @param defaultAddress default address to return in case key is not found.
+   * @param keys Set of keys to look for in the order of preference
+   *
+   * @return a map(nameserviceId to map(namenodeId to InetSocketAddress))
+   */
+  static Map<String, Map<String, InetSocketAddress>> getAddressesForNsIds(

Review comment:
   Can we try this to reduce the code duplication? Override
`DFSUtilClient.getAddressesForNsIds()` by adding a boolean parameter indicating
whether to resolve (the value is fetched from the config).
   Inside `DFSUtilClient.getAddressesForNameserviceId`, add another override
taking the boolean, and have the current one pass false. If the flag is true,
do the DNS resolution and return the resolved addresses.
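The boolean-gated resolution described above can be sketched like this. It is a hedged illustration; `resolveNameservice` and its signature are hypothetical and not the actual DFSUtilClient API:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class NsResolveSketch {

  // When shouldResolve is false, keep the configured host as-is;
  // when true, expand the DNS name into one address per backing host.
  static List<InetSocketAddress> resolveNameservice(
      String host, int port, boolean shouldResolve) throws UnknownHostException {
    List<InetSocketAddress> result = new ArrayList<>();
    if (!shouldResolve) {
      result.add(InetSocketAddress.createUnresolved(host, port));
      return result;
    }
    for (InetAddress addr : InetAddress.getAllByName(host)) {
      result.add(new InetSocketAddress(addr.getHostAddress(), port));
    }
    return result;
  }

  public static void main(String[] args) throws UnknownHostException {
    // "localhost" resolves without network access; a real nameservice DNS
    // name would return one entry per NameNode host.
    System.out.println(resolveNameservice("localhost", 8020, false));
    System.out.println(resolveNameservice("localhost", 8020, true));
  }
}
```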

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1557,6 +1557,17 @@
   public static final double
   DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT = 0.0;
 
+
+  public static final String
+  DFS_NAMESERVICES_RESOLUTION_ENABLED =

Review comment:
   If we maintain only one config across NN, QJM, ZKFC and DN, this is an
issue since the other three don't support DNS yet. I am thinking about how to
do it, and it requires some refactoring in places such as
`DFSUtil.getSuffixIDs` (used by ZKFC). I will follow up on this soon. For now
we can keep a separate config for the DN only as a short-term solution.







[jira] [Commented] (HADOOP-17523) Replace LogCapturer with mock

2021-04-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319567#comment-17319567
 ] 

Viraj Jasani commented on HADOOP-17523:
---

I believe even if we use Mockito's ArgumentCaptor, we still need to use the
Log4J1 API to mock the Appender. Hence, removing LogCapturer without using the
Log4J1 API might be difficult. [~aajisaka] thoughts?

> Replace LogCapturer with mock
> -
>
> Key: HADOOP-17523
> URL: https://issues.apache.org/jira/browse/HADOOP-17523
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> LogCapturer uses Log4J1 API, and it should be removed. Mockito can be used 
> instead for capturing logs.






[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread Adam Maroti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319541#comment-17319541
 ] 

Adam Maroti commented on HADOOP-17611:
--

[~vjasani] This is my take on this:
[https://github.com/apache/hadoop/pull/2897]. It also restores the parent
directories' modification and access times.

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).






[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=581117&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581117
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 16:16
Start Date: 12/Apr/21 16:16
Worklog Time Spent: 10m 
  Work Description: amaroti commented on pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#issuecomment-817942490


   Hey @virajjasani take a look at this:
https://github.com/apache/hadoop/pull/2897. This also restores the parent
directories' modification and access times. I have tested this myself by
copying a 13 GB file between two clusters.




Issue Time Tracking
---

Worklog Id: (was: 581117)
Time Spent: 2h  (was: 1h 50m)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).






[GitHub] [hadoop] amaroti commented on pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat

2021-04-12 Thread GitBox


amaroti commented on pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#issuecomment-817942490


   Hey @virajjasani take a look at this:
https://github.com/apache/hadoop/pull/2897. This also restores the parent
directories' modification and access times. I have tested this myself by
copying a 13 GB file between two clusters.





[GitHub] [hadoop] amaroti opened a new pull request #2897: HADOOP-17611 Restore modification and access times after concat

2021-04-12 Thread GitBox


amaroti opened a new pull request #2897:
URL: https://github.com/apache/hadoop/pull/2897


   





[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=581116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581116
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 16:15
Start Date: 12/Apr/21 16:15
Worklog Time Spent: 10m 
  Work Description: amaroti opened a new pull request #2897:
URL: https://github.com/apache/hadoop/pull/2897


   




Issue Time Tracking
---

Worklog Id: (was: 581116)
Time Spent: 1h 50m  (was: 1h 40m)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java inside concatFileChunks, Filesystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for large enough files, which are copied by splitting them up by distcp.
> In concatFileChunks before calling concat extract the modification time and 
> apply that to the concatenated result-file after the concat. (probably best 
> -after- before the rename()).






[GitHub] [hadoop] ayushtkn commented on pull request #2896: HDFS-15970. Print network topology on web

2021-04-12 Thread GitBox


ayushtkn commented on pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#issuecomment-817930124


   Can you extend this to the RBF UI as well? i.e. federationhealth.html and
federationhealth.js





[jira] [Work logged] (HADOOP-17620) DistCp: Use Iterator for listing target directory as well

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17620?focusedWorklogId=581107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581107
 ]

ASF GitHub Bot logged work on HADOOP-17620:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 15:58
Start Date: 12/Apr/21 15:58
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#discussion_r611748559



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -628,8 +629,15 @@ public void testDistCpWithIterator() throws Exception {
 GenericTestUtils
 .createFiles(remoteFS, source, getDepth(), getWidth(), getWidth());
 
+GenericTestUtils.LogCapturer log =

Review comment:
   Thanx @steveloughran, do you suggest that we should have two options,
like -useiteratorforsource and -useiteratorfortarget? Do you think in that case
we would be able to save on memory? Since the target list is built as part of
the CopyCommitter, even if one takes the normal path we would get an OOM; just
`when` it happens will differ.
   
   Regarding the log stuff, that was the only thing I could think of to
confirm the iterator was used. And during the migration to Log4J2, would moving
to something like this help instead:
   
https://github.com/apache/hive/blob/master/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java#L285
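Capturing log output for a test can be done framework-agnostically, in the spirit of LogCapturer. The following is only a sketch using java.util.logging; Hadoop itself logs through SLF4J, so the logger name and message here are invented:

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureSketch {

  // An in-memory handler that records every message it sees, so a test
  // can assert on what was logged.
  static final class MemoryHandler extends Handler {
    final StringBuilder captured = new StringBuilder();
    @Override public void publish(LogRecord record) {
      captured.append(record.getMessage()).append('\n');
    }
    @Override public void flush() { }
    @Override public void close() { }
  }

  public static void main(String[] args) {
    Logger logger = Logger.getLogger("distcp.test");
    logger.setUseParentHandlers(false); // keep output off the console
    MemoryHandler memory = new MemoryHandler();
    logger.addHandler(memory);

    logger.info("Building listing using iterator");

    System.out.println(memory.captured.toString().contains("iterator")); // true
  }
}
```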






Issue Time Tracking
---

Worklog Id: (was: 581107)
Time Spent: 1h 20m  (was: 1h 10m)

> DistCp: Use Iterator for listing target directory as well
> -
>
> Key: HADOOP-17620
> URL: https://issues.apache.org/jira/browse/HADOOP-17620
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Use iterator for listing target directory as well, when {{-useiterator}} 
> option is specified.
> Target is listed when delete option is specified.






[GitHub] [hadoop] ayushtkn commented on a change in pull request #2861: HADOOP-17620. DistCp: Use Iterator for listing target directory as well.

2021-04-12 Thread GitBox


ayushtkn commented on a change in pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#discussion_r611748559



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -628,8 +629,15 @@ public void testDistCpWithIterator() throws Exception {
 GenericTestUtils
 .createFiles(remoteFS, source, getDepth(), getWidth(), getWidth());
 
+GenericTestUtils.LogCapturer log =

Review comment:
   Thanx @steveloughran, do you suggest that we should have two options,
like -useiteratorforsource and -useiteratorfortarget? Do you think in that case
we would be able to save on memory? Since the target list is built as part of
the CopyCommitter, even if one takes the normal path we would get an OOM; just
`when` it happens will differ.
   
   Regarding the log stuff, that was the only thing I could think of to
confirm the iterator was used. And during the migration to Log4J2, would moving
to something like this help instead:
   
https://github.com/apache/hive/blob/master/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java#L285







[jira] [Work logged] (HADOOP-17620) DistCp: Use Iterator for listing target directory as well

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17620?focusedWorklogId=581098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581098
 ]

ASF GitHub Bot logged work on HADOOP-17620:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 15:48
Start Date: 12/Apr/21 15:48
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#discussion_r611748559



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -628,8 +629,15 @@ public void testDistCpWithIterator() throws Exception {
 GenericTestUtils
 .createFiles(remoteFS, source, getDepth(), getWidth(), getWidth());
 
+GenericTestUtils.LogCapturer log =

Review comment:
   Thanx @steveloughran, do you suggest that we should have two options,
like -useiteratorforsource and -useiteratorfortarget? Do you think in that case
we would be able to save on memory? Since the target list is built as part of
the CopyCommitter, even if one takes the normal path we would get an OOM; just
`when` it happens will differ.
   
   Regarding the log stuff, that was the only thing I could think of to
confirm the iterator was used. And during the migration to Log4J2, would
something like this also be of no help:
   
https://github.com/apache/hive/blob/master/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java#L285






Issue Time Tracking
---

Worklog Id: (was: 581098)
Time Spent: 1h 10m  (was: 1h)

> DistCp: Use Iterator for listing target directory as well
> -
>
> Key: HADOOP-17620
> URL: https://issues.apache.org/jira/browse/HADOOP-17620
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Use iterator for listing target directory as well, when {{-useiterator}} 
> option is specified.
> Target is listed when delete option is specified.






[GitHub] [hadoop] ayushtkn commented on a change in pull request #2861: HADOOP-17620. DistCp: Use Iterator for listing target directory as well.

2021-04-12 Thread GitBox


ayushtkn commented on a change in pull request #2861:
URL: https://github.com/apache/hadoop/pull/2861#discussion_r611748559



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -628,8 +629,15 @@ public void testDistCpWithIterator() throws Exception {
 GenericTestUtils
 .createFiles(remoteFS, source, getDepth(), getWidth(), getWidth());
 
+GenericTestUtils.LogCapturer log =

Review comment:
   Thanx @steveloughran, do you suggest that we should have two options, 
like -useiteratorforsource and -useiteratorfortarget? Do you think we would be 
able to save on memory in that case? Since the target list is built as 
part of CopyCommitter, even if one takes the normal path we would still get an OOM; 
just `when` it happens will differ?
   
   Regarding the log stuff, that was the only thing I could think of to 
confirm the iterator was used. And during the migration to Log4J2, would moving to 
something like this also be of no help:
   
https://github.com/apache/hive/blob/master/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java#L285
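
Ayush's log-based verification idea can be sketched without Hadoop's GenericTestUtils.LogCapturer. The following stdlib-only illustration (class and logger names are hypothetical, not Hadoop's API) attaches an in-memory handler to a java.util.logging logger and asserts on the captured message, the same pattern a test would use to confirm the iterator code path ran:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureSketch {
    // Minimal in-memory handler that records each formatted log message.
    static final class CapturingHandler extends Handler {
        final StringBuilder output = new StringBuilder();
        @Override public void publish(LogRecord record) {
            output.append(record.getMessage()).append('\n');
        }
        @Override public void flush() { }
        @Override public void close() { }
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("distcp.sketch");
        logger.setLevel(Level.INFO);
        CapturingHandler capture = new CapturingHandler();
        logger.addHandler(capture);

        // Code under test logs which listing strategy it took.
        logger.info("Building listing using iterator");

        // The test asserts on the captured output instead of on internals.
        if (!capture.output.toString().contains("using iterator")) {
            throw new AssertionError("expected iterator code path to be logged");
        }
        System.out.println("log-based assertion passed");
    }
}
```

The trade-off, as noted in the thread, is that such assertions couple tests to log message wording and to the logging framework's capture hooks.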










[jira] [Work logged] (HADOOP-17471) ABFS to collect IOStatistics

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17471?focusedWorklogId=581080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581080
 ]

ASF GitHub Bot logged work on HADOOP-17471:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 15:21
Start Date: 12/Apr/21 15:21
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on a change in pull request #2731:
URL: https://github.com/apache/hadoop/pull/2731#discussion_r611715063



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java
##
@@ -241,10 +247,13 @@ public void 
testAbfsOutputStreamDurationTrackerPutRequest() throws IOException {
   outputStream.write('a');
   outputStream.hflush();
 
-  AbfsOutputStreamStatisticsImpl abfsOutputStreamStatistics =
-  getAbfsOutputStreamStatistics(outputStream);
-  LOG.info("AbfsOutputStreamStats info: {}", 
abfsOutputStreamStatistics.toString());
-  
Assertions.assertThat(abfsOutputStreamStatistics.getTimeSpentOnPutRequest())
+  IOStatistics ioStatistics = extractStatistics(fs);
+  LOG.info("AbfsOutputStreamStats info: {}",
+  ioStatisticsToPrettyString(ioStatistics));
+  Assertions.assertThat(

Review comment:
   I am not able to add ```.isGreaterThan()``` after 
assertThatStatisticMean(), I guess because this is an 
```ObjectAssert``` rather than ```DoubleAssert```?
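
The typing issue mehakmeet describes can be reproduced outside AssertJ: a generically typed assertion object only exposes the comparisons its static type declares, so numeric checks require narrowing first. A self-contained sketch (class names imitate AssertJ's but are standalone illustrations, not its real API):

```java
public class TypedAssertSketch {
    // Generic assertion: only equality-style checks are available here.
    static class ObjectAssert<T> {
        final T actual;
        ObjectAssert(T actual) { this.actual = actual; }
        ObjectAssert<T> isEqualTo(T expected) {
            if (!actual.equals(expected)) throw new AssertionError(actual + " != " + expected);
            return this;
        }
        // Narrowing step, analogous in spirit to AssertJ's asInstanceOf(DOUBLE).
        DoubleAssert asDouble() { return new DoubleAssert((Double) actual); }
    }

    // Numeric subclass: comparisons such as isGreaterThan live only here.
    static class DoubleAssert extends ObjectAssert<Double> {
        DoubleAssert(Double actual) { super(actual); }
        DoubleAssert isGreaterThan(double bound) {
            if (!(actual > bound)) throw new AssertionError(actual + " <= " + bound);
            return this;
        }
    }

    static <T> ObjectAssert<T> assertThat(T actual) { return new ObjectAssert<>(actual); }

    public static void main(String[] args) {
        Object mean = 12.5; // e.g. a statistic mean looked up as a plain Object
        // assertThat(mean).isGreaterThan(0.0); // would not compile: no such method on ObjectAssert
        assertThat(mean).asDouble().isGreaterThan(0.0); // narrow first, then compare
        System.out.println("ok");
    }
}
```

This is why a helper returning an untyped assertion cannot be chained with `.isGreaterThan(...)` directly; the value has to be narrowed to a numeric assertion type first.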






Issue Time Tracking
---

Worklog Id: (was: 581080)
Time Spent: 1.5h  (was: 1h 20m)

> ABFS to collect IOStatistics
> 
>
> Key: HADOOP-17471
> URL: https://issues.apache.org/jira/browse/HADOOP-17471
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add stats collection to ABFS FS operations, especially
> * create
> * open
> * delete
> * rename
> * getFileStatus
> * list
> * attribute get/set












[jira] [Work logged] (HADOOP-17471) ABFS to collect IOStatistics

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17471?focusedWorklogId=581073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581073
 ]

ASF GitHub Bot logged work on HADOOP-17471:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 15:09
Start Date: 12/Apr/21 15:09
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on a change in pull request #2731:
URL: https://github.com/apache/hadoop/pull/2731#discussion_r611715063



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java
##
@@ -241,10 +247,13 @@ public void 
testAbfsOutputStreamDurationTrackerPutRequest() throws IOException {
   outputStream.write('a');
   outputStream.hflush();
 
-  AbfsOutputStreamStatisticsImpl abfsOutputStreamStatistics =
-  getAbfsOutputStreamStatistics(outputStream);
-  LOG.info("AbfsOutputStreamStats info: {}", 
abfsOutputStreamStatistics.toString());
-  
Assertions.assertThat(abfsOutputStreamStatistics.getTimeSpentOnPutRequest())
+  IOStatistics ioStatistics = extractStatistics(fs);
+  LOG.info("AbfsOutputStreamStats info: {}",
+  ioStatisticsToPrettyString(ioStatistics));
+  Assertions.assertThat(

Review comment:
   I am not able to add ```.isGreaterThanOrEqualTo()``` after 
assertThatStatisticMean(), I guess because this is an 
```ObjectAssert``` rather than ```DoubleAssert```?






Issue Time Tracking
---

Worklog Id: (was: 581073)
Time Spent: 1h 20m  (was: 1h 10m)

> ABFS to collect IOStatistics
> 
>
> Key: HADOOP-17471
> URL: https://issues.apache.org/jira/browse/HADOOP-17471
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add stats collection to ABFS FS operations, especially
> * create
> * open
> * delete
> * rename
> * getFileStatus
> * list
> * attribute get/set






[jira] [Work logged] (HADOOP-17471) ABFS to collect IOStatistics

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17471?focusedWorklogId=581071&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581071
 ]

ASF GitHub Bot logged work on HADOOP-17471:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 15:08
Start Date: 12/Apr/21 15:08
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on a change in pull request #2731:
URL: https://github.com/apache/hadoop/pull/2731#discussion_r611715063



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java
##
@@ -241,10 +247,13 @@ public void 
testAbfsOutputStreamDurationTrackerPutRequest() throws IOException {
   outputStream.write('a');
   outputStream.hflush();
 
-  AbfsOutputStreamStatisticsImpl abfsOutputStreamStatistics =
-  getAbfsOutputStreamStatistics(outputStream);
-  LOG.info("AbfsOutputStreamStats info: {}", 
abfsOutputStreamStatistics.toString());
-  
Assertions.assertThat(abfsOutputStreamStatistics.getTimeSpentOnPutRequest())
+  IOStatistics ioStatistics = extractStatistics(fs);
+  LOG.info("AbfsOutputStreamStats info: {}",
+  ioStatisticsToPrettyString(ioStatistics));
+  Assertions.assertThat(

Review comment:
   I am not able to add ```.isGreaterThanOrEqualTo()``` after 
assertThatStatisticMean(), I guess because this is an 
ObjectAssert rather than DoubleAssert?






Issue Time Tracking
---

Worklog Id: (was: 581071)
Time Spent: 1h 10m  (was: 1h)

> ABFS to collect IOStatistics
> 
>
> Key: HADOOP-17471
> URL: https://issues.apache.org/jira/browse/HADOOP-17471
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add stats collection to ABFS FS operations, especially
> * create
> * open
> * delete
> * rename
> * getFileStatus
> * list
> * attribute get/set









[jira] [Work logged] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?focusedWorklogId=581059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581059
 ]

ASF GitHub Bot logged work on HADOOP-17601:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 14:50
Start Date: 12/Apr/21 14:50
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #2835:
URL: https://github.com/apache/hadoop/pull/2835#issuecomment-817876862


   Thanks @jojochuang !
   Can you please commit the change to branch-2.10?




Issue Time Tracking
---

Worklog Id: (was: 581059)
Time Spent: 0.5h  (was: 20m)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17601.branch-2.10.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity









[jira] [Work logged] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?focusedWorklogId=581058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581058
 ]

ASF GitHub Bot logged work on HADOOP-17603:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 14:49
Start Date: 12/Apr/21 14:49
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #2834:
URL: https://github.com/apache/hadoop/pull/2834#issuecomment-817876121


   Thanks @smengcl , Can you please commit the change to branch-2.10?




Issue Time Tracking
---

Worklog Id: (was: 581058)
Time Spent: 40m  (was: 0.5h)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.









[jira] [Work logged] (HADOOP-17471) ABFS to collect IOStatistics

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17471?focusedWorklogId=581051&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581051
 ]

ASF GitHub Bot logged work on HADOOP-17471:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 14:42
Start Date: 12/Apr/21 14:42
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2731:
URL: https://github.com/apache/hadoop/pull/2731#discussion_r599105729



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -433,32 +430,29 @@ private synchronized void 
writeCurrentBufferToService(boolean isFlush, boolean i
   }
 }
 final Future<Void> job =
-completionService.submit(IOStatisticsBinding
-.trackDurationOfCallable((IOStatisticsStore) ioStatistics,
-StreamStatisticNames.TIME_SPENT_ON_PUT_REQUEST,
-() -> {
-  AbfsPerfTracker tracker = client.getAbfsPerfTracker();
-  try (AbfsPerfInfo perfInfo = new AbfsPerfInfo(tracker,
-  "writeCurrentBufferToService", "append")) {
-AppendRequestParameters.Mode
-mode = APPEND_MODE;
-if (isFlush & isClose) {
-  mode = FLUSH_CLOSE_MODE;
-} else if (isFlush) {
-  mode = FLUSH_MODE;
-}
-AppendRequestParameters reqParams = new 
AppendRequestParameters(
-offset, 0, bytesLength, mode, false);
-AbfsRestOperation op = client.append(path, bytes, 
reqParams,
-cachedSasToken.get());
-cachedSasToken.update(op.getSasToken());
-perfInfo.registerResult(op.getResult());
-byteBufferPool.putBuffer(ByteBuffer.wrap(bytes));
-perfInfo.registerSuccess(true);
-return null;
-  }
-})
-);
+completionService.submit(() -> {

Review comment:
   It's not in use here, but 
org.apache.hadoop.util.SemaphoredDelegatingExecutor now takes a 
DurationTrackerFactory and measures the time between submission and execution, 
i.e. how much time we have to wait for space to actually launch the callback. 
   Not sure it would go in here right now, but it's why a standard 
DurationTrackerFactory API offers opportunities in the future
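
The submission-to-execution measurement Steve mentions can be sketched with plain java.util.concurrent, without Hadoop's SemaphoredDelegatingExecutor or DurationTracker APIs (the class and method names below are illustrative only): wrap the task at submit time and record the delta when the worker finally picks it up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class QueueWaitSketch {
    // Wraps a task so that the time it spends queued, waiting for a free
    // worker, is recorded into waitNanos when execution actually starts.
    static Runnable timed(Runnable task, AtomicLong waitNanos) {
        final long submitted = System.nanoTime();
        return () -> {
            waitNanos.set(System.nanoTime() - submitted);
            task.run();
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        AtomicLong waitNanos = new AtomicLong();

        // First task occupies the single worker for ~50 ms...
        Future<?> first = pool.submit(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        });
        // ...so the second task's queue wait is roughly that long.
        Future<?> second = pool.submit(timed(() -> { }, waitNanos));

        first.get();
        second.get();
        pool.shutdown();
        System.out.println("queue wait ms: "
            + TimeUnit.NANOSECONDS.toMillis(waitNanos.get()));
    }
}
```

A factory interface producing such wrappers is what makes the measurement reusable across streams and executors, which is the point of standardizing on a DurationTrackerFactory-style API.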

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsNetworkStatistics.java
##
@@ -64,4 +82,60 @@ public void testAbfsThrottlingStatistics() throws 
IOException {
 assertAbfsStatistics(AbfsStatistic.WRITE_THROTTLES, LARGE_OPERATIONS,
 metricMap);
   }
+
+  /**
+   * Test to check if the DurationTrackers are tracking as expected whilst
+   * doing some work.
+   */
+  @Test
+  public void testAbfsNetworkDurationTrackers() throws IOException {
+describe("Test to verify the actual values of DurationTrackers are "
++ "greater than 0.0 while tracking some work.");
+
+AbfsCounters abfsCounters = new AbfsCountersImpl(getFileSystem().getUri());
+// Start dummy work for the DurationTrackers and start tracking.
+try (DurationTracker ignoredPatch =
+abfsCounters.startRequest(AbfsHttpConstants.HTTP_METHOD_PATCH);
+DurationTracker ignoredPost =
+abfsCounters.startRequest(AbfsHttpConstants.HTTP_METHOD_POST)
+) {
+  // Emulates doing some work.
+  Thread.sleep(10);
+  LOG.info("Execute some Http requests...");
+} catch (InterruptedException e) {
+  throw new RuntimeException(
+  "Exception encountered while Thread tried to sleep", e);
+}
+
+// Extract the iostats from the abfsCounters instance.
+IOStatistics ioStatistics = extractStatistics(abfsCounters);
+// Asserting that the durationTrackers have mean > 0.0.
+for (AbfsStatistic abfsStatistic : HTTP_DURATION_TRACKER_LIST) {
+  Assertions.assertThat(lookupMeanStatistic(ioStatistics,

Review comment:
   assertThatStatisticMean

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsDurationTrackers.java
##
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in w

[GitHub] [hadoop] steveloughran commented on a change in pull request #2731: HADOOP-17471. ABFS to collect IOStatistics

2021-04-12 Thread GitBox


steveloughran commented on a change in pull request #2731:
URL: https://github.com/apache/hadoop/pull/2731#discussion_r599105729



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -433,32 +430,29 @@ private synchronized void 
writeCurrentBufferToService(boolean isFlush, boolean i
   }
 }
 final Future<Void> job =
-completionService.submit(IOStatisticsBinding
-.trackDurationOfCallable((IOStatisticsStore) ioStatistics,
-StreamStatisticNames.TIME_SPENT_ON_PUT_REQUEST,
-() -> {
-  AbfsPerfTracker tracker = client.getAbfsPerfTracker();
-  try (AbfsPerfInfo perfInfo = new AbfsPerfInfo(tracker,
-  "writeCurrentBufferToService", "append")) {
-AppendRequestParameters.Mode
-mode = APPEND_MODE;
-if (isFlush & isClose) {
-  mode = FLUSH_CLOSE_MODE;
-} else if (isFlush) {
-  mode = FLUSH_MODE;
-}
-AppendRequestParameters reqParams = new 
AppendRequestParameters(
-offset, 0, bytesLength, mode, false);
-AbfsRestOperation op = client.append(path, bytes, 
reqParams,
-cachedSasToken.get());
-cachedSasToken.update(op.getSasToken());
-perfInfo.registerResult(op.getResult());
-byteBufferPool.putBuffer(ByteBuffer.wrap(bytes));
-perfInfo.registerSuccess(true);
-return null;
-  }
-})
-);
+completionService.submit(() -> {

Review comment:
   It's not in use here, but 
org.apache.hadoop.util.SemaphoredDelegatingExecutor now takes a 
DurationTrackerFactory and measures the time between submission and execution, 
i.e. how much time we have to wait for space to actually launch the callback. 
   Not sure it would go in here right now, but it's why a standard 
DurationTrackerFactory API offers opportunities in the future

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsNetworkStatistics.java
##
@@ -64,4 +82,60 @@ public void testAbfsThrottlingStatistics() throws 
IOException {
 assertAbfsStatistics(AbfsStatistic.WRITE_THROTTLES, LARGE_OPERATIONS,
 metricMap);
   }
+
+  /**
+   * Test to check if the DurationTrackers are tracking as expected whilst
+   * doing some work.
+   */
+  @Test
+  public void testAbfsNetworkDurationTrackers() throws IOException {
+describe("Test to verify the actual values of DurationTrackers are "
++ "greater than 0.0 while tracking some work.");
+
+AbfsCounters abfsCounters = new AbfsCountersImpl(getFileSystem().getUri());
+// Start dummy work for the DurationTrackers and start tracking.
+try (DurationTracker ignoredPatch =
+abfsCounters.startRequest(AbfsHttpConstants.HTTP_METHOD_PATCH);
+DurationTracker ignoredPost =
+abfsCounters.startRequest(AbfsHttpConstants.HTTP_METHOD_POST)
+) {
+  // Emulates doing some work.
+  Thread.sleep(10);
+  LOG.info("Execute some Http requests...");
+} catch (InterruptedException e) {
+  throw new RuntimeException(
+  "Exception encountered while Thread tried to sleep", e);
+}
+
+// Extract the iostats from the abfsCounters instance.
+IOStatistics ioStatistics = extractStatistics(abfsCounters);
+// Asserting that the durationTrackers have mean > 0.0.
+for (AbfsStatistic abfsStatistic : HTTP_DURATION_TRACKER_LIST) {
+  Assertions.assertThat(lookupMeanStatistic(ioStatistics,

Review comment:
   assertThatStatisticMean

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsDurationTrackers.java
##
@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.slf4j.

[GitHub] [hadoop] steveloughran commented on a change in pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key

2021-04-12 Thread GitBox


steveloughran commented on a change in pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#discussion_r611627052



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
##
@@ -412,6 +466,7 @@ public AbfsRestOperation append(final String path, final 
byte[] buffer,
   AppendRequestParameters reqParams, final String cachedSasToken)
   throws AzureBlobFileSystemException {
 final List requestHeaders = createDefaultHeaders();
+addCustomerProvidedKeyHeaders(requestHeaders);
 // JDK7 does not support PATCH, so to workaround the issue we will use
 // PUT and specify the real method in the X-Http-Method-Override header.
 requestHeaders.add(new AbfsHttpHeader(X_HTTP_METHOD_OVERRIDE,

Review comment:
   while here, is this code needed any more?

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestCustomerProvidedKey.java
##
@@ -0,0 +1,937 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.CharBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.util.EnumSet;
+import java.util.Hashtable;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Random;
+
+import org.apache.hadoop.fs.azurebfs.services.*;
+import org.assertj.core.api.Assertions;
+import org.junit.Assume;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import 
org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters;
+import 
org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters.Mode;
+import org.apache.hadoop.fs.azurebfs.utils.Base64;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

Review comment:
   Now, these do need to go up into the "non org-apache section"; relates 
to how backporting usually needs to revert these back to the com.google package

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestCustomerProvidedKey.java
##
@@ -0,0 +1,937 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.CharBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardChars

[GitHub] [hadoop] tomscut opened a new pull request #2896: HDFS-15970. Print network topology on web

2021-04-12 Thread GitBox


tomscut opened a new pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896


   JIRA: [HDFS-15970](https://issues.apache.org/jira/browse/HDFS-15970)
   
   To make it convenient to query network topology information, we can 
print it on the web.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key

2021-04-12 Thread GitBox


hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-815726209


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 9 new + 9 unchanged - 0 
fixed = 18 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux dcd079ba7a41 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8da250b9c6236adb92905f24d52b4163690b36a1 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/testReport/ |
   | Max. process+thread count | 745 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key

2021-04-12 Thread GitBox


hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-816380859


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 48s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 3 new + 7 unchanged - 0 
fixed = 10 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux bf1f7325c791 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 176eb38e50ca229ef64a69639f67d14169797c27 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/testReport/ |
   | Max. process+thread count | 708 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
  

[GitHub] [hadoop] steveloughran commented on pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key

2021-04-12 Thread GitBox


steveloughran commented on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-817793837


   > Hi @steveloughran Could you please take a look
   
   I've been on a little vacation. I did have some review which was 
unsubmitted...now looks out of date. Will review again


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17536) ABFS: Support for customer provided encryption key

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=580969&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580969
 ]

ASF GitHub Bot logged work on HADOOP-17536:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 13:03
Start Date: 12/Apr/21 13:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-815726209


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 9 new + 9 unchanged - 0 
fixed = 18 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux dcd079ba7a41 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8da250b9c6236adb92905f24d52b4163690b36a1 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoo

[jira] [Work logged] (HADOOP-17536) ABFS: Support for customer provided encryption key

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=580967&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580967
 ]

ASF GitHub Bot logged work on HADOOP-17536:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 13:03
Start Date: 12/Apr/21 13:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-811141758


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 9 unchanged - 0 
fixed = 13 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8670aec7bb04 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2cd36d7aaee8354f70468e6ae830b4c294ced0fa |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hado

[jira] [Work logged] (HADOOP-17536) ABFS: Support for customer provided encryption key

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=580971&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580971
 ]

ASF GitHub Bot logged work on HADOOP-17536:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 13:04
Start Date: 12/Apr/21 13:04
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-817793837


   > Hi @steveloughran Could you please take a look
   
   I've been on a little vacation. I did have some review which was 
unsubmitted...now looks out of date. Will review again


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580971)
Time Spent: 10h  (was: 9h 50m)

> ABFS: Support for customer provided encryption key
> 
>
> Key: HADOOP-17536
> URL: https://issues.apache.org/jira/browse/HADOOP-17536
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> The data for a particular customer needs to be encrypted at the account level. 
> At the server side, the APIs will start accepting the encryption key as part 
> of the request headers. The data will be encrypted/decrypted with the given 
> key at the server. 
> Since the ABFS FileSystem APIs are implementations of the Hadoop FileSystem 
> APIs, there is no direct way for the customer to pass the key to the ABFS 
> driver. In this case the driver should have the following capabilities, so 
> that it can accept and pass the encryption key as one of the request headers: 
>  # There should be a way to configure the encryption key for different 
> accounts.
>  # If there is a key specified for a particular account, the same needs to be 
> sent along with the request headers. 
> *Config changes* 
> The key for an account can be specified in the core-site as follows: 
> fs.azure.account.client-provided-encryption-key.{account 
> name}.dfs.core.windows.net 
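
As a hedged illustration of the mechanism the description outlines, the sketch 
below shows how a driver could derive the per-account property name from that 
prefix, and the Base64-encoded key and key digest that typically accompany a 
customer-provided-key request. The class and method names here are invented for 
the example; they are not taken from this patch.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class CpkHeaderSketch {
    // Config prefix from the Jira description; the account host name is appended.
    static final String CPK_KEY_PREFIX =
        "fs.azure.account.client-provided-encryption-key.";

    // Builds the per-account property name,
    // e.g. for "myacct.dfs.core.windows.net".
    static String configKeyFor(String accountName) {
        return CPK_KEY_PREFIX + accountName;
    }

    // The key itself is sent Base64-encoded.
    static String base64Key(byte[] rawKey) {
        return Base64.getEncoder().encodeToString(rawKey);
    }

    // The server also verifies a Base64-encoded SHA-256 digest of the key.
    static String base64KeySha256(byte[] rawKey) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(digest.digest(rawKey));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        byte[] key = new byte[32]; // AES-256 keys are 32 bytes
        System.out.println(configKeyFor("myacct.dfs.core.windows.net"));
        System.out.println(base64Key(key));
        System.out.println(base64KeySha256(key));
    }
}
```

On the wire, the Azure Storage REST API carries the key, its SHA-256 digest, 
and the algorithm name in `x-ms-encryption-*` request headers; those header 
names come from the service contract, not from this driver change.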



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17536) ABFS: Support for customer provided encryption key

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=580970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580970
 ]

ASF GitHub Bot logged work on HADOOP-17536:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 13:04
Start Date: 12/Apr/21 13:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-816380859


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 48s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 3 new + 7 unchanged - 0 
fixed = 10 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux bf1f7325c791 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 176eb38e50ca229ef64a69639f67d14169797c27 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hado

[jira] [Work logged] (HADOOP-17536) ABFS: Support for customer provided encryption key

2021-04-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=580966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580966
 ]

ASF GitHub Bot logged work on HADOOP-17536:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 13:03
Start Date: 12/Apr/21 13:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-810466842


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 42s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 9 unchanged - 0 
fixed = 13 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m  1s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Questionable use of non-short-circuit logic in 
org.apache.hadoop.fs.azurebfs.services.AbfsIoUtils.dumpHeadersToDebugLog(String,
 List)  At AbfsIoUtils.java:in 
org.apache.hadoop.fs.azurebfs.services.AbfsIoUtils.dumpHeadersToDebugLog(String,
 List)  At AbfsIoUtils.java:[line 75] |
   |  |  httpOperation could be null and is guaranteed to be dereferenced in 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(int)
  Dereferenced at AbfsRestOperation.java:is guaranteed to be dereferenced in 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(int)
  Dereferenced at AbfsRestOperation.java:[line 291] |
   |  |  httpOperation could be null and is guaranteed to be dereferenced in 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(int)
  Dereferenced at AbfsRestOperation.java:is guaranteed to be dereferenced in 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(int)
  Dereferenced at AbfsRestOperation.java:[line 291] |
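
   The two SpotBugs warning classes reported above can be illustrated with a
minimal, hypothetical sketch. This is not the actual hadoop-azure source;
`SpotBugsPatterns` and its methods are invented names used only to show why
SpotBugs flags non-short-circuit `&` and a "guaranteed dereference" of a
possibly-null reference.

```java
import java.util.List;

public class SpotBugsPatterns {

    // "Questionable use of non-short-circuit logic": '&' evaluates BOTH
    // operands, so the null check on the left does not protect the method
    // call on the right.
    static boolean hasHeadersBuggy(List<String> headers) {
        return headers != null & !headers.isEmpty(); // NPE when headers == null
    }

    // Fix: '&&' short-circuits, skipping the right side when the left is false.
    static boolean hasHeadersFixed(List<String> headers) {
        return headers != null && !headers.isEmpty();
    }

    // "X could be null and is guaranteed to be dereferenced": a reference that
    // can remain null on some path is later used unconditionally.
    static int responseLengthBuggy(boolean fail) {
        String response = null;
        if (!fail) {
            response = "ok";
        }
        return response.length(); // NPE on the fail == true path
    }

    public static void main(String[] args) {
        System.out.println(hasHeadersFixed(null)); // prints "false", no exception
        try {
            hasHeadersBuggy(null);
        } catch (NullPointerException e) {
            System.out.println("non-short-circuit '&' dereferenced null");
        }
    }
}
```

   A fix for the second pattern is to either null-check before the dereference
or fail fast with a clear exception on the path that leaves the reference null.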

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key

2021-04-12 Thread GitBox


hadoop-yetus removed a comment on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-811141758


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 9 unchanged - 0 
fixed = 13 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8670aec7bb04 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2cd36d7aaee8354f70468e6ae830b4c294ced0fa |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/testReport/ |
   | Max. process+thread count | 566 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
  
