[GitHub] [hadoop] iwasakims commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755926955


   > * The implementation has lots of code redundancy.
   > * It is inefficient in the setup and tearing down. The large percentage of 
time execution is exhausted by starting cluster and stopping the services.
   
   I think there is a gap between the PR and the above description of 
[YARN-10553](https://issues.apache.org/jira/browse/YARN-10553). Just splitting 
TestDistributedShell neither reduces code nor shortens minicluster ramp-up time. 
It would be nice to update the JIRA description to reflect what is actually addressed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755926211


   Thanks for working on this, @amahussein. LGTM overall, pending some nits.
   
   While it is too late here, it is hard to follow which parts of 
TestDistributedShell were modified when splitting the class and refactoring it 
are mixed into a single commit. Doing one thing per PR makes reviewing and 
cherry-picking easier.






[GitHub] [hadoop] iwasakims commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553140533



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
##
@@ -1330,7 +1330,7 @@ private void 
setAMResourceCapability(ApplicationSubmissionContext appContext,
 }
 if (amVCores == -1) {
   amVCores = DEFAULT_AM_VCORES;
-  LOG.warn("AM vcore not specified, use " + DEFAULT_AM_VCORES
+  LOG.warn("AM vcore not specified, use {}" + DEFAULT_AM_VCORES

Review comment:
   Looks like a misused placeholder.
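   For context, SLF4J substitutes each `{}` placeholder only when the value is 
passed as a separate argument; concatenating the value onto the format string 
leaves a literal `{}` in the logged message. A minimal sketch of the behavior, 
using a simplified stand-in for SLF4J's formatter rather than the real 
implementation:

```java
// Minimal sketch: a simplified stand-in for SLF4J's message formatter.
// It replaces the first "{}" in the pattern with the supplied argument,
// which is what LOG.warn(pattern, arg) does under the hood.
public class PlaceholderDemo {

    // Replace the first "{}" placeholder with arg; return pattern unchanged
    // if no placeholder is present.
    static String format(String pattern, Object arg) {
        int i = pattern.indexOf("{}");
        if (i < 0) {
            return pattern;
        }
        return pattern.substring(0, i) + arg + pattern.substring(i + 2);
    }

    public static void main(String[] args) {
        int defaultAmVCores = 2; // hypothetical value for illustration

        // Broken: the value is concatenated onto the format string, so the
        // "{}" is filled with the (missing) argument instead of the value.
        System.out.println(
            format("AM vcore not specified, use {}" + defaultAmVCores, null));

        // Correct: the value is passed as a separate argument.
        System.out.println(
            format("AM vcore not specified, use {}", defaultAmVCores));
    }
}
```

   The intended call in the patch would presumably be 
`LOG.warn("AM vcore not specified, use {}", DEFAULT_AM_VCORES);`, i.e. a comma 
instead of the `+`.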








[GitHub] [hadoop] iwasakims commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553140343



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSTimelineV20.java
##
@@ -0,0 +1,487 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.applications.distributedshell;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.junit.Assert;
+import org.junit.Assume;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationReport;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerReport;
+import org.apache.hadoop.yarn.api.records.ExecutionType;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.DSEvent;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.metrics.AppAttemptMetricsConstants;
+import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import org.apache.hadoop.yarn.server.metrics.ContainerMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl;
+import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
+
+/**
+ * Unit tests implementations for distributed shell on TimeLineV2.
+ */
+public class TestDSTimelineV20 extends DistributedShellBaseTest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestDSTimelineV20.class);
+  private static final String TIMELINE_AUX_SERVICE_NAME = "timeline_collector";
+
+  @Override
+  protected float getTimelineVersion() {
+return 2.0f;
+  }
+
+  @Override
+  protected void customizeConfiguration(
+  YarnConfiguration config) throws Exception {
+// set version to 2
+config.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION,
+getTimelineVersion());
+// disable v1 timeline server since we no longer have a server here
+// enable aux-service based timeline aggregators
+config.set(YarnConfiguration.NM_AUX_SERVICES, TIMELINE_AUX_SERVICE_NAME);
+config.set(YarnConfiguration.NM_AUX_SERVICES + "." +
+TIMELINE_AUX_SERVICE_NAME + ".class",
+PerNodeTimelineCollectorsAuxService.class.getName());
+config.setClass(YarnConfiguration.TIMELINE_SERVICE_WRITER_CLASS,
+FileSystemTimelineWriterImpl.class,
+org.apache.hadoop.yarn.server.timelineservice.storage.
+TimelineWriter.class);
+setTimelineV2StorageDir();
+// set the file system timeline writer storage directory
+config.set(FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_DIR_ROOT,
+getTimelineV2StorageDir());
+  }
+
+  @Test
+  public void testDSShellWithEnforceExecutionType() throws Exception {
+YarnClient yarnClient = null;
+AtomicReference thrownError = new AtomicReference<>(null);
+AtomicReference> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2578: [HDFS-15754] Add DataNode packet metrics

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#issuecomment-755923450


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  20m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  23m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 21s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 18s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/4/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 124 unchanged - 0 fixed = 128 total (was 
124)  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m  5s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  12m  9s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 146m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 370m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2578 |
   | Optional Tests | dupname asflicense mvnsite markdownlint compile javac 
javadoc mvninstall unit shadedclient findbugs checkstyle |
   | uname | Linux 5374d5a6f8ad 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b612c310c26 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755917599


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 43s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 138 unchanged - 18 fixed = 138 total (was 156)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m 36s |  |  
hadoop-yarn-applications-distributedshell in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  97m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2581 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 84c1beb5da96 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b612c310c26 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/4/testReport/ |
   | Max. process+thread count | 718 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] sunchao commented on a change in pull request #2578: [HDFS-15754] Add DataNode packet metrics

2021-01-06 Thread GitBox


sunchao commented on a change in pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#discussion_r553128834



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
##
@@ -161,6 +163,53 @@ public void testReceivePacketMetrics() throws Exception {
 }
   }
 
+  @Test
+  public void testReceivePacketSlowMetrics() throws Exception {
+Configuration conf = new HdfsConfiguration();
+final int interval = 1;
+conf.setInt(DFSConfigKeys.DFS_METRICS_PERCENTILES_INTERVALS_KEY, interval);
+MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(3).build();
+try {
+  cluster.waitActive();
+  DistributedFileSystem fs = cluster.getFileSystem();
+  final DataNodeFaultInjector injector =
+  Mockito.mock(DataNodeFaultInjector.class);
+  Answer answer = new Answer() {
+@Override
+public Object answer(InvocationOnMock invocationOnMock)
+throws Throwable {
+  // make the op taking longer time
+  Thread.sleep(1000);
+  return null;
+}
+  };
+  Mockito.doAnswer(answer).when(injector).
+  stopSendingPacketDownstream(Mockito.anyString());
+  Mockito.doAnswer(answer).when(injector).delayWriteToOsCache();
+  Mockito.doAnswer(answer).when(injector).delayWriteToDisk();
+  DataNodeFaultInjector.set(injector);
+  Path testFile = new Path("/testFlushNanosMetric.txt");
+  FSDataOutputStream fout = fs.create(testFile);
+  fout.write(new byte[1]);
+  fout.hsync();
+  fout.close();
+  List datanodes = cluster.getDataNodes();
+  DataNode datanode = datanodes.get(0);
+  MetricsRecordBuilder dnMetrics = 
getMetrics(datanode.getMetrics().name());
+  assertTrue("More than 1 packet received",
+  getLongCounter("TotalPacketsReceived", dnMetrics) > 1L);
+  assertTrue("More than 1 slow packet to mirror",
+  getLongCounter("TotalPacketsSlowWriteToMirror", dnMetrics) > 1L);
+  assertCounter("TotalPacketsSlowWriteToDisk", 1L, dnMetrics);
+  assertCounter("TotalPacketsSlowWriteOsCache", 0L, dnMetrics);

Review comment:
   I think this also needs an update.








[jira] [Work logged] (HADOOP-17433) Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17433?focusedWorklogId=532280=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532280
 ]

ASF GitHub Bot logged work on HADOOP-17433:
---

Author: ASF GitHub Bot
Created on: 07/Jan/21 05:55
Start Date: 07/Jan/21 05:55
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600#issuecomment-755902810


   LGTM +1





Issue Time Tracking
---

Worklog Id: (was: 532280)
Time Spent: 40m  (was: 0.5h)

> Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole
> ---
>
> Key: HADOOP-17433
> URL: https://issues.apache.org/jira/browse/HADOOP-17433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Test failure in ITestAssumeRole.testAssumeRoleRestrictedPolicyFS if the test 
> bucket is unguarded. I've been playing with my bucket settings so this 
> probably didn't surface before. 
> test arguments -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
> -Dfs.s3a.directory.marker.audit=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] mukund-thakur commented on pull request #2600: HADOOP-17433. Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole.

2021-01-06 Thread GitBox


mukund-thakur commented on pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600#issuecomment-755902810


   LGTM +1






[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553108878



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
##
@@ -815,7 +815,6 @@ protected synchronized void serviceInit(Configuration conf)
 
 @Override
 protected synchronized void serviceStart() throws Exception {
-

Review comment:
   Done!








[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553108800



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSShellTimelineV10.java
##
@@ -0,0 +1,845 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.applications.distributedshell;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.io.UncheckedIOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.cli.MissingArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
+import org.apache.hadoop.yarn.api.records.ContainerState;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.LogAggregationContext;
+import org.apache.hadoop.yarn.client.api.impl.DirectTimelineWriter;
+import org.apache.hadoop.yarn.client.api.impl.TestTimelineClient;
+import org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl;
+import org.apache.hadoop.yarn.client.api.impl.TimelineWriter;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
+import org.apache.hadoop.yarn.server.utils.BuilderUtils;
+import org.apache.hadoop.yarn.util.Records;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+/**
+ * Unit tests implementations for distributed shell on TimeLineV1.
+ */
+public class TestDSShellTimelineV10 extends DistributedShellBaseTest {

Review comment:
   > TestDSTimelineV10 rather than TestDSShellTimelineV10 sounds natural? 
Same for V15 and V20.
   
   Done!








[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553108729



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSShellTimelineV10.java
##
@@ -0,0 +1,845 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.applications.distributedshell;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.io.UncheckedIOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.cli.MissingArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
+import org.apache.hadoop.yarn.api.records.ContainerState;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.LogAggregationContext;
+import org.apache.hadoop.yarn.client.api.impl.DirectTimelineWriter;
+import org.apache.hadoop.yarn.client.api.impl.TestTimelineClient;
+import org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl;
+import org.apache.hadoop.yarn.client.api.impl.TimelineWriter;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
+import org.apache.hadoop.yarn.server.utils.BuilderUtils;
+import org.apache.hadoop.yarn.util.Records;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+/**
+ * Unit tests implementations for distributed shell on TimeLineV1.
+ */
+public class TestDSShellTimelineV10 extends DistributedShellBaseTest {

Review comment:
   > Is the fix of o.a.h.tools.dynamometer.Client related to
TestDistributedShell? It should be addressed in another JIRA if not.
   
   Thanks @iwasakims
   I removed them.








[GitHub] [hadoop] hadoop-yetus commented on pull request #2603: Merge pull request #1 from apache/trunk

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2603:
URL: https://github.com/apache/hadoop/pull/2603#issuecomment-755870419


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  29m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  47m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2603/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2603 |
   | Optional Tests | dupname asflicense |
   | uname | Linux 6d8f61271607 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b612c310c26 |
   | Max. process+thread count | 617 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2603/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Work logged] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17408?focusedWorklogId=532251&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532251
 ]

ASF GitHub Bot logged work on HADOOP-17408:
---

Author: ASF GitHub Bot
Created on: 07/Jan/21 03:43
Start Date: 07/Jan/21 03:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2601:
URL: https://github.com/apache/hadoop/pull/2601#issuecomment-755863757


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 37s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m  2s |  |  
root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 2041 unchanged - 1 
fixed = 2041 total (was 2042)  |
   | +1 :green_heart: |  compile  |  17m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 48s |  |  
root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 1935 unchanged - 
1 fixed = 1935 total (was 1936)  |
   | -0 :warning: |  checkstyle  |   2m 33s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2601/1/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 48 unchanged - 3 fixed = 49 total (was 
51)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  0s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  |  99m 59s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2601/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 302m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2601: HADOOP-17408. Optimize NetworkTopology sorting block locations.

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2601:
URL: https://github.com/apache/hadoop/pull/2601#issuecomment-755863757


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  18m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 37s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m  2s |  |  
root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 2041 unchanged - 1 
fixed = 2041 total (was 2042)  |
   | +1 :green_heart: |  compile  |  17m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 48s |  |  
root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 1935 unchanged - 
1 fixed = 1935 total (was 1936)  |
   | -0 :warning: |  checkstyle  |   2m 33s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2601/1/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 48 unchanged - 3 fixed = 49 total (was 
51)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  0s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  |  99m 59s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2601/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 302m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2601/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2601 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0287b694bcb4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 

[GitHub] [hadoop] zml4518079 opened a new pull request #2603: Merge pull request #1 from apache/trunk

2021-01-06 Thread GitBox


zml4518079 opened a new pull request #2603:
URL: https://github.com/apache/hadoop/pull/2603


   synchronizing code
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=532245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532245
 ]

ASF GitHub Bot logged work on HADOOP-17452:
---

Author: ASF GitHub Bot
Created on: 07/Jan/21 02:58
Start Date: 07/Jan/21 02:58
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-755850969


   > Let me check if there are any test failures due to this change.
   
   Created #2602





Issue Time Tracking
---

Worklog Id: (was: 532245)
Time Spent: 1.5h  (was: 1h 20m)

> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Upgrade guice to 4.2.3 to fix compatibility issue:
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
> » at 
> com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
> » at 
> com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
> » at 
> com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
> » at 
> com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
> » at 
> com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
> » at 
> org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
> » at 
> org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
> » at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
> » at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at 
> com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
> » at 
> com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
> » at com.google.inject.Guice.createInjector(Guice.java:96)
> » at com.google.inject.Guice.createInjector(Guice.java:73)
> » at com.google.inject.Guice.createInjector(Guice.java:62)
> » at 
> org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
> » at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
> » at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
> » at org.apache.druid.cli.Main.main(Main.java:113)
> {noformat}
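
For context, the upgrade being discussed is normally a one-line version bump. A hypothetical sketch — assuming the Guice version is managed as a Maven property in hadoop-project/pom.xml, as is the usual Hadoop convention; the property name here is an assumption:

```xml
<!-- hadoop-project/pom.xml (illustrative location; property name assumed) -->
<properties>
  <!-- Guice 4.2.x provides the Types.collectionOf(Type) signature that the
       Multibinder internals in the stack trace above resolve at runtime -->
  <guice.version>4.2.3</guice.version>
</properties>
```

Pinning the version in the parent's dependencyManagement/properties keeps every Hadoop module on the same Guice build, which is what avoids the mixed-version NoSuchMethodError shown above.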



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] aajisaka commented on pull request #2582: HADOOP-17452. Upgrade Guice to 4.2.3

2021-01-06 Thread GitBox


aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-755850969


   > Let me check if there are any test failures due to this change.
   
   Created #2602






[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=532244&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532244
 ]

ASF GitHub Bot logged work on HADOOP-17452:
---

Author: ASF GitHub Bot
Created on: 07/Jan/21 02:57
Start Date: 07/Jan/21 02:57
Worklog Time Spent: 10m 
  Work Description: aajisaka opened a new pull request #2602:
URL: https://github.com/apache/hadoop/pull/2602


   Added a commit to run all the YARN unit tests for #2582 





Issue Time Tracking
---

Worklog Id: (was: 532244)
Time Spent: 1h 20m  (was: 1h 10m)

> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Upgrade guice to 4.2.3 to fix compatibility issue:
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
> » at 
> com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
> » at 
> com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
> » at 
> com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
> » at 
> com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
> » at 
> com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
> » at 
> org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
> » at 
> org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
> » at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
> » at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
> » at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
> » at com.google.inject.spi.Elements.getElements(Elements.java:110)
> » at 
> com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
> » at 
> com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
> » at com.google.inject.Guice.createInjector(Guice.java:96)
> » at com.google.inject.Guice.createInjector(Guice.java:73)
> » at com.google.inject.Guice.createInjector(Guice.java:62)
> » at 
> org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
> » at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
> » at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
> » at org.apache.druid.cli.Main.main(Main.java:113)
> {noformat}






[GitHub] [hadoop] aajisaka opened a new pull request #2602: Test PR for HADOOP-17452

2021-01-06 Thread GitBox


aajisaka opened a new pull request #2602:
URL: https://github.com/apache/hadoop/pull/2602


   Added a commit to run all the YARN unit tests for #2582 






[GitHub] [hadoop] iwasakims commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553079537



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSShellTimelineV10.java
##
@@ -0,0 +1,845 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.applications.distributedshell;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.io.UncheckedIOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.cli.MissingArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
+import org.apache.hadoop.yarn.api.records.ContainerState;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.LogAggregationContext;
+import org.apache.hadoop.yarn.client.api.impl.DirectTimelineWriter;
+import org.apache.hadoop.yarn.client.api.impl.TestTimelineClient;
+import org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl;
+import org.apache.hadoop.yarn.client.api.impl.TimelineWriter;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
+import org.apache.hadoop.yarn.server.utils.BuilderUtils;
+import org.apache.hadoop.yarn.util.Records;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+/**
+ * Unit tests implementations for distributed shell on TimeLineV1.
+ */
+public class TestDSShellTimelineV10 extends DistributedShellBaseTest {

Review comment:
   TestDSTimelineV10 rather than TestDSShellTimelineV10 sounds more natural?
Same for V15 and V20.
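
The split under review pushes shared minicluster setup into a common parent (DistributedShellBaseTest) so that each timeline-version class only supplies its version. A minimal, hypothetical sketch of that base-class pattern — class and method names here are illustrative, not the actual Hadoop test classes:

```java
// Illustrative sketch only: the real tests use JUnit @Before/@After and
// start an actual MiniYARNCluster in the base class.
abstract class DistributedShellTestBase {
    private String clusterName;

    // Each subclass declares which timeline service version it exercises.
    protected abstract float getTimelineVersion();

    // Shared setup lives once in the base class instead of being copied
    // into every version-specific test class.
    void setUp() {
        clusterName = "minicluster-v" + getTimelineVersion();
    }

    void tearDown() {
        clusterName = null;
    }

    String clusterName() {
        return clusterName;
    }
}

class TimelineV10Test extends DistributedShellTestBase {
    @Override
    protected float getTimelineVersion() {
        return 1.0f;
    }
}

public class Demo {
    public static void main(String[] args) {
        TimelineV10Test test = new TimelineV10Test();
        test.setUp();
        System.out.println(test.clusterName()); // prints "minicluster-v1.0"
        test.tearDown();
    }
}
```

Keeping setup/teardown in the parent is what would let the split address the code-redundancy concern rather than just multiplying files.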








[GitHub] [hadoop] iwasakims commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


iwasakims commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755846099


   Is the fix of o.a.h.tools.dynamometer.Client related to TestDistributedShell?
It should be addressed in another JIRA if not.






[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=532228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532228
 ]

ASF GitHub Bot logged work on HADOOP-17452:
---

Author: ASF GitHub Bot
Created on: 07/Jan/21 01:41
Start Date: 07/Jan/21 01:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-755829152


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  59m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
[GitHub] [hadoop] hadoop-yetus commented on pull request #2582: HADOOP-17452. Upgrade Guice to 4.2.3

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-755829152


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  59m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 16s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2582/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2582 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 8f62d6fba0bf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b612c310c26 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2582/3/testReport/ |
   | Max. process+thread count | 541 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2582/3/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755824076


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch 
appears to include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 54s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 44s |  |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml)  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  19m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 23s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 15s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/3/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 164 unchanged - 18 fixed = 168 total (was 
182)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 41s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has 
no data from findbugs  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 26s |  |  hadoop-yarn-server-tests in the 
patch passed.  |
   | +1 :green_heart: |  unit  |  23m 44s |  |  
hadoop-yarn-applications-distributedshell in the patch passed.  |
   | -1 :x: |  unit  |   0m 43s | 
[/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/3/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt)
 |  hadoop-dynamometer-infra in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 200m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.dynamometer.TestDynamometerInfra |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2581 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 72dec302ac86 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b612c310c26 |
   | Default Java | Private 

[GitHub] [hadoop] amahussein commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755797255


   > > @goiri the code changes fix {{testDSShellWithEnforceExecutionType}}.
   > > The problem with the test was that it launches two containers executing 
cmd `date`. Apparently the two containers would exit fast. The unit test will 
stay blocked waiting for the containers to be exactly "2".
   > > This does not take into consideration that the containers count is 3 
including the AMContainer.
   > > The fix was to get rid of the equality in the check, and change the 
application command to `ls`
   > 
   > That makes sense, does it make sense to make the assert more general than 
an equality instead of getting rid of the equality?
   
   In `DistributedShellBaseTest.waitForContainersLaunch()`, I used the inequality 
`if (containers.size() < nContainers) { return false; }`.
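   A minimal self-contained sketch of that check (the class and method names 
here are illustrative, not the actual test code): the wait treats the target 
as a lower bound rather than an exact count, since the reported container list 
also includes the AM container.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: wait until AT LEAST nContainers have launched. An equality check
// against the number of requested task containers can block forever, because
// the AM container is included in the reported list.
public class ContainerWaitSketch {
    // Returns true once the launched-container count reaches the target.
    static boolean enoughContainersLaunched(List<String> containers, int nContainers) {
        // Inequality: keep waiting only while we have fewer than the target.
        if (containers.size() < nContainers) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> containers = new ArrayList<>();
        containers.add("AMContainer");
        containers.add("task-1");
        containers.add("task-2");
        // 3 launched >= 2 requested task containers: the wait completes
        // even though the count is not exactly 2.
        System.out.println(enoughContainersLaunched(containers, 2)); // true
    }
}
```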



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2578: [HDFS-15754] Add DataNode packet metrics

2021-01-06 Thread GitBox


sunchao commented on a change in pull request #2578:
URL: https://github.com/apache/hadoop/pull/2578#discussion_r55296



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
##
@@ -690,4 +695,20 @@ public void addCheckAndUpdateOp(long latency) {
   public void addUpdateReplicaUnderRecoveryOp(long latency) {
 updateReplicaUnderRecoveryOp.add(latency);
   }
+
+  public void incrPacketsReceived() {
+packetsReceived.incr();
+  }
+
+  public void incrPacketsSlowWriteToMirror() {
+packetsSlowWriteToMirror.incr();
+  }
+
+  public void incrPacketsSlowWriteToDisk() {
+packetsSlowWriteToDisk.incr();
+  }
+
+  public void incrPacketsSlowWriteOsCache() {

Review comment:
   nit: name this to `incrPacketsSlowWriteToOsCache`?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
##
@@ -183,6 +183,11 @@
   @Metric private MutableRate checkAndUpdateOp;
   @Metric private MutableRate updateReplicaUnderRecoveryOp;
 
+  @Metric MutableCounterLong packetsReceived;
+  @Metric MutableCounterLong packetsSlowWriteToMirror;
+  @Metric MutableCounterLong packetsSlowWriteToDisk;
+  @Metric MutableCounterLong packetsSlowWriteOsCache;

Review comment:
   ditto





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


goiri commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553013460



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
##
@@ -815,7 +815,6 @@ protected synchronized void serviceInit(Configuration conf)
 
 @Override
 protected synchronized void serviceStart() throws Exception {
-

Review comment:
   Avoid this change? I would prefer to reduce churn in commits.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


goiri commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755773457


   > @goiri the code changes fix {{testDSShellWithEnforceExecutionType}}.
   > The problem with the test was that it launches two containers executing 
cmd `date`. Apparently the two containers would exit fast. The unit test will 
stay blocked waiting for the containers to be exactly "2".
   > This does not take into consideration that the containers count is 3 
including the AMContainer.
   > 
   > The fix was to get rid of the equality in the check, and change the 
application command to `ls`
   
   That makes sense. Does it make sense to make the assert more general than an 
equality, rather than removing the equality altogether?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755767916


   CC: @iwasakims. This includes a fix to `testDSShellWithEnforceExecutionType`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein edited a comment on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein edited a comment on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755765315


   @goiri the code changes fix {{testDSShellWithEnforceExecutionType}}.
   The problem with the test was that it launches two containers executing the 
command `date`. The two containers exit quickly, so the unit test stays 
blocked waiting for the container count to be exactly "2".
   This does not take into account that the container count is 3, including 
the AM container.
   
   The fix was to drop the equality from the check and to change the 
application command to `ls`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755765315


   @goiri the code changes fix {{testDSShellWithOpportunisticContainers}}.
   The problem with the test was that it launches two containers executing the 
command `date`. The two containers exit quickly, so the unit test stays 
blocked waiting for the container count to be exactly "2".
   This does not take into account that the container count is 3, including 
the AM container.
   
   The fix was to drop the equality from the check and to change the 
application command to `ls`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


amahussein commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553003268



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
##
@@ -1414,21 +1414,20 @@ protected void sendStopSignal() {
 }
 int waitCount = 0;
 LOG.info("Waiting for Client to exit loop");
-while (!isRunning.get()) {
+while (isRunning.get()) {
   try {
 Thread.sleep(50);
   } catch (InterruptedException ie) {
 // do nothing
   } finally {
-waitCount++;
-if (isRunning.get() || waitCount > 2000) {
+if (++waitCount > 2000) {
   break;
 }
   }
 }
-LOG.info("Stopping yarnClient within the Client");
+LOG.info("Stopping yarnClient within the DS Client");
 yarnClient.stop();
-yarnClient.waitForServiceToStop(clientTimeout);
+//yarnClient.waitForServiceToStop(clientTimeout);

Review comment:
   Oh! I forgot to delete that line.
   Waiting for the service to stop is not necessary.
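   The corrected loop in the diff above can be sketched as a self-contained 
bounded wait (the class name, method name, and parameters here are 
illustrative, not the actual Client.java code): loop while the client is 
still running — the old code inverted the condition — and bail out after a 
fixed number of iterations rather than hanging.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a bounded wait: spin with a short sleep while a flag is set,
// but give up after maxIterations so a stuck flag cannot hang the caller.
public class BoundedWaitSketch {
    // Returns the number of iterations spent waiting.
    static int waitForStop(AtomicBoolean isRunning, int maxIterations, long sleepMillis) {
        int waitCount = 0;
        while (isRunning.get()) {
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException ie) {
                // do nothing, matching the original code
            } finally {
                // Pre-increment, as in the diff: bail out once the bound
                // is exceeded even if the flag never flips.
                if (++waitCount > maxIterations) {
                    break;
                }
            }
        }
        return waitCount;
    }

    public static void main(String[] args) {
        AtomicBoolean running = new AtomicBoolean(true);
        // With a tiny bound and the flag stuck at true, the loop exits
        // after maxIterations + 1 increments instead of hanging.
        int n = waitForStop(running, 5, 1);
        System.out.println(n); // 6
    }
}
```

   At 50ms per sleep and a bound of 2000, the original loop gives up after 
roughly 100 seconds.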





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-06 Thread GitBox


goiri commented on a change in pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#discussion_r553001971



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
##
@@ -1414,21 +1414,20 @@ protected void sendStopSignal() {
 }
 int waitCount = 0;
 LOG.info("Waiting for Client to exit loop");
-while (!isRunning.get()) {
+while (isRunning.get()) {
   try {
 Thread.sleep(50);
   } catch (InterruptedException ie) {
 // do nothing
   } finally {
-waitCount++;
-if (isRunning.get() || waitCount > 2000) {
+if (++waitCount > 2000) {
   break;
 }
   }
 }
-LOG.info("Stopping yarnClient within the Client");
+LOG.info("Stopping yarnClient within the DS Client");
 yarnClient.stop();
-yarnClient.waitForServiceToStop(clientTimeout);
+//yarnClient.waitForServiceToStop(clientTimeout);

Review comment:
   Testing?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-06 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17260090#comment-17260090
 ] 

Ahmed Hussein commented on HADOOP-17408:


Thanks [~Jim_Brennan] for the feedback.
 I created a new [PR-2601|https://github.com/apache/hadoop/pull/2601]

> Optimize NetworkTopology while sorting of block locations
> -
>
> Key: HADOOP-17408
> URL: https://issues.apache.org/jira/browse/HADOOP-17408
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, net
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In {{NetworkTopology}}, I noticed that there is some low-hanging fruit for 
> improving performance.
> Inside {{sortByDistance}}, {{Collections.shuffle}} is performed on the list 
> before calling {{secondarySort}}.
> {code:java}
> Collections.shuffle(list, r);
> if (secondarySort != null) {
>   secondarySort.accept(list);
> }
> {code}
> However, at several call sites, {{Collections.shuffle}} is passed as the 
> secondarySort to {{sortByDistance}}. This means the shuffle is executed 
> twice on each list.
> Also, logic-wise, it is useless to shuffle before applying a tie breaker, 
> since the tie breaker makes the shuffle work obsolete.
> In addition, [~daryn] reported that:
> * the topology unnecessarily locks/unlocks to calculate the distance for 
> every node
> * shuffling uses a seeded {{Random}}, which is heavily synchronized, instead 
> of {{ThreadLocalRandom}}
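The redundancy described above can be sketched as follows (the class and 
method here are illustrative stand-ins, not the real NetworkTopology API): 
shuffle only when no secondarySort is supplied, so the list is never 
randomized twice and a deterministic tie breaker is never preceded by wasted 
shuffle work.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Consumer;

// Sketch: avoid the double shuffle. The original code shuffled the list and
// then applied secondarySort; when callers pass shuffle itself as the
// secondarySort, the list is shuffled twice.
public class ShuffleOnceSketch {
    static <T> void sortByDistance(List<T> list, Consumer<List<T>> secondarySort) {
        if (secondarySort != null) {
            // The tie breaker fully decides the final order; a prior
            // shuffle would be wasted work.
            secondarySort.accept(list);
        } else {
            // ThreadLocalRandom avoids the contention of a shared seeded
            // Random, per the report above.
            Collections.shuffle(list, ThreadLocalRandom.current());
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        // With a deterministic secondarySort, no shuffle runs at all.
        sortByDistance(list, Collections::sort);
        System.out.println(list); // [1, 2, 3, 4, 5]
    }
}
```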



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2601: HADOOP-17408. Optimize NetworkTopology sorting block locations.

2021-01-06 Thread GitBox


amahussein opened a new pull request #2601:
URL: https://github.com/apache/hadoop/pull/2601


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17408?focusedWorklogId=532174=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532174
 ]

ASF GitHub Bot logged work on HADOOP-17408:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 22:39
Start Date: 06/Jan/21 22:39
Worklog Time Spent: 10m 
  Work Description: amahussein opened a new pull request #2601:
URL: https://github.com/apache/hadoop/pull/2601


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 532174)
Time Spent: 1h  (was: 50m)

> Optimize NetworkTopology while sorting of block locations
> -
>
> Key: HADOOP-17408
> URL: https://issues.apache.org/jira/browse/HADOOP-17408
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, net
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In {{NetworkTopology}}, I noticed that there is some low-hanging fruit for 
> improving performance.
> Inside {{sortByDistance}}, {{Collections.shuffle}} is performed on the list 
> before calling {{secondarySort}}.
> {code:java}
> Collections.shuffle(list, r);
> if (secondarySort != null) {
>   secondarySort.accept(list);
> }
> {code}
> However, at several call sites, {{Collections.shuffle}} is passed as the 
> secondarySort to {{sortByDistance}}. This means the shuffle is executed 
> twice on each list.
> Also, logic-wise, it is useless to shuffle before applying a tie breaker, 
> since the tie breaker makes the shuffle work obsolete.
> In addition, [~daryn] reported that:
> * the topology unnecessarily locks/unlocks to calculate the distance for 
> every node
> * shuffling uses a seeded {{Random}}, which is heavily synchronized, instead 
> of {{ThreadLocalRandom}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17408?focusedWorklogId=532171=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532171
 ]

ASF GitHub Bot logged work on HADOOP-17408:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 22:34
Start Date: 06/Jan/21 22:34
Worklog Time Spent: 10m 
  Work Description: amahussein closed pull request #2514:
URL: https://github.com/apache/hadoop/pull/2514


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 532171)
Time Spent: 50m  (was: 40m)

> Optimize NetworkTopology while sorting of block locations
> -
>
> Key: HADOOP-17408
> URL: https://issues.apache.org/jira/browse/HADOOP-17408
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, net
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In {{NetworkTopology}}, I noticed that there is some low-hanging fruit for 
> improving performance.
> Inside {{sortByDistance}}, {{Collections.shuffle}} is performed on the list 
> before calling {{secondarySort}}.
> {code:java}
> Collections.shuffle(list, r);
> if (secondarySort != null) {
>   secondarySort.accept(list);
> }
> {code}
> However, at several call sites, {{Collections.shuffle}} is passed as the 
> secondarySort to {{sortByDistance}}. This means the shuffle is executed 
> twice on each list.
> Also, logic-wise, it is useless to shuffle before applying a tie breaker, 
> since the tie breaker makes the shuffle work obsolete.
> In addition, [~daryn] reported that:
> * the topology unnecessarily locks/unlocks to calculate the distance for 
> every node
> * shuffling uses a seeded {{Random}}, which is heavily synchronized, instead 
> of {{ThreadLocalRandom}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein closed pull request #2514: HADOOP-17408. Optimize NetworkTopology while sorting of block locations.

2021-01-06 Thread GitBox


amahussein closed pull request #2514:
URL: https://github.com/apache/hadoop/pull/2514


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17408?focusedWorklogId=532169=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532169
 ]

ASF GitHub Bot logged work on HADOOP-17408:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 22:30
Start Date: 06/Jan/21 22:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2514:
URL: https://github.com/apache/hadoop/pull/2514#issuecomment-755756961


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  9s |  |  
https://github.com/apache/hadoop/pull/2514 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2514 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2514/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Issue Time Tracking
---

Worklog Id: (was: 532169)
Time Spent: 40m  (was: 0.5h)

> Optimize NetworkTopology while sorting of block locations
> -
>
> Key: HADOOP-17408
> URL: https://issues.apache.org/jira/browse/HADOOP-17408
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, net
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In {{NetworkTopology}}, I noticed some low-hanging fruit for improving 
> performance.
> Inside {{sortByDistance}}, {{Collections.shuffle}} is performed on the list 
> before calling {{secondarySort}}.
> {code:java}
> Collections.shuffle(list, r);
> if (secondarySort != null) {
>   secondarySort.accept(list);
> }
> {code}
> However, at several call sites, {{Collections.shuffle}} is itself passed as 
> the {{secondarySort}} to {{sortByDistance}}, so the shuffle is executed twice 
> on each list.
> Also, logically, it is pointless to shuffle before applying a tie-breaker, 
> since the tie-breaker can undo the shuffle's work.
> In addition, [~daryn] reported that:
> * topology is unnecessarily locking/unlocking to calculate the distance for 
> every node
> * shuffling uses a seeded Random, instead of ThreadLocalRandom, which is 
> heavily synchronized
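For illustration, a minimal self-contained sketch of the double-shuffle pattern described above. The class and method names are hypothetical, not the actual NetworkTopology code, and it uses ThreadLocalRandom in place of the shared seeded Random that the report flags as heavily synchronized:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Consumer;

// Hypothetical sketch of the pattern described in the JIRA: sortByDistance
// already shuffles the list, so a call site that passes a shuffle as the
// secondarySort tie-breaker ends up shuffling the same list twice.
public class ShuffleSketch {

  static <T> void sortByDistance(List<T> list, Consumer<List<T>> secondarySort) {
    // First shuffle, performed unconditionally inside sortByDistance.
    Collections.shuffle(list, ThreadLocalRandom.current());
    if (secondarySort != null) {
      // If the caller handed in Collections.shuffle here, this repeats the work.
      secondarySort.accept(list);
    }
  }

  public static List<Integer> demo() {
    List<Integer> nodes = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
    // Redundant call site: the same list is effectively shuffled twice.
    sortByDistance(nodes, l -> Collections.shuffle(l, ThreadLocalRandom.current()));
    return nodes;
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```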








[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=532087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532087
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 19:32
Start Date: 06/Jan/21 19:32
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-755566048


   Will take a look this week. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 532087)
Time Spent: 4h  (was: 3h 50m)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The Spark statistics tracking doesn't correctly assess the size of the 
> uploaded files, as it only calls getFileStatus on the zero-byte objects, not 
> on the yet-to-manifest files. Which, given they don't exist yet, isn't easy 
> to do.
> Solution: 
> * Add getXAttr and listXAttr API calls to S3AFileSystem.
> * Return all S3 object headers as XAttr attributes prefixed "header.", both 
> custom and standard (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't
> declare its support for the API in hasPathCapability().
> The magic commit file write sets the custom header 
> x-hadoop-s3a-magic-data-length on the marker file to the length of the 
> final data.
> A matching patch in Spark will look for the XAttr
> "header.x-hadoop-s3a-magic-data-length" when the file
> being probed for output data is zero bytes long. 
> As a result, the job tracking statistics will report the
> bytes that were written but are yet to be manifested.
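A minimal sketch of the client-side probe described above. The XAttr name comes from the JIRA text; the Map stands in for the real FileSystem getXAttr call, and the helper name is hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: when getFileStatus reports zero bytes, fall back to
// the "header.x-hadoop-s3a-magic-data-length" XAttr named in the JIRA.
// The Map stands in for the FileSystem XAttr lookup.
public class MagicLengthProbe {

  static long effectiveLength(long statusLen, Map<String, byte[]> xattrs) {
    if (statusLen > 0) {
      return statusLen;  // normal file: trust getFileStatus
    }
    byte[] v = xattrs.get("header.x-hadoop-s3a-magic-data-length");
    return v == null ? 0L : Long.parseLong(new String(v, StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    Map<String, byte[]> attrs = new HashMap<>();
    attrs.put("header.x-hadoop-s3a-magic-data-length",
        "4096".getBytes(StandardCharsets.UTF_8));
    System.out.println(effectiveLength(0L, attrs));
  }
}
```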



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[GitHub] [hadoop] hadoop-yetus commented on pull request #2598: HDFS-15762. TestMultipleNNPortQOP#testMultipleNNPortOverwriteDownStre…

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2598:
URL: https://github.com/apache/hadoop/pull/2598#issuecomment-755522663


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   3m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 30s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  9s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 50s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 29s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  98m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 216m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2598 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 478754284c37 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d21c1c65761 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/2/testReport/ |
   | Max. process+thread count | 4640 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 

[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=532058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532058
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 18:43
Start Date: 06/Jan/21 18:43
Worklog Time Spent: 10m 
  Work Description: DadanielZ merged pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509


   





Issue Time Tracking
---

Worklog Id: (was: 532058)
Time Spent: 3h 50m  (was: 3h 40m)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When the Hflush or Hsync APIs are called, a call is made to the store 
> backend to commit the data that was appended. 
> If the amount of data written by the Hadoop app is small, i.e. the data 
> written:
>  * before any HFlush/HSync call is made, or
>  * between two HFlush/Hsync API calls
> is less than the write buffer size, two separate calls are made, one for 
> append and another for flush.
> Apps that do such small writes eventually end up with roughly equal numbers 
> of append and flush calls.
> This PR enables the flush to be piggybacked onto the append call for such 
> short-write scenarios.
>  
> NOTE: The change is guarded by a config and is disabled by default until the 
> relevant supporting changes are available on all store production clusters.
> New config added: fs.azure.write.enableappendwithflush
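For reference, a minimal sketch of how the flag named above could be enabled in core-site.xml. This is illustrative only; per the description the feature is disabled by default:

```xml
<!-- Illustrative only: enables append+flush piggybacking for short writes.
     Per the JIRA, leave unset (disabled) unless the store supports it. -->
<property>
  <name>fs.azure.write.enableappendwithflush</name>
  <value>true</value>
</property>
```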









[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=532053&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532053
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 18:31
Start Date: 06/Jan/21 18:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-755484937


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 16s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 30s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  89m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 9b15906f971d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d21c1c65761 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/6/testReport/ |
   | Max. process+thread count | 510 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2509: HADOOP-17404. ABFS: Small write - Merge append and flush

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-755484937


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 16s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 30s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  89m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 9b15906f971d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d21c1c65761 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/6/testReport/ |
   | Max. process+thread count | 510 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/6/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Work logged] (HADOOP-17337) NetworkBinding has a runtime class dependency on a third-party shaded class

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17337?focusedWorklogId=532042&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532042
 ]

ASF GitHub Bot logged work on HADOOP-17337:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 18:15
Start Date: 06/Jan/21 18:15
Worklog Time Spent: 10m 
  Work Description: cwensel commented on pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599#issuecomment-755469136


   @steveloughran without actually running it, 
   
   - what would the fallback behavior be if the shaded artifact isn't found 
leaving the default SSL factory?
   - should the unshaded factory class be loaded and configured if available?





Issue Time Tracking
---

Worklog Id: (was: 532042)
Time Spent: 40m  (was: 0.5h)

> NetworkBinding has a runtime class dependency on a third-party shaded class
> ---
>
> Key: HADOOP-17337
> URL: https://issues.apache.org/jira/browse/HADOOP-17337
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Chris Wensel
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The hadoop-aws library has a dependency on 
> 'com.amazonaws':aws-java-sdk-bundle' which in turn is a fat jar of all AWS 
> SDK libraries and shaded dependencies.
>  
> This dependency is 181MB.
>  
> Some applications using the S3AFilesystem may be sensitive to having a large 
> footprint. For example, building an application using Parquet and bundled 
> with Docker.
>  
> Typically, in prior Hadoop versions, the bundle was replaced by the specific 
> AWS SDK dependencies, dropping the overall footprint.
>  
> In 3.3 (and maybe prior versions) this strategy does not work because of the 
> following exception: 
> {{java.lang.NoClassDefFoundError: 
> com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1335)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1290)}}
> {{ at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1247)}}
> {{ at 
> org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:61)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:644)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:390)}}
> {{ at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)}}
> {{ at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)}}
> {{ at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)}}
> {{ at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)}}
> {{ at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)}}
>  
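For context, a hedged sketch of the footprint-reduction strategy the report describes: exclude the 181MB aws-java-sdk-bundle from hadoop-aws and depend on the specific SDK modules instead, which on 3.3 triggers the NoClassDefFoundError above. Version numbers are placeholders, not verified combinations:

```xml
<!-- Placeholder versions; shown only to illustrate the dependency strategy
     described in the report, which fails on 3.3 with the error above. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>3.3.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-bundle</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-s3</artifactId>
  <version><!-- matching AWS SDK version --></version>
</dependency>
```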









[jira] [Work logged] (HADOOP-17433) Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17433?focusedWorklogId=532026&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532026
 ]

ASF GitHub Bot logged work on HADOOP-17433:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:54
Start Date: 06/Jan/21 17:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600#issuecomment-755457753


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
[GitHub] [hadoop] hadoop-yetus commented on pull request #2600: HADOOP-17433. Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole.

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600#issuecomment-755457753


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  7s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  7s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2600/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2600 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a6994c95d252 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d21c1c65761 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2600/1/testReport/ |
   | Max. process+thread count | 536 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2600/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (HADOOP-17337) NetworkBinding has a runtime class dependency on a third-party shaded class

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17337?focusedWorklogId=532015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-532015 ]

ASF GitHub Bot logged work on HADOOP-17337:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:37
Start Date: 06/Jan/21 17:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599#issuecomment-755449206


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 13s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 11s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/diff-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2599/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 0 unchanged - 2 fixed 
= 2 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2599/1/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  9s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 33s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2599/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2599 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8a3527a9aa95 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d21c1c65761 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 

[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=531997&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531997 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:21
Start Date: 06/Jan/21 17:21
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-755439979


   > @sunchao @liuml07 could either of you take a look @ this?
   
   Yes I can help on this PR, sorry for the delay. Will spend some time this 
week.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531997)
Time Spent: 3h 50m  (was: 3h 40m)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files. Which, given they don't exist yet, isn't easy to 
> do.
> Solution: 
> * Add getXAttr and listXAttr API calls to S3AFileSystem
> * Return all S3 object headers as XAttr attributes prefixed "header." That's 
> custom and standard (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't
> declare its support for the API in hasPathCapability().
> The magic commit file write sets the custom header
> x-hadoop-s3a-magic-data-length in the marker file to the length
> of the final data.
> A matching patch in Spark will look for the XAttr
> "header.x-hadoop-s3a-magic-data-length" when the file
> being probed for output data is zero byte long. 
> As a result, the job tracking statistics will report the
> bytes written but yet to be manifest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
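
The Spark-side probe described in the HADOOP-17414 issue above could be sketched as follows. This is a minimal illustration only, assuming the XAttr behaviour the issue describes; the path, bucket, and the null-handling of a missing attribute are placeholders, and running it requires a live S3A filesystem on the classpath:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MagicMarkerLengthProbe {

  // Header name as described in HADOOP-17414, exposed through getXAttr
  // with the "header." prefix.
  static final String MAGIC_LEN_XATTR = "header.x-hadoop-s3a-magic-data-length";

  static long committedLength(FileSystem fs, Path p) throws IOException {
    FileStatus st = fs.getFileStatus(p);
    if (st.getLen() > 0) {
      // Normal file: getFileStatus() already reports the real length.
      return st.getLen();
    }
    // Zero-byte magic marker: ask the filesystem for the recorded length.
    // Whether a missing attribute yields null or an exception is
    // filesystem-dependent; null is assumed here for illustration.
    byte[] v = fs.getXAttr(p, MAGIC_LEN_XATTR);
    return v == null ? 0L
        : Long.parseLong(new String(v, StandardCharsets.US_ASCII));
  }

  public static void main(String[] args) throws IOException {
    Path p = new Path(args[0]);  // e.g. an s3a:// path to a committed file
    FileSystem fs = p.getFileSystem(new Configuration());
    System.out.println(committedLength(fs, p));
  }
}
```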




[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=531993&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531993 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:20
Start Date: 06/Jan/21 17:20
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-755439216


   > some of the tests are parameterized to do test runs with/without dynamoDB. 
They shouldn't be run if the -Ddynamo option wasn't set, but what has 
inevitably happened is that regressions into the test runs have crept in and 
we've not noticed.
   
   I didn't specify the `-Ddynamo` option. The command I used is:
   ```
   mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
   ```
   
   I'm testing against my own S3A endpoint "s3a://sunchao/" which is in 
us-west-1 and I just followed the doc to setup `auth-keys.xml`. I didn't modify 
`core-site.xml`.
   
   > BTW, does this mean your initial PR went in without running the ITests? 
   
   Unfortunately no ... sorry I was not aware of the test steps here (first 
time contributing to hadoop-aws). I'll try to do some remedy in this PR.  Test 
failures I got:
   ```
   [ERROR] Tests run: 24, Failures: 1, Errors: 16, Skipped: 0, Time elapsed: 
20.537 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost
   [ERROR] 
testDeleteSingleFileInDir[raw-delete-markers](org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost)
  Time elapsed: 2.036 s  <<< FAILURE!
   java.lang.AssertionError: operation returning after fs.delete(simpleFile) 
action_executor_acquired starting=0 current=0 diff=0, action_http_get_request 
starting=0 current=0 diff=0,action_http_head_request starting=4 
current=5 diff=1, committer_bytes_committed starting=0 current=0 diff=0, 
committer_bytes_uploaded starting=0 current=0 diff=0, committer_commit_job 
starting=0  current=0 diff=0, committer_commits.failures starting=0 current=0 
diff=0, committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0,   
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
 starting=0 current=0 diff=0, committer_magic_files_created starting=0 
current=0 diff=0, committer_materialize_file starting=0 current=0 diff=0, 
committer_stage_file_upload starting=0 current=0diff=0, 
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed 
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0 
diff=0, directories_created starting=2 current=3 diff=1, 
directories_deleted starting=0 current=0 diff=0, fake_directories_created 
starting=0 current=0 diff=0, fake_directories_deleted starting=6 current=8 
diff=2,   files_copied starting=0 current=0 diff=0, files_copied_bytes 
starting=0 current=0 diff=0, files_created starting=1 current=1 diff=0, 
files_delete_rejected starting=0 current=0 diff=0, files_deleted 
starting=0 current=1 diff=1, ignored_errors starting=0 current=0 diff=0, 
multipart_instantiated starting=0 current=0 diff=0, 
multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put starting=0 current=0 diff=0,  
multipart_upload_part_put_bytes starting=0 current=0 diff=0, 
multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
 object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0,   
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3   current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive   starting=0 current=0 diff=0, op_delete 
starting=0 current=1 diff=1, op_exists starting=0 current=0 diff=0, 
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum 
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2 
diff=0, op_glob_status starting=0 current=0 diff=0, 

[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=531989&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531989 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:17
Start Date: 06/Jan/21 17:17
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-755437582


   @sunchao @liuml07 could either of you take a look @ this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531989)
Time Spent: 3h 40m  (was: 3.5h)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The Spark statistics tracking doesn't correctly assess the size of the 
> uploaded files, as it only calls getFileStatus on the zero-byte objects, not 
> the yet-to-manifest files. Which, given they don't exist yet, isn't easy to 
> do.
> Solution: 
> * Add getXAttr and listXAttr API calls to S3AFileSystem.
> * Return all S3 object headers as XAttr attributes prefixed "header.", both 
> custom and standard (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't
> declare its support for the API in hasPathCapability().
> The magic commit file write sets the custom header
> x-hadoop-s3a-magic-data-length in the marker file to the length of the
> final data.
> A matching patch in Spark will look for the XAttr
> "header.x-hadoop-s3a-magic-data-length" when the file
> being probed for output data is zero bytes long.
> As a result, the job tracking statistics will report the
> bytes written but not yet manifest.
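The header-to-XAttr mapping described above can be sketched in isolation. This is a hedged illustration, not the actual S3AFileSystem code: the class and helper names (MagicMarkerXAttrs, toXAttrs, magicDataLength) are invented for the example; only the "header." prefix and the x-hadoop-s3a-magic-data-length header name come from the description.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class MagicMarkerXAttrs {
    static final String PREFIX = "header.";
    static final String MAGIC_LEN_HEADER = "x-hadoop-s3a-magic-data-length";

    // Expose every S3 object header as an XAttr named "header." + header name.
    static Map<String, String> toXAttrs(Map<String, String> objectHeaders) {
        Map<String, String> xattrs = new HashMap<>();
        objectHeaders.forEach((name, value) -> xattrs.put(PREFIX + name, value));
        return xattrs;
    }

    // Length a zero-byte marker file declares for its yet-to-manifest data.
    static Optional<Long> magicDataLength(Map<String, String> xattrs) {
        String value = xattrs.get(PREFIX + MAGIC_LEN_HEADER);
        return value == null ? Optional.empty()
                             : Optional.of(Long.parseLong(value));
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Length", "0");          // the marker itself is empty
        headers.put(MAGIC_LEN_HEADER, "67108864");   // size of the pending data
        Map<String, String> xattrs = toXAttrs(headers);
        System.out.println(xattrs.get("header.Content-Length"));   // prints 0
        System.out.println(magicDataLength(xattrs).get());         // prints 67108864
    }
}
```

A Spark-side probe would then treat a zero-byte file whose `magicDataLength` is present as a marker for that many pending bytes.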



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=531980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531980
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:04
Start Date: 06/Jan/21 17:04
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-755430384


   Thanks @DadanielZ. Have addressed the review comment. Kindly request your 
review.
   
   Latest test results:
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 498, Failures: 0, Errors: 0, Skipped: 70
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 165
   
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 486, Failures: 0, Errors: 0, Skipped: 253
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 254, Failures: 0, Errors: 0, Skipped: 165
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 485, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 250, Failures: 0, Errors: 0, Skipped: 48
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 454, Failures: 0, Errors: 0, Skipped: 247
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 48
   
   HNS-AppendBlob
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 482, Failures: 0, Errors: 0, Skipped: 70
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 189
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531980)
Time Spent: 3.5h  (was: 3h 20m)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> When the Hflush or Hsync APIs are called, a call is made to the store backend 
> to commit the data that was appended. 
> If the data size written by the Hadoop app is small, i.e. the data written:
>  * before any HFlush/HSync call is made, or
>  * between 2 HFlush/Hsync API calls
> is less than the write buffer size, then 2 separate calls are made, one for 
> append and another for flush.
> Apps that do such small writes eventually end up with almost equal numbers 
> of calls for flush and append.
> This PR enables the flush to be piggybacked onto the append call for such 
> short-write scenarios.
>  
> NOTE: The change is guarded by a config, and is disabled by default until the 
> relevant supporting changes are made available on all store production 
> clusters.
> New config added: fs.azure.write.enableappendwithflush
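The short-write decision described above can be sketched as a small simulation. This is a hedged illustration of the behaviour, not the ABFS output-stream code: the class name, the recorded request strings, and the buffer sizes are invented; only the config name and the "pending data shorter than one write buffer" condition come from the description.

```java
import java.util.ArrayList;
import java.util.List;

public class SmallWriteFlush {
    final int writeBufferSize;
    final boolean smallWriteOptimization; // fs.azure.write.enableappendwithflush
    int pendingBytes;
    final List<String> requests = new ArrayList<>(); // simulated network calls

    SmallWriteFlush(int writeBufferSize, boolean enabled) {
        this.writeBufferSize = writeBufferSize;
        this.smallWriteOptimization = enabled;
    }

    void write(int n) {
        pendingBytes += n; // data is buffered; no network call yet
    }

    // hflush()/hsync(): when the optimization is on and the pending data is
    // shorter than one write buffer, issue a single append-with-flush call;
    // otherwise fall back to the two-call append + flush sequence.
    void flush() {
        if (smallWriteOptimization && pendingBytes > 0
                && pendingBytes < writeBufferSize) {
            requests.add("append(flush=true," + pendingBytes + ")");
        } else {
            if (pendingBytes > 0) {
                requests.add("append(" + pendingBytes + ")");
            }
            requests.add("flush()");
        }
        pendingBytes = 0;
    }

    public static void main(String[] args) {
        SmallWriteFlush out = new SmallWriteFlush(8 * 1024 * 1024, true);
        out.write(1024);   // well under the buffer size
        out.flush();       // one round trip instead of two
        System.out.println(out.requests); // prints [append(flush=true,1024)]
    }
}
```

With the optimization disabled (or the pending data at least one buffer long), the same flush produces the original two requests.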



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=531979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531979
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 17:02
Start Date: 06/Jan/21 17:02
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#discussion_r552811022



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -55,6 +55,7 @@
   public static final String AZURE_WRITE_MAX_CONCURRENT_REQUESTS = 
"fs.azure.write.max.concurrent.requests";
   public static final String AZURE_WRITE_MAX_REQUESTS_TO_QUEUE = 
"fs.azure.write.max.requests.to.queue";
   public static final String AZURE_WRITE_BUFFER_SIZE = 
"fs.azure.write.request.size";
+  public static final String AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION = 
"fs.azure.write.enableappendwithflush";

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531979)
Time Spent: 3h 20m  (was: 3h 10m)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> When the Hflush or Hsync APIs are called, a call is made to the store backend 
> to commit the data that was appended. 
> If the data size written by the Hadoop app is small, i.e. the data written:
>  * before any HFlush/HSync call is made, or
>  * between 2 HFlush/Hsync API calls
> is less than the write buffer size, then 2 separate calls are made, one for 
> append and another for flush.
> Apps that do such small writes eventually end up with almost equal numbers 
> of calls for flush and append.
> This PR enables the flush to be piggybacked onto the append call for such 
> short-write scenarios.
>  
> NOTE: The change is guarded by a config, and is disabled by default until the 
> relevant supporting changes are made available on all store production 
> clusters.
> New config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Work logged] (HADOOP-17451) intermittent failure of S3A tests which make assertions on statistics/IOStatistics

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17451?focusedWorklogId=531977&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531977
 ]

ASF GitHub Bot logged work on HADOOP-17451:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 16:59
Start Date: 06/Jan/21 16:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2594:
URL: https://github.com/apache/hadoop/pull/2594#issuecomment-755427098


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 28s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   8m  8s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  26m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  18m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 17s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 33s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  19m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 21s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 37s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/2/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 11 unchanged - 0 fixed = 16 total (was 
11)  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 46s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 36s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2594 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 855690281b50 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae4945fb2c8 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Updated] (HADOOP-17433) Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17433:

Labels: pull-request-available  (was: )

> Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole
> ---
>
> Key: HADOOP-17433
> URL: https://issues.apache.org/jira/browse/HADOOP-17433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test failure in ITestAssumeRole.testAssumeRoleRestrictedPolicyFS if the test 
> bucket is unguarded. I've been playing with my bucket settings so this 
> probably didn't surface before. 
> test arguments -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
> -Dfs.s3a.directory.marker.audit=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17433) Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17433?focusedWorklogId=531959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531959
 ]

ASF GitHub Bot logged work on HADOOP-17433:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 16:34
Start Date: 06/Jan/21 16:34
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600#issuecomment-755411529


   FYI @mukund-thakur @bgaborg 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531959)
Time Spent: 20m  (was: 10m)

> Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole
> ---
>
> Key: HADOOP-17433
> URL: https://issues.apache.org/jira/browse/HADOOP-17433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Test failure in ITestAssumeRole.testAssumeRoleRestrictedPolicyFS if the test 
> bucket is unguarded. I've been playing with my bucket settings so this 
> probably didn't surface before. 
> test arguments -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
> -Dfs.s3a.directory.marker.audit=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Work logged] (HADOOP-17433) Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17433?focusedWorklogId=531958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531958
 ]

ASF GitHub Bot logged work on HADOOP-17433:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 16:33
Start Date: 06/Jan/21 16:33
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2600:
URL: https://github.com/apache/hadoop/pull/2600


   
   ran ITestAssumeRole against s3 ireland in IDE and then from cli with/without 
s3guard



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531958)
Remaining Estimate: 0h
Time Spent: 10m

> Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole
> ---
>
> Key: HADOOP-17433
> URL: https://issues.apache.org/jira/browse/HADOOP-17433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test failure in ITestAssumeRole.testAssumeRoleRestrictedPolicyFS if the test 
> bucket is unguarded. I've been playing with my bucket settings so this 
> probably didn't surface before. 
> test arguments -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
> -Dfs.s3a.directory.marker.audit=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-13845) s3a to instrument duration of HTTP calls

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259848#comment-17259848
 ] 

Steve Loughran commented on HADOOP-13845:
-

HADOOP-17271 does this for HEAD and LIST requests, plus the time for the GET to 
start. Anything that is O(data), O(objects), etc. is tricky.

> s3a to instrument duration of HTTP calls
> 
>
> Key: HADOOP-13845
> URL: https://issues.apache.org/jira/browse/HADOOP-13845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HADOOP-13844 proposes pulling out the Swift duration classes for reuse; this 
> patch proposes instrumenting s3a with them.
> One interesting question: what to do with the values. For now, they could 
> just be printed, but it might be interesting to include them in the FS stats 
> collected at the end of a run. However, those are all assumed to be simple 
> counters where merging is a matter of addition. These are more like metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17337) NetworkBinding has a runtime class dependency on a third-party shaded class

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17337?focusedWorklogId=531949&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531949
 ]

ASF GitHub Bot logged work on HADOOP-17337:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 16:17
Start Date: 06/Jan/21 16:17
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599#issuecomment-755398014


   testing: done the unit tests but not a full live run (which it will need)
   
   @cwensel -this look good to you? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531949)
Time Spent: 20m  (was: 10m)

> NetworkBinding has a runtime class dependency on a third-party shaded class
> ---
>
> Key: HADOOP-17337
> URL: https://issues.apache.org/jira/browse/HADOOP-17337
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Chris Wensel
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The hadoop-aws library has a dependency on 
> 'com.amazonaws':aws-java-sdk-bundle' which in turn is a fat jar of all AWS 
> SDK libraries and shaded dependencies.
>  
> This dependency is 181MB.
>  
> Some applications using the S3AFilesystem may be sensitive to having a large 
> footprint. For example, building an application using Parquet and bundled 
> with Docker.
>  
> Typically, in prior Hadoop versions, the bundle was replaced by the specific 
> AWS SDK dependencies, dropping the overall footprint.
>  
> In 3.3 (and maybe prior versions) this strategy does not work because of the 
> following exception: 
> {{java.lang.NoClassDefFoundError: 
> com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1335)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1290)}}
> {{ at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1247)}}
> {{ at 
> org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:61)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:644)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:390)}}
> {{ at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)}}
> {{ at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)}}
> {{ at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)}}
> {{ at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)}}
> {{ at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)}}
>  






[GitHub] [hadoop] steveloughran commented on pull request #2599: HADOOP-17337. NetworkBinding has a runtime dependency on shaded httpclient

2021-01-06 Thread GitBox


steveloughran commented on pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599#issuecomment-755398014


   testing: done the unit tests but not a full live run (which it will need)
   
   @cwensel -this look good to you? 






[jira] [Work logged] (HADOOP-17337) NetworkBinding has a runtime class dependency on a third-party shaded class

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17337?focusedWorklogId=531947=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531947
 ]

ASF GitHub Bot logged work on HADOOP-17337:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 16:14
Start Date: 06/Jan/21 16:14
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599


   
   
   Adds another class with the dependencies, uses reflection to load and invoke
   that.
   
   Change-Id: Iaad9ede15dc6ac3240cba3dfa80c79825dbd007c
   
   





Issue Time Tracking
---

Worklog Id: (was: 531947)
Remaining Estimate: 0h
Time Spent: 10m

> NetworkBinding has a runtime class dependency on a third-party shaded class
> ---
>
> Key: HADOOP-17337
> URL: https://issues.apache.org/jira/browse/HADOOP-17337
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Chris Wensel
>Priority: Blocker
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop-aws library has a dependency on 
> 'com.amazonaws':aws-java-sdk-bundle' which in turn is a fat jar of all AWS 
> SDK libraries and shaded dependencies.
>  
> This dependency is 181MB.
>  
> Some applications using the S3AFilesystem may be sensitive to having a large 
> footprint. For example, building an application using Parquet and bundled 
> with Docker.
>  
> Typically, in prior Hadoop versions, the bundle was replaced by the specific 
> AWS SDK dependencies, dropping the overall footprint.
>  
> In 3.3 (and maybe prior versions) this strategy does not work because of the 
> following exception: 
> {{java.lang.NoClassDefFoundError: 
> com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1335)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1290)}}
> {{ at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1247)}}
> {{ at 
> org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:61)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:644)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:390)}}
> {{ at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)}}
> {{ at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)}}
> {{ at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)}}
> {{ at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)}}
> {{ at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)}}
>  






[jira] [Updated] (HADOOP-17337) NetworkBinding has a runtime class dependency on a third-party shaded class

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17337:

Labels: pull-request-available  (was: )

> NetworkBinding has a runtime class dependency on a third-party shaded class
> ---
>
> Key: HADOOP-17337
> URL: https://issues.apache.org/jira/browse/HADOOP-17337
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Chris Wensel
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop-aws library has a dependency on 
> 'com.amazonaws':aws-java-sdk-bundle' which in turn is a fat jar of all AWS 
> SDK libraries and shaded dependencies.
>  
> This dependency is 181MB.
>  
> Some applications using the S3AFilesystem may be sensitive to having a large 
> footprint. For example, building an application using Parquet and bundled 
> with Docker.
>  
> Typically, in prior Hadoop versions, the bundle was replaced by the specific 
> AWS SDK dependencies, dropping the overall footprint.
>  
> In 3.3 (and maybe prior versions) this strategy does not work because of the 
> following exception: 
> {{java.lang.NoClassDefFoundError: 
> com/amazonaws/thirdparty/apache/http/conn/socket/ConnectionSocketFactory}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1335)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1290)}}
> {{ at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1247)}}
> {{ at 
> org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:61)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:644)}}
> {{ at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:390)}}
> {{ at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)}}
> {{ at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)}}
> {{ at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)}}
> {{ at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)}}
> {{ at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)}}
>  






[GitHub] [hadoop] steveloughran opened a new pull request #2599: HADOOP-17337. NetworkBinding has a runtime dependency on shaded httpclient

2021-01-06 Thread GitBox


steveloughran opened a new pull request #2599:
URL: https://github.com/apache/hadoop/pull/2599


   
   
   Adds another class with the dependencies, uses reflection to load and invoke
   that.
   
   Change-Id: Iaad9ede15dc6ac3240cba3dfa80c79825dbd007c
   
   






[jira] [Resolved] (HADOOP-17312) S3AInputStream to be resilient to failures in abort(); translate AWS Exceptions

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17312.
-
Fix Version/s: 3.3.1
   Resolution: Duplicate

> S3AInputStream to be resilient to failures in abort(); translate AWS Exceptions
> --
>
> Key: HADOOP-17312
> URL: https://issues.apache.org/jira/browse/HADOOP-17312
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>Priority: Major
> Fix For: 3.3.1
>
>
> Stack overflow issue complaining about ConnectionClosedException during 
> S3AInputStream close(), seems triggered by an EOF exception in abort. That 
> is: we are trying to close the stream and it is failing because the stream is 
> closed. oops.
> https://stackoverflow.com/questions/64412010/pyspark-org-apache-http-connectionclosedexception-premature-end-of-content-leng
> Looking @ the stack, we aren't translating AWS exceptions in abort() to IOEs, 
> which may be a factor.






[jira] [Resolved] (HADOOP-16133) S3A statistic collection underrecords bytes written in helper threads

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16133.
-
Fix Version/s: 3.3.1
   Resolution: Done

> S3A statistic collection underrecords bytes written in helper threads
> -
>
> Key: HADOOP-16133
> URL: https://issues.apache.org/jira/browse/HADOOP-16133
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> Applications collecting per-thread statistics from S3A see under-reporting of 
> bytes written, as all bytes written in the worker threads update counters in a 
> different thread.
> Proposed: 
> * the byte-upload statistics are updated in the primary thread as a block 
> is queued for write, not afterwards in the completion phase in the other thread
> * final {{WriteOperationsHelper.writeSuccessful()}} takes the final 
> statistics for its own entertainment
> Really I want context-specific storage statistics.
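The first proposed bullet can be sketched as follows. This is a minimal illustration of queue-time counting under the assumptions above, not the S3A implementation; the class and method names are invented.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: credit uploaded bytes to the submitting (writer) thread at the
// moment a block is queued, rather than in the background thread that
// later completes the upload. Per-thread statistics then see the bytes
// attributed to the thread that actually wrote them.
public class QueueTimeStats {
    final AtomicLong bytesQueued = new AtomicLong();
    final ExecutorService pool = Executors.newFixedThreadPool(2);

    void queueBlock(byte[] block) {
        // recorded synchronously, in the caller's thread, at queue time
        bytesQueued.addAndGet(block.length);
        pool.submit(() -> {
            // the actual upload would run here, in a helper thread
        });
    }

    long totalBytes() {
        return bytesQueued.get();
    }

    void close() {
        pool.shutdown();
    }
}
```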






[jira] [Commented] (HADOOP-16618) increase the default number of threads and http connections in S3A

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259830#comment-17259830
 ] 

Steve Loughran commented on HADOOP-16618:
-

also: socket buffer sizes

> increase the default number of threads and http connections in S3A
> --
>
> Key: HADOOP-16618
> URL: https://issues.apache.org/jira/browse/HADOOP-16618
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> Enable bigger thread and http pools in the S3A connector, especially now that 
> the transfer manager is doing parallel block transfer, as is rename()
> We can make a lot more use of parallelism in a single thread, and for 
> applications with multiple worker threads, we need bigger defaults






[jira] [Commented] (HADOOP-13704) S3A getContentSummary() to move to listFiles(recursive) to count children; instrument use

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259829#comment-17259829
 ] 

Steve Loughran commented on HADOOP-13704:
-

We should really fix this rather than repeatedly opening JIRAs about fixing it.

> S3A getContentSummary() to move to listFiles(recursive) to count children; 
> instrument use
> -
>
> Key: HADOOP-13704
> URL: https://issues.apache.org/jira/browse/HADOOP-13704
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Hive and a bit of Spark use {{getContentSummary()}} to get some summary stats 
> of a filesystem. This is very expensive on S3A (and any other object store), 
> especially as the base implementation does the recursive tree walk.
> Because of HADOOP-13208, we have a full enumeration of files under a path 
> without directory costs...S3A can/should switch to this to speed up those 
> places where the operation is called.
> Also
> * API call needs FS spec and contract tests
> * S3A could instrument invocation, so as to enable real-world popularity to 
> be measured






[jira] [Resolved] (HADOOP-16468) S3AFileSystem.getContentSummary() to use listFiles(recursive)

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16468.
-
Fix Version/s: hadoop-13704
   Resolution: Duplicate

> S3AFileSystem.getContentSummary() to use listFiles(recursive)
> -
>
> Key: HADOOP-16468
> URL: https://issues.apache.org/jira/browse/HADOOP-16468
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: hadoop-13704
>
>
> HIVE-22054 discusses how they use getContentSummary to see if a directory is 
> empty.
> This is implemented in FileSystem as a recursive treewalk, with all the costs 
> there.
> Hive is moving off it; once that is in it won't be so much of an issue. But 
> if we wanted to speed up older versions of Hive, we could move the operation 
> to using a flat list
> That would give us the file size rapidly; the directory count would have to 
> be worked out by tracking parent dirs of all paths (and all entries ending 
> with /), and adding them up
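The flat-listing idea described above can be sketched as follows; a minimal illustration with invented names, not the S3A code: sum file sizes directly from the recursive listing, and derive the directory count by collecting every parent prefix of each key.

```java
import java.util.HashSet;
import java.util.Set;

public class FlatListSummary {

    /** @return {total bytes, distinct parent-directory count}. */
    static long[] summarize(String[] keys, long[] sizes) {
        long totalBytes = 0;
        Set<String> dirs = new HashSet<>();
        for (int i = 0; i < keys.length; i++) {
            totalBytes += sizes[i];
            // every '/'-terminated prefix of a key is a parent directory
            String key = keys[i];
            for (int slash = key.indexOf('/'); slash >= 0;
                 slash = key.indexOf('/', slash + 1)) {
                dirs.add(key.substring(0, slash));
            }
        }
        return new long[] { totalBytes, dirs.size() };
    }
}
```

For keys `{"a/b/f1", "a/f2"}` with sizes `{10, 5}` this yields 15 bytes and two directories (`a` and `a/b`), without any per-directory LIST calls.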






[jira] [Resolved] (HADOOP-17359) [Hadoop-Tools]S3A MultiObjectDeleteException after uploading a file

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17359.
-
Resolution: Cannot Reproduce

Well, if you can't reproduce it and nobody else can, I'm closing this as 
cannot-reproduce.

If you do see it again on a recent Hadoop 3.x build, reopen with a stack trace 
and anything else you can collect/share.

> [Hadoop-Tools]S3A MultiObjectDeleteException after uploading a file
> ---
>
> Key: HADOOP-17359
> URL: https://issues.apache.org/jira/browse/HADOOP-17359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.10.0
>Reporter: Xun REN
>Priority: Minor
>
> Hello,
>  
> I am using org.apache.hadoop.fs.s3a.S3AFileSystem as implementation for S3 
> related operation.
> When I upload a file onto a path, it returns an error:
> {code:java}
> 20/11/05 11:49:13 ERROR s3a.S3AFileSystem: Partial failure of delete, 1 
> errors20/11/05 11:49:13 ERROR s3a.S3AFileSystem: Partial failure of delete, 1 
> errorscom.amazonaws.services.s3.model.MultiObjectDeleteException: One or more 
> objects could not be deleted (Service: null; Status Code: 200; Error Code: 
> null; Request ID: 767BEC034D0B5B8A; S3 Extended Request ID: 
> JImfJY9hCl/QvninqT9aO+jrkmyRpRcceAg7t1lO936RfOg7izIom76RtpH+5rLqvmBFRx/++g8=; 
> Proxy: null), S3 Extended Request ID: 
> JImfJY9hCl/QvninqT9aO+jrkmyRpRcceAg7t1lO936RfOg7izIom76RtpH+5rLqvmBFRx/++g8= 
> at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:2287)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1137) 
> at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1389) 
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2304)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2270) 
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem$WriteOperationHelper.writeSuccessful(S3AFileSystem.java:2768)
>  at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:371)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:74)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:108) at 
> org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:69) at 
> org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:128) at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:488)
>  at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:410)
>  at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
>  at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
>  at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
>  at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:327) at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:299) at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
>  at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:281) at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:265) at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
>  at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:285)
>  at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119) 
> at org.apache.hadoop.fs.shell.Command.run(Command.java:175) at 
> org.apache.hadoop.fs.FsShell.run(FsShell.java:317) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at 
> org.apache.hadoop.fs.FsShell.main(FsShell.java:380)20/11/05 11:49:13 ERROR 
> s3a.S3AFileSystem: bv/: "AccessDenied" - Access Denied
> {code}
> The problem is that Hadoop creates fake directories to map to S3 prefixes, 
> and it cleans them up after the operation. The cleanup walks from the 
> parent folder up to the root folder.
> If we don't give the corresponding permission for some path, it will 
> encounter this problem:
> [https://github.com/apache/hadoop/blob/rel/release-2.10.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2296-L2301]
>  
> During uploading, I don't see any "fake" directories being created. Why should 
> we clean them up if they were never really created?
> It is the same for the other operations like rename or mkdir where the 
> "deleteUnnecessaryFakeDirectories" method is called.
> Maybe the solution is to check the 
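The parent-folder walk described in the report can be sketched as follows (invented class and method names; not the actual S3AFileSystem code): for a written object key, the cleanup would target every ancestor "fake directory" marker key up to the bucket root.

```java
import java.util.ArrayList;
import java.util.List;

public class FakeDirMarkers {

    /** @return ancestor marker keys for a written object key, deepest first. */
    static List<String> parentMarkers(String key) {
        List<String> markers = new ArrayList<>();
        int slash = key.lastIndexOf('/');
        while (slash > 0) {
            // S3A-style fake directory markers end with a trailing '/'
            markers.add(key.substring(0, slash) + "/");
            slash = key.lastIndexOf('/', slash - 1);
        }
        return markers;
    }
}
```

A bulk delete issued for all of these markers fails with `MultiObjectDeleteException` as soon as the caller lacks delete permission on any one ancestor prefix, which matches the partial-failure error shown in the report.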

[GitHub] [hadoop] touchida commented on pull request #2585: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode

2021-01-06 Thread GitBox


touchida commented on pull request #2585:
URL: https://github.com/apache/hadoop/pull/2585#issuecomment-755382135


   Failed unit tests:
   * 
org.apache.hadoop.hdfs.TestMultipleNNPortQOP.testMultipleNNPortOverwriteDownStream
 * This failure is obviously unrelated to this PR. I filed it in 
[HDFS-15762](https://issues.apache.org/jira/browse/HDFS-15762) and sent #2598.
   * 
org.apache.hadoop.hdfs.TestReconstructStripedFileWithValidator.testValidatorWithBadDecoding
 * This is my new unit test. I'm checking the cause.
 * StackTrace:
 ```
 java.io.IOException: Time out waiting for EC block reconstruction.
 at 
org.apache.hadoop.hdfs.StripedFileTestUtil.waitForReconstructionFinished(StripedFileTestUtil.java:540)
 at 
org.apache.hadoop.hdfs.TestReconstructStripedFile.assertFileBlocksReconstruction(TestReconstructStripedFile.java:399)
 at 
org.apache.hadoop.hdfs.TestReconstructStripedFileWithValidator.testValidatorWithBadDecoding(TestReconstructStripedFileWithValidator.java:74)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.lang.Thread.run(Thread.java:748)
  ```






[GitHub] [hadoop] touchida commented on pull request #2598: HDFS-15762. TestMultipleNNPortQOP#testMultipleNNPortOverwriteDownStre…

2021-01-06 Thread GitBox


touchida commented on pull request #2598:
URL: https://github.com/apache/hadoop/pull/2598#issuecomment-755361761


   The above build seems to have been aborted when I converted this PR to draft.






[jira] [Commented] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259760#comment-17259760
 ] 

Steve Loughran commented on HADOOP-17437:
-

Yes, do that.

Maybe actually start a section or doc on "s3a with kubernetes" and make this 
the sole initial item... as more items come in, they can be added there.

> Update Hadoop Documentation with a new AWS Credential Provider used with EKS
> 
>
> Key: HADOOP-17437
> URL: https://issues.apache.org/jira/browse/HADOOP-17437
> Project: Hadoop Common
>  Issue Type: Task
>  Components: auth, fs/s3
>Reporter: Prateek Dubey
>Priority: Minor
>







[jira] [Updated] (HADOOP-17444) ADLFS: Update SDK version from 2.3.6 to 2.3.9

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17444:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fixed in 3.4; will backport if tested. 

nit: can you leave "fix version" unset until the fix is in - we use it to 
generate the change notes. Use "target version" to say what version you are 
targeting. Thanks

> ADLFS: Update SDK version from 2.3.6 to 2.3.9
> -
>
> Key: HADOOP-17444
> URL: https://issues.apache.org/jira/browse/HADOOP-17444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Update SDK version from 2.3.6 to 2.3.9






[jira] [Work logged] (HADOOP-17444) ADLFS: Update SDK version from 2.3.6 to 2.3.9

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17444?focusedWorklogId=531903=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531903
 ]

ASF GitHub Bot logged work on HADOOP-17444:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 14:35
Start Date: 06/Jan/21 14:35
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2551:
URL: https://github.com/apache/hadoop/pull/2551#issuecomment-755334143


   +1; merged to trunk. Not pulled in to -3.3, but only because the asf repo 
seems to be out of sync and so I can't easily do the cherry-pick
   
   @bilaharith could you do a test run with this PR applied to branch-3.3? No 
need for a new PR, just do a test run and let us know how it came out





Issue Time Tracking
---

Worklog Id: (was: 531903)
Time Spent: 40m  (was: 0.5h)

> ADLFS: Update SDK version from 2.3.6 to 2.3.9
> -
>
> Key: HADOOP-17444
> URL: https://issues.apache.org/jira/browse/HADOOP-17444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Update SDK version from 2.3.6 to 2.3.9






[GitHub] [hadoop] steveloughran commented on pull request #2551: HADOOP-17444. ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9

2021-01-06 Thread GitBox


steveloughran commented on pull request #2551:
URL: https://github.com/apache/hadoop/pull/2551#issuecomment-755334143


   +1; merged to trunk. Not pulled in to -3.3, but only because the asf repo 
seems to be out of sync and so I can't easily do the cherry-pick
   
   @bilaharith could you do a test run with this PR applied to branch-3.3? No 
need for a new PR, just do a test run and let us know how it came out






[jira] [Work logged] (HADOOP-17444) ADLFS: Update SDK version from 2.3.6 to 2.3.9

2021-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17444?focusedWorklogId=531901=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531901
 ]

ASF GitHub Bot logged work on HADOOP-17444:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 14:32
Start Date: 06/Jan/21 14:32
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2551:
URL: https://github.com/apache/hadoop/pull/2551


   





Issue Time Tracking
---

Worklog Id: (was: 531901)
Time Spent: 0.5h  (was: 20m)

> ADLFS: Update SDK version from 2.3.6 to 2.3.9
> -
>
> Key: HADOOP-17444
> URL: https://issues.apache.org/jira/browse/HADOOP-17444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Update SDK version from 2.3.6 to 2.3.9






[GitHub] [hadoop] steveloughran merged pull request #2551: HADOOP-17444. ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9

2021-01-06 Thread GitBox


steveloughran merged pull request #2551:
URL: https://github.com/apache/hadoop/pull/2551


   






[jira] [Commented] (HADOOP-17458) S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259749#comment-17259749
 ] 

Steve Loughran commented on HADOOP-17458:
-

HADOOP-17312 shows a similar-but-different error string to look for. We MUST 
NOT use class instanceof, but could look for the final name of the (possibly 
shaded) class as well as the text.
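A minimal sketch of what such text-based matching could look like. The helper names (`signifiesTruncatedRead`, `translate`) and the plain `IOException` fallback are illustrative assumptions, not the actual S3AUtils code; the message fragments are taken from the stack trace in this issue.

```java
import java.io.EOFException;
import java.io.IOException;

/**
 * Sketch: recognise a truncated-read failure by its text rather than by
 * an instanceof check, since shading may relocate the SDK exception class.
 */
public class ReadTruncationSketch {

    // Message fragments; matching the class *name* survives shading.
    private static final String SDK_CLIENT_EXCEPTION = "SdkClientException";
    private static final String TRUNCATED_READ =
        "Data read has a different length than the expected";

    /** True iff the exception looks like a truncated read. */
    static boolean signifiesTruncatedRead(Exception e) {
        String text = e.toString();
        return text.contains(SDK_CLIENT_EXCEPTION)
            && text.contains(TRUNCATED_READ);
    }

    /** Map a truncated read to EOFException so retry logic can recover it. */
    static IOException translate(Exception e) {
        if (signifiesTruncatedRead(e)) {
            EOFException eof = new EOFException(e.getMessage());
            eof.initCause(e);
            return eof;
        }
        return new IOException(e);
    }

    public static void main(String[] args) {
        Exception sdkLike = new RuntimeException(
            "com.amazonaws.SdkClientException: "
            + "Data read has a different length than the expected: dataLength=0");
        System.out.println(translate(sdkLike) instanceof EOFException);                    // true
        System.out.println(translate(new RuntimeException("other")) instanceof EOFException); // false
    }
}
```

Because `EOFException` is what the retry policies already treat as recoverable, mapping the text match onto it reuses the existing recovery path instead of adding a new exception type.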

> S3A to treat "SdkClientException: Data read has a different length than the 
> expected" as EOFException
> -
>
> Key: HADOOP-17458
> URL: https://issues.apache.org/jira/browse/HADOOP-17458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> A test run with network problems caught exceptions 
> "com.amazonaws.SdkClientException: Data read has a different length than the 
> expected:", which then escalated to failure.
> these should be recoverable if they are recognised as such. 
> translateException could do this. Yes, it would have to look @ the text, but 
> as {{signifiesConnectionBroken()}} already does that for "Failed to sanitize 
> XML document destined for handler class", it'd just be adding a new text 
> string to look for.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2598: HDFS-15762. TestMultipleNNPortQOP#testMultipleNNPortOverwriteDownStre…

2021-01-06 Thread GitBox


hadoop-yetus commented on pull request #2598:
URL: https://github.com/apache/hadoop/pull/2598#issuecomment-755322085


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 31s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 39s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-hdfs-project in trunk failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-hdfs-project in trunk failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs-project  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   1m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-hdfs-client in trunk failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-hdfs-client in trunk failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | +0 :ok: |  spotbugs  |   3m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 23s | 
[/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2598/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
   | -1 :x: |  findbugs  |   0m 22s | 

[jira] [Commented] (HADOOP-17458) S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259743#comment-17259743
 ] 

Steve Loughran commented on HADOOP-17458:
-

{code}
[ERROR] 
testDecompressionSequential128K(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
  Time elapsed: 204.334 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: read on 
s3a://landsat-pds/scene_list.gz: com.amazonaws.SdkClientException: Data read 
has a different length than the expected: dataLength=0; 
expectedLength=43236817; includeSkipped=true; in.getClass()=class 
com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
resetSinceLastMarked=false; markCount=0; resetCount=0: Data read has a 
different length than the expected: dataLength=0; expectedLength=43236817; 
includeSkipped=true; in.getClass()=class 
com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
resetSinceLastMarked=false; markCount=0; resetCount=0
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:117)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:320)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:412)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:316)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:291)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:516)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:179)
at 
org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:163)
at 
org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:191)
at 
org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:227)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:185)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:391)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.executeDecompression(ITestS3AInputStreamPerformance.java:385)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testDecompressionSequential128K(ITestS3AInputStreamPerformance.java:359)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Data read has a different length 
than the expected: dataLength=0; expectedLength=43236817; includeSkipped=true; 
in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; 
markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; 
resetCount=0
at 
com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
at 
com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:109)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at 
com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:520)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:115)
... 31 more

{code}

> S3A to treat "SdkClientException: Data read has a different length than the 
> expected" as EOFException
> 

[jira] [Created] (HADOOP-17458) S3A to treat "dkClientException: Data read has a different length than the expected" as EOFException

2021-01-06 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17458:
---

 Summary: S3A to treat "dkClientException: Data read has a 
different length than the expected" as EOFException
 Key: HADOOP-17458
 URL: https://issues.apache.org/jira/browse/HADOOP-17458
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


A test run with network problems caught exceptions 
"com.amazonaws.SdkClientException: Data read has a different length than the 
expected:", which then escalated to failure.

These should be recoverable if they are recognised as such.

translateException could do this. Yes, it would have to look at the text, but as 
{{signifiesConnectionBroken()}} already does that for "Failed to sanitize XML 
document destined for handler class", it'd just be adding a new text string to 
look for.








[jira] [Updated] (HADOOP-17458) S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17458:

Summary: S3A to treat "SdkClientException: Data read has a different length 
than the expected" as EOFException  (was: S3A to treat "dkClientException: Data 
read has a different length than the expected" as EOFException)

> S3A to treat "SdkClientException: Data read has a different length than the 
> expected" as EOFException
> -
>
> Key: HADOOP-17458
> URL: https://issues.apache.org/jira/browse/HADOOP-17458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> A test run with network problems caught exceptions 
> "com.amazonaws.SdkClientException: Data read has a different length than the 
> expected:", which then escalated to failure.
> these should be recoverable if they are recognised as such. 
> translateException could do this. Yes, it would have to look @ the text, but 
> as {{signifiesConnectionBroken()}} already does that for "Failed to sanitize 
> XML document destined for handler class", it'd just be adding a new text 
> string to look for.






[jira] [Commented] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259742#comment-17259742
 ] 

Steve Loughran commented on HADOOP-17338:
-

Now I just got the error {{Data read has a different length than the expected: 
dataLength=0; expectedLength=43236817}} on a test run with network problems. 
These should be converted to EOFExceptions, and so retried as well, shouldn't 
they? Will file a new JIRA.



> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputSteam:
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
> Premature end of Content-Length delimited message body (expected: 156463674; 
> received: 150001089
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code:java}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> 

[jira] [Commented] (HADOOP-17456) S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259732#comment-17259732
 ] 

Steve Loughran commented on HADOOP-17456:
-

Need to assert on the count of bulk delete calls, and reset it during the test 
case as appropriate. Fix in HADOOP-17451.

> S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure
> -
>
> Key: HADOOP-17456
> URL: https://issues.apache.org/jira/browse/HADOOP-17456
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Failure in {{ITestPartialRenamesDeletes.testPartialDirDelete}}; wrong #of 
> delete requests. 
> build options: -Dparallel-tests -DtestsThreadCount=6 -Dscale -Dmarkers=delete 
> -Ds3guard -Ddynamo
> The assert fails on a line changes in HADOOP-17271; assumption being, there 
> are some test run states where things happen differently. 






[jira] [Commented] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17259728#comment-17259728
 ] 

Steve Loughran commented on HADOOP-17455:
-

The cause is that the assert was probing the wrong metric: the number of 
objects included in delete requests. When the depth of the test dir was that of 
a single test, things were equal, but in a parallel test run the dir would be 
deeper and the assertion would fail.

The fix will be in the HADOOP-17451 PR: assert on the number of bulk delete 
requests instead.
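A toy illustration of why the request count is the more stable metric than the object count. The helper below and the page size of 250 keys are assumptions for the sketch (S3 bulk delete accepts at most 1000 keys per request), not the real test code.

```java
/**
 * Sketch: the number of keys in a directory delete grows with directory
 * depth (extra parent markers), so asserting on it is fragile under
 * parallel test runs that nest the test dir deeper. The number of bulk
 * delete *requests* is stable as long as the keys fit in one page.
 */
public class DeleteMetricSketch {

    /** One bulk delete request removes up to pageSize keys. */
    static int bulkDeleteRequests(int keys, int pageSize) {
        return (keys + pageSize - 1) / pageSize;  // ceiling division
    }

    public static void main(String[] args) {
        int shallowKeys = 3;   // e.g. a file plus two ancestor markers
        int deepKeys = 7;      // same file, nested under a fork-specific dir
        int pageSize = 250;    // assumed bulk delete page size

        // Object counts differ (3 vs 7), but both fit in one page, so the
        // request count — the metric the fixed assert probes — is equal.
        System.out.println(bulkDeleteRequests(shallowKeys, pageSize)); // 1
        System.out.println(bulkDeleteRequests(deepKeys, pageSize));    // 1
    }
}
```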

> [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir
> --
>
> Key: HADOOP-17455
> URL: https://issues.apache.org/jira/browse/HADOOP-17455
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
>
> Test failed against ireland intermittently with the following config:
> {{mvn clean verify -Dparallel-tests -DtestsThreadCount=8}}
> xml based config in auth-keys.xml:
> {code:xml}
> 
> fs.s3a.metadatastore.impl
> org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore
> 
> {code}






[jira] [Issue Comment Deleted] (HADOOP-17456) S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure

2021-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17456:

Comment: was deleted

(was: cause is the assert was probing the wrong metric: #of objects included in 
delete requests. When the depth of the test dir was that of a single test, 
things were equal, but in a parallel test run the dir would be deeper and the 
assertion would fail.

Fix will be HADOOP-17451 PR; assert on #of bulk delete requests instead)

> S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure
> -
>
> Key: HADOOP-17456
> URL: https://issues.apache.org/jira/browse/HADOOP-17456
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Failure in {{ITestPartialRenamesDeletes.testPartialDirDelete}}; wrong #of 
> delete requests. 
> build options: -Dparallel-tests -DtestsThreadCount=6 -Dscale -Dmarkers=delete 
> -Ds3guard -Ddynamo
> The assert fails on a line changes in HADOOP-17271; assumption being, there 
> are some test run states where things happen differently. 





