[hadoop-thirdparty] branch trunk updated: HADOOP-16820. ChangeLog and ReleaseNote are not packaged by createrelease script. (#2)

2020-01-21 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


The following commit(s) were added to refs/heads/trunk by this push:
 new efbab22  HADOOP-16820. ChangeLog and ReleaseNote are not packaged by 
createrelease script. (#2)
efbab22 is described below

commit efbab22845d4bbc15149b5b3122ac5439477ab07
Author: Vinayakumar B 
AuthorDate: Wed Jan 22 12:41:13 2020 +0530

HADOOP-16820. ChangeLog and ReleaseNote are not packaged by createrelease 
script. (#2)
---
 dev-support/bin/create-release | 4 ++--
 dev-support/bin/yetus-wrapper  | 9 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/dev-support/bin/create-release b/dev-support/bin/create-release
index db767e2..75b80a1 100755
--- a/dev-support/bin/create-release
+++ b/dev-support/bin/create-release
@@ -551,12 +551,12 @@ function makearelease
 
   # Stage CHANGELOG and RELEASENOTES files
   for i in CHANGELOG RELEASENOTES; do
-    if [[ $(ls -l "${BASEDIR}/src/site/markdown/release/${HADOOP_THIRDPARTY_VERSION}"/${i}*.md | wc -l) == 0 ]]; then
+    if [[ $(ls -l "${BASEDIR}/src/site/markdown/release/thirdparty-${HADOOP_THIRDPARTY_VERSION}"/${i}*.md | wc -l) == 0 ]]; then
       echo "No ${i} found. Continuing..."
       continue;
     fi
     run cp -p \
-      "${BASEDIR}/src/site/markdown/release/${HADOOP_THIRDPARTY_VERSION}"/${i}*.md \
+      "${BASEDIR}/src/site/markdown/release/thirdparty-${HADOOP_THIRDPARTY_VERSION}"/${i}*.md \
       "${ARTIFACTS_DIR}/${i}.md"
   done
 
diff --git a/dev-support/bin/yetus-wrapper b/dev-support/bin/yetus-wrapper
index b0f71f1..ec6a02b 100755
--- a/dev-support/bin/yetus-wrapper
+++ b/dev-support/bin/yetus-wrapper
@@ -176,6 +176,15 @@ if ! (gunzip -c "${TARBALL}.gz" | tar xpf -); then
   exit 1
 fi
 
+if [[ "${WANTED}" == "releasedocmaker" ]]; then
+  # releasedocmaker expects versions to be in form of x.y.z to generate index 
and readme files.
+  # But thirdparty version will be in form of 'thirdparty-x.y.z'
+  if [[ -x 
"${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/lib/releasedocmaker/releasedocmaker/__init__.py"
 ]]; then
+sed -i 's@glob(\"@glob(\"thirdparty-@g' 
"${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/lib/releasedocmaker/releasedocmaker/__init__.py"
+sed -i 's@%s v%s@%s %s@g' 
"${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/lib/releasedocmaker/releasedocmaker/__init__.py"
+  fi
+fi
+
 if [[ -x "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" ]]; then
   popd >/dev/null
   exec "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" "${ARGV[@]}"





[hadoop] branch trunk updated: Revert "YARN-9768. RM Renew Delegation token thread should timeout and retry. Contributed by Manikandan R."

2020-01-21 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b4870bc  Revert "YARN-9768. RM Renew Delegation token thread should 
timeout and retry. Contributed by Manikandan R."
b4870bc is described below

commit b4870bce3a8336dbd638d26b8662037c4d4cdae9
Author: Inigo Goiri 
AuthorDate: Tue Jan 21 17:45:17 2020 -0800

Revert "YARN-9768. RM Renew Delegation token thread should timeout and 
retry. Contributed by Manikandan R."

This reverts commit 0696828a090bc06446f75b29c967697f1d6d845b.
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  14 --
 .../src/main/resources/yarn-default.xml|  24 ---
 .../security/DelegationTokenRenewer.java   | 144 +
 .../security/TestDelegationTokenRenewer.java   | 177 +
 4 files changed, 4 insertions(+), 355 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index be7cc89..06c3fa4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -26,7 +26,6 @@ import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
-import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
@@ -730,19 +729,6 @@ public class YarnConfiguration extends Configuration {
   public static final int DEFAULT_RM_DELEGATION_TOKEN_MAX_CONF_SIZE_BYTES =
   12800;
 
-  public static final String RM_DT_RENEWER_THREAD_TIMEOUT =
-  RM_PREFIX + "delegation-token-renewer.thread-timeout";
-  public static final long DEFAULT_RM_DT_RENEWER_THREAD_TIMEOUT =
-  TimeUnit.SECONDS.toMillis(60); // 60 Seconds
-  public static final String RM_DT_RENEWER_THREAD_RETRY_INTERVAL =
-  RM_PREFIX + "delegation-token-renewer.thread-retry-interval";
-  public static final long DEFAULT_RM_DT_RENEWER_THREAD_RETRY_INTERVAL =
-  TimeUnit.SECONDS.toMillis(60); // 60 Seconds
-  public static final String RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS =
-  RM_PREFIX + "delegation-token-renewer.thread-retry-max-attempts";
-  public static final int DEFAULT_RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS =
-  10;
-
   public static final String RECOVERY_ENABLED = RM_PREFIX + "recovery.enabled";
   public static final boolean DEFAULT_RM_RECOVERY_ENABLED = false;
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 5277be4..c96a7e4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -959,30 +959,6 @@
     </description>
   </property>
 
-  <property>
-    <description>RM DelegationTokenRenewer thread timeout
-    </description>
-    <name>yarn.resourcemanager.delegation-token-renewer.thread-timeout</name>
-    <value>60s</value>
-  </property>
-
-  <property>
-    <description>Default maximum number of retries for each RM DelegationTokenRenewer thread
-    </description>
-    <name>yarn.resourcemanager.delegation-token-renewer.thread-retry-max-attempts</name>
-    <value>10</value>
-  </property>
-
-  <property>
-    <description>Time interval between each RM DelegationTokenRenewer thread retry attempt
-    </description>
-    <name>yarn.resourcemanager.delegation-token-renewer.thread-retry-interval</name>
-    <value>60s</value>
-  </property>
-
   <property>
     <description>Thread pool size for RMApplicationHistoryWriter.
     </description>
     <name>yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size</name>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index fd8935d..d3ed503 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -26,7 +26,6 @@ import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Date;
-import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
 

[hadoop] branch trunk updated (0696828 -> 5e2ce37)

2020-01-21 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 0696828  YARN-9768. RM Renew Delegation token thread should timeout 
and retry. Contributed by Manikandan R.
 add 5e2ce37  HADOOP-16759. Filesystem openFile() builder to take a 
FileStatus param (#1761). Contributed by Steve Loughran

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/fs/AbstractFileSystem.java   |  16 +--
 .../org/apache/hadoop/fs/ChecksumFileSystem.java   |  13 +-
 .../org/apache/hadoop/fs/DelegateToFileSystem.java |  17 +--
 .../java/org/apache/hadoop/fs/FileContext.java |  12 +-
 .../main/java/org/apache/hadoop/fs/FileSystem.java |  41 +++---
 .../org/apache/hadoop/fs/FilterFileSystem.java |  15 +--
 .../main/java/org/apache/hadoop/fs/FilterFs.java   |   9 +-
 .../hadoop/fs/FutureDataInputStreamBuilder.java|  11 ++
 .../fs/impl/FutureDataInputStreamBuilderImpl.java  |  33 -
 .../apache/hadoop/fs/impl/OpenFileParameters.java  |  94 ++
 .../src/site/markdown/filesystem/filesystem.md |  23 +++-
 .../filesystem/fsdatainputstreambuilder.md |  41 ++
 .../fs/contract/AbstractContractOpenTest.java  |   9 ++
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java| 120 ++
 .../hadoop/fs/s3a/ITestS3ARemoteFileChanged.java   | 141 +++--
 .../fs/s3a/ITestS3GuardOutOfBandOperations.java|  56 +---
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java |  27 
 .../apache/hadoop/fs/s3a/select/ITestS3Select.java |   7 +-
 18 files changed, 544 insertions(+), 141 deletions(-)
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java
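For orientation, the headline change is that openFile() can now be handed an existing FileStatus, letting stores such as S3A skip a redundant HEAD probe. A minimal sketch of the call pattern (paths and setup are illustrative; the builder surface is as summarized above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenFileSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path path = new Path("/tmp/data.csv");
        // A status already obtained earlier (e.g. from a directory listing).
        FileStatus status = fs.getFileStatus(path);
        try (FSDataInputStream in = fs.openFile(path)
            .withFileStatus(status)   // the new builder parameter
            .build()                  // returns CompletableFuture<FSDataInputStream>
            .get()) {
          System.out.println(in.read());
        }
      }
    }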





[hadoop] branch trunk updated: YARN-9768. RM Renew Delegation token thread should timeout and retry. Contributed by Manikandan R.

2020-01-21 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0696828  YARN-9768. RM Renew Delegation token thread should timeout 
and retry. Contributed by Manikandan R.
0696828 is described below

commit 0696828a090bc06446f75b29c967697f1d6d845b
Author: Inigo Goiri 
AuthorDate: Tue Jan 21 13:41:01 2020 -0800

YARN-9768. RM Renew Delegation token thread should timeout and retry. 
Contributed by Manikandan R.
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  14 ++
 .../src/main/resources/yarn-default.xml|  24 +++
 .../security/DelegationTokenRenewer.java   | 144 -
 .../security/TestDelegationTokenRenewer.java   | 177 -
 4 files changed, 355 insertions(+), 4 deletions(-)
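For readers skimming the archive, a hedged sketch of consuming the new settings (the constant names come from the YarnConfiguration hunk below; the harness around them is illustrative):

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RenewerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();
        // Duration values such as "60s" are parsed by getTimeDuration().
        long timeoutMs = conf.getTimeDuration(
            YarnConfiguration.RM_DT_RENEWER_THREAD_TIMEOUT,
            YarnConfiguration.DEFAULT_RM_DT_RENEWER_THREAD_TIMEOUT,
            TimeUnit.MILLISECONDS);
        int maxAttempts = conf.getInt(
            YarnConfiguration.RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS,
            YarnConfiguration.DEFAULT_RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS);
        System.out.println(timeoutMs + " ms timeout, " + maxAttempts + " retries");
      }
    }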

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 06c3fa4..be7cc89 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -26,6 +26,7 @@ import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
@@ -729,6 +730,19 @@ public class YarnConfiguration extends Configuration {
   public static final int DEFAULT_RM_DELEGATION_TOKEN_MAX_CONF_SIZE_BYTES =
   12800;
 
+  public static final String RM_DT_RENEWER_THREAD_TIMEOUT =
+  RM_PREFIX + "delegation-token-renewer.thread-timeout";
+  public static final long DEFAULT_RM_DT_RENEWER_THREAD_TIMEOUT =
+  TimeUnit.SECONDS.toMillis(60); // 60 Seconds
+  public static final String RM_DT_RENEWER_THREAD_RETRY_INTERVAL =
+  RM_PREFIX + "delegation-token-renewer.thread-retry-interval";
+  public static final long DEFAULT_RM_DT_RENEWER_THREAD_RETRY_INTERVAL =
+  TimeUnit.SECONDS.toMillis(60); // 60 Seconds
+  public static final String RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS =
+  RM_PREFIX + "delegation-token-renewer.thread-retry-max-attempts";
+  public static final int DEFAULT_RM_DT_RENEWER_THREAD_RETRY_MAX_ATTEMPTS =
+  10;
+
   public static final String RECOVERY_ENABLED = RM_PREFIX + "recovery.enabled";
   public static final boolean DEFAULT_RM_RECOVERY_ENABLED = false;
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index c96a7e4..5277be4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -959,6 +959,30 @@
     </description>
   </property>
 
+  <property>
+    <description>RM DelegationTokenRenewer thread timeout
+    </description>
+    <name>yarn.resourcemanager.delegation-token-renewer.thread-timeout</name>
+    <value>60s</value>
+  </property>
+
+  <property>
+    <description>Default maximum number of retries for each RM DelegationTokenRenewer thread
+    </description>
+    <name>yarn.resourcemanager.delegation-token-renewer.thread-retry-max-attempts</name>
+    <value>10</value>
+  </property>
+
+  <property>
+    <description>Time interval between each RM DelegationTokenRenewer thread retry attempt
+    </description>
+    <name>yarn.resourcemanager.delegation-token-renewer.thread-retry-interval</name>
+    <value>60s</value>
+  </property>
+
   <property>
     <description>Thread pool size for RMApplicationHistoryWriter.
     </description>
     <name>yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size</name>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index d3ed503..fd8935d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -26,6 +26,7 @@ import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Date;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
@@ -36,10 +37,12 @@ import java.util.Timer;
 import 

[hadoop] branch trunk updated: HDFS-15092. TestRedudantBlocks#testProcessOverReplicatedAndRedudantBlock sometimes fails. Contributed by Fei Hui.

2020-01-21 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8cfc367  HDFS-15092. 
TestRedudantBlocks#testProcessOverReplicatedAndRedudantBlock sometimes fails. 
Contributed by Fei Hui.
8cfc367 is described below

commit 8cfc3673dcbf1901ca6fad11b5c996e54e32ed6b
Author: Inigo Goiri 
AuthorDate: Tue Jan 21 13:29:20 2020 -0800

HDFS-15092. TestRedudantBlocks#testProcessOverReplicatedAndRedudantBlock 
sometimes fails. Contributed by Fei Hui.
---
 .../hadoop/hdfs/server/namenode/TestRedudantBlocks.java  | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java
index ac25da3..1a1fc16 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java
@@ -35,8 +35,10 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -108,18 +110,26 @@ public class TestRedudantBlocks {
     blk.setBlockId(groupId + 2);
     cluster.injectBlocks(i, Arrays.asList(blk), bpid);
 
+    BlockInfoStriped blockInfo =
+        (BlockInfoStriped)cluster.getNamesystem().getBlockManager()
+            .getStoredBlock(new Block(groupId));
     // update blocksMap
     cluster.triggerBlockReports();
     // delete redundant block
     cluster.triggerHeartbeats();
     //wait for IBR
-    Thread.sleep(1100);
+    GenericTestUtils.waitFor(
+        () -> cluster.getNamesystem().getBlockManager()
+            .countNodes(blockInfo).liveReplicas() >= groupSize -1,
+        500, 1);
 
     // trigger reconstruction
     cluster.triggerHeartbeats();
-
     //wait for IBR
-    Thread.sleep(1100);
+    GenericTestUtils.waitFor(
+        () -> cluster.getNamesystem().getBlockManager()
+            .countNodes(blockInfo).liveReplicas() >= groupSize,
+        500, 1);
 
     HashSet<Long> blockIdsSet = new HashSet<>();
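The change above replaces fixed Thread.sleep() calls with bounded polling; the general GenericTestUtils.waitFor pattern looks like this (a standalone sketch with a stand-in condition, not the test itself):

    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.hadoop.test.GenericTestUtils;

    public class WaitForSketch {
      public static void main(String[] args) throws Exception {
        AtomicInteger liveReplicas = new AtomicInteger(0);
        new Thread(() -> liveReplicas.set(9)).start();  // stand-in for the cluster catching up
        // Poll every 500 ms; throw TimeoutException if still false after 10 s.
        GenericTestUtils.waitFor(() -> liveReplicas.get() >= 9, 500, 10000);
        System.out.println("condition reached");
      }
    }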
 





[hadoop] branch trunk updated: HDFS-15126. TestDatanodeRegistration#testForcedRegistration fails intermittently. Contributed by Ahmed Hussein.

2020-01-21 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b657822  HDFS-15126. TestDatanodeRegistration#testForcedRegistration 
fails intermittently. Contributed by Ahmed Hussein.
b657822 is described below

commit b657822b98781f042fad5281c20123e803ebae0f
Author: Inigo Goiri 
AuthorDate: Tue Jan 21 13:22:53 2020 -0800

HDFS-15126. TestDatanodeRegistration#testForcedRegistration fails 
intermittently. Contributed by Ahmed Hussein.
---
 .../org/apache/hadoop/hdfs/TestDatanodeRegistration.java | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java
index 37042db..77aeff4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java
@@ -364,14 +364,16 @@ public class TestDatanodeRegistration {
   waitForHeartbeat(dn, dnd);
   assertTrue(dnd.isRegistered());
   assertSame(lastReg, dn.getDNRegistrationForBP(bpId));
-  assertTrue(waitForBlockReport(dn, dnd));
+  assertTrue("block report is not processed for DN " + dnd,
+  waitForBlockReport(dn, dnd));
   assertTrue(dnd.isRegistered());
   assertSame(lastReg, dn.getDNRegistrationForBP(bpId));
 
   // check that block report is not processed and registration didn't
   // change.
   dnd.setForceRegistration(true);
-  assertFalse(waitForBlockReport(dn, dnd));
+  assertFalse("block report is processed for DN " + dnd,
+  waitForBlockReport(dn, dnd));
   assertFalse(dnd.isRegistered());
   assertSame(lastReg, dn.getDNRegistrationForBP(bpId));
 
@@ -382,7 +384,8 @@ public class TestDatanodeRegistration {
   newReg = dn.getDNRegistrationForBP(bpId);
   assertNotSame(lastReg, newReg);
   lastReg = newReg;
-  assertTrue(waitForBlockReport(dn, dnd));
+  assertTrue("block report is not processed for DN " + dnd,
+  waitForBlockReport(dn, dnd));
   assertTrue(dnd.isRegistered());
   assertSame(lastReg, dn.getDNRegistrationForBP(bpId));
 
@@ -447,8 +450,9 @@ public class TestDatanodeRegistration {
 public Boolean get() {
   return lastCount != storage.getBlockReportCount();
 }
-  }, 10, 2000);
+  }, 10, 6000);
 } catch (TimeoutException te) {
+  LOG.error("Timeout waiting for block report for {}", dnd);
   return false;
 }
 return true;





[hadoop-thirdparty] branch trunk updated: HADOOP-16821. [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded prefix instead of 'protobuf_3_7' (#3)

2020-01-21 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


The following commit(s) were added to refs/heads/trunk by this push:
 new eac5a3d  HADOOP-16821. [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' 
shaded prefix instead of 'protobuf_3_7' (#3)
eac5a3d is described below

commit eac5a3df55fcc3b1fd4b50cf2fa129250d4c384b
Author: Vinayakumar B 
AuthorDate: Tue Jan 21 22:59:27 2020 +0530

HADOOP-16821. [pb-upgrade] Use 'o.a.h.thirdparty.protobuf' shaded prefix 
instead of 'protobuf_3_7' (#3)
---
 hadoop-shaded-protobuf_3_7/pom.xml | 2 +-
 pom.xml| 1 +
 src/site/markdown/index.md.vm  | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-shaded-protobuf_3_7/pom.xml b/hadoop-shaded-protobuf_3_7/pom.xml
index 102625c..5a622cd 100644
--- a/hadoop-shaded-protobuf_3_7/pom.xml
+++ b/hadoop-shaded-protobuf_3_7/pom.xml
@@ -74,7 +74,7 @@
               <relocations>
                 <relocation>
                   <pattern>com/google/protobuf</pattern>
-                  <shadedPattern>${shaded.prefix}.protobuf_3_7</shadedPattern>
+                  <shadedPattern>${protobuf.shade.prefix}</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>google/</pattern>
diff --git a/pom.xml b/pom.xml
index 155a0a2..0754cb6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -93,6 +93,7 @@
 
 
     <shaded.prefix>org.apache.hadoop.thirdparty</shaded.prefix>
+    <protobuf.shade.prefix>${shaded.prefix}.protobuf</protobuf.shade.prefix>
     <protobuf.version>3.7.1</protobuf.version>
 
 
diff --git a/src/site/markdown/index.md.vm b/src/site/markdown/index.md.vm
index adafd02..f7acb74 100644
--- a/src/site/markdown/index.md.vm
+++ b/src/site/markdown/index.md.vm
@@ -43,3 +43,4 @@ This page provides an overview of the major changes.
 Protobuf-java
 -------------
 Google Protobuf's 3.7.1 jar is available as *org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_7* artifact.
+*com.google.protobuf* package is shaded as *org.apache.hadoop.thirdparty.protobuf*.
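Downstream code consuming the shaded artifact now imports the relocated package; a minimal sketch (ByteString is chosen only as a familiar protobuf type):

    // Relocated from com.google.protobuf by hadoop-shaded-protobuf_3_7.
    import org.apache.hadoop.thirdparty.protobuf.ByteString;

    public class ShadedProtobufSketch {
      public static void main(String[] args) {
        ByteString payload = ByteString.copyFromUtf8("hello");
        System.out.println(payload.size());  // 5
      }
    }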





[hadoop] branch trunk updated: HADOOP-16346. Stabilize S3A OpenSSL support.

2020-01-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f206b73  HADOOP-16346. Stabilize S3A OpenSSL support.
f206b73 is described below

commit f206b736f0b370d212a399937c7a84e432f12eb5
Author: Sahil Takiar 
AuthorDate: Tue Jan 21 16:37:51 2020 +

HADOOP-16346. Stabilize S3A OpenSSL support.

Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.

For details on how to use this, consult the performance document
in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on 
Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
---
 hadoop-common-project/hadoop-common/pom.xml|  5 ++
 .../security/ssl/DelegatingSSLSocketFactory.java   | 10 
 .../src/main/resources/core-default.xml| 15 --
 hadoop-project/pom.xml |  8 ++-
 hadoop-tools/hadoop-aws/pom.xml|  5 ++
 .../apache/hadoop/fs/s3a/impl/NetworkBinding.java  |  7 ---
 .../site/markdown/tools/hadoop-aws/performance.md  | 61 ++
 .../fs/contract/s3a/ITestS3AContractSeek.java  | 15 +-
 hadoop-tools/hadoop-azure/pom.xml  |  2 +-
 9 files changed, 103 insertions(+), 25 deletions(-)
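Opting in is a one-line configuration change; a sketch (the key and value are from this patch's core-default.xml entry, and wildfly-openssl must be on the classpath):

    import org.apache.hadoop.conf.Configuration;

    public class OpenSslChannelModeSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Experimental channel mode introduced by HADOOP-16346.
        conf.set("fs.s3a.ssl.channel.mode", "openssl");
        System.out.println(conf.get("fs.s3a.ssl.channel.mode"));
      }
    }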

diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 896ac42..aff03c2 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -346,6 +346,11 @@
     <dependency>
       <groupId>org.wildfly.openssl</groupId>
       <artifactId>wildfly-openssl</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.wildfly.openssl</groupId>
+      <artifactId>wildfly-openssl-java</artifactId>
       <scope>provided</scope>
     </dependency>
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
index ad97a99..c961364 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
@@ -58,6 +58,10 @@ import org.wildfly.openssl.SSL;
  * SSL with no modification to the list of enabled ciphers.
  *   </li>
  * </ol>
+ *
+ * In order to load OpenSSL, applications must ensure the wildfly-openssl
+ * artifact is on the classpath. Currently, only ABFS and S3A provide
+ * wildfly-openssl as a runtime dependency.
  */
 public final class DelegatingSSLSocketFactory extends SSLSocketFactory {
 
@@ -170,8 +174,14 @@ public final class DelegatingSSLSocketFactory extends SSLSocketFactory {
         OpenSSLProvider.register();
         openSSLProviderRegistered = true;
       }
+      java.util.logging.Logger logger = java.util.logging.Logger.getLogger(
+          SSL.class.getName());
+      logger.setLevel(Level.WARNING);
       ctx = SSLContext.getInstance("openssl.TLS");
       ctx.init(null, null, null);
+      // Strong reference needs to be kept to logger until initialization of
+      // SSLContext finished (see HADOOP-16174):
+      logger.setLevel(Level.INFO);
       channelMode = SSLChannelMode.OpenSSL;
       break;
     case Default_JSSE:
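The WARNING/INFO dance above leans on a java.util.logging subtlety: JUL holds loggers weakly, so the local variable is what keeps the level override alive during SSLContext setup. The same pattern in isolation (logger name illustrative):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class JulStrongRefSketch {
      public static void main(String[] args) {
        // Keep a strong reference; otherwise the Logger (and its level)
        // can be garbage-collected mid-initialization (cf. HADOOP-16174).
        Logger logger = Logger.getLogger("org.wildfly.openssl.SSL");
        logger.setLevel(Level.WARNING);  // silence noisy init output
        // ... initialization that would otherwise log at INFO ...
        logger.setLevel(Level.INFO);     // restore once done
      }
    }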
diff --git a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 9aadd74..3e9beb9 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1978,11 +1978,16 @@
   <description>
     If secure connections to S3 are enabled, configures the SSL
     implementation used to encrypt connections to S3. Supported values are:
-    "default_jsse" and "default_jsse_with_gcm". "default_jsse" uses the Java
-    Secure Socket Extension package (JSSE). However, when running on Java 8,
-    the GCM cipher is removed from the list of enabled ciphers. This is due
-    to performance issues with GCM in Java 8. "default_jsse_with_gcm" uses
-    the JSSE with the default list of cipher suites.
+    "default_jsse",

[hadoop] branch branch-2.10 updated: HDFS-15125. Pull back HDFS-11353, HDFS-13993, HDFS-13945, and HDFS-14324 to branch-2.10. Contributed by Jim Brennan.

2020-01-21 Thread kihwal
This is an automated email from the ASF dual-hosted git repository.

kihwal pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new c5d43b6  HDFS-15125. Pull back HDFS-11353, HDFS-13993, HDFS-13945, and 
HDFS-14324 to branch-2.10. Contributed by Jim Brennan.
c5d43b6 is described below

commit c5d43b65a904d3b86909b7e3509336d7b4f07a67
Author: Kihwal Lee 
AuthorDate: Tue Jan 21 09:59:14 2020 -0600

HDFS-15125. Pull back HDFS-11353, HDFS-13993, HDFS-13945, and HDFS-14324
to branch-2.10. Contributed by Jim Brennan.
---
 .../datanode/TestDataNodeHotSwapVolumes.java   |  9 +--
 .../server/datanode/TestDataNodeVolumeFailure.java | 73 ++
 .../TestDataNodeVolumeFailureReporting.java| 12 +++-
 .../TestDataNodeVolumeFailureToleration.java   |  6 ++
 4 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
index ea28ea4..93c1242 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
@@ -939,8 +939,7 @@ public class TestDataNodeHotSwapVolumes {
*/
   @Test(timeout=6)
   public void testDirectlyReloadAfterCheckDiskError()
-  throws IOException, TimeoutException, InterruptedException,
-  ReconfigurationException {
+  throws Exception {
 // The test uses DataNodeTestUtils#injectDataDirFailure() to simulate
 // volume failures which is currently not supported on Windows.
 assumeTrue(!Path.WINDOWS);
@@ -959,11 +958,7 @@ public class TestDataNodeHotSwapVolumes {
 
 DataNodeTestUtils.injectDataDirFailure(dirToFail);
 // Call and wait DataNode to detect disk failure.
-long lastDiskErrorCheck = dn.getLastDiskErrorCheck();
-dn.checkDiskErrorAsync(failedVolume);
-while (dn.getLastDiskErrorCheck() == lastDiskErrorCheck) {
-  Thread.sleep(100);
-}
+DataNodeTestUtils.waitForDiskError(dn, failedVolume);
 
 createFile(new Path("/test1"), 32, (short)2);
 assertEquals(used, failedVolume.getDfsUsed());
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
index bafc7e0..a0ffe20 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
@@ -35,16 +35,15 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.io.filefilter.TrueFileFilter;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.ReconfigurationException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.FsTracer;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.BlockReader;
-import org.apache.hadoop.hdfs.client.impl.BlockReaderFactory;
 import org.apache.hadoop.hdfs.ClientContext;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
@@ -52,6 +51,7 @@ import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.RemotePeerFactory;
+import org.apache.hadoop.hdfs.client.impl.BlockReaderFactory;
 import org.apache.hadoop.hdfs.client.impl.DfsClientConf;
 import org.apache.hadoop.hdfs.net.Peer;
 import org.apache.hadoop.hdfs.protocol.Block;
@@ -75,20 +75,17 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Shell;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.commons.io.filefilter.TrueFileFilter;
-
-import com.google.common.base.Supplier;
-
 import org.junit.After;
 import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
 import org.junit.internal.AssumptionViolatedException;
-
+import org.junit.rules.Timeout;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.base.Supplier;
+
 /**
  * Fine-grain testing of block files and locations after volume failure.
  */
@@ -114,6 +111,10 @@ public class TestDataNodeVolumeFailure {
   

[hadoop] branch branch-2.10 updated: HADOOP-16793. Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure().

2020-01-21 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new dc010b9  HADOOP-16793. Redefine log level when ipc connection 
interrupted in Client#handleSaslConnectionFailure().
dc010b9 is described below

commit dc010b98443c58e5f9cb41e9aebb2016cf1a0b26
Author: sunlisheng 
AuthorDate: Wed Jan 8 10:20:36 2020 +0800

HADOOP-16793. Redefine log level when ipc connection interrupted in 
Client#handleSaslConnectionFailure().

Signed-off-by: sunlisheng 
(cherry picked from commit d887e49dd4ed2b94bbb53b7608586f5da6cee037)
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 5b86aa6..bb19f79 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -763,8 +763,17 @@ public class Client implements AutoCloseable {
   throw (IOException) new IOException(msg).initCause(ex);
 }
   } else {
-LOG.warn("Exception encountered while connecting to "
-+ "the server : " + ex);
+// With RequestHedgingProxyProvider, one rpc call will send 
multiple
+// requests to all namenodes. After one request return 
successfully,
+// all other requests will be interrupted. It's not a big problem,
+// and should not print a warning log.
+if (ex instanceof InterruptedIOException) {
+  LOG.debug("Exception encountered while connecting to the server",
+  ex);
+} else {
+  LOG.warn("Exception encountered while connecting to the server ",
+  ex);
+}
   }
   if (ex instanceof RemoteException)
 throw (RemoteException) ex;
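Extracted as a standalone sketch for readers comparing the branch pushes below (only the logging branch is shown; the class and method names are illustrative):

    import java.io.IOException;
    import java.io.InterruptedIOException;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SaslFailureLoggingSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(SaslFailureLoggingSketch.class);

      // Expected interruptions (e.g. hedged requests cancelling the losers)
      // go to DEBUG; genuine connection failures stay at WARN.
      static void logConnectionFailure(IOException ex) {
        if (ex instanceof InterruptedIOException) {
          LOG.debug("Exception encountered while connecting to the server", ex);
        } else {
          LOG.warn("Exception encountered while connecting to the server", ex);
        }
      }
    }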





[hadoop] branch branch-2 updated: Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-21 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 55139ee  Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
55139ee is described below

commit 55139ee194bbdfda8343b13292bd1ad3ea13fd38
Author: sunlisheng 
AuthorDate: Wed Jan 8 10:20:36 2020 +0800

Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()

Signed-off-by: sunlisheng 
(cherry picked from commit d887e49dd4ed2b94bbb53b7608586f5da6cee037)
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 5b86aa6..bb19f79 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -763,8 +763,17 @@ public class Client implements AutoCloseable {
   throw (IOException) new IOException(msg).initCause(ex);
 }
   } else {
-LOG.warn("Exception encountered while connecting to "
-+ "the server : " + ex);
+// With RequestHedgingProxyProvider, one rpc call will send 
multiple
+// requests to all namenodes. After one request return 
successfully,
+// all other requests will be interrupted. It's not a big problem,
+// and should not print a warning log.
+if (ex instanceof InterruptedIOException) {
+  LOG.debug("Exception encountered while connecting to the server",
+  ex);
+} else {
+  LOG.warn("Exception encountered while connecting to the server ",
+  ex);
+}
   }
   if (ex instanceof RemoteException)
 throw (RemoteException) ex;





[hadoop] branch branch-3.1 updated: Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-21 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 96c653d  Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
96c653d is described below

commit 96c653d0d5d287627ca20136f6b951427d4bd631
Author: sunlisheng 
AuthorDate: Wed Jan 8 10:20:36 2020 +0800

Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()

Signed-off-by: sunlisheng 
(cherry picked from commit d887e49dd4ed2b94bbb53b7608586f5da6cee037)
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 32e71a0..3be5707 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -761,8 +761,17 @@ public class Client implements AutoCloseable {
   throw (IOException) new IOException(msg).initCause(ex);
 }
   } else {
-LOG.warn("Exception encountered while connecting to "
-+ "the server : " + ex);
+// With RequestHedgingProxyProvider, one rpc call will send 
multiple
+// requests to all namenodes. After one request return 
successfully,
+// all other requests will be interrupted. It's not a big problem,
+// and should not print a warning log.
+if (ex instanceof InterruptedIOException) {
+  LOG.debug("Exception encountered while connecting to the server",
+  ex);
+} else {
+  LOG.warn("Exception encountered while connecting to the server ",
+  ex);
+}
   }
   if (ex instanceof RemoteException)
 throw (RemoteException) ex;





[hadoop] branch branch-3.2 updated: Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-21 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new c36fbcb  Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
c36fbcb is described below

commit c36fbcbf1711d01f2b586795b52b47b112c51612
Author: sunlisheng 
AuthorDate: Wed Jan 8 10:20:36 2020 +0800

Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()

Signed-off-by: sunlisheng 
(cherry picked from commit d887e49dd4ed2b94bbb53b7608586f5da6cee037)
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 32e71a0..3be5707 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -761,8 +761,17 @@ public class Client implements AutoCloseable {
   throw (IOException) new IOException(msg).initCause(ex);
 }
   } else {
-LOG.warn("Exception encountered while connecting to "
-+ "the server : " + ex);
+// With RequestHedgingProxyProvider, one rpc call will send 
multiple
+// requests to all namenodes. After one request return 
successfully,
+// all other requests will be interrupted. It's not a big problem,
+// and should not print a warning log.
+if (ex instanceof InterruptedIOException) {
+  LOG.debug("Exception encountered while connecting to the server",
+  ex);
+} else {
+  LOG.warn("Exception encountered while connecting to the server ",
+  ex);
+}
   }
   if (ex instanceof RemoteException)
 throw (RemoteException) ex;





[hadoop] branch trunk updated: Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-21 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d887e49  Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
d887e49 is described below

commit d887e49dd4ed2b94bbb53b7608586f5da6cee037
Author: sunlisheng 
AuthorDate: Wed Jan 8 10:20:36 2020 +0800

Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()

Signed-off-by: sunlisheng 
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 358c0d7..688eed6 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -761,8 +761,17 @@ public class Client implements AutoCloseable {
   throw (IOException) new IOException(msg).initCause(ex);
 }
   } else {
-LOG.warn("Exception encountered while connecting to "
-+ "the server : " + ex);
+// With RequestHedgingProxyProvider, one rpc call will send 
multiple
+// requests to all namenodes. After one request return 
successfully,
+// all other requests will be interrupted. It's not a big problem,
+// and should not print a warning log.
+if (ex instanceof InterruptedIOException) {
+  LOG.debug("Exception encountered while connecting to the server",
+  ex);
+} else {
+  LOG.warn("Exception encountered while connecting to the server ",
+  ex);
+}
   }
   if (ex instanceof RemoteException)
 throw (RemoteException) ex;





[hadoop] branch branch-2.8 updated: HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new 1ffe08b  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.
1ffe08b is described below

commit 1ffe08b45bef6bc42d4fc961a3a2dbe98417bde7
Author: Akira Ajisaka 
AuthorDate: Tue Jan 21 18:03:24 2020 +0900

HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode 
in the config of maven surefire plugin. Contributed by Xieming Li.

(cherry picked from commit f6d20daf404fab28b596171172afa4558facb504)
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index b345c9d..10184ad 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -416,7 +416,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>once</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>true</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <systemPropertyVariables>
             <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index a42a233..11a6c38 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -124,7 +124,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>always</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>false</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <argLine>-Xmx1024m</argLine>





[hadoop] branch branch-2.9 updated: HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 15a3316  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.
15a3316 is described below

commit 15a3316902b7190e7265ba9f1b7ebaec58ba5b90
Author: Akira Ajisaka 
AuthorDate: Tue Jan 21 18:03:24 2020 +0900

HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode 
in the config of maven surefire plugin. Contributed by Xieming Li.

(cherry picked from commit f6d20daf404fab28b596171172afa4558facb504)
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 8926506..df6335f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -416,7 +416,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>once</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>true</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <systemPropertyVariables>
             <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index d2504fa..6ba686d 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -124,7 +124,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>always</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>false</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <argLine>-Xmx1024m</argLine>





[hadoop] branch branch-2.10 updated: HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 0c0be3b  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.
0c0be3b is described below

commit 0c0be3b9f055a5c08258ccd22440021f3222ec24
Author: Akira Ajisaka 
AuthorDate: Tue Jan 21 18:03:24 2020 +0900

HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode 
in the config of maven surefire plugin. Contributed by Xieming Li.

(cherry picked from commit f6d20daf404fab28b596171172afa4558facb504)
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index ac9cdb0..cfd8c6f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -416,7 +416,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>once</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>true</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <systemPropertyVariables>
             <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index 1ad1a46..e7e94ed 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -124,7 +124,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>always</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>false</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <argLine>-Xmx1024m</argLine>





[hadoop] branch branch-3.2 updated (429d5db -> d4f75e2)

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 429d5db  HADOOP-16785. followup to abfs close() fix.
 add d4f75e2  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

No new revisions were added by this update.

Summary of changes:
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)





[hadoop] branch branch-3.1 updated: HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 2c84ea9  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.
2c84ea9 is described below

commit 2c84ea96e3672e26d4f461a580fc55f96b06f1fd
Author: Akira Ajisaka 
AuthorDate: Tue Jan 21 18:03:24 2020 +0900

HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode 
in the config of maven surefire plugin. Contributed by Xieming Li.

(cherry picked from commit f6d20daf404fab28b596171172afa4558facb504)
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index c305c56..5ba45f8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -369,7 +369,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>once</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>true</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <systemPropertyVariables>
             <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index bd3873f..a28c874 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -123,7 +123,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>always</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>false</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <argLine>-Xmx1024m</argLine>





[hadoop] branch trunk updated: HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.

2020-01-21 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f6d20da  HADOOP-16808. Use forkCount and reuseForks parameters instead 
of forkMode in the config of maven surefire plugin. Contributed by Xieming Li.
f6d20da is described below

commit f6d20daf404fab28b596171172afa4558facb504
Author: Akira Ajisaka 
AuthorDate: Tue Jan 21 18:03:24 2020 +0900

HADOOP-16808. Use forkCount and reuseForks parameters instead of forkMode 
in the config of maven surefire plugin. Contributed by Xieming Li.
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 3 ++-
 hadoop-tools/hadoop-distcp/pom.xml | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 69b2634..d97e8d7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -368,7 +368,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>once</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>true</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <systemPropertyVariables>
             <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
diff --git a/hadoop-tools/hadoop-distcp/pom.xml b/hadoop-tools/hadoop-distcp/pom.xml
index fe1681b..cce4e47 100644
--- a/hadoop-tools/hadoop-distcp/pom.xml
+++ b/hadoop-tools/hadoop-distcp/pom.xml
@@ -128,7 +128,8 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
-          <forkMode>always</forkMode>
+          <forkCount>1</forkCount>
+          <reuseForks>false</reuseForks>
           <forkedProcessTimeoutInSeconds>600</forkedProcessTimeoutInSeconds>
           <argLine>-Xmx1024m</argLine>

