[hadoop] branch trunk updated: HDFS-14879. Header was wrong in Snapshot web UI. Contributed by hemanthboyina

2019-10-04 Thread tasanuma
This is an automated email from the ASF dual-hosted git repository.

tasanuma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b23bdaf  HDFS-14879. Header was wrong in Snapshot web UI. Contributed 
by hemanthboyina
b23bdaf is described below

commit b23bdaf085dbc561c785cef1613bacaf6735d909
Author: Takanobu Asanuma 
AuthorDate: Fri Oct 4 16:47:06 2019 +0900

HDFS-14879. Header was wrong in Snapshot web UI. Contributed by 
hemanthboyina
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index 86eaee9..05c04b5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -278,7 +278,7 @@
 
 
 
-Snapshotted directories: {@size key=Snapshots}{/size}
+Snapshots: {@size key=Snapshots}{/size}
 
 
 





[hadoop] branch branch-3.1 updated: HDFS-14879. Header was wrong in Snapshot web UI. Contributed by hemanthboyina

2019-10-04 Thread tasanuma
This is an automated email from the ASF dual-hosted git repository.

tasanuma pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new c48f273  HDFS-14879. Header was wrong in Snapshot web UI. Contributed 
by hemanthboyina
c48f273 is described below

commit c48f2730b407fe95a1e9196f93f5ad7aac57d1ed
Author: Takanobu Asanuma 
AuthorDate: Fri Oct 4 16:47:06 2019 +0900

HDFS-14879. Header was wrong in Snapshot web UI. Contributed by 
hemanthboyina

(cherry picked from commit b23bdaf085dbc561c785cef1613bacaf6735d909)
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index 33410da..7fb12b2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -271,7 +271,7 @@
 
 
 
-Snapshotted directories: {@size key=Snapshots}{/size}
+Snapshots: {@size key=Snapshots}{/size}
 
 
 





[hadoop] branch branch-3.2 updated: HDFS-14879. Header was wrong in Snapshot web UI. Contributed by hemanthboyina

2019-10-04 Thread tasanuma
This is an automated email from the ASF dual-hosted git repository.

tasanuma pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 732a68c  HDFS-14879. Header was wrong in Snapshot web UI. Contributed 
by hemanthboyina
732a68c is described below

commit 732a68cfb4785bf6e3cf04be784d60e172e3fbbb
Author: Takanobu Asanuma 
AuthorDate: Fri Oct 4 16:47:06 2019 +0900

HDFS-14879. Header was wrong in Snapshot web UI. Contributed by 
hemanthboyina

(cherry picked from commit b23bdaf085dbc561c785cef1613bacaf6735d909)
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index eeceb05..2350b5e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -273,7 +273,7 @@
 
 
 
-Snapshotted directories: {@size key=Snapshots}{/size}
+Snapshots: {@size key=Snapshots}{/size}
 
 
 





[hadoop] branch trunk updated: YARN-9782. Avoid DNS resolution while running SLS. Contributed by Abhishek Modi.

2019-10-04 Thread abmodi
This is an automated email from the ASF dual-hosted git repository.

abmodi pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2478cba  YARN-9782. Avoid DNS resolution while running SLS. 
Contributed by Abhishek Modi.
2478cba is described below

commit 2478cbafe6deaf3a190360120234610d6208394b
Author: Abhishek Modi 
AuthorDate: Fri Oct 4 14:45:10 2019 +0530

YARN-9782. Avoid DNS resolution while running SLS. Contributed by Abhishek 
Modi.
---
 .../java/org/apache/hadoop/yarn/sls/SLSRunner.java | 25 ++
 .../hadoop/yarn/sls/conf/SLSConfiguration.java |  3 ++
 .../apache/hadoop/yarn/sls/BaseSLSRunnerTest.java  |  4 ++-
 .../org/apache/hadoop/yarn/sls/TestSLSRunner.java  | 39 ++
 4 files changed, 70 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
index 6ed28d9..f99038e 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
@@ -22,6 +22,7 @@ import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.InputStreamReader;
 import java.io.Reader;
+import java.security.Security;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -150,6 +151,10 @@ public class SLSRunner extends Configured implements Tool {
 SLS, RUMEN, SYNTH
   }
 
+  public static final String NETWORK_CACHE_TTL = "networkaddress.cache.ttl";
+  public static final String NETWORK_NEGATIVE_CACHE_TTL =
+  "networkaddress.cache.negative.ttl";
+
   private TraceType inputType;
   private SynthTraceJobProducer stjp;
 
@@ -241,6 +246,9 @@ public class SLSRunner extends Configured implements Tool {
 
   public void start() throws IOException, ClassNotFoundException, 
YarnException,
   InterruptedException {
+
+enableDNSCaching(getConf());
+
 // start resource manager
 startRM();
 // start node managers
@@ -260,6 +268,23 @@ public class SLSRunner extends Configured implements Tool {
 runner.start();
   }
 
+  /**
+   * Enables DNS Caching based on config. If DNS caching is enabled, then set
+   * the DNS cache to infinite time. Since in SLS random nodes are added, DNS
+   * resolution can take significant time which can cause erroneous results.
+   * For more details, check <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html">
+   * Java Networking Properties</a>.
+   * @param conf Configuration object.
+   */
+  static void enableDNSCaching(Configuration conf) {
+if (conf.getBoolean(SLSConfiguration.DNS_CACHING_ENABLED,
+SLSConfiguration.DNS_CACHING_ENABLED_DEFAULT)) {
+  Security.setProperty(NETWORK_CACHE_TTL, "-1");
+  Security.setProperty(NETWORK_NEGATIVE_CACHE_TTL, "-1");
+}
+  }
+
   private void startRM() throws ClassNotFoundException, YarnException {
 Configuration rmConf = new YarnConfiguration(getConf());
 String schedulerClass = rmConf.get(YarnConfiguration.RM_SCHEDULER);
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/conf/SLSConfiguration.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/conf/SLSConfiguration.java
index 34b89b6..119960c 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/conf/SLSConfiguration.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/conf/SLSConfiguration.java
@@ -28,6 +28,9 @@ import org.apache.hadoop.yarn.api.records.Resource;
 public class SLSConfiguration {
   // sls
   public static final String PREFIX = "yarn.sls.";
+  public static final String DNS_CACHING_ENABLED = PREFIX
+  + "dns.caching.enabled";
+  public static final boolean DNS_CACHING_ENABLED_DEFAULT = false;
   // runner
   public static final String RUNNER_PREFIX = PREFIX + "runner.";
   public static final String RUNNER_POOL_SIZE = RUNNER_PREFIX + "pool.size";
diff --git 
a/hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java
 
b/hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java
index 668be14..bfbd592 100644
--- 
a/hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java
+++ 
b/hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java
@@ -64,7 +64,9 @@ public abstract class BaseSLSRunnerTest {
 
   @After
   public void tearDown() throws InterruptedException {
-sls.stop();
+if (sls != null) {
+  sls.stop();
+}
   }
 
   public void runSLS(Configuration conf, long timeout) throws Exception {
diff --git 
a/hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSLSRunner.java
 
b/hadoop-tools/hadoop-sls/src/test/jav
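
The enableDNSCaching hook added above pins the JVM-wide InetAddress cache
through java.security.Security properties. A minimal standalone sketch of
the same idea, assuming hadoop-common is on the classpath (the demo class
name is hypothetical; the config key and property names are quoted from
the diff):

import java.security.Security;

import org.apache.hadoop.conf.Configuration;

// Hypothetical demo class; mirrors the logic added to SLSRunner above.
public class DnsCacheDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.sls.dns.caching.enabled", true);

    if (conf.getBoolean("yarn.sls.dns.caching.enabled", false)) {
      // "-1" means cache forever; both successful and failed lookups are
      // pinned, so repeated resolution of synthetic SLS node names cannot
      // distort simulation timing.
      Security.setProperty("networkaddress.cache.ttl", "-1");
      Security.setProperty("networkaddress.cache.negative.ttl", "-1");
    }
  }
}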

[hadoop] branch trunk updated: HDDS-2222 (#1578)

2019-10-04 Thread szetszwo
This is an automated email from the ASF dual-hosted git repository.

szetszwo pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4cf0b36  HDDS-2222 (#1578)
4cf0b36 is described below

commit 4cf0b3660f620dd8a67201b75f4c88492c9adfb3
Author: Tsz-Wo Nicholas Sze 
AuthorDate: Fri Oct 4 17:50:21 2019 +0800

HDDS-2222 (#1578)

Thanks @jnp  for reviewing this.  Merging now.
---
 .../hadoop/ozone/common/ChecksumByteBuffer.java| 114 +
 .../ozone/common/PureJavaCrc32ByteBuffer.java  | 556 
 .../ozone/common/PureJavaCrc32CByteBuffer.java | 559 +
 .../ozone/common/TestChecksumByteBuffer.java   | 102 
 4 files changed, 1331 insertions(+)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
new file mode 100644
index 0000000..2c0feff
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * Some portions of this file Copyright (c) 2004-2006 Intel Corportation
+ * and licensed under the BSD license.
+ */
+package org.apache.hadoop.ozone.common;
+
+import org.apache.ratis.util.Preconditions;
+
+import java.nio.ByteBuffer;
+import java.util.zip.Checksum;
+
+/**
+ * A sub-interface of {@link Checksum}
+ * with a method to update checksum from a {@link ByteBuffer}.
+ */
+public interface ChecksumByteBuffer extends Checksum {
+  /**
+   * Updates the current checksum with the specified bytes in the buffer.
+   * Upon return, the buffer's position will be equal to its limit.
+   *
+   * @param buffer the bytes to update the checksum with
+   */
+  void update(ByteBuffer buffer);
+
+  @Override
+  default void update(byte[] b, int off, int len) {
+update(ByteBuffer.wrap(b, off, len).asReadOnlyBuffer());
+  }
+
+  /**
+   * An abstract class implementing {@link ChecksumByteBuffer}
+   * with a 32-bit checksum and a lookup table.
+   */
+  abstract class CrcIntTable implements ChecksumByteBuffer {
+/** Current CRC value with bit-flipped. */
+private int crc;
+
+CrcIntTable() {
+  reset();
+  Preconditions.assertTrue(getTable().length == 8 * (1 << 8));
+}
+
+abstract int[] getTable();
+
+@Override
+public final long getValue() {
+  return (~crc) & 0xffffffffL;
+}
+
+@Override
+public final void reset() {
+  crc = 0xffffffff;
+}
+
+@Override
+public final void update(int b) {
+  crc = (crc >>> 8) ^ getTable()[(((crc ^ b) << 24) >>> 24)];
+}
+
+@Override
+public final void update(ByteBuffer b) {
+  crc = update(crc, b, getTable());
+}
+
+private static int update(int crc, ByteBuffer b, int[] table) {
+  for(; b.remaining() > 7;) {
+final int c0 = (b.get() ^ crc) & 0xff;
+final int c1 = (b.get() ^ (crc >>>= 8)) & 0xff;
+final int c2 = (b.get() ^ (crc >>>= 8)) & 0xff;
+final int c3 = (b.get() ^ (crc >>> 8)) & 0xff;
+crc = (table[0x700 + c0] ^ table[0x600 + c1])
+^ (table[0x500 + c2] ^ table[0x400 + c3]);
+
+final int c4 = b.get() & 0xff;
+final int c5 = b.get() & 0xff;
+final int c6 = b.get() & 0xff;
+final int c7 = b.get() & 0xff;
+
+crc ^= (table[0x300 + c4] ^ table[0x200 + c5])
+^ (table[0x100 + c6] ^ table[c7]);
+  }
+
+  // loop unroll - duff's device style
+  switch (b.remaining()) {
+case 7: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 6: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 5: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 4: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 3: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 2: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+case 1: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
+default: // noop
+  }
+
+
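
The CrcIntTable.update above is a slicing-by-8 CRC: eight 256-entry table
slices (hence the getTable().length == 8 * (1 << 8) precondition) let the
main loop fold eight bytes per pass, with the Duff's-device-style switch
consuming the tail one byte at a time. The single-byte step is the easiest
place to see what the table encodes; here is a self-checking sketch against
java.util.zip.CRC32, with an illustrative one-slice table (not the
committed code):

import java.util.zip.CRC32;

// Byte-at-a-time CRC-32 with a single 256-entry table, i.e. the
// "case 1" step of the unrolled loop above.
public class CrcTableDemo {
  private static final int[] TABLE = new int[256];
  static {
    for (int i = 0; i < 256; i++) {
      int c = i;
      for (int k = 0; k < 8; k++) {
        c = (c >>> 1) ^ ((c & 1) != 0 ? 0xEDB88320 : 0);
      }
      TABLE[i] = c;
    }
  }

  public static void main(String[] args) {
    byte[] data = "hello, crc".getBytes();

    int crc = 0xFFFFFFFF;                     // same init as reset() above
    for (byte b : data) {
      crc = (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];   // single-byte step
    }
    long value = (~crc) & 0xFFFFFFFFL;        // same finalization as getValue()

    CRC32 reference = new CRC32();
    reference.update(data, 0, data.length);
    System.out.println(value == reference.getValue());   // prints true
  }
}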

[hadoop] branch revert-1578-HDDS-2222 created (now acb7556)

2019-10-04 Thread szetszwo
This is an automated email from the ASF dual-hosted git repository.

szetszwo pushed a change to branch revert-1578-HDDS-2222
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at acb7556  Revert "HDDS-2222 (#1578)"

This branch includes the following new commits:

 new acb7556  Revert "HDDS-2222 (#1578)"

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[hadoop] 01/01: Revert "HDDS-2222 (#1578)"

2019-10-04 Thread szetszwo
This is an automated email from the ASF dual-hosted git repository.

szetszwo pushed a commit to branch revert-1578-HDDS-2222
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit acb75561fb0d7899d84c64cd76ec95de567a9aea
Author: Tsz-Wo Nicholas Sze 
AuthorDate: Fri Oct 4 19:18:55 2019 +0800

Revert "HDDS- (#1578)"

This reverts commit 4cf0b3660f620dd8a67201b75f4c88492c9adfb3.
---
 .../hadoop/ozone/common/ChecksumByteBuffer.java| 114 -
 .../ozone/common/PureJavaCrc32ByteBuffer.java  | 556 
 .../ozone/common/PureJavaCrc32CByteBuffer.java | 559 -
 .../ozone/common/TestChecksumByteBuffer.java   | 102 
 4 files changed, 1331 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
deleted file mode 100644
index 2c0feff..0000000
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- * Some portions of this file Copyright (c) 2004-2006 Intel Corportation
- * and licensed under the BSD license.
- */
-package org.apache.hadoop.ozone.common;
-
-import org.apache.ratis.util.Preconditions;
-
-import java.nio.ByteBuffer;
-import java.util.zip.Checksum;
-
-/**
- * A sub-interface of {@link Checksum}
- * with a method to update checksum from a {@link ByteBuffer}.
- */
-public interface ChecksumByteBuffer extends Checksum {
-  /**
-   * Updates the current checksum with the specified bytes in the buffer.
-   * Upon return, the buffer's position will be equal to its limit.
-   *
-   * @param buffer the bytes to update the checksum with
-   */
-  void update(ByteBuffer buffer);
-
-  @Override
-  default void update(byte[] b, int off, int len) {
-update(ByteBuffer.wrap(b, off, len).asReadOnlyBuffer());
-  }
-
-  /**
-   * An abstract class implementing {@link ChecksumByteBuffer}
-   * with a 32-bit checksum and a lookup table.
-   */
-  abstract class CrcIntTable implements ChecksumByteBuffer {
-/** Current CRC value with bit-flipped. */
-private int crc;
-
-CrcIntTable() {
-  reset();
-  Preconditions.assertTrue(getTable().length == 8 * (1 << 8));
-}
-
-abstract int[] getTable();
-
-@Override
-public final long getValue() {
-  return (~crc) & 0xffffffffL;
-}
-
-@Override
-public final void reset() {
-  crc = 0xffffffff;
-}
-
-@Override
-public final void update(int b) {
-  crc = (crc >>> 8) ^ getTable()[(((crc ^ b) << 24) >>> 24)];
-}
-
-@Override
-public final void update(ByteBuffer b) {
-  crc = update(crc, b, getTable());
-}
-
-private static int update(int crc, ByteBuffer b, int[] table) {
-  for(; b.remaining() > 7;) {
-final int c0 = (b.get() ^ crc) & 0xff;
-final int c1 = (b.get() ^ (crc >>>= 8)) & 0xff;
-final int c2 = (b.get() ^ (crc >>>= 8)) & 0xff;
-final int c3 = (b.get() ^ (crc >>> 8)) & 0xff;
-crc = (table[0x700 + c0] ^ table[0x600 + c1])
-^ (table[0x500 + c2] ^ table[0x400 + c3]);
-
-final int c4 = b.get() & 0xff;
-final int c5 = b.get() & 0xff;
-final int c6 = b.get() & 0xff;
-final int c7 = b.get() & 0xff;
-
-crc ^= (table[0x300 + c4] ^ table[0x200 + c5])
-^ (table[0x100 + c6] ^ table[c7]);
-  }
-
-  // loop unroll - duff's device style
-  switch (b.remaining()) {
-case 7: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 6: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 5: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 4: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 3: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 2: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 1: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-default: // noop
-  }
-
-  return crc;
-}
-  }
-}
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/

[hadoop] branch trunk updated: Revert "HDDS-2222 (#1578)" (#1594)

2019-10-04 Thread szetszwo
This is an automated email from the ASF dual-hosted git repository.

szetszwo pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a9849f6  Revert "HDDS- (#1578)" (#1594)
a9849f6 is described below

commit a9849f65ba79fa4efd80ead0ac7b4d37eee54f92
Author: Tsz-Wo Nicholas Sze 
AuthorDate: Fri Oct 4 19:19:45 2019 +0800

Revert "HDDS- (#1578)" (#1594)

This reverts commit 4cf0b3660f620dd8a67201b75f4c88492c9adfb3.
---
 .../hadoop/ozone/common/ChecksumByteBuffer.java| 114 -
 .../ozone/common/PureJavaCrc32ByteBuffer.java  | 556 
 .../ozone/common/PureJavaCrc32CByteBuffer.java | 559 -
 .../ozone/common/TestChecksumByteBuffer.java   | 102 
 4 files changed, 1331 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
deleted file mode 100644
index 2c0feff..0000000
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- * Some portions of this file Copyright (c) 2004-2006 Intel Corportation
- * and licensed under the BSD license.
- */
-package org.apache.hadoop.ozone.common;
-
-import org.apache.ratis.util.Preconditions;
-
-import java.nio.ByteBuffer;
-import java.util.zip.Checksum;
-
-/**
- * A sub-interface of {@link Checksum}
- * with a method to update checksum from a {@link ByteBuffer}.
- */
-public interface ChecksumByteBuffer extends Checksum {
-  /**
-   * Updates the current checksum with the specified bytes in the buffer.
-   * Upon return, the buffer's position will be equal to its limit.
-   *
-   * @param buffer the bytes to update the checksum with
-   */
-  void update(ByteBuffer buffer);
-
-  @Override
-  default void update(byte[] b, int off, int len) {
-update(ByteBuffer.wrap(b, off, len).asReadOnlyBuffer());
-  }
-
-  /**
-   * An abstract class implementing {@link ChecksumByteBuffer}
-   * with a 32-bit checksum and a lookup table.
-   */
-  abstract class CrcIntTable implements ChecksumByteBuffer {
-/** Current CRC value with bit-flipped. */
-private int crc;
-
-CrcIntTable() {
-  reset();
-  Preconditions.assertTrue(getTable().length == 8 * (1 << 8));
-}
-
-abstract int[] getTable();
-
-@Override
-public final long getValue() {
-  return (~crc) & 0xffffffffL;
-}
-
-@Override
-public final void reset() {
-  crc = 0xffffffff;
-}
-
-@Override
-public final void update(int b) {
-  crc = (crc >>> 8) ^ getTable()[(((crc ^ b) << 24) >>> 24)];
-}
-
-@Override
-public final void update(ByteBuffer b) {
-  crc = update(crc, b, getTable());
-}
-
-private static int update(int crc, ByteBuffer b, int[] table) {
-  for(; b.remaining() > 7;) {
-final int c0 = (b.get() ^ crc) & 0xff;
-final int c1 = (b.get() ^ (crc >>>= 8)) & 0xff;
-final int c2 = (b.get() ^ (crc >>>= 8)) & 0xff;
-final int c3 = (b.get() ^ (crc >>> 8)) & 0xff;
-crc = (table[0x700 + c0] ^ table[0x600 + c1])
-^ (table[0x500 + c2] ^ table[0x400 + c3]);
-
-final int c4 = b.get() & 0xff;
-final int c5 = b.get() & 0xff;
-final int c6 = b.get() & 0xff;
-final int c7 = b.get() & 0xff;
-
-crc ^= (table[0x300 + c4] ^ table[0x200 + c5])
-^ (table[0x100 + c6] ^ table[c7]);
-  }
-
-  // loop unroll - duff's device style
-  switch (b.remaining()) {
-case 7: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 6: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 5: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 4: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 3: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 2: crc = (crc >>> 8) ^ table[((crc ^ b.get()) & 0xff)];
-case 1: crc = (crc >>> 8) ^ table[((crc ^ b.get

[hadoop] branch trunk updated (a9849f6 -> bffcd33)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a9849f6  Revert "HDDS- (#1578)" (#1594)
 add bffcd33  HDDS-2230. Invalid entries in ozonesecure-mr config

No new revisions were added by this update.

Summary of changes:
 .../compose/ozonesecure-mr/docker-compose.yaml | 29 ++
 .../src/main/compose/ozonesecure-mr/docker-config  | 28 +++--
 2 files changed, 39 insertions(+), 18 deletions(-)





[hadoop] branch trunk updated (bffcd33 -> d061c84)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from bffcd33  HDDS-2230. Invalid entries in ozonesecure-mr config
 add d061c84  HDDS-2140. Add robot test for GDPR feature

No new revisions were added by this update.

Summary of changes:
 .../dist/src/main/smoketest/gdpr/gdpr.robot| 89 ++
 1 file changed, 89 insertions(+)
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot





[hadoop] branch trunk updated: HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6171a41  HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track 
multiple DNs on the same host
6171a41 is described below

commit 6171a41b4c29a4039b53209df546c4c42a278464
Author: S O'Donnell 
AuthorDate: Fri Oct 4 14:00:06 2019 +0200

HDDS-2199. In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the 
same host

Closes #1551
---
 .../apache/hadoop/hdds/scm/node/NodeManager.java   |  8 +--
 .../hadoop/hdds/scm/node/SCMNodeManager.java   | 51 
 .../hdds/scm/server/SCMBlockProtocolServer.java|  7 ++-
 .../hadoop/hdds/scm/container/MockNodeManager.java | 36 ++--
 .../hadoop/hdds/scm/node/TestSCMNodeManager.java   | 67 +-
 .../testutils/ReplicationNodeManagerMock.java  |  5 +-
 6 files changed, 149 insertions(+), 25 deletions(-)

diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index d8890fb..fd8bb87 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -192,11 +192,11 @@ public interface NodeManager extends 
StorageContainerNodeProtocol,
   DatanodeDetails getNodeByUuid(String uuid);
 
   /**
-   * Given datanode address(Ipaddress or hostname), returns the DatanodeDetails
-   * for the node.
+   * Given datanode address(Ipaddress or hostname), returns a list of
+   * DatanodeDetails for the datanodes running at that address.
*
* @param address datanode address
-   * @return the given datanode, or null if not found
+   * @return the given datanode, or empty list if none found
*/
-  DatanodeDetails getNodeByAddress(String address);
+  List<DatanodeDetails> getNodesByAddress(String address);
 }
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index d3df858..ed65ed3 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -25,11 +25,13 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.LinkedList;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ScheduledFuture;
 import java.util.stream.Collectors;
 
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
@@ -98,7 +100,7 @@ public class SCMNodeManager implements NodeManager {
   private final NetworkTopology clusterMap;
   private final DNSToSwitchMapping dnsToSwitchMapping;
   private final boolean useHostname;
-  private final ConcurrentHashMap<String, String> dnsToUuidMap =
+  private final ConcurrentHashMap<String, Set<String>> dnsToUuidMap =
   new ConcurrentHashMap<>();
 
   /**
@@ -260,7 +262,7 @@ public class SCMNodeManager implements NodeManager {
   }
   nodeStateManager.addNode(datanodeDetails);
   clusterMap.add(datanodeDetails);
-  dnsToUuidMap.put(dnsName, datanodeDetails.getUuidString());
+  addEntryTodnsToUuidMap(dnsName, datanodeDetails.getUuidString());
   // Updating Node Report, as registration is successful
   processNodeReport(datanodeDetails, nodeReport);
   LOG.info("Registered Data node : {}", datanodeDetails);
@@ -276,6 +278,26 @@ public class SCMNodeManager implements NodeManager {
   }
 
   /**
+   * Add an entry to the dnsToUuidMap, which maps hostname / IP to the DNs
+   * running on that host. As each address can have many DNs running on it,
+   * this is a one to many mapping.
+   * @param dnsName String representing the hostname or IP of the node
+   * @param uuid String representing the UUID of the registered node.
+   */
+  @SuppressFBWarnings(value="AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION",
+  justification="The method is synchronized and this is the only place "+
+  "dnsToUuidMap is modified")
+  private synchronized void addEntryTodnsToUuidMap(
+  String dnsName, String uuid) {
+Set<String> dnList = dnsToUuidMap.get(dnsName);
+if (dnList == null) {
+  dnList = ConcurrentHashMap.newKeySet();
+  dnsToUuidMap.put(dnsName, dnList);
+}
+dnList.add(uuid);
+  }
+
+  /**
* Send heartbeat to indicate the datanode is alive and doing well.
*
* @param datanodeDetails - DatanodeDetailsProto.
@@ -584,29 +606,34 @@ pu
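
addEntryTodnsToUuidMap above wraps the check-then-insert in a synchronized
method and suppresses the corresponding FindBugs warning. On a
ConcurrentHashMap the same one-to-many insert can also be expressed
atomically with computeIfAbsent; a minimal sketch of the pattern (not the
committed code, which keeps the synchronized form):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Standalone sketch of the host -> datanode-UUIDs multimap pattern.
public class DnsToUuidDemo {
  private final ConcurrentHashMap<String, Set<String>> map =
      new ConcurrentHashMap<>();

  // computeIfAbsent makes "create the set on first sight" atomic, so the
  // insert needs no method-level synchronization.
  void add(String dnsName, String uuid) {
    map.computeIfAbsent(dnsName, k -> ConcurrentHashMap.newKeySet())
        .add(uuid);
  }

  public static void main(String[] args) {
    DnsToUuidDemo demo = new DnsToUuidDemo();
    demo.add("10.0.0.5", "dn-uuid-1");
    demo.add("10.0.0.5", "dn-uuid-2");   // two DNs on the same host
    System.out.println(demo.map.get("10.0.0.5").size());   // prints 2
  }
}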

[hadoop] branch HDDS-1880-Decom updated (fd5e877 -> ec70207)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from fd5e877  Merge branch 'trunk' into HDDS-1880-Decom
 add 3d78b12  YARN-9762. Add submission context label to audit logs. 
Contributed by Manoj Kumar
 add 3fd3d74  HDDS-2161. Create RepeatedKeyInfo structure to be saved in 
deletedTable
 add 6cbe5d3  HDDS-2160. Add acceptance test for ozonesecure-mr compose. 
Contributed by Xiaoyu Yao. (#1490)
 add 0a716bd  HDDS-2159. Fix Race condition in ProfileServlet#pid.
 add bfe1dac  HADOOP-16560. [YARN] use protobuf-maven-plugin to generate 
protobuf classes (#1496)
 add e8e7d7b  HADOOP-16561. [MAPREDUCE] use protobuf-maven-plugin to 
generate protobuf classes (#1500)
 add 8f1a135  HDDS-2081. Fix 
TestRatisPipelineProvider#testCreatePipelinesDnExclude. Contributed by 
Aravindan Vijayan. (#1506)
 add 51c64b3  HDFS-13660. DistCp job fails when new data is appended in the 
file while the DistCp copy job is running
 add 91f50b9  HDDS-2167. Hadoop31-mr acceptance test is failing due to the 
shading
 add 43203b4  HDFS-14868. RBF: Fix typo in TestRouterQuota. Contributed by 
Jinglun.
 add 816d3cb  HDFS-14837. Review of Block.java. Contributed by David 
Mollitor.
 add afa1006  HDFS-14843. Double Synchronization in 
BlockReportLeaseManager. Contributed by David Mollitor.
 add f16cf87  HDDS-2170. Add Object IDs and Update ID to Volume Object 
(#1510)
 add eb96a30  HDFS-14655. [SBN Read] Namenode crashes if one of The JN is 
down. Contributed by Ayush Saxena.
 add 66400c1  HDFS-14808. EC: Improper size values for corrupt ec block in 
LOG. Contributed by Ayush Saxena.
 add c2731d4  YARN-9730. Support forcing configured partitions to be 
exclusive based on app node label
 add 6917754  HDDS-2172.Ozone shell should remove description about REST 
protocol support. Contributed by Siddharth Wagle.
 add a346381  HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse 
sometimes fails with out of memory error (#1509)
 add 3f89084  HDFS-14845. Ignore AuthenticationFilterInitializer for 
HttpFSServerWebServer and honor hadoop.http.authentication configs.
 add bec0864  YARN-9808. Zero length files in container log output haven't 
got a header. Contributed by Adam Antal
 add c724577  YARN-6715. Fix documentation about NodeHealthScriptRunner. 
Contributed by Peter Bacsko
 add 8baebb5  HDDS-2171. Dangling links in test report due to incompatible 
realpath
 add e6fb6ee  HDDS-1738. Add nullable annotation for OMResponse classes
 add e346e36  HADOOP-15691 Add PathCapabilities to FileSystem and 
FileContext.
 add 16f626f  HDDS-2165. Freon fails if bucket does not exists
 add c89d22d  HADOOP-16602. mvn package fails in hadoop-aws.
 add bdaaa3b  HDFS-14832. RBF: Add Icon for ReadOnly False. Contributed by 
hemanthboyina
 add f647185  HDDS-2067. Create generic service facade with 
tracing/metrics/logging support
 add 606e341  Addendum to YARN-9730. Support forcing configured partitions 
to be exclusive based on app node label
 add 587a8ee  HDFS-14874. Fix TestHDFSCLI and TestDFSShell test break 
because of logging change in mkdir (#1522). Contributed by Gabor Bota.
 add 7b6219a  HDDS-2182. Fix checkstyle violations introduced by HDDS-1738
 add a3f6893  HDFS-14873. Fix dfsadmin doc for triggerBlockReport. 
Contributed by Fei Hui.
 add 1a2a352  HDFS-11934. Add assertion to 
TestDefaultNameNodePort#testGetAddressFromConf. Contributed by Nikhil Navadiya.
 add 18a8c24  YARN-9857. TestDelegationTokenRenewer throws NPE but tests 
pass. Contributed by Ahmed Hussein
 add 06998a1  HDDS-2180. Add Object ID and update ID on VolumeList Object. 
(#1526)
 add b1e55cf  HDFS-14461. RBF: Fix intermittently failing kerberos related 
unit test. Contributed by Xiaoqiao He.
 add 2adcc3c  HDFS-14785. [SBN read] Change client logging to be less 
aggressive. Contributed by Chen Liang.
 add c55ac6a  HDDS-2174. Delete GDPR Encryption Key from metadata when a 
Key is deleted
 add b6ef8cc  HDDS-2193. Adding container related metrics in SCM.
 add 0371e95  HDDS-2179. ConfigFileGenerator fails with Java 10 or newer
 add 9bf7a6e  HDDS-2149. Replace findbugs with spotbugs
 add 2870668  Make upstream aware of 3.1.3 release.
 add 8a9ede5  HADOOP-15616. Incorporate Tencent Cloud COS File System 
Implementation. Contributed by Yang Yu.
 add a93a139  HDDS-2185. createmrenv failure not reflected in acceptance 
test result
 add ce58c05  HDFS-14849. Erasure Coding: the internal block is replicated 
many times when datanode is decommissioning. Contributed by HuangTao.
 add 13b427f  HDFS-14564: Add libhdfs APIs for readFully; add readFully to 
ByteBufferPositionedReadable (#963) Contributed by Sahil Takiar.
 add 14b4fbc  HDDS-1146. Adding container related metrics in SCM. (#15

[hadoop] 01/01: Merge remote-tracking branch 'origin/trunk' into HDDS-1880-Decom

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch HDDS-1880-Decom
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ec70207838d5b29fa0b534b13c103865e50a35e8
Merge: fd5e877 6171a41
Author: Márton Elek 
AuthorDate: Fri Oct 4 14:17:38 2019 +0200

Merge remote-tracking branch 'origin/trunk' into HDDS-1880-Decom

 BUILDING.txt   |  31 -
 dev-support/docker/Dockerfile  |  19 +-
 .../hadoop-cos/dev-support/findbugs-exclude.xml|  18 +
 hadoop-cloud-storage-project/hadoop-cos/pom.xml| 140 
 .../site/markdown/cloud-storage/index.md   | 367 ++
 .../hadoop-cos/site/resources/css/site.css |  29 +
 .../java/org/apache/hadoop/fs/cosn/BufferPool.java | 245 +++
 .../hadoop/fs/cosn/ByteBufferInputStream.java  |  89 +++
 .../hadoop/fs/cosn/ByteBufferOutputStream.java |  74 ++
 .../apache/hadoop/fs/cosn/ByteBufferWrapper.java   | 103 +++
 .../java/org/apache/hadoop/fs/cosn/Constants.java  |  35 +-
 .../main/java/org/apache/hadoop/fs/cosn/CosN.java  |  31 +-
 .../org/apache/hadoop/fs/cosn/CosNConfigKeys.java  |  86 +++
 .../apache/hadoop/fs/cosn/CosNCopyFileContext.java |  66 ++
 .../apache/hadoop/fs/cosn/CosNCopyFileTask.java|  68 ++
 .../apache/hadoop/fs/cosn/CosNFileReadTask.java| 125 
 .../org/apache/hadoop/fs/cosn/CosNFileSystem.java  | 814 +
 .../org/apache/hadoop/fs/cosn/CosNInputStream.java | 365 +
 .../apache/hadoop/fs/cosn/CosNOutputStream.java| 284 +++
 .../java/org/apache/hadoop/fs/cosn/CosNUtils.java  | 167 +
 .../hadoop/fs/cosn/CosNativeFileSystemStore.java   | 768 +++
 .../org/apache/hadoop/fs/cosn/FileMetadata.java|  68 ++
 .../hadoop/fs/cosn/NativeFileSystemStore.java  |  99 +++
 .../org/apache/hadoop/fs/cosn/PartialListing.java  |  64 ++
 .../main/java/org/apache/hadoop/fs/cosn/Unit.java  |  27 +-
 .../fs/cosn/auth/COSCredentialProviderList.java| 139 
 .../EnvironmentVariableCredentialProvider.java |  55 ++
 .../fs/cosn/auth/NoAuthWithCOSException.java   |  32 +-
 .../fs/cosn/auth/SimpleCredentialProvider.java |  54 ++
 .../apache/hadoop/fs/cosn/auth/package-info.java   |  19 +-
 .../org/apache/hadoop/fs/cosn/package-info.java|  19 +-
 .../apache/hadoop/fs/cosn/CosNTestConfigKey.java   |  30 +-
 .../org/apache/hadoop/fs/cosn/CosNTestUtils.java   |  78 ++
 .../apache/hadoop/fs/cosn/TestCosNInputStream.java | 167 +
 .../hadoop/fs/cosn/TestCosNOutputStream.java   |  87 +++
 .../hadoop/fs/cosn/contract/CosNContract.java  |  36 +-
 .../fs/cosn/contract/TestCosNContractCreate.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractDelete.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractDistCp.java   |  54 ++
 .../contract/TestCosNContractGetFileStatus.java|  27 +-
 .../fs/cosn/contract/TestCosNContractMkdir.java|  26 +-
 .../fs/cosn/contract/TestCosNContractOpen.java |  26 +-
 .../fs/cosn/contract/TestCosNContractRename.java   |  26 +-
 .../fs/cosn/contract/TestCosNContractRootDir.java  |  27 +-
 .../fs/cosn/contract/TestCosNContractSeek.java |  26 +-
 .../hadoop/fs/cosn/contract/package-info.java  |  19 +-
 .../src/test/resources/contract/cosn.xml   | 120 +++
 .../hadoop-cos/src/test/resources/core-site.xml| 107 +++
 .../hadoop-cos/src/test/resources/log4j.properties |  18 +
 hadoop-cloud-storage-project/pom.xml   |   1 +
 .../apache/hadoop/crypto/CryptoInputStream.java|  67 +-
 .../org/apache/hadoop/fs/AbstractFileSystem.java   |  16 +-
 .../hadoop/fs/ByteBufferPositionedReadable.java|  24 +
 .../org/apache/hadoop/fs/ChecksumFileSystem.java   |  22 +
 .../apache/hadoop/fs/CommonPathCapabilities.java   | 126 
 .../org/apache/hadoop/fs/DelegateToFileSystem.java |   7 +
 .../org/apache/hadoop/fs/FSDataInputStream.java|  23 +-
 .../java/org/apache/hadoop/fs/FileContext.java |  23 +-
 .../main/java/org/apache/hadoop/fs/FileSystem.java |  30 +-
 .../org/apache/hadoop/fs/FilterFileSystem.java |   7 +
 .../main/java/org/apache/hadoop/fs/FilterFs.java   |   5 +
 .../main/java/org/apache/hadoop/fs/Globber.java| 208 +-
 .../java/org/apache/hadoop/fs/HarFileSystem.java   |  19 +-
 .../org/apache/hadoop/fs/PathCapabilities.java |  61 ++
 .../org/apache/hadoop/fs/RawLocalFileSystem.java   |  19 +
 .../hadoop/fs/http/AbstractHttpFileSystem.java |  18 +
 .../apache/hadoop/fs/impl/FsLinkResolution.java|  98 +++
 .../hadoop/fs/impl/PathCapabilitiesSupport.java|  40 +-
 .../java/org/apache/hadoop/fs/shell/Mkdir.java |   4 +-
 .../hadoop/fs/viewfs/ChRootedFileSystem.java   |   6 +
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  32 +
 .../apache/hadoop/util/NodeHealthScriptRunner.java |   1 +
 .../src/site/markdown/DeprecatedProperties.md  |   4 +
 .../src/site/markdown/filesystem/filesystem.md |   5 +-
 .../src/site/markdown/filesyst

[hadoop] branch trunk updated: HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files.

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bca014b  HDDS-2216. Rename HADOOP_RUNNER_VERSION to 
OZONE_RUNNER_VERSION in compose .env files.
bca014b is described below

commit bca014b0e03fb37711022ee6ed4272c346cdf5c9
Author: cxorm 
AuthorDate: Thu Oct 3 20:47:36 2019 +0800

HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose 
.env files.

Closes #1570.
---
 hadoop-ozone/dev-support/checks/blockade.sh  |  2 +-
 hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env   |  2 +-
 .../dist/src/main/compose/ozone-hdfs/docker-compose.yaml |  6 +++---
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop27/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop31/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/.env|  2 +-
 .../src/main/compose/ozone-mr/hadoop32/docker-compose.yaml   |  8 
 hadoop-ozone/dist/src/main/compose/ozone-om-ha/.env  |  2 +-
 .../dist/src/main/compose/ozone-om-ha/docker-compose.yaml| 10 +-
 hadoop-ozone/dist/src/main/compose/ozone-recon/.env  |  2 +-
 .../dist/src/main/compose/ozone-recon/docker-compose.yaml|  8 
 hadoop-ozone/dist/src/main/compose/ozone-topology/.env   |  2 +-
 .../dist/src/main/compose/ozone-topology/docker-compose.yaml | 12 ++--
 hadoop-ozone/dist/src/main/compose/ozone/.env|  2 +-
 hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml |  6 +++---
 hadoop-ozone/dist/src/main/compose/ozoneblockade/.env|  2 +-
 .../dist/src/main/compose/ozoneblockade/docker-compose.yaml  |  8 
 hadoop-ozone/dist/src/main/compose/ozoneperf/.env|  2 +-
 .../dist/src/main/compose/ozoneperf/docker-compose.yaml  | 10 +-
 hadoop-ozone/dist/src/main/compose/ozones3-haproxy/.env  |  2 +-
 .../src/main/compose/ozones3-haproxy/docker-compose.yaml | 12 ++--
 hadoop-ozone/dist/src/main/compose/ozones3/.env  |  2 +-
 .../dist/src/main/compose/ozones3/docker-compose.yaml|  8 
 hadoop-ozone/dist/src/main/compose/ozonescripts/.env |  2 +-
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/.env   |  2 +-
 .../dist/src/main/compose/ozonesecure-mr/docker-compose.yaml |  8 
 hadoop-ozone/dist/src/main/compose/ozonesecure/.env  |  2 +-
 .../dist/src/main/compose/ozonesecure/docker-compose.yaml| 10 +-
 .../network-tests/src/test/blockade/ozone/cluster.py |  4 ++--
 31 files changed, 79 insertions(+), 79 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/blockade.sh 
b/hadoop-ozone/dev-support/checks/blockade.sh
index f8b25c1..a48d2b5 100755
--- a/hadoop-ozone/dev-support/checks/blockade.sh
+++ b/hadoop-ozone/dev-support/checks/blockade.sh
@@ -21,7 +21,7 @@ OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g'
 cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
 
 source 
${DIR}/../../dist/target/ozone-${OZONE_VERSION}/compose/ozoneblockade/.env
-export HADOOP_RUNNER_VERSION
+export OZONE_RUNNER_VERSION
 export HDDS_VERSION
 
 python -m pytest -s blockade
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env 
b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
index 8916fc3..df9065c 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
+++ b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/.env
@@ -15,4 +15,4 @@
 # limitations under the License.
 
 HADOOP_VERSION=3
-HADOOP_RUNNER_VERSION=${docker.ozone-runner.version}
\ No newline at end of file
+OZONE_RUNNER_VERSION=${docker.ozone-runner.version}
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
index cd06635..7d8295d 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-compose.yaml
@@ -37,7 +37,7 @@ services:
   env_file:
 - ./docker-config
om:
-  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  image: apache/ozone-runner:${OZONE_RUNNER_VERSION}
   volumes:
  - ../..:/opt/hadoop
   ports:
@@ -48,7 +48,7 @@ services:
   - ./docker-config
   command: ["ozone","om"]
scm:
-  image: apache/ozone-runner:${HADOOP_RUNNER_VERSION}
+  image: apache/ozone-runner:${OZONE_RUNNER_VERSION}
   volumes:
  - ../..:/opt/hadoop
   ports:
@@ -59,7 +59,7 @@ services:
   ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
   command: ["ozone","scm"]
s3g:
-  image: apache/ozone-runner:${HA

[hadoop] branch trunk updated: HADOOP-16207 Improved S3A MR tests.

2019-10-04 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f44abc3  HADOOP-16207 Improved S3A MR tests.
f44abc3 is described below

commit f44abc3e11676579bdea94fce045d081ae38e6c3
Author: Steve Loughran 
AuthorDate: Fri Oct 4 14:11:22 2019 +0100

HADOOP-16207 Improved S3A MR tests.

Contributed by Steve Loughran.

Replaces the committer-specific terasort and MR test jobs with 
parameterization
of the (now single tests) and use of file:// over hdfs:// as the cluster FS.

The parameterization ensures that only one of the specific committer tests
run at a time -overloads of the test machines are less likely, and so the
suites can be pulled back into the parallel phase.

There's also more detailed validation of the stage outputs of the 
terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.

Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095
---
 hadoop-tools/hadoop-aws/pom.xml|   4 -
 .../fs/s3a/commit/staging/StagingCommitter.java|   3 +-
 .../commit/staging/StagingCommitterConstants.java  |   2 +-
 .../hadoop/fs/s3a/commit/AbstractCommitITest.java  |  18 +-
 .../fs/s3a/commit/AbstractITCommitMRJob.java   | 223 ---
 .../fs/s3a/commit/AbstractYarnClusterITest.java| 196 +--
 .../commit/integration/ITestS3ACommitterMRJob.java | 644 +
 .../fs/s3a/commit/magic/ITestMagicCommitMRJob.java | 120 
 .../integration/ITestDirectoryCommitMRJob.java |  61 --
 .../integration/ITestPartitionCommitMRJob.java |  62 --
 .../integration/ITestStagingCommitMRJob.java   |  94 ---
 .../ITestStagingCommitMRJobBadDest.java|  89 ---
 .../terasort/ITestTerasortDirectoryCommitter.java  |  62 --
 .../terasort/ITestTerasortMagicCommitter.java  |  73 ---
 ...mmitTerasortIT.java => ITestTerasortOnS3A.java} | 238 ++--
 .../hadoop-aws/src/test/resources/log4j.properties |   2 +-
 16 files changed, 987 insertions(+), 904 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index ff330e5..bd204b0 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -188,8 +188,6 @@
 **/ITestDynamoDBMetadataStoreScale.java
 
 **/ITestTerasort*.java
-
-**/ITest*CommitMRJob.java
 
 **/ITestS3GuardDDBRootOperations.java
   
@@ -231,8 +229,6 @@
 
 
 **/ITestTerasort*.java
-
-**/ITest*CommitMRJob.java
 
 **/ITestS3AContractRootDir.java
 **/ITestS3GuardDDBRootOperations.java
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
index 7ec4478..833edd4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
@@ -677,7 +677,8 @@ public class StagingCommitter extends AbstractS3ACommitter {
 // we will try to abort the ones that had already succeeded.
 int commitCount = taskOutput.size();
 final Queue commits = new ConcurrentLinkedQueue<>();
-LOG.info("{}: uploading from staging directory to S3", getRole());
+LOG.info("{}: uploading from staging directory to S3 {}", getRole(),
+attemptPath);
 LOG.info("{}: Saving pending data information to {}",
 getRole(), commitsAttemptPath);
 if (taskOutput.isEmpty()) {
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitterConstants.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitterConstants.java
index c5fb967..c41715b 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitterConstants.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitterConstants.java
@@ -34,7 +34,7 @@ public final class StagingCommitterConstants {
   /**
* The temporary path for staging data, if not explicitly set.
* By using an unqualified path, this will be qualified to be relative
-   * to the users' home directory, so protectec from access for others.
+   * to the users' home directory, so protected from access for others.
*/
   publi
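
The commit above folds the per-committer ITest classes into one suite
driven by JUnit's Parameterized runner, so only one committer's job runs
at a time. A generic sketch of that pattern, with committer names taken
from the deleted test classes and an illustrative test body (not the
committed suite):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class CommitterJobTest {

  @Parameterized.Parameters(name = "committer-{0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {
        {"directory"}, {"partitioned"}, {"staging"}, {"magic"},
    });
  }

  private final String committerName;

  public CommitterJobTest(String committerName) {
    this.committerName = committerName;
  }

  @Test
  public void testJobCommits() {
    // Running one committer per suite instance keeps the shared mini
    // cluster from being overloaded, which is what lets these tests
    // move back into the parallel phase.
    System.out.println("would run MR job with committer: " + committerName);
  }
}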

[hadoop] branch trunk updated: HDDS-2222. Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C. (#1595)

2019-10-04 Thread szetszwo
This is an automated email from the ASF dual-hosted git repository.

szetszwo pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 531cc93  HDDS-2222. Add a method to update ByteBuffer in 
PureJavaCrc32/PureJavaCrc32C. (#1595)
531cc93 is described below

commit 531cc938fe84eb895eec110240181d8dc492c32e
Author: Tsz-Wo Nicholas Sze 
AuthorDate: Fri Oct 4 21:16:28 2019 +0800

HDDS-2222. Add a method to update ByteBuffer in 
PureJavaCrc32/PureJavaCrc32C. (#1595)
---
 .../common/dev-support/findbugsExcludeFile.xml |   5 +
 .../hadoop/ozone/common/ChecksumByteBuffer.java| 114 +
 .../ozone/common/PureJavaCrc32ByteBuffer.java  | 556 
 .../ozone/common/PureJavaCrc32CByteBuffer.java | 559 +
 .../ozone/common/TestChecksumByteBuffer.java   | 102 
 5 files changed, 1336 insertions(+)

diff --git a/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml 
b/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
index c7db679..4441b69 100644
--- a/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdds/common/dev-support/findbugsExcludeFile.xml
@@ -25,4 +25,9 @@
 
 
   
+  
+
+
+
+  
 
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
new file mode 100644
index 0000000..2c0feff
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * Some portions of this file Copyright (c) 2004-2006 Intel Corportation
+ * and licensed under the BSD license.
+ */
+package org.apache.hadoop.ozone.common;
+
+import org.apache.ratis.util.Preconditions;
+
+import java.nio.ByteBuffer;
+import java.util.zip.Checksum;
+
+/**
+ * A sub-interface of {@link Checksum}
+ * with a method to update checksum from a {@link ByteBuffer}.
+ */
+public interface ChecksumByteBuffer extends Checksum {
+  /**
+   * Updates the current checksum with the specified bytes in the buffer.
+   * Upon return, the buffer's position will be equal to its limit.
+   *
+   * @param buffer the bytes to update the checksum with
+   */
+  void update(ByteBuffer buffer);
+
+  @Override
+  default void update(byte[] b, int off, int len) {
+update(ByteBuffer.wrap(b, off, len).asReadOnlyBuffer());
+  }
+
+  /**
+   * An abstract class implementing {@link ChecksumByteBuffer}
+   * with a 32-bit checksum and a lookup table.
+   */
+  abstract class CrcIntTable implements ChecksumByteBuffer {
+/** Current CRC value with bit-flipped. */
+private int crc;
+
+CrcIntTable() {
+  reset();
+  Preconditions.assertTrue(getTable().length == 8 * (1 << 8));
+}
+
+abstract int[] getTable();
+
+@Override
+public final long getValue() {
+  return (~crc) & 0xffffffffL;
+}
+
+@Override
+public final void reset() {
+  crc = 0xffffffff;
+}
+
+@Override
+public final void update(int b) {
+  crc = (crc >>> 8) ^ getTable()[(((crc ^ b) << 24) >>> 24)];
+}
+
+@Override
+public final void update(ByteBuffer b) {
+  crc = update(crc, b, getTable());
+}
+
+private static int update(int crc, ByteBuffer b, int[] table) {
+  for(; b.remaining() > 7;) {
+final int c0 = (b.get() ^ crc) & 0xff;
+final int c1 = (b.get() ^ (crc >>>= 8)) & 0xff;
+final int c2 = (b.get() ^ (crc >>>= 8)) & 0xff;
+final int c3 = (b.get() ^ (crc >>> 8)) & 0xff;
+crc = (table[0x700 + c0] ^ table[0x600 + c1])
+^ (table[0x500 + c2] ^ table[0x400 + c3]);
+
+final int c4 = b.get() & 0xff;
+final int c5 = b.get() & 0xff;
+final int c6 = b.get() & 0xff;
+final int c7 = b.get() & 0xff;
+
+crc ^= (table[0x300 + c4] ^ table[0x200 + c5])
+^ (table[0x100 + c6] ^ table[c7]);
+  }
+
+  // loop unroll - duff's device style
+  switch (b.remaining()) {
+case 7: crc = (crc >>> 8) ^ table[((c

[hadoop] branch trunk updated (531cc93 -> f826420)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 531cc93  HDDS-2222. Add a method to update ByteBuffer in 
PureJavaCrc32/PureJavaCrc32C. (#1595)
 add f826420  HDDS-2230. Invalid entries in ozonesecure-mr config. 
(Addendum)

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml | 3 +++
 1 file changed, 3 insertions(+)





[hadoop] branch branch-3.2 updated: HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 7025724  HDFS-13693. Remove unnecessary search in 
INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.
7025724 is described below

commit 702572434c92aa34d70837f8757b8974e04c9da9
Author: Ayush Saxena 
AuthorDate: Tue Jul 23 08:37:55 2019 +0530

HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during 
image loading. Contributed by Lisheng Sun.

(cherry picked from commit 377f95bbe8d2d171b5d7b0bfa7559e67ca4aae46)
---
 .../hdfs/server/namenode/FSImageFormatPBINode.java   |  4 +++-
 .../hadoop/hdfs/server/namenode/INodeDirectory.java  | 16 
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
index bc455e0..6825a5c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
@@ -269,7 +269,7 @@ public final class FSImageFormatPBINode {
 + "name before upgrading to this release.");
   }
   // NOTE: This does not update space counts for parents
-  if (!parent.addChild(child)) {
+  if (!parent.addChildAtLoading(child)) {
 return;
   }
   dir.cacheName(child);
@@ -551,6 +551,8 @@ public final class FSImageFormatPBINode {
   ++numImageErrors;
 }
 if (!inode.isReference()) {
+  // Serialization must ensure that children are in order, related
+  // to HDFS-13693
   b.addChildren(inode.getId());
 } else {
   refList.add(inode.asReference());
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
index 8fa9bcf..e71cb0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
@@ -573,6 +573,22 @@ public class INodeDirectory extends 
INodeWithAdditionalFields
   }
 
   /**
+   * During image loading, the search is unnecessary since the insert position
+   * should always be at the end of the map, given the order in which the
+   * children are serialized on disk.
+   */
+  public boolean addChildAtLoading(INode node) {
+int pos;
+if (!node.isReference()) {
+  pos = (children == null) ? (-1) : (-children.size() - 1);
+  addChild(node, pos);
+  return true;
+} else {
+  return addChild(node);
+}
+  }
+
+  /**
* Add the node to the children list at the given insertion point.
* The basic add method which actually calls children.add(..).
*/
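For readers puzzling over the negative pos in addChildAtLoading above: the
second argument of addChild(node, pos) is an insertion point encoded the way
Collections.binarySearch reports a miss, i.e. -(insertionPoint) - 1. A small
illustration (values invented) of why -children.size() - 1 means "append at
the end", which is what makes the binary search skippable during image load:

    // Decode a binarySearch-style insertion point; illustrative only.
    int size = 5;                   // pretend the directory already has 5 children
    int pos = -size - 1;            // what addChildAtLoading passes: -6
    int insertionPoint = -pos - 1;  // decodes back to 5 == children.size()
    assert insertionPoint == size;  // the new child lands at the end, no search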


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 8ce5015  HDFS-13693. Remove unnecessary search in 
INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.
8ce5015 is described below

commit 8ce5015e94bd973c9e888342117267f14059944f
Author: Ayush Saxena 
AuthorDate: Tue Jul 23 08:37:55 2019 +0530

HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during 
image loading. Contributed by Lisheng Sun.

(cherry picked from commit 377f95bbe8d2d171b5d7b0bfa7559e67ca4aae46)
(cherry picked from commit e3f54a7babf942efbe879aabef65a5b6df57fb65)
---
 .../hdfs/server/namenode/FSImageFormatPBINode.java   |  4 +++-
 .../hadoop/hdfs/server/namenode/INodeDirectory.java  | 16 
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
index 3193c4f..5facc40 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
@@ -268,7 +268,7 @@ public final class FSImageFormatPBINode {
 + "name before upgrading to this release.");
   }
   // NOTE: This does not update space counts for parents
-  if (!parent.addChild(child)) {
+  if (!parent.addChildAtLoading(child)) {
 return;
   }
   dir.cacheName(child);
@@ -550,6 +550,8 @@ public final class FSImageFormatPBINode {
   ++numImageErrors;
 }
 if (!inode.isReference()) {
+  // Serialization must ensure that children are in order, related
+  // to HDFS-13693
   b.addChildren(inode.getId());
 } else {
   refList.add(inode.asReference());
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
index 8fa9bcf..e71cb0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
@@ -573,6 +573,22 @@ public class INodeDirectory extends 
INodeWithAdditionalFields
   }
 
   /**
+   * During image loading, the search is unnecessary since the insert position
+   * should always be at the end of the map, given the order in which the
+   * children are serialized on disk.
+   */
+  public boolean addChildAtLoading(INode node) {
+int pos;
+if (!node.isReference()) {
+  pos = (children == null) ? (-1) : (-children.size() - 1);
+  addChild(node, pos);
+  return true;
+} else {
+  return addChild(node);
+}
+  }
+
+  /**
* Add the node to the children list at the given insertion point.
* The basic add method which actually calls children.add(..).
*/


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-9873. Mutation API Config Change updates Version Number. Contributed by Prabhu Joseph

2019-10-04 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4510970  YARN-9873. Mutation API Config Change updates Version Number. 
Contributed by Prabhu Joseph
4510970 is described below

commit 4510970e2f7728d036c750b596985e5ffa357b60
Author: Sunil G 
AuthorDate: Fri Oct 4 21:49:07 2019 +0530

YARN-9873. Mutation API Config Change updates Version Number. Contributed 
by Prabhu Joseph
---
 .../scheduler/MutableConfigurationProvider.java|  6 
 .../conf/FSSchedulerConfigurationStore.java|  7 +
 .../capacity/conf/InMemoryConfigurationStore.java  |  8 +
 .../capacity/conf/LeveldbConfigurationStore.java   | 23 ++-
 .../conf/MutableCSConfigurationProvider.java   |  5 
 .../capacity/conf/YarnConfigurationStore.java  |  6 
 .../capacity/conf/ZKConfigurationStore.java| 19 
 .../server/resourcemanager/webapp/RMWSConsts.java  |  3 ++
 .../resourcemanager/webapp/RMWebServices.java  | 34 +-
 .../capacity/conf/TestZKConfigurationStore.java| 15 ++
 10 files changed, 124 insertions(+), 2 deletions(-)
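In short: every confirmed configuration mutation advances a version counter in
whichever backing store is in use, and a new read-only handler in
RMWebServices exposes it (the exact endpoint path lives in RMWSConsts and is
not reproduced here). A hedged sketch of the observable behaviour, with
confStore and the mutation step as hypothetical stand-ins:

    // Illustrative only: the version is monotonic across mutations.
    long before = confStore.getConfigVersion();  // e.g. 41
    // ... apply one scheduler configuration mutation and confirm it ...
    long after = confStore.getConfigVersion();   // e.g. 42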

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
index 9e843df..eff8aa8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
@@ -65,6 +65,12 @@ public interface MutableConfigurationProvider {
*/
   Configuration getConfiguration();
 
+  /**
+   * Get the last updated scheduler config version.
+   * @return Last updated scheduler config version.
+   */
+  long getConfigVersion() throws Exception;
+
   void formatConfigurationInStore(Configuration conf) throws Exception;
 
   /**
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
index 80053be..f59fa0a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
@@ -148,6 +148,13 @@ public class FSSchedulerConfigurationStore extends 
YarnConfigurationStore {
 tempConfigPath = null;
   }
 
+  @Override
+  public long getConfigVersion() throws Exception {
+String version = getLatestConfigPath().getName().
+substring(YarnConfiguration.CS_CONFIGURATION_FILE.length() + 1);
+return Long.parseLong(version);
+  }
+
   private void finalizeFileSystemFile() throws IOException {
 // call confirmMutation() to make sure tempConfigPath is not null
 Path finalConfigPath = getFinalConfigPath(tempConfigPath);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
index 4871443..59d140e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
@@ -33,11 +33,13 @@ public class InMemoryConfigurationStore extends 
YarnConfigurationStore {
 
   private Configuration schedConf;
   private LogMutation pendingMutation;
+  private long config
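The message is truncated above, but the FSSchedulerConfigurationStore hunk
earlier shows the whole trick for that store: config files are named
<CS_CONFIGURATION_FILE>.<version>, and getConfigVersion() peels the numeric
suffix off the latest one. A minimal sketch of the substring arithmetic,
assuming CS_CONFIGURATION_FILE resolves to "capacity-scheduler.xml" (an
assumption; check YarnConfiguration for the actual value):

    // Illustrative only: recover the version from a stored file name.
    String configFile = "capacity-scheduler.xml";        // assumed constant value
    String latestName = "capacity-scheduler.xml.42";     // hypothetical latest file
    long version = Long.parseLong(
        latestName.substring(configFile.length() + 1));  // the "+ 1" skips the '.'
    // version == 42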

[hadoop] branch branch-3.2 updated: YARN-9873. Mutation API Config Change updates Version Number. Contributed by Prabhu Joseph

2019-10-04 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 3a0afcf  YARN-9873. Mutation API Config Change updates Version Number. 
Contributed by Prabhu Joseph
3a0afcf is described below

commit 3a0afcfb7f196bee8e534a3e80211bfa077b2aee
Author: Sunil G 
AuthorDate: Fri Oct 4 21:49:07 2019 +0530

YARN-9873. Mutation API Config Change updates Version Number. Contributed 
by Prabhu Joseph

(cherry picked from commit 4510970e2f7728d036c750b596985e5ffa357b60)
---
 .../scheduler/MutableConfigurationProvider.java|  6 
 .../conf/FSSchedulerConfigurationStore.java|  7 +
 .../capacity/conf/InMemoryConfigurationStore.java  |  8 +
 .../capacity/conf/LeveldbConfigurationStore.java   | 23 ++-
 .../conf/MutableCSConfigurationProvider.java   |  5 
 .../capacity/conf/YarnConfigurationStore.java  |  6 
 .../capacity/conf/ZKConfigurationStore.java| 19 
 .../server/resourcemanager/webapp/RMWSConsts.java  |  3 ++
 .../resourcemanager/webapp/RMWebServices.java  | 34 +-
 .../capacity/conf/TestZKConfigurationStore.java| 15 ++
 10 files changed, 124 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
index 9e843df..eff8aa8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
@@ -65,6 +65,12 @@ public interface MutableConfigurationProvider {
*/
   Configuration getConfiguration();
 
+  /**
+   * Get the last updated scheduler config version.
+   * @return Last updated scheduler config version.
+   */
+  long getConfigVersion() throws Exception;
+
   void formatConfigurationInStore(Configuration conf) throws Exception;
 
   /**
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
index ddc5c8a..14cd0a4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
@@ -148,6 +148,13 @@ public class FSSchedulerConfigurationStore extends 
YarnConfigurationStore {
 tempConfigPath = null;
   }
 
+  @Override
+  public long getConfigVersion() throws Exception {
+String version = getLatestConfigPath().getName().
+substring(YarnConfiguration.CS_CONFIGURATION_FILE.length() + 1);
+return Long.parseLong(version);
+  }
+
   private void finalizeFileSystemFile() throws IOException {
 // call confirmMutation() to make sure tempConfigPath is not null
 Path finalConfigPath = getFinalConfigPath(tempConfigPath);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
index 4871443..59d140e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
@@ -33,11 +33,13 @@ public class InMemoryConfigurationStore extends 
YarnConfigurationStore {
 
   private

[hadoop] branch branch-3.1 updated: HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null (#1192) Contributed by Siyao Meng.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 61dc877  HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy 
always returns null (#1192) Contributed by Siyao Meng.
61dc877 is described below

commit 61dc877b008d0758ba7462778701f14234e29126
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Thu Aug 1 17:15:22 2019 -0700

HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns 
null (#1192) Contributed by Siyao Meng.

(cherry picked from commit 17e8cf501b384af93726e4f2e6f5e28c6e3a8f65)
---
 .../src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java  | 4 +++-
 .../main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java  | 4 +++-
 .../src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java  | 2 ++
 .../java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java | 2 ++
 4 files changed, 10 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 2d1d411..263e93e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -429,10 +429,12 @@ class JsonUtilClient {
 final long length = ((Number) m.get("length")).longValue();
 final long fileCount = ((Number) m.get("fileCount")).longValue();
 final long directoryCount = ((Number) m.get("directoryCount")).longValue();
+final String ecPolicy = ((String) m.get("ecPolicy"));
 ContentSummary.Builder builder = new ContentSummary.Builder()
 .length(length)
 .fileCount(fileCount)
-.directoryCount(directoryCount);
+.directoryCount(directoryCount)
+.erasureCodingPolicy(ecPolicy);
 builder = buildQuotaUsage(builder, m, ContentSummary.Builder.class);
 return builder.build();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index 041670e..20533cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -190,6 +190,7 @@ public class HttpFSFileSystem extends FileSystem
 
   public static final String CONTENT_SUMMARY_JSON = "ContentSummary";
   public static final String CONTENT_SUMMARY_DIRECTORY_COUNT_JSON = 
"directoryCount";
+  public static final String CONTENT_SUMMARY_ECPOLICY_JSON = "ecPolicy";
   public static final String CONTENT_SUMMARY_FILE_COUNT_JSON = "fileCount";
   public static final String CONTENT_SUMMARY_LENGTH_JSON = "length";
 
@@ -1135,7 +1136,8 @@ public class HttpFSFileSystem extends FileSystem
 ContentSummary.Builder builder = new ContentSummary.Builder()
 .length((Long) json.get(CONTENT_SUMMARY_LENGTH_JSON))
 .fileCount((Long) json.get(CONTENT_SUMMARY_FILE_COUNT_JSON))
-.directoryCount((Long) json.get(CONTENT_SUMMARY_DIRECTORY_COUNT_JSON));
+.directoryCount((Long) json.get(CONTENT_SUMMARY_DIRECTORY_COUNT_JSON))
+.erasureCodingPolicy((String) json.get(CONTENT_SUMMARY_ECPOLICY_JSON));
 builder = buildQuotaUsage(builder, json, ContentSummary.Builder.class);
 return builder.build();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 59f91ca..a9a9c70 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -261,6 +261,8 @@ public class FSOperations {
Map<String, Object> json = new LinkedHashMap<String, Object>();
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_DIRECTORY_COUNT_JSON,
 contentSummary.getDirectoryCount());
+json.put(HttpFSFileSystem.CONTENT_SUMMARY_ECPOLICY_JSON,
+contentSummary.getErasureCodingPolicy());
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_FILE_COUNT_JSON,
 contentSummary.getFileCount());
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_LENGTH_JSON,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoo
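The message is cut short above, but the hunks show the shape of the fix: the
HttpFS server now writes the erasure coding policy into the ContentSummary
JSON, and the client reads the same key back instead of leaving it null. A
hedged sketch of the round trip (values invented):

    // Server side (FSOperations): the policy joins the counters in the map.
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("directoryCount", 2L);
    json.put("ecPolicy", "RS-6-3-1024k");   // hypothetical policy name
    json.put("fileCount", 10L);
    json.put("length", 24930L);

    // Client side (HttpFSFileSystem / JsonUtilClient): same key, read back.
    String ecPolicy = (String) json.get("ecPolicy");  // non-null after this patch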

[hadoop] branch branch-3.2 updated: HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null (#1192) Contributed by Siyao Meng.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 673c9d5  HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy 
always returns null (#1192) Contributed by Siyao Meng.
673c9d5 is described below

commit 673c9d53ca5926c818815e01cb178612d1cd433e
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Thu Aug 1 17:15:22 2019 -0700

HDFS-14686. HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns 
null (#1192) Contributed by Siyao Meng.

(cherry picked from commit 17e8cf501b384af93726e4f2e6f5e28c6e3a8f65)
---
 .../src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java  | 4 +++-
 .../main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java  | 4 +++-
 .../src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java  | 2 ++
 .../java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java | 2 ++
 4 files changed, 10 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 34ad50f..f9b847c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -432,10 +432,12 @@ public class JsonUtilClient {
 final long length = ((Number) m.get("length")).longValue();
 final long fileCount = ((Number) m.get("fileCount")).longValue();
 final long directoryCount = ((Number) m.get("directoryCount")).longValue();
+final String ecPolicy = ((String) m.get("ecPolicy"));
 ContentSummary.Builder builder = new ContentSummary.Builder()
 .length(length)
 .fileCount(fileCount)
-.directoryCount(directoryCount);
+.directoryCount(directoryCount)
+.erasureCodingPolicy(ecPolicy);
 builder = buildQuotaUsage(builder, m, ContentSummary.Builder.class);
 return builder.build();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index 1efafe7..ac909dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -193,6 +193,7 @@ public class HttpFSFileSystem extends FileSystem
 
   public static final String CONTENT_SUMMARY_JSON = "ContentSummary";
   public static final String CONTENT_SUMMARY_DIRECTORY_COUNT_JSON = 
"directoryCount";
+  public static final String CONTENT_SUMMARY_ECPOLICY_JSON = "ecPolicy";
   public static final String CONTENT_SUMMARY_FILE_COUNT_JSON = "fileCount";
   public static final String CONTENT_SUMMARY_LENGTH_JSON = "length";
 
@@ -1140,7 +1141,8 @@ public class HttpFSFileSystem extends FileSystem
 ContentSummary.Builder builder = new ContentSummary.Builder()
 .length((Long) json.get(CONTENT_SUMMARY_LENGTH_JSON))
 .fileCount((Long) json.get(CONTENT_SUMMARY_FILE_COUNT_JSON))
-.directoryCount((Long) json.get(CONTENT_SUMMARY_DIRECTORY_COUNT_JSON));
+.directoryCount((Long) json.get(CONTENT_SUMMARY_DIRECTORY_COUNT_JSON))
+.erasureCodingPolicy((String) json.get(CONTENT_SUMMARY_ECPOLICY_JSON));
 builder = buildQuotaUsage(builder, json, ContentSummary.Builder.class);
 return builder.build();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 3f79256..043f3e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -265,6 +265,8 @@ public class FSOperations {
Map<String, Object> json = new LinkedHashMap<String, Object>();
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_DIRECTORY_COUNT_JSON,
 contentSummary.getDirectoryCount());
+json.put(HttpFSFileSystem.CONTENT_SUMMARY_ECPOLICY_JSON,
+contentSummary.getErasureCodingPolicy());
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_FILE_COUNT_JSON,
 contentSummary.getFileCount());
 json.put(HttpFSFileSystem.CONTENT_SUMMARY_LENGTH_JSON,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apach

[hadoop] branch trunk updated (4510970 -> 3f16651)

2019-10-04 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 4510970  YARN-9873. Mutation API Config Change updates Version Number. 
Contributed by Prabhu Joseph
 add 3f16651  HDDS-2237. KeyDeletingService throws NPE if it's started too 
early (#1584)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java   | 1 -
 .../test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java   | 3 +++
 2 files changed, 3 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDFS-14890. Fixed namenode and journalnode startup on Windows. Contributed by Siddharth Wagle

2019-10-04 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new aa24add  HDFS-14890.  Fixed namenode and journalnode startup on 
Windows.  Contributed by Siddharth Wagle
aa24add is described below

commit aa24add8f0e9812d1f787efb3c40155b0fdeed9c
Author: Eric Yang 
AuthorDate: Fri Oct 4 13:13:10 2019 -0400

HDFS-14890.  Fixed namenode and journalnode startup on Windows.
 Contributed by Siddharth Wagle
---
 .../java/org/apache/hadoop/hdfs/server/common/Storage.java| 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 2ba943a..e7da44e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -447,9 +447,14 @@ public abstract class Storage extends StorageInfo {
 throw new IOException("Cannot create directory " + curDir);
   }
   if (permission != null) {
-Set<PosixFilePermission> permissions =
-PosixFilePermissions.fromString(permission.toString());
-Files.setPosixFilePermissions(curDir.toPath(), permissions);
+try {
+  Set<PosixFilePermission> permissions =
+  PosixFilePermissions.fromString(permission.toString());
+  Files.setPosixFilePermissions(curDir.toPath(), permissions);
+} catch (UnsupportedOperationException uoe) {
+  // Default to FileUtil for non posix file systems
+  FileUtil.setPermission(curDir, permission);
+}
   }
 }
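The shape of the fix: Files.setPosixFilePermissions is defined only for POSIX
file stores, so on Windows/NTFS it throws UnsupportedOperationException, which
the new catch routes to FileUtil.setPermission. A hedged, self-contained
sketch of the POSIX half (the path is hypothetical):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    // An FsPermission of 700 stringifies to "rwx------"; fromString parses it.
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwx------");
    try {
      Files.setPosixFilePermissions(Paths.get("/tmp/example"), perms); // POSIX only
    } catch (UnsupportedOperationException uoe) {
      // Non-POSIX store (e.g. NTFS): fall back, as the patch does via FileUtil.
    }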
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: HDFS-14890. Fixed namenode and journalnode startup on Windows. Contributed by Siddharth Wagle

2019-10-04 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 8bb2b00  HDFS-14890.  Fixed namenode and journalnode startup on 
Windows.  Contributed by Siddharth Wagle
8bb2b00 is described below

commit 8bb2b00d38978859b22b892034eb3f559b820942
Author: Eric Yang 
AuthorDate: Fri Oct 4 13:13:10 2019 -0400

HDFS-14890.  Fixed namenode and journalnode startup on Windows.
 Contributed by Siddharth Wagle

(cherry picked from commit aa24add8f0e9812d1f787efb3c40155b0fdeed9c)
---
 .../java/org/apache/hadoop/hdfs/server/common/Storage.java| 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 2ba943a..e7da44e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -447,9 +447,14 @@ public abstract class Storage extends StorageInfo {
 throw new IOException("Cannot create directory " + curDir);
   }
   if (permission != null) {
-Set<PosixFilePermission> permissions =
-PosixFilePermissions.fromString(permission.toString());
-Files.setPosixFilePermissions(curDir.toPath(), permissions);
+try {
+  Set<PosixFilePermission> permissions =
+  PosixFilePermissions.fromString(permission.toString());
+  Files.setPosixFilePermissions(curDir.toPath(), permissions);
+} catch (UnsupportedOperationException uoe) {
+  // Default to FileUtil for non posix file systems
+  FileUtil.setPermission(curDir, permission);
+}
   }
 }
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] annotated tag ozone-0.4.1-alpha-RC0 created (now 9062dac)

2019-10-04 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a change to annotated tag ozone-0.4.1-alpha-RC0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 9062dac  (tag)
 tagging 8a83e16da081434e3e2d52fd545391d6bfb47952 (commit)
  by Nanda kumar
  on Fri Oct 4 13:15:45 2019 +0530

- Log -
Apache Hadoop Ozone 0.4.1-alpha RC0 release.
-BEGIN PGP SIGNATURE-

iQJFBAABCAAvFiEEwYvqJzsezX8Fpk0czmyKsSBHgN8FAl2W+O4RHG5hbmRhQGFw
YWNoZS5vcmcACgkQzmyKsSBHgN+QOA/6AlhzccUSD87ROsGYzMzNcB/uWPhYu3AC
7QWgYmTfEld4k4ipyQLmLaMGqrmtubaJr683v76HkDBkWeyMMs7iZOLRWkSF98gm
8VVHgLUPS608p/l9eaA0Bn60SNP+fGQ3SqOOWyI1atQbZhr1cN1kmoU73uhsBS9J
4jDlWe45G8xil56a4DVElimyLGeCnpeP5MmdUmoO1QrJEz5INS/FxK1hCOLK23W5
AFNQwSV7aE6rv/JpZGdFB0Ix24EO1imkVwE9Itj+Td5hDTiwbmjCXJW6OlOSYvMy
TK0qdFXCAXffX6jf1f/ebTz1aFP5zRzgCPVCEEQ2MSjo1tbGKIR5ujvKqhA8cA+n
tBmE7IFpOCMf1IiY52+Jcwd6R75mO3hnRykC/LNstgvN3+HXalCnpRR8p4bMatd5
EJ83fh00pRw8UQytX0AaFvaGAGkVV2KVAI5froaOJGij330IgTwN/gLMJKySMAmd
fm94Hh5buZAs8L6G8p3dbloYq50UTd0Ex55eFrYgxEUtN1S4g5d597cgPgxLdD/v
pg8nszz4g8/yYN0Knd+yAOlGLV6mUCgeFbmvjhKWkhIAEzi/y1Ar/pnNG5TpNFXV
j5JgM8qbVFGVA3001hVWOfFV/gj0gPBZgW8+E57Pej+bWaUtoo0sCBYeywM+2UiC
sadHZ8e5YBA=
=ptWC
-END PGP SIGNATURE-
---

This annotated tag includes the following new commits:

 new 8a83e16  Preparing for release 0.4.1-alpha.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/01: Preparing for release 0.4.1-alpha.

2019-10-04 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a commit to annotated tag ozone-0.4.1-alpha-RC0
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8a83e16da081434e3e2d52fd545391d6bfb47952
Author: Nanda kumar 
AuthorDate: Fri Oct 4 12:33:16 2019 +0530

Preparing for release 0.4.1-alpha.
---
 hadoop-hdds/client/pom.xml  | 4 ++--
 hadoop-hdds/common/pom.xml  | 6 +++---
 hadoop-hdds/config/pom.xml  | 4 ++--
 hadoop-hdds/container-service/pom.xml   | 4 ++--
 hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md  | 4 ++--
 hadoop-hdds/docs/pom.xml| 4 ++--
 hadoop-hdds/framework/pom.xml   | 4 ++--
 hadoop-hdds/pom.xml | 4 ++--
 hadoop-hdds/server-scm/pom.xml  | 4 ++--
 hadoop-hdds/tools/pom.xml   | 4 ++--
 hadoop-ozone/Jenkinsfile| 2 +-
 hadoop-ozone/client/pom.xml | 4 ++--
 hadoop-ozone/common/pom.xml | 4 ++--
 hadoop-ozone/csi/pom.xml| 4 ++--
 hadoop-ozone/datanode/pom.xml   | 4 ++--
 hadoop-ozone/dist/pom.xml   | 4 ++--
 hadoop-ozone/fault-injection-test/network-tests/pom.xml | 2 +-
 hadoop-ozone/fault-injection-test/pom.xml   | 4 ++--
 hadoop-ozone/integration-test/pom.xml   | 4 ++--
 hadoop-ozone/objectstore-service/pom.xml| 4 ++--
 hadoop-ozone/ozone-manager/pom.xml  | 4 ++--
 hadoop-ozone/ozone-recon-codegen/pom.xml| 2 +-
 hadoop-ozone/ozone-recon/pom.xml| 2 +-
 hadoop-ozone/ozonefs-lib-current/pom.xml| 4 ++--
 hadoop-ozone/ozonefs-lib-legacy/pom.xml | 4 ++--
 hadoop-ozone/ozonefs/pom.xml| 4 ++--
 hadoop-ozone/pom.xml| 6 +++---
 hadoop-ozone/s3gateway/pom.xml  | 4 ++--
 hadoop-ozone/tools/pom.xml  | 4 ++--
 hadoop-ozone/upgrade/pom.xml| 4 ++--
 pom.ozone.xml   | 2 +-
 31 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/hadoop-hdds/client/pom.xml b/hadoop-hdds/client/pom.xml
index 1f139d7..ed19115 100644
--- a/hadoop-hdds/client/pom.xml
+++ b/hadoop-hdds/client/pom.xml
@@ -20,11 +20,11 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   
 org.apache.hadoop
 hadoop-hdds
-0.4.1-SNAPSHOT
+0.4.1-alpha
   
 
   hadoop-hdds-client
-  0.4.1-SNAPSHOT
+  0.4.1-alpha
   Apache Hadoop Distributed Data Store Client 
Library
   Apache Hadoop HDDS Client
   jar
diff --git a/hadoop-hdds/common/pom.xml b/hadoop-hdds/common/pom.xml
index 5fc4a5e..207f474 100644
--- a/hadoop-hdds/common/pom.xml
+++ b/hadoop-hdds/common/pom.xml
@@ -20,16 +20,16 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   
 org.apache.hadoop
 hadoop-hdds
-0.4.1-SNAPSHOT
+0.4.1-alpha
   
   hadoop-hdds-common
-  0.4.1-SNAPSHOT
+  0.4.1-alpha
   Apache Hadoop Distributed Data Store Common
   Apache Hadoop HDDS Common
   jar
 
   
-0.4.1-SNAPSHOT
+0.4.1-alpha
 2.11.0
 3.4.2
 ${hdds.version}
diff --git a/hadoop-hdds/config/pom.xml b/hadoop-hdds/config/pom.xml
index 880faa1..4143a3b 100644
--- a/hadoop-hdds/config/pom.xml
+++ b/hadoop-hdds/config/pom.xml
@@ -20,10 +20,10 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   
 org.apache.hadoop
 hadoop-hdds
-0.4.1-SNAPSHOT
+0.4.1-alpha
   
   hadoop-hdds-config
-  0.4.1-SNAPSHOT
+  0.4.1-alpha
   Apache Hadoop Distributed Data Store Config Tools
   Apache Hadoop HDDS Config
   jar
diff --git a/hadoop-hdds/container-service/pom.xml 
b/hadoop-hdds/container-service/pom.xml
index 730c1ab..2b66a5c 100644
--- a/hadoop-hdds/container-service/pom.xml
+++ b/hadoop-hdds/container-service/pom.xml
@@ -20,10 +20,10 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   
 org.apache.hadoop
 hadoop-hdds
-0.4.1-SNAPSHOT
+0.4.1-alpha
   
   hadoop-hdds-container-service
-  0.4.1-SNAPSHOT
+  0.4.1-alpha
   Apache Hadoop Distributed Data Store Container 
Service
   Apache Hadoop HDDS Container Service
   jar
diff --git a/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md 
b/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
index 7202ebd..1fc9155 100644
--- a/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
+++ b/hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
@@ -88,7 +88,7 @@ _Note_: You may also use 
`org.apache.hadoop.fs.ozone.OzoneFileSystem` without th
 Copy the `ozonefs.jar` file from an ozone distribution (__use the legacy 
version!__)
 
 ```
-kubectl cp 
om-0:/opt/hadoop/share/ozone/lib/hadoop-ozone-filesystem-lib-legacy-0.4.1-SNAPSHOT

[hadoop] branch branch-3.2 updated: HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new ProxyCombiner allowing for multiple related protocols to be combined. Allow AlignmentCon

2019-10-04 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 69b0c51  HDFS-14162. [SBN read] Allow Balancer to work with Observer 
node. Add a new ProxyCombiner allowing for multiple related protocols to be 
combined. Allow AlignmentContext to be passed in NameNodeProxyFactory. 
Contributed by Erik Krogen.
69b0c51 is described below

commit 69b0c513a9b11cc7f795747732173b36aacbe794
Author: Erik Krogen 
AuthorDate: Thu Dec 20 17:49:22 2018 -0800

HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new 
ProxyCombiner allowing for multiple related protocols to be combined. Allow 
AlignmentContext to be passed in NameNodeProxyFactory. Contributed by Erik 
Krogen.

(cherry picked from 64f28f9efa2ef3cd9dd54a6c5009029721e030ed)
---
 .../java/org/apache/hadoop/ipc/ProxyCombiner.java  | 137 +
 .../hdfs/server/namenode/ha/HAProxyFactory.java|   9 ++
 .../namenode/ha/ObserverReadProxyProvider.java |   2 +-
 .../org/apache/hadoop/hdfs/NameNodeProxies.java| 117 +++---
 .../hdfs/server/balancer/NameNodeConnector.java|  11 +-
 .../server/namenode/ha/NameNodeHAProxyFactory.java |   9 +-
 .../hdfs/server/protocol/BalancerProtocols.java|  30 +
 .../balancer/TestBalancerWithHANameNodes.java  | 101 ++-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java |  12 +-
 9 files changed, 343 insertions(+), 85 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
new file mode 100644
index 000..fbafabc
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
@@ -0,0 +1,137 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ipc;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ipc.Client.ConnectionId;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * A utility class used to combine two protocol proxies.
+ * See {@link #combine(Class, Object...)}.
+ */
+public final class ProxyCombiner {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ProxyCombiner.class);
+
+  private ProxyCombiner() { }
+
+  /**
+   * Combine two or more proxies which together comprise a single proxy
+   * interface. This can be used for a protocol interface which {@code extends}
+   * multiple other protocol interfaces. The returned proxy will implement
+   * all of the methods of the combined proxy interface, delegating each call
+   * to whichever proxy implements that method. If multiple proxies implement the
+   * same method, the first in the list will be used for delegation.
+   *
+   * This will check that every method on the combined interface is
+   * implemented by at least one of the supplied proxy objects.
+   *
+   * @param combinedProxyInterface The interface of the combined proxy.
+   * @param proxies The proxies which should be used as delegates.
+   * @param <T> The type of the proxy that will be returned.
+   * @return The combined proxy.
+   */
+  @SuppressWarnings("unchecked")
+  public static <T> T combine(Class<T> combinedProxyInterface,
+  Object... proxies) {
+methodLoop:
+for (Method m : combinedProxyInterface.getMethods()) {
+  for (Object proxy : proxies) {
+try {
+  proxy.getClass().getMethod(m.getName(), m.getParameterTypes());
+  continue methodLoop; // go to the next method
+} catch (NoSuchMethodException nsme) {
+  // Continue to try the next proxy
+}
+  }
+  throw new IllegalStateException("The proxies specified for "
+  + combinedProxyInterface + " do not cover method " + m);
+}
+
+InvocationHandler handler = new CombinedProxyI
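The message is truncated above, but the combine() javadoc is enough for a toy
illustration of the contract: every method of the combined interface must be
covered by some delegate, and each call is dispatched to the first delegate
declaring that method. Hedged sketch with invented interfaces (not the actual
Balancer wiring, which combines the client and namenode proxies behind the new
BalancerProtocols interface):

    import org.apache.hadoop.ipc.ProxyCombiner;

    interface Reader { String read(); }
    interface Writer { void write(String s); }
    interface ReadWriter extends Reader, Writer { }

    Reader r = () -> "hello";          // covers read()
    Writer w = System.out::println;    // covers write(String)

    // combine() verifies coverage, then returns a dynamic proxy delegating
    // each invocation; an uncovered method raises IllegalStateException.
    ReadWriter rw = ProxyCombiner.combine(ReadWriter.class, r, w);
    rw.write(rw.read());               // prints "hello"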

[hadoop] branch branch-3.1 updated: HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new ProxyCombiner allowing for multiple related protocols to be combined. Allow AlignmentCon

2019-10-04 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 56caaca  HDFS-14162. [SBN read] Allow Balancer to work with Observer 
node. Add a new ProxyCombiner allowing for multiple related protocols to be 
combined. Allow AlignmentContext to be passed in NameNodeProxyFactory. 
Contributed by Erik Krogen.
56caaca is described below

commit 56caacac1f84ceb2eff90c39a63111a219c93439
Author: Erik Krogen 
AuthorDate: Thu Dec 20 17:49:22 2018 -0800

HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new 
ProxyCombiner allowing for multiple related protocols to be combined. Allow 
AlignmentContext to be passed in NameNodeProxyFactory. Contributed by Erik 
Krogen.

(cherry picked from 64f28f9efa2ef3cd9dd54a6c5009029721e030ed)
(cherry picked from 69b0c513a9b11cc7f795747732173b36aacbe794)
---
 .../java/org/apache/hadoop/ipc/ProxyCombiner.java  | 137 +
 .../hdfs/server/namenode/ha/HAProxyFactory.java|   9 ++
 .../namenode/ha/ObserverReadProxyProvider.java |   2 +-
 .../org/apache/hadoop/hdfs/NameNodeProxies.java| 108 ++--
 .../hdfs/server/balancer/NameNodeConnector.java|  11 +-
 .../server/namenode/ha/NameNodeHAProxyFactory.java |   9 +-
 .../hdfs/server/protocol/BalancerProtocols.java|  30 +
 .../balancer/TestBalancerWithHANameNodes.java  | 101 ++-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java |  12 +-
 9 files changed, 338 insertions(+), 81 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
new file mode 100644
index 000..fbafabc
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
@@ -0,0 +1,137 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ipc;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ipc.Client.ConnectionId;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * A utility class used to combine two protocol proxies.
+ * See {@link #combine(Class, Object...)}.
+ */
+public final class ProxyCombiner {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ProxyCombiner.class);
+
+  private ProxyCombiner() { }
+
+  /**
+   * Combine two or more proxies which together comprise a single proxy
+   * interface. This can be used for a protocol interface which {@code extends}
+   * multiple other protocol interfaces. The returned proxy will implement
+   * all of the methods of the combined proxy interface, delegating each call
+   * to whichever proxy implements that method. If multiple proxies implement the
+   * same method, the first in the list will be used for delegation.
+   *
+   * This will check that every method on the combined interface is
+   * implemented by at least one of the supplied proxy objects.
+   *
+   * @param combinedProxyInterface The interface of the combined proxy.
+   * @param proxies The proxies which should be used as delegates.
+   * @param <T> The type of the proxy that will be returned.
+   * @return The combined proxy.
+   */
+  @SuppressWarnings("unchecked")
+  public static <T> T combine(Class<T> combinedProxyInterface,
+  Object... proxies) {
+methodLoop:
+for (Method m : combinedProxyInterface.getMethods()) {
+  for (Object proxy : proxies) {
+try {
+  proxy.getClass().getMethod(m.getName(), m.getParameterTypes());
+  continue methodLoop; // go to the next method
+} catch (NoSuchMethodException nsme) {
+  // Continue to try the next proxy
+}
+  }
+  throw new IllegalStateException("The proxies specified for "
+  + combinedProxyInterface + " do not cover method " + 

[hadoop] branch trunk updated: HADOOP-16570. S3A committers encounter scale issues.

2019-10-04 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6574f27  HADOOP-16570. S3A committers encounter scale issues.
6574f27 is described below

commit 6574f27fa348542411bff888b184cd7ce34e5d9e
Author: Steve Loughran 
AuthorDate: Fri Oct 4 18:53:53 2019 +0100

HADOOP-16570. S3A committers encounter scale issues.

Contributed by Steve Loughran.

This addresses two scale issues which have surfaced in large-scale benchmarks
of the S3A Committers.

* Thread pools are not cleaned up.
  This now happens, with tests.

* OOM on job commit for jobs with many thousands of tasks,
  each generating tens of (very large) files.

Instead of loading all pending commits into memory as a single list, the list
of files to load is the sole list which is passed around; .pendingset files
are loaded and processed in isolation, and reloaded if necessary for any
abort/rollback operation.

The parallel commit/abort/revert operations now work at the .pendingset level,
rather than that of individual pending commit files. The existing parallelized
Tasks API is still used to commit those files, but with a null thread pool, so
as to serialize the operations.

Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797
---
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |   6 +
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  11 +-
 .../apache/hadoop/fs/s3a/S3AInstrumentation.java   |   3 +
 .../hadoop/fs/s3a/commit/AbstractS3ACommitter.java | 466 ++---
 .../fs/s3a/commit/AbstractS3ACommitterFactory.java |   2 +-
 .../hadoop/fs/s3a/commit/CommitConstants.java  |   6 +
 .../hadoop/fs/s3a/commit/files/SuccessData.java|   6 +
 .../fs/s3a/commit/magic/MagicS3GuardCommitter.java |   8 +-
 .../commit/staging/DirectoryStagingCommitter.java  |  17 +-
 .../staging/PartitionedStagingCommitter.java   | 101 -
 .../fs/s3a/commit/staging/StagingCommitter.java|  43 +-
 .../org/apache/hadoop/fs/s3a/ITestS3AClosedFS.java |  18 +-
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java |  42 +-
 .../fs/s3a/commit/AbstractITCommitProtocol.java|  74 +++-
 .../org/apache/hadoop/fs/s3a/commit/TestTasks.java |   2 +-
 .../commit/integration/ITestS3ACommitterMRJob.java |   6 +-
 .../fs/s3a/commit/staging/StagingTestBase.java |  66 ++-
 .../staging/TestDirectoryCommitterScale.java   | 314 ++
 .../s3a/commit/staging/TestStagingCommitter.java   |  29 +-
 .../TestStagingDirectoryOutputCommitter.java   |  10 +-
 .../staging/TestStagingPartitionedJobCommit.java   |  43 +-
 .../staging/TestStagingPartitionedTaskCommit.java  |  46 +-
 22 files changed, 1123 insertions(+), 196 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
index 014a494..fdbdf37 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
@@ -837,4 +837,10 @@ public final class Constants {
   public static final String AWS_SERVICE_IDENTIFIER_S3 = "S3";
   public static final String AWS_SERVICE_IDENTIFIER_DDB = "DDB";
   public static final String AWS_SERVICE_IDENTIFIER_STS = "STS";
+
+  /**
+   * How long to wait for the thread pool to terminate when cleaning up.
+   * Value: {@value} seconds.
+   */
+  public static final int THREAD_POOL_SHUTDOWN_DELAY_SECONDS = 30;
 }
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 9431884..26f16a7 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -154,6 +154,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.Progressable;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.SemaphoredDelegatingExecutor;
+import org.apache.hadoop.util.concurrent.HadoopExecutors;
 
 import static 
org.apache.hadoop.fs.impl.AbstractFSBuilderImpl.rejectUnknownMandatoryKeys;
 import static 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
@@ -3062,6 +3063,12 @@ public class S3AFileSystem extends FileSystem implements 
StreamCapabilities,
 transfers.shutdownNow(true);
 transfers = null;
   }
+  HadoopExecutors.shutdown(boundedThreadPool, LOG,
+  THREAD_POOL_SHUTDOWN_DELAY_SECONDS, TimeUnit.SECONDS);
+  boundedThreadPool = null;
+  HadoopExecutors.shutdown(unboundedThreadPool, LOG,
+  
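The message is truncated above; the close() hunk is the heart of the
thread-pool half of the fix: both S3A pools are now shut down with a bounded
wait instead of being leaked. A minimal sketch of the same pattern, assuming
HadoopExecutors.shutdown(executor, logger, timeout, unit) exactly as used in
the hunk:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.util.concurrent.HadoopExecutors;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    Logger log = LoggerFactory.getLogger("example");
    ExecutorService pool = Executors.newFixedThreadPool(4);
    // ... submit work ...
    // Waits up to 30s for graceful termination, then forces shutdownNow();
    // 30 matches THREAD_POOL_SHUTDOWN_DELAY_SECONDS from the Constants hunk.
    HadoopExecutors.shutdown(pool, log, 30, TimeUnit.SECONDS);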

[hadoop] branch trunk updated (6574f27 -> 10bdc59)

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6574f27  HADOOP-16570. S3A committers encounter scale issues.
 add 10bdc59  HADOOP-16579. Upgrade to Apache Curator 4.2.0 excluding ZK 
(#1531). Contributed by Norbert Kalmár.

No new revisions were added by this update.

Summary of changes:
 hadoop-project/pom.xml | 26 +-
 1 file changed, 25 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-2470. NN should automatically set permissions on dfs.namenode.*.dir. Contributed by Siddharth Wagle.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 8bb9807  HDFS-2470. NN should automatically set permissions on 
dfs.namenode.*.dir. Contributed by Siddharth Wagle.
8bb9807 is described below

commit 8bb98076bef835229ef3498dd740921c47ed3e24
Author: Arpit Agarwal 
AuthorDate: Mon Aug 26 15:43:52 2019 -0700

HDFS-2470. NN should automatically set permissions on dfs.namenode.*.dir. 
Contributed by Siddharth Wagle.

(cherry picked from commit a64a43b77fb1032dcb66730a6b6257a24726c256)
(cherry picked from commit 8b1238171752d03712ae69d8464108ef0803ae10)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  8 +++
 .../hadoop/hdfs/qjournal/server/JNStorage.java |  6 -
 .../apache/hadoop/hdfs/server/common/Storage.java  | 28 ++
 .../hadoop/hdfs/server/namenode/FSImage.java   |  2 +-
 .../hadoop/hdfs/server/namenode/NNStorage.java | 24 ++-
 .../src/main/resources/hdfs-default.xml| 20 
 .../hadoop/hdfs/server/namenode/TestEditLog.java   | 14 +++
 .../hadoop/hdfs/server/namenode/TestStartup.java   | 27 -
 8 files changed, 115 insertions(+), 14 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d6ed729..02ce2f4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -538,6 +538,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   public static final String  DFS_NAMENODE_HTTPS_ADDRESS_DEFAULT = "0.0.0.0:" 
+ DFS_NAMENODE_HTTPS_PORT_DEFAULT;
   public static final String  DFS_NAMENODE_NAME_DIR_KEY =
   HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_NAME_DIR_KEY;
+  public static final String DFS_NAMENODE_NAME_DIR_PERMISSION_KEY =
+  "dfs.namenode.storage.dir.perm";
+  public static final String DFS_NAMENODE_NAME_DIR_PERMISSION_DEFAULT =
+  "700";
   public static final String  DFS_NAMENODE_EDITS_DIR_KEY =
   HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_EDITS_DIR_KEY;
   public static final String  DFS_NAMENODE_SHARED_EDITS_DIR_KEY = 
"dfs.namenode.shared.edits.dir";
@@ -1029,6 +1033,10 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   public static final int DFS_JOURNALNODE_RPC_PORT_DEFAULT = 8485;
   public static final String  DFS_JOURNALNODE_RPC_BIND_HOST_KEY = 
"dfs.journalnode.rpc-bind-host";
   public static final String  DFS_JOURNALNODE_RPC_ADDRESS_DEFAULT = "0.0.0.0:" 
+ DFS_JOURNALNODE_RPC_PORT_DEFAULT;
+  public static final String DFS_JOURNAL_EDITS_DIR_PERMISSION_KEY =
+  "dfs.journalnode.edits.dir.perm";
+  public static final String DFS_JOURNAL_EDITS_DIR_PERMISSION_DEFAULT =
+  "700";
 
   public static final String  DFS_JOURNALNODE_HTTP_ADDRESS_KEY = 
"dfs.journalnode.http-address";
   public static final int DFS_JOURNALNODE_HTTP_PORT_DEFAULT = 8480;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
index e886432..ee99e2b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
@@ -26,6 +26,8 @@ import java.util.regex.Pattern;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.common.InconsistentFSStateException;
@@ -65,7 +67,9 @@ class JNStorage extends Storage {
   StorageErrorReporter errorReporter) throws IOException {
 super(NodeType.JOURNAL_NODE);
 
-sd = new StorageDirectory(logDir);
+sd = new StorageDirectory(logDir, null, false, new FsPermission(conf.get(
+DFSConfigKeys.DFS_JOURNAL_EDITS_DIR_PERMISSION_KEY,
+DFSConfigKeys.DFS_JOURNAL_EDITS_DIR_PERMISSION_DEFAULT)));
 this.addStorageDir(sd);
 this.fjm = new FileJournalManager(conf, sd, errorReporter);
 
diff 
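
For context, here is a minimal sketch of how the new permission keys above are consumed: the configured string ("700" by default) is parsed into an FsPermission and handed to the StorageDirectory, exactly as the JNStorage hunk does. The class name is invented for illustration; only the Hadoop types already shown in the diff are assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class JournalDirPermissionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A deployment would set this in hdfs-site.xml; shown inline here.
    conf.set(DFSConfigKeys.DFS_JOURNAL_EDITS_DIR_PERMISSION_KEY, "700");

    // Same lookup-and-parse step as the JNStorage constructor above.
    FsPermission perm = new FsPermission(conf.get(
        DFSConfigKeys.DFS_JOURNAL_EDITS_DIR_PERMISSION_KEY,
        DFSConfigKeys.DFS_JOURNAL_EDITS_DIR_PERMISSION_DEFAULT));
    System.out.println("JournalNode edits dir permission: " + perm);
  }
}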

[hadoop] branch branch-3.2 updated: HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with non-ClientProtocol proxy types. Contributed by Erik Krogen.

2019-10-04 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 6630c9b  HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to 
work with non-ClientProtocol proxy types. Contributed by Erik Krogen.
6630c9b is described below

commit 6630c9b75d65deefb5550e355eef7783909a57bc
Author: Erik Krogen 
AuthorDate: Wed Apr 17 14:38:24 2019 -0700

HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with 
non-ClientProtocol proxy types. Contributed by Erik Krogen.

(cherry picked from 5847e0014343f60f853cb796781ca1fa03a72efd)
---
 .../ha/AbstractNNFailoverProxyProvider.java|  3 +-
 .../namenode/ha/ObserverReadProxyProvider.java | 54 --
 .../namenode/ha/TestDelegationTokensWithHA.java|  2 +-
 .../hdfs/server/namenode/ha/TestObserverNode.java  | 12 +
 .../namenode/ha/TestObserverReadProxyProvider.java | 29 
 5 files changed, 83 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
index 572cb1c..646b100 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
@@ -115,7 +115,8 @@ public abstract class AbstractNNFailoverProxyProvider<T> 
implements
 /**
  * The currently known state of the NameNode represented by this ProxyInfo.
  * This may be out of date if the NameNode has changed state since the last
- * time the state was checked.
+ * time the state was checked. If the NameNode could not be contacted, this
+ * will store null to indicate an unknown state.
  */
 private HAServiceState cachedState;
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
index 2ccf885..ac4b1e7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
@@ -66,7 +66,7 @@ import com.google.common.annotations.VisibleForTesting;
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
-public class ObserverReadProxyProvider<T extends ClientProtocol>
+public class ObserverReadProxyProvider<T>
 extends AbstractNNFailoverProxyProvider<T> {
   @VisibleForTesting
   static final Logger LOG = LoggerFactory.getLogger(
@@ -189,7 +189,13 @@ public class ObserverReadProxyProvider<T>
 AUTO_MSYNC_PERIOD_DEFAULT, TimeUnit.MILLISECONDS);
 
 // TODO : make this configurable or remove this variable
-this.observerReadEnabled = true;
+if (wrappedProxy instanceof ClientProtocol) {
+  this.observerReadEnabled = true;
+} else {
+  LOG.info("Disabling observer reads for {} because the requested proxy "
+  + "class does not implement {}", uri, 
ClientProtocol.class.getName());
+  this.observerReadEnabled = false;
+}
   }
 
   public AlignmentContext getAlignmentContext() {
@@ -267,7 +273,7 @@ public class ObserverReadProxyProvider<T>
   private HAServiceState getHAServiceState(NNProxyInfo<T> proxyInfo) {
 IOException ioe;
 try {
-  return proxyInfo.proxy.getHAServiceState();
+  return getProxyAsClientProtocol(proxyInfo.proxy).getHAServiceState();
 } catch (RemoteException re) {
   // Though a Standby will allow a getHAServiceState call, it won't allow
   // delegation token lookup, so if DT is used it throws StandbyException
@@ -284,7 +290,19 @@ public class ObserverReadProxyProvider<T>
   LOG.debug("Failed to connect to {} while fetching HAServiceState",
   proxyInfo.getAddress(), ioe);
 }
-return HAServiceState.STANDBY;
+return null;
+  }
+
+  /**
+   * Return the input proxy, cast as a {@link ClientProtocol}. This catches any
+   * {@link ClassCastException} and wraps it in a more helpful message. This
+   * should ONLY be called if the caller is certain that the proxy is, in fact,
+   * a {@link ClientProtocol}.
+   */
+  private ClientProtocol getProxyAsClientProtocol(T proxy) {
+assert proxy instanceof ClientProtocol : "BUG: Attempted to use proxy "
++ "of class " + proxy.getClass() + " as if it was a ClientProtocol.";
+return (ClientProtocol) proxy;
   }
 
   /**
@@ -299,7 +31
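
The two behavioral changes in this patch are visible in the hunks above: observer reads are gated on whether the wrapped proxy implements ClientProtocol, and an unreachable NameNode now yields null ("unknown") instead of being assumed STANDBY. Below is a minimal, Hadoop-free sketch of the same pattern; all names are invented for illustration.

public class GatingSketch {
  interface ClientLike {
    String readState();
  }

  static final class Gate<T> {
    private final T wrapped;
    private final boolean enabled;

    Gate(T wrapped) {
      this.wrapped = wrapped;
      // Same up-front instanceof check as the patch, instead of a
      // ClassCastException at first use.
      this.enabled = wrapped instanceof ClientLike;
    }

    /** Returns the state, or null ("unknown") when the capability is absent. */
    String stateOrNull() {
      return enabled ? ((ClientLike) wrapped).readState() : null;
    }
  }

  public static void main(String[] args) {
    ClientLike observer = () -> "OBSERVER";
    System.out.println(new Gate<>(observer).stateOrNull()); // OBSERVER
    System.out.println(new Gate<>("plain").stateOrNull());  // null
  }
}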

[hadoop] branch branch-3.1 updated: HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with non-ClientProtocol proxy types. Contributed by Erik Krogen.

2019-10-04 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 9fdb849  HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to 
work with non-ClientProtocol proxy types. Contributed by Erik Krogen.
9fdb849 is described below

commit 9fdb849e034573bb44abd593eefa1e13a3261376
Author: Erik Krogen 
AuthorDate: Wed Apr 17 14:38:24 2019 -0700

HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with 
non-ClientProtocol proxy types. Contributed by Erik Krogen.

(cherry picked from 5847e0014343f60f853cb796781ca1fa03a72efd)
(cherry picked from 6630c9b75d65deefb5550e355eef7783909a57bc)
---
 .../ha/AbstractNNFailoverProxyProvider.java|  3 +-
 .../namenode/ha/ObserverReadProxyProvider.java | 54 --
 .../namenode/ha/TestDelegationTokensWithHA.java|  2 +-
 .../hdfs/server/namenode/ha/TestObserverNode.java  | 12 +
 .../namenode/ha/TestObserverReadProxyProvider.java | 29 
 5 files changed, 83 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
index 572cb1c..646b100 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
@@ -115,7 +115,8 @@ public abstract class AbstractNNFailoverProxyProvider<T> 
implements
 /**
  * The currently known state of the NameNode represented by this ProxyInfo.
  * This may be out of date if the NameNode has changed state since the last
- * time the state was checked.
+ * time the state was checked. If the NameNode could not be contacted, this
+ * will store null to indicate an unknown state.
  */
 private HAServiceState cachedState;
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
index 2ccf885..ac4b1e7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
@@ -66,7 +66,7 @@ import com.google.common.annotations.VisibleForTesting;
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
-public class ObserverReadProxyProvider<T extends ClientProtocol>
+public class ObserverReadProxyProvider<T>
 extends AbstractNNFailoverProxyProvider<T> {
   @VisibleForTesting
   static final Logger LOG = LoggerFactory.getLogger(
@@ -189,7 +189,13 @@ public class ObserverReadProxyProvider<T>
 AUTO_MSYNC_PERIOD_DEFAULT, TimeUnit.MILLISECONDS);
 
 // TODO : make this configurable or remove this variable
-this.observerReadEnabled = true;
+if (wrappedProxy instanceof ClientProtocol) {
+  this.observerReadEnabled = true;
+} else {
+  LOG.info("Disabling observer reads for {} because the requested proxy "
+  + "class does not implement {}", uri, 
ClientProtocol.class.getName());
+  this.observerReadEnabled = false;
+}
   }
 
   public AlignmentContext getAlignmentContext() {
@@ -267,7 +273,7 @@ public class ObserverReadProxyProvider<T>
   private HAServiceState getHAServiceState(NNProxyInfo<T> proxyInfo) {
 IOException ioe;
 try {
-  return proxyInfo.proxy.getHAServiceState();
+  return getProxyAsClientProtocol(proxyInfo.proxy).getHAServiceState();
 } catch (RemoteException re) {
   // Though a Standby will allow a getHAServiceState call, it won't allow
   // delegation token lookup, so if DT is used it throws StandbyException
@@ -284,7 +290,19 @@ public class ObserverReadProxyProvider<T>
   LOG.debug("Failed to connect to {} while fetching HAServiceState",
   proxyInfo.getAddress(), ioe);
 }
-return HAServiceState.STANDBY;
+return null;
+  }
+
+  /**
+   * Return the input proxy, cast as a {@link ClientProtocol}. This catches any
+   * {@link ClassCastException} and wraps it in a more helpful message. This
+   * should ONLY be called if the caller is certain that the proxy is, in fact,
+   * a {@link ClientProtocol}.
+   */
+  private ClientProtocol getProxyAsClientProtocol(T proxy) {
+assert proxy instanceof ClientProtocol : "BUG: Attempted to use proxy "
++ "of class " + proxy.getClass() + " as if it was a ClientProtocol.

[hadoop] 01/03: HDFS-14497. Write lock held by metasave impact following RPC processing. Contributed by He Xiaoqiao.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d91c68729c18a2f853beb66b506ef9dff3526a38
Author: He Xiaoqiao 
AuthorDate: Thu May 30 13:27:48 2019 -0700

HDFS-14497. Write lock held by metasave impact following RPC processing. 
Contributed by He Xiaoqiao.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 33c62f8f4e94442825fe286c2b18518925d980e6)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

(cherry picked from commit 80392e94b6dca16229fc35426d107184be68c908)
---
 .../hdfs/server/blockmanagement/BlockManager.java  |  2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 28 ++
 .../hadoop/hdfs/server/namenode/TestMetaSave.java  | 60 ++
 3 files changed, 80 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 4e450e2..321f44e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -726,7 +726,7 @@ public class BlockManager implements BlockStatsMXBean {
 
   /** Dump meta data to out. */
   public void metaSave(PrintWriter out) {
-assert namesystem.hasWriteLock(); // TODO: block manager read lock and NS 
write lock
+assert namesystem.hasReadLock(); // TODO: block manager read lock and NS 
write lock
 final List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
 final List<DatanodeDescriptor> dead = new ArrayList<DatanodeDescriptor>();
 datanodeManager.fetchDatanodes(live, dead, false);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 7ebc8da..108cfbc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -122,6 +122,7 @@ import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.URI;
+import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -588,6 +589,12 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   private String nameNodeHostName = null;
 
   /**
+   * HDFS-14497: Concurrency control when many metaSave request to write
+   * meta to same out stream after switch to read lock.
+   */
+  private Object metaSaveLock = new Object();
+
+  /**
* Notify that loading of this FSDirectory is complete, and
* it is imageLoaded for use
*/
@@ -1757,23 +1764,26 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 String operationName = "metaSave";
 checkSuperuserPrivilege(operationName);
 checkOperation(OperationCategory.READ);
-writeLock();
+readLock();
 try {
   checkOperation(OperationCategory.READ);
-  File file = new File(System.getProperty("hadoop.log.dir"), filename);
-  PrintWriter out = new PrintWriter(new BufferedWriter(
-  new OutputStreamWriter(new FileOutputStream(file), Charsets.UTF_8)));
-  metaSave(out);
-  out.flush();
-  out.close();
+  synchronized(metaSaveLock) {
+File file = new File(System.getProperty("hadoop.log.dir"), filename);
+PrintWriter out = new PrintWriter(new BufferedWriter(
+new OutputStreamWriter(Files.newOutputStream(file.toPath()),
+Charsets.UTF_8)));
+metaSave(out);
+out.flush();
+out.close();
+  }
 } finally {
-  writeUnlock(operationName);
+  readUnlock(operationName);
 }
 logAuditEvent(true, operationName, null);
   }
 
   private void metaSave(PrintWriter out) {
-assert hasWriteLock();
+assert hasReadLock();
 long totalInodes = this.dir.totalInodes();
 long totalBlocks = this.getBlocksTotal();
 out.println(totalInodes + " files and directories, " + totalBlocks
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
index 8cc1433..d4748f3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
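
The core idea of the patch above: metaSave no longer blocks all namespace operations behind the write lock. Concurrent callers share the read lock, and only the shared output stream is serialized by a narrower private mutex. A self-contained sketch of that pattern follows; the names are invented and this is not the FSNamesystem API.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MetaSavePattern {
  private final ReadWriteLock nsLock = new ReentrantReadWriteLock();
  private final Object outputLock = new Object();
  private final StringBuilder sharedOut = new StringBuilder();

  public void dump(String report) {
    nsLock.readLock().lock();      // many dumpers may hold this concurrently
    try {
      synchronized (outputLock) {  // but only one writes the stream at a time
        sharedOut.append(report).append('\n');
      }
    } finally {
      nsLock.readLock().unlock();
    }
  }
}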

[hadoop] branch branch-3.1 updated (9fdb849 -> 166d38c)

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 9fdb849  HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to 
work with non-ClientProtocol proxy types. Contributed by Erik Krogen.
 new d91c687  HDFS-14497. Write lock held by metasave impact following RPC 
processing. Contributed by He Xiaoqiao.
 new c61c114  HDFS-14497. Addendum: Write lock held by metasave impact 
following RPC processing.
 new 166d38c  HDFS-14890.  Fixed namenode and journalnode startup on 
Windows.  Contributed by Siddharth Wagle

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hdfs/server/blockmanagement/BlockManager.java  |  2 +-
 .../apache/hadoop/hdfs/server/common/Storage.java  | 11 ++--
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 28 ++
 .../hadoop/hdfs/server/namenode/TestMetaSave.java  | 60 ++
 4 files changed, 88 insertions(+), 13 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/03: HDFS-14497. Addendum: Write lock held by metasave impact following RPC processing.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c61c114a3db021d1d86f290ce8fb6185582ed986
Author: He Xiaoqiao 
AuthorDate: Tue Aug 27 15:26:21 2019 -0700

HDFS-14497. Addendum: Write lock held by metasave impact following RPC 
processing.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit dde9399b37bffb77da17c025f0b9b673d7088bc6)
(cherry picked from commit e29ae7db1258f08339cf0f53968fce6f98ada3ac)
---
 .../main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 108cfbc..a091d19 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -592,7 +592,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
* HDFS-14497: Concurrency control when many metaSave request to write
* meta to same out stream after switch to read lock.
*/
-  private Object metaSaveLock = new Object();
+  private final Object metaSaveLock = new Object();
 
   /**
* Notify that loading of this FSDirectory is complete, and


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
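
The one-word change in this addendum (adding final) matters for any field used as a monitor: if the reference could be reassigned, two threads could synchronize on different objects and both enter the "protected" section. A minimal JDK-only illustration, with invented names:

public class LockFieldSketch {
  private final Object lock = new Object(); // cannot be swapped out

  private long counter;

  public void increment() {
    synchronized (lock) {
      counter++; // every caller contends on the same monitor
    }
  }
}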



[hadoop] 03/03: HDFS-14890. Fixed namenode and journalnode startup on Windows. Contributed by Siddharth Wagle

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 166d38ceaacf76b099d4d296ce17c810d8a3840e
Author: Eric Yang 
AuthorDate: Fri Oct 4 13:13:10 2019 -0400

HDFS-14890.  Fixed namenode and journalnode startup on Windows.
 Contributed by Siddharth Wagle

(cherry picked from commit aa24add8f0e9812d1f787efb3c40155b0fdeed9c)
(cherry picked from commit 8bb2b00d38978859b22b892034eb3f559b820942)
---
 .../java/org/apache/hadoop/hdfs/server/common/Storage.java| 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 2ba943a..e7da44e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -447,9 +447,14 @@ public abstract class Storage extends StorageInfo {
 throw new IOException("Cannot create directory " + curDir);
   }
   if (permission != null) {
-Set<PosixFilePermission> permissions =
-PosixFilePermissions.fromString(permission.toString());
-Files.setPosixFilePermissions(curDir.toPath(), permissions);
+try {
+  Set<PosixFilePermission> permissions =
+  PosixFilePermissions.fromString(permission.toString());
+  Files.setPosixFilePermissions(curDir.toPath(), permissions);
+} catch (UnsupportedOperationException uoe) {
+  // Default to FileUtil for non posix file systems
+  FileUtil.setPermission(curDir, permission);
+}
   }
 }
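
The fix wraps the POSIX call in a try/catch and degrades gracefully on file systems that throw UnsupportedOperationException (e.g. NTFS on Windows). A standalone sketch of the same idiom follows, using only the JDK; note the patch itself falls back to Hadoop's FileUtil.setPermission rather than java.io.File.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PermissionFallback {
  public static void applyOwnerOnly(Path dir) throws IOException {
    try {
      Set<PosixFilePermission> perms =
          PosixFilePermissions.fromString("rwx------"); // i.e. 700
      Files.setPosixFilePermissions(dir, perms);
    } catch (UnsupportedOperationException uoe) {
      // Non-POSIX file system: approximate 700 with java.io.File.
      File f = dir.toFile();
      f.setReadable(false, false);   // clear for everybody...
      f.setReadable(true, true);     // ...then grant back to the owner
      f.setWritable(false, false);
      f.setWritable(true, true);
      f.setExecutable(false, false);
      f.setExecutable(true, true);
    }
  }

  public static void main(String[] args) throws IOException {
    applyOwnerOnly(Files.createTempDirectory("perm-demo"));
  }
}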
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated (6630c9b -> e29ae7d)

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6630c9b  HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to 
work with non-ClientProtocol proxy types. Contributed by Erik Krogen.
 new 80392e9  HDFS-14497. Write lock held by metasave impact following RPC 
processing. Contributed by He Xiaoqiao.
 new e29ae7d  HDFS-14497. Addendum: Write lock held by metasave impact 
following RPC processing.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hdfs/server/blockmanagement/BlockManager.java  |  2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 28 ++
 .../hadoop/hdfs/server/namenode/TestMetaSave.java  | 60 ++
 3 files changed, 80 insertions(+), 10 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/02: HDFS-14497. Addendum: Write lock held by metasave impact following RPC processing.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e29ae7db1258f08339cf0f53968fce6f98ada3ac
Author: He Xiaoqiao 
AuthorDate: Tue Aug 27 15:26:21 2019 -0700

HDFS-14497. Addendum: Write lock held by metasave impact following RPC 
processing.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit dde9399b37bffb77da17c025f0b9b673d7088bc6)
---
 .../main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d700654..0e6a8c4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -594,7 +594,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
* HDFS-14497: Concurrency control when many metaSave request to write
* meta to same out stream after switch to read lock.
*/
-  private Object metaSaveLock = new Object();
+  private final Object metaSaveLock = new Object();
 
   /**
* Notify that loading of this FSDirectory is complete, and


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/02: HDFS-14497. Write lock held by metasave impact following RPC processing. Contributed by He Xiaoqiao.

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 80392e94b6dca16229fc35426d107184be68c908
Author: He Xiaoqiao 
AuthorDate: Thu May 30 13:27:48 2019 -0700

HDFS-14497. Write lock held by metasave impact following RPC processing. 
Contributed by He Xiaoqiao.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 33c62f8f4e94442825fe286c2b18518925d980e6)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
---
 .../hdfs/server/blockmanagement/BlockManager.java  |  2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 28 ++
 .../hadoop/hdfs/server/namenode/TestMetaSave.java  | 60 ++
 3 files changed, 80 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 55d06a6..7399879 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -740,7 +740,7 @@ public class BlockManager implements BlockStatsMXBean {
 
   /** Dump meta data to out. */
   public void metaSave(PrintWriter out) {
-assert namesystem.hasWriteLock(); // TODO: block manager read lock and NS 
write lock
+assert namesystem.hasReadLock(); // TODO: block manager read lock and NS 
write lock
 final List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
 final List<DatanodeDescriptor> dead = new ArrayList<DatanodeDescriptor>();
 datanodeManager.fetchDatanodes(live, dead, false);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index cfc4cf4..d700654 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -124,6 +124,7 @@ import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.URI;
+import java.nio.file.Files;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -590,6 +591,12 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   private String nameNodeHostName = null;
 
   /**
+   * HDFS-14497: Concurrency control when many metaSave request to write
+   * meta to same out stream after switch to read lock.
+   */
+  private Object metaSaveLock = new Object();
+
+  /**
* Notify that loading of this FSDirectory is complete, and
* it is imageLoaded for use
*/
@@ -1765,23 +1772,26 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 String operationName = "metaSave";
 checkSuperuserPrivilege(operationName);
 checkOperation(OperationCategory.READ);
-writeLock();
+readLock();
 try {
   checkOperation(OperationCategory.READ);
-  File file = new File(System.getProperty("hadoop.log.dir"), filename);
-  PrintWriter out = new PrintWriter(new BufferedWriter(
-  new OutputStreamWriter(new FileOutputStream(file), Charsets.UTF_8)));
-  metaSave(out);
-  out.flush();
-  out.close();
+  synchronized(metaSaveLock) {
+File file = new File(System.getProperty("hadoop.log.dir"), filename);
+PrintWriter out = new PrintWriter(new BufferedWriter(
+new OutputStreamWriter(Files.newOutputStream(file.toPath()),
+Charsets.UTF_8)));
+metaSave(out);
+out.flush();
+out.close();
+  }
 } finally {
-  writeUnlock(operationName);
+  readUnlock(operationName);
 }
 logAuditEvent(true, operationName, null);
   }
 
   private void metaSave(PrintWriter out) {
-assert hasWriteLock();
+assert hasReadLock();
 long totalInodes = this.dir.totalInodes();
 long totalBlocks = this.getBlocksTotal();
 out.println(totalInodes + " files and directories, " + totalBlocks
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
index 8cc1433..d4748f3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
@@ -27,6 +27,7 @@ import java.io.File;
 import java.io.FileInputStream;
 impo
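
One incidental detail in the hunk above: the writer chain is flushed and closed by hand inside the synchronized block. The same chain can be expressed with try-with-resources, which also closes the stream if a write fails. A sketch, assuming the JDK's StandardCharsets in place of Guava's Charsets:

import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class MetaSaveWriterSketch {
  public static void write(File file, String report) throws IOException {
    try (PrintWriter out = new PrintWriter(new BufferedWriter(
        new OutputStreamWriter(Files.newOutputStream(file.toPath()),
            StandardCharsets.UTF_8)))) {
      out.println(report);
    } // flush and close happen here, even on an exception
  }
}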

[hadoop] branch trunk updated (10bdc59 -> f3eaa84)

2019-10-04 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 10bdc59  HADOOP-16579. Upgrade to Apache Curator 4.2.0 excluding ZK 
(#1531). Contributed by Norbert Kalmár.
 add f3eaa84  HDDS-2164 : om.db.checkpoints is getting filling up fast. 
(#1536)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/utils/db/RDBCheckpointManager.java |  2 +-
 .../hadoop/hdds/utils/db/RocksDBCheckpoint.java|  3 +-
 .../main/java/org/apache/hadoop/ozone/OmUtils.java | 97 ++
 .../java/org/apache/hadoop/ozone/TestOmUtils.java  | 79 ++
 .../hadoop/ozone/om/TestOMDbCheckpointServlet.java |  4 -
 .../hadoop/ozone/om/OMDBCheckpointServlet.java | 59 +
 .../java/org/apache/hadoop/ozone/om/OMMetrics.java | 10 ---
 .../org/apache/hadoop/ozone/recon/ReconUtils.java  | 61 ++
 .../apache/hadoop/ozone/recon/TestReconUtils.java  | 44 +-
 .../impl/TestOzoneManagerServiceProviderImpl.java  |  6 +-
 10 files changed, 240 insertions(+), 125 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486)

2019-10-04 Thread hanishakoneru
This is an automated email from the ASF dual-hosted git repository.

hanishakoneru pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8de4374  HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486)
8de4374 is described below

commit 8de4374427e77d5d9b79a710ca9225f749556eda
Author: Hanisha Koneru 
AuthorDate: Fri Oct 4 12:52:29 2019 -0700

HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486)
---
 .../java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java   | 2 +-
 .../src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java | 5 ++---
 .../org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java   | 4 +---
 .../java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java| 3 +--
 .../hadoop/ozone/web/ozShell/bucket/AddAclBucketHandler.java   | 5 ++---
 .../hadoop/ozone/web/ozShell/bucket/GetAclBucketHandler.java   | 4 ++--
 .../hadoop/ozone/web/ozShell/bucket/RemoveAclBucketHandler.java| 7 +++
 .../hadoop/ozone/web/ozShell/bucket/SetAclBucketHandler.java   | 5 ++---
 .../org/apache/hadoop/ozone/web/ozShell/keys/AddAclKeyHandler.java | 5 ++---
 .../org/apache/hadoop/ozone/web/ozShell/keys/GetAclKeyHandler.java | 4 ++--
 .../apache/hadoop/ozone/web/ozShell/keys/RemoveAclKeyHandler.java  | 7 +++
 .../org/apache/hadoop/ozone/web/ozShell/keys/SetAclKeyHandler.java | 5 ++---
 .../org/apache/hadoop/ozone/web/ozShell/token/GetTokenHandler.java | 2 +-
 .../apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java   | 2 +-
 .../hadoop/ozone/web/ozShell/volume/AddAclVolumeHandler.java   | 5 ++---
 .../hadoop/ozone/web/ozShell/volume/GetAclVolumeHandler.java   | 4 ++--
 .../hadoop/ozone/web/ozShell/volume/RemoveAclVolumeHandler.java| 7 +++
 .../hadoop/ozone/web/ozShell/volume/SetAclVolumeHandler.java   | 5 ++---
 18 files changed, 34 insertions(+), 47 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
index fe479ba..5c58e92 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
@@ -54,7 +54,7 @@ public class ContainerInfo implements 
Comparator<ContainerInfo>,
 mapper.setVisibility(PropertyAccessor.FIELD, 
JsonAutoDetect.Visibility.ANY);
 mapper
 .setVisibility(PropertyAccessor.GETTER, 
JsonAutoDetect.Visibility.NONE);
-WRITER = mapper.writer();
+WRITER = mapper.writerWithDefaultPrettyPrinter();
   }
 
   private HddsProtos.LifeCycleState state;
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
index af56da3..4177b96 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
@@ -43,10 +43,9 @@ public final class JsonUtils {
 // Never constructed
   }
 
-  public static String toJsonStringWithDefaultPrettyPrinter(String jsonString)
+  public static String toJsonStringWithDefaultPrettyPrinter(Object obj)
   throws IOException {
-Object json = READER.readValue(jsonString);
-return WRITTER.writeValueAsString(json);
+return WRITTER.writeValueAsString(obj);
   }
 
   public static String toJsonString(Object obj) throws IOException {
diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
index 288d9fa..5169c80 100644
--- 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
@@ -24,7 +24,6 @@ import java.util.concurrent.Callable;
 import org.apache.hadoop.hdds.cli.HddsVersionProvider;
 import org.apache.hadoop.hdds.scm.client.ScmClient;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
-import org.apache.hadoop.ozone.web.utils.JsonUtils;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -61,8 +60,7 @@ public class ListSubcommand implements Callable<Void> {
   private void outputContainerInfo(ContainerInfo containerInfo)
   throws IOException {
 // Print container report info.
-LOG.info("{}", JsonUtils.toJsonStringWithDefaultPrettyPrinter(
-containerInfo.toJsonString()));
+LOG.info("{}", containerInfo.toJsonString());
   }
 
   @Override
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShe
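
The essence of the fix is visible in the JsonUtils hunk above: instead of parsing a caller-supplied JSON string and pretty-printing it back, the method now serializes the live object, so JSON-looking field contents are escaped rather than interpreted. A minimal demonstration with plain Jackson; class and method names are invented for illustration.

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
import java.util.Collections;

public class JsonSafetySketch {
  private static final ObjectWriter WRITER =
      new ObjectMapper().writerWithDefaultPrettyPrinter();

  public static String toPrettyJson(Object obj) throws Exception {
    return WRITER.writeValueAsString(obj);
  }

  public static void main(String[] args) throws Exception {
    // The injected-looking value stays a single escaped string:
    // { "name" : "x\",\"admin\":true" }
    System.out.println(toPrettyJson(
        Collections.singletonMap("name", "x\",\"admin\":true")));
  }
}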

[hadoop] branch trunk updated (8de4374 -> a3cf54c)

2019-10-04 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 8de4374  HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486)
 add a3cf54c  HDDS-2250. Generated configs missing from 
ozone-filesystem-lib jars

No new revisions were added by this update.

Summary of changes:
 hadoop-ozone/ozonefs-lib-current/pom.xml | 3 +++
 1 file changed, 3 insertions(+)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-13806. EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory. Contributed by Ayush Saxena

2019-10-04 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 7ab02a6  HDFS-13806. EC: No error message for unsetting EC policy of 
the directory inherits the erasure coding policy from an ancestor directory. 
Contributed by Ayush Saxena.
7ab02a6 is described below

commit 7ab02a67bc1424badb6a73f15eb4e935f04989b9
Author: Vinayakumar B 
AuthorDate: Mon Sep 10 09:10:51 2018 +0530

HDFS-13806. EC: No error message for unsetting EC policy of the directory 
inherits the erasure coding policy from an ancestor directory. Contributed by 
Ayush Saxena.

(cherry picked from commit 30eceec3420fc6be00d3878ba787bd9518d3ca0e)
---
 .../java/org/apache/hadoop/hdfs/DFSClient.java |  3 +-
 .../hdfs/protocol/NoECPolicySetException.java  | 37 ++
 .../hdfs/server/namenode/FSDirErasureCodingOp.java |  4 +++
 .../java/org/apache/hadoop/hdfs/tools/ECAdmin.java |  7 
 .../hdfs/TestUnsetAndChangeDirectoryEcPolicy.java  | 23 +++---
 .../src/test/resources/testErasureCodingConf.xml   | 24 ++
 6 files changed, 92 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 77b053d..ddfe98f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -133,6 +133,7 @@ import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
 import org.apache.hadoop.hdfs.protocol.NSQuotaExceededException;
+import org.apache.hadoop.hdfs.protocol.NoECPolicySetException;
 import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
 import org.apache.hadoop.hdfs.protocol.OpenFilesIterator;
 import org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType;
@@ -2757,7 +2758,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw re.unwrapRemoteException(AccessControlException.class,
   SafeModeException.class,
   UnresolvedPathException.class,
-  FileNotFoundException.class);
+  FileNotFoundException.class, NoECPolicySetException.class);
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/NoECPolicySetException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/NoECPolicySetException.java
new file mode 100644
index 000..de3054a
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/NoECPolicySetException.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ *Thrown when no EC policy is set explicitly on the directory.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class NoECPolicySetException extends IOException {
+  private static final long serialVersionUID = 1L;
+
+  public NoECPolicySetException(String msg) {
+super(msg);
+  }
+}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
index 920451d..5ebfa2f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.XAttrHelper;
 import or
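
The pattern here is a dedicated, unwrappable exception type: DFSClient adds NoECPolicySetException to unwrapRemoteException so ECAdmin can show a precise message when unsetting a policy on a directory that only inherits one. A standalone sketch of the client-side effect; everything except the exception name is invented.

import java.io.IOException;

public class UnsetEcSketch {
  static class NoECPolicySetException extends IOException {
    NoECPolicySetException(String msg) {
      super(msg);
    }
  }

  /** Stand-in for the unsetErasureCodingPolicy RPC. */
  static void unsetPolicy(String dir) throws IOException {
    // The NameNode now rejects directories that only inherit a policy.
    throw new NoECPolicySetException(
        "No erasure coding policy explicitly set on " + dir);
  }

  public static void main(String[] args) {
    try {
      unsetPolicy("/data/warehouse");
    } catch (NoECPolicySetException e) {
      System.err.println(e.getMessage()); // ECAdmin-style friendly output
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}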

[hadoop] branch trunk updated (a3cf54c -> f209722)

2019-10-04 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a3cf54c  HDDS-2250. Generated configs missing from 
ozone-filesystem-lib jars
 add f209722  HDDS-2257. Fix checkstyle issues in ChecksumByteBuffer (#1603)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/ozone/common/ChecksumByteBuffer.java| 24 ++
 1 file changed, 16 insertions(+), 8 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: Revert "YARN-9873. Mutation API Config Change updates Version Number. Contributed by Prabhu Joseph"

2019-10-04 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fb1ecff  Revert "YARN-9873. Mutation API Config Change updates Version 
Number. Contributed by Prabhu Joseph"
fb1ecff is described below

commit fb1ecff6a26875c7f2b86ef07d7b9145c469377e
Author: Sunil G 
AuthorDate: Sat Oct 5 09:15:17 2019 +0530

Revert "YARN-9873. Mutation API Config Change updates Version Number. 
Contributed by Prabhu Joseph"

This reverts commit 4510970e2f7728d036c750b596985e5ffa357b60.
---
 .../scheduler/MutableConfigurationProvider.java|  6 
 .../conf/FSSchedulerConfigurationStore.java|  7 -
 .../capacity/conf/InMemoryConfigurationStore.java  |  8 -
 .../capacity/conf/LeveldbConfigurationStore.java   | 23 +--
 .../conf/MutableCSConfigurationProvider.java   |  5 
 .../capacity/conf/YarnConfigurationStore.java  |  6 
 .../capacity/conf/ZKConfigurationStore.java| 19 
 .../server/resourcemanager/webapp/RMWSConsts.java  |  3 --
 .../resourcemanager/webapp/RMWebServices.java  | 34 +-
 .../capacity/conf/TestZKConfigurationStore.java| 15 --
 10 files changed, 2 insertions(+), 124 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
index eff8aa8..9e843df 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
@@ -65,12 +65,6 @@ public interface MutableConfigurationProvider {
*/
   Configuration getConfiguration();
 
-  /**
-   * Get the last updated scheduler config version.
-   * @return Last updated scheduler config version.
-   */
-  long getConfigVersion() throws Exception;
-
   void formatConfigurationInStore(Configuration conf) throws Exception;
 
   /**
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
index f59fa0a..80053be 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
@@ -148,13 +148,6 @@ public class FSSchedulerConfigurationStore extends 
YarnConfigurationStore {
 tempConfigPath = null;
   }
 
-  @Override
-  public long getConfigVersion() throws Exception {
-String version = getLatestConfigPath().getName().
-substring(YarnConfiguration.CS_CONFIGURATION_FILE.length() + 1);
-return Long.parseLong(version);
-  }
-
   private void finalizeFileSystemFile() throws IOException {
 // call confirmMutation() make sure tempConfigPath is not null
 Path finalConfigPath = getFinalConfigPath(tempConfigPath);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
index 59d140e..4871443 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
@@ -33,13 +33,11 @@ public class InMemoryConfigurationStore extends 
YarnConfigurationStore {
 
   privat

[hadoop] branch branch-3.2 updated: Revert "YARN-9873. Mutation API Config Change updates Version Number. Contributed by Prabhu Joseph"

2019-10-04 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 5704f15  Revert "YARN-9873. Mutation API Config Change updates Version 
Number. Contributed by Prabhu Joseph"
5704f15 is described below

commit 5704f1558974e97ace213c75fe99d5f327d33db9
Author: Sunil G 
AuthorDate: Sat Oct 5 09:16:04 2019 +0530

Revert "YARN-9873. Mutation API Config Change updates Version Number. 
Contributed by Prabhu Joseph"

This reverts commit 3a0afcfb7f196bee8e534a3e80211bfa077b2aee.
---
 .../scheduler/MutableConfigurationProvider.java|  6 
 .../conf/FSSchedulerConfigurationStore.java|  7 -
 .../capacity/conf/InMemoryConfigurationStore.java  |  8 -
 .../capacity/conf/LeveldbConfigurationStore.java   | 23 +--
 .../conf/MutableCSConfigurationProvider.java   |  5 
 .../capacity/conf/YarnConfigurationStore.java  |  6 
 .../capacity/conf/ZKConfigurationStore.java| 19 
 .../server/resourcemanager/webapp/RMWSConsts.java  |  3 --
 .../resourcemanager/webapp/RMWebServices.java  | 34 +-
 .../capacity/conf/TestZKConfigurationStore.java| 15 --
 10 files changed, 2 insertions(+), 124 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
index eff8aa8..9e843df 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java
@@ -65,12 +65,6 @@ public interface MutableConfigurationProvider {
*/
   Configuration getConfiguration();
 
-  /**
-   * Get the last updated scheduler config version.
-   * @return Last updated scheduler config version.
-   */
-  long getConfigVersion() throws Exception;
-
   void formatConfigurationInStore(Configuration conf) throws Exception;
 
   /**
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
index 14cd0a4..ddc5c8a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java
@@ -148,13 +148,6 @@ public class FSSchedulerConfigurationStore extends 
YarnConfigurationStore {
 tempConfigPath = null;
   }
 
-  @Override
-  public long getConfigVersion() throws Exception {
-String version = getLatestConfigPath().getName().
-substring(YarnConfiguration.CS_CONFIGURATION_FILE.length() + 1);
-return Long.parseLong(version);
-  }
-
   private void finalizeFileSystemFile() throws IOException {
 // call confirmMutation() make sure tempConfigPath is not null
 Path finalConfigPath = getFinalConfigPath(tempConfigPath);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
index 59d140e..4871443 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java
@@ -33,13 +33,11 @@ public class InMemoryConfigurationStore extends 
YarnConfigurationStore {