[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466189283



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/DynamicIOStatisticsBuilder.java
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.function.ToLongFunction;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+import static com.google.common.base.Preconditions.checkState;
+
+/**
+ * Builder of Dynamic IO Statistics which serve up longs.
+ * Instantiate through
+ * {@link IOStatisticsBinding#dynamicIOStatistics()}.
+ */
+public class DynamicIOStatisticsBuilder {
+
+  /**
+   * the instance being built up. Will be null after the (single)
+   * call to {@link #build()}.
+   */
+  private DynamicIOStatistics instance = new DynamicIOStatistics();
+
+  /**
+   * Build the IOStatistics instance.
+   * @return an instance.
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  public IOStatistics build() {
+    final DynamicIOStatistics stats = activeInstance();
+    // stop the builder from working any more.
+    instance = null;
+    return stats;
+  }
+
+
+  /**
+   * Get the statistics instance.
+   * @return the instance to build/return
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  private DynamicIOStatistics activeInstance() {
+    checkState(instance != null, "Already built");
+    return instance;
+  }
+
+  /**
+   * Add a new evaluator to the counter statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionCounter(String key,
+      ToLongFunction<String> eval) {
+    activeInstance().addCounterFunction(key, eval::applyAsLong);
+    return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongCounter(String key,
+      AtomicLong source) {
+    withLongFunctionCounter(key, s -> source.get());
+    return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic int counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicIntegerCounter(String key,
+      AtomicInteger source) {
+    withLongFunctionCounter(key, s -> source.get());
+    return this;
+  }
+
+  /**
+   * Build a dynamic counter statistic from a
+   * {@link MutableCounterLong}.
+   * @param key key of this statistic
+   * @param source mutable long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withMutableCounter(String key,
+      MutableCounterLong source) {
+    withLongFunctionCounter(key, s -> source.value());
+    return this;
+  }
+
+  /**
+   * Add a new evaluator to the gauge statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionGauge(String key,

Review comment:
   can make this method private.
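
For readers following the thread, here is a minimal usage sketch of the builder quoted above, put together only from the methods visible in this diff and the class javadoc's reference to IOStatisticsBinding#dynamicIOStatistics(); the statistic names, the example class, and the assumed package of IOStatisticsBinding are illustrative, not part of the patch.

// Sketch only: the package of IOStatisticsBinding is assumed from the javadoc
// reference in this patch; statistic names are made up for illustration.
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding;

public class DynamicIOStatisticsExample {
  public static void main(String[] args) {
    AtomicLong bytesRead = new AtomicLong();

    // Each with* call registers an evaluator under a key; the returned
    // IOStatistics re-reads the source every time the counter is queried.
    IOStatistics stats = IOStatisticsBinding.dynamicIOStatistics()
        .withAtomicLongCounter("example_bytes_read", bytesRead)
        .withLongFunctionCounter("example_fixed_counter", key -> 42L)
        .build();

    bytesRead.addAndGet(1024);
    System.out.println(stats.counters().get("example_bytes_read")); // 1024

    // Calling build() a second time would trip the checkState() guard in
    // activeInstance(), since build() nulls the builder's instance field.
  }
}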





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466190899



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/DynamicIOStatisticsBuilder.java
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.function.ToLongFunction;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+import static com.google.common.base.Preconditions.checkState;
+
+/**
+ * Builder of Dynamic IO Statistics which serve up longs.
+ * Instantiate through
+ * {@link IOStatisticsBinding#dynamicIOStatistics()}.
+ */
+public class DynamicIOStatisticsBuilder {
+
+  /**
+   * the instance being built up. Will be null after the (single)
+   * call to {@link #build()}.
+   */
+  private DynamicIOStatistics instance = new DynamicIOStatistics();
+
+  /**
+   * Build the IOStatistics instance.
+   * @return an instance.
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  public IOStatistics build() {
+    final DynamicIOStatistics stats = activeInstance();
+    // stop the builder from working any more.
+    instance = null;
+    return stats;
+  }
+
+
+  /**
+   * Get the statistics instance.
+   * @return the instance to build/return
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  private DynamicIOStatistics activeInstance() {
+    checkState(instance != null, "Already built");
+    return instance;
+  }
+
+  /**
+   * Add a new evaluator to the counter statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionCounter(String key,
+      ToLongFunction<String> eval) {
+    activeInstance().addCounterFunction(key, eval::applyAsLong);
+    return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongCounter(String key,
+      AtomicLong source) {
+    withLongFunctionCounter(key, s -> source.get());
+    return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic int counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicIntegerCounter(String key,
+      AtomicInteger source) {
+    withLongFunctionCounter(key, s -> source.get());
+    return this;
+  }
+
+  /**
+   * Build a dynamic counter statistic from a
+   * {@link MutableCounterLong}.
+   * @param key key of this statistic
+   * @param source mutable long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withMutableCounter(String key,
+      MutableCounterLong source) {
+    withLongFunctionCounter(key, s -> source.value());
+    return this;
+  }
+
+  /**
+   * Add a new evaluator to the gauge statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionGauge(String key,
+      ToLongFunction<String> eval) {
+    activeInstance().addGaugeFunction(key, eval::applyAsLong);
+    return this;
+  }
+
+  /**
+   * Add a gauge statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long gauge
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongGauge(String key,
+      AtomicLong source) {
+    withLongFunctionGauge(key, s -> source.get());
+    return this;
+  }
+
+  /**
+   * Add a gauge statistic to dynamically return the
+   * latest value of the source.
+

[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466191485




[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466192173




[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466192790




[GitHub] [hadoop] mehakmeet commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


mehakmeet commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466194352



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/WrappedIOStatistics.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.Map;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+
+import static 
org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToString;
+
+/**
+ * Wrap IOStatistics source with another (dynamic) wrapper.
+ */
+public class WrappedIOStatistics extends AbstractIOStatisticsImpl {
+
+  /**
+   * The wrapped statistics.
+   */
+  private IOStatistics wrapped;
+
+  /**
+   * Instantiate.
+   * @param wrapped nullable wrapped statistics.
+   */
+  public WrappedIOStatistics(final IOStatistics wrapped) {
+    this.wrapped = wrapped;
+  }
+
+  /**
+   * Instantiate without setting the statistics.
+   * This is for subclasses which build up the map during their own
+   * construction.
+   */
+  protected WrappedIOStatistics() {
+  }
+
+  @Override
+  public Map<String, Long> counters() {
+    return getWrapped().counters();
+  }
+
+  /**
+   * Get at the wrapped inner statistics.
+   * @return the wrapped value
+   */
+  protected IOStatistics getWrapped() {
+    return wrapped;
+  }
+
+  /**
+   * Set the wrapped statistics.
+   * Will fail if the field is already set.
+   * @param wrapped new value
+   */
+  protected void setWrapped(final IOStatistics wrapped) {
+    Preconditions.checkState(this.wrapped == null,
+        "Attempted to overwrite existing wrapped statistics");
+    this.wrapped = wrapped;
+  }
+
+  @Override
+  public Map<String, Long> gauges() {
+    return getWrapped().gauges();
+  }
+
+  @Override
+  public Map<String, Long> minimums() {
+    return getWrapped().minimums();
+  }
+
+  @Override
+  public Map<String, Long> maximums() {
+    return getWrapped().maximums();
+  }
+
+  @Override
+  public Map<String, MeanStatistic> meanStatistics() {
+    return getWrapped().meanStatistics();
+  }
+
+  /**
+   * return the statistics dump of the wrapped statistics

Review comment:
   nit: '.' at the end of Javadoc sentence.
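
As a side note for readers of the archive, here is a minimal sketch of how the wrapper quoted above is meant to be used, based only on the constructor and accessors shown in this diff; the helper class and counter key are illustrative, not part of the patch.

import java.util.Map;

import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.impl.WrappedIOStatistics;

/** Illustrative helper, not part of the patch. */
public final class WrappedIOStatisticsExample {

  private WrappedIOStatisticsExample() {
  }

  static long readCounter(IOStatistics inner, String key) {
    // The wrapper keeps a reference to the inner statistics and delegates
    // every accessor to it through getWrapped(), so values stay live rather
    // than being snapshotted at construction time.
    WrappedIOStatistics wrapper = new WrappedIOStatistics(inner);
    Map<String, Long> counters = wrapper.counters();
    return counters.getOrDefault(key, 0L);
  }
}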





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-08-06 Thread Yuxuan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172113#comment-17172113
 ] 

Yuxuan Wang commented on HADOOP-17165:
--

Furthermore, it would be even more useful if we could specify users for each
priority-level queue.
Even as it stands, though, this is a very useful patch; I hope someone can review it.

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-17165.001.patch
>
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do
> not want to restrict certain users who submit important requests. This jira
> proposes to implement a service-user feature so that such users are always
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature,
> but it was never implemented.
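
For illustration only, enabling such a feature would presumably look something like the sketch below; the property key is an assumption made for this example, since the exact name introduced by the attached patch is not shown in this thread.

import org.apache.hadoop.conf.Configuration;

public class ServiceUserConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // FairCallQueue / DecayRpcScheduler settings are scoped by the RPC port
    // of the NameNode (8020 here).
    int port = 8020;
    // Hypothetical key: mark "hbase" and "hive" as service users so their
    // calls always land in the highest-priority queue.
    conf.set("ipc." + port + ".decay-scheduler.service-users", "hbase,hive");
  }
}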



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


smengcl commented on a change in pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#discussion_r466231836



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
##
@@ -2144,4 +2146,180 @@ public void testECCloseCommittedBlock() throws Exception {
       LambdaTestUtils.intercept(IOException.class, "", () -> str.close());
     }
   }
+
+  @Test
+  public void testGetTrashRoot() throws IOException {
+    Configuration conf = getTestConfiguration();
+    conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+    try {
+      DistributedFileSystem dfs = cluster.getFileSystem();
+      Path testDir = new Path("/ssgtr/test1/");
+      Path file0path = new Path(testDir, "file-0");
+      dfs.create(file0path);
+
+      Path trBeforeAllowSnapshot = dfs.getTrashRoot(file0path);
+      String trBeforeAllowSnapshotStr = trBeforeAllowSnapshot.toUri().getPath();
+      // The trash root should be in user home directory
+      String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+      assertTrue(trBeforeAllowSnapshotStr.startsWith(homeDirStr));
+
+      dfs.allowSnapshot(testDir);
+
+      Path trAfterAllowSnapshot = dfs.getTrashRoot(file0path);
+      String trAfterAllowSnapshotStr = trAfterAllowSnapshot.toUri().getPath();
+      // The trash root should now be in the snapshot root
+      String testDirStr = testDir.toUri().getPath();
+      assertTrue(trAfterAllowSnapshotStr.startsWith(testDirStr));
+
+      // Cleanup
+      dfs.disallowSnapshot(testDir);
+      dfs.delete(testDir, true);
+    } finally {
+      if (cluster != null) {
+        cluster.shutdown();
+      }
+    }
+  }
+
+  private boolean isPathInUserHome(String pathStr, DistributedFileSystem dfs) {
+    String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+    return pathStr.startsWith(homeDirStr);
+  }
+
+  @Test
+  public void testGetTrashRoots() throws IOException {
+    Configuration conf = getTestConfiguration();
+    conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+    MiniDFSCluster cluster =
+        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+    try {
+      DistributedFileSystem dfs = cluster.getFileSystem();
+      Path testDir = new Path("/ssgtr/test1/");
+      Path file0path = new Path(testDir, "file-0");
+      dfs.create(file0path);
+      // Create user trash
+      Path currUserHome = dfs.getHomeDirectory();
+      Path currUserTrash = new Path(currUserHome, FileSystem.TRASH_PREFIX);
+      dfs.mkdirs(currUserTrash);
+      // Create trash inside test directory
+      Path testDirTrash = new Path(testDir, FileSystem.TRASH_PREFIX);
+      Path testDirTrashCurrUser = new Path(testDirTrash,
+          UserGroupInformation.getCurrentUser().getShortUserName());
+      dfs.mkdirs(testDirTrashCurrUser);
+
+      Collection<FileStatus> trashRoots = dfs.getTrashRoots(false);
+      // getTrashRoots should only return 1 empty user trash in the home dir now
+      assertEquals(1, trashRoots.size());
+      FileStatus firstFileStatus = trashRoots.iterator().next();
+      String pathStr = firstFileStatus.getPath().toUri().getPath();
+      assertTrue(isPathInUserHome(pathStr, dfs));
+      // allUsers should not make a difference for now because we have one user
+      Collection<FileStatus> trashRootsAllUsers = dfs.getTrashRoots(true);
+      assertEquals(trashRoots, trashRootsAllUsers);
+
+      dfs.allowSnapshot(testDir);
+
+      Collection<FileStatus> trashRootsAfter = dfs.getTrashRoots(false);
+      // getTrashRoots should return 1 more trash root inside snapshottable dir
+      assertEquals(trashRoots.size() + 1, trashRootsAfter.size());
+      boolean foundUserHomeTrash = false;
+      boolean foundSnapDirUserTrash = false;
+      String testDirStr = testDir.toUri().getPath();
+      for (FileStatus fileStatus : trashRootsAfter) {
+        String currPathStr = fileStatus.getPath().toUri().getPath();
+        if (isPathInUserHome(currPathStr, dfs)) {
+          foundUserHomeTrash = true;
+        } else if (currPathStr.startsWith(testDirStr)) {
+          foundSnapDirUserTrash = true;
+        }
+      }
+      assertTrue(foundUserHomeTrash);
+      assertTrue(foundSnapDirUserTrash);
+      // allUsers should not make a difference for now because we have one user
+      Collection<FileStatus> trashRootsAfterAllUsers = dfs.getTrashRoots(true);
+      assertEquals(trashRootsAfter, trashRootsAfterAllUsers);
+
+      // Create trash root for user0
+      UserGroupInformation ugi = UserGroupInformation.createRemoteUser("user0");
+      String user0HomeStr = DFSUtilClient.getHomeDirectory(conf, ugi);
+      Path user0Trash = new Path(user0HomeStr, FileSystem.TRASH_PREFIX);
+      dfs.mkdirs(user0Trash);
+      // allUsers flag set to false should be unaffected
+      Collection<FileStatus> trashRootsAfter2 = dfs.getTrashRoots(false);
+      asser

[GitHub] [hadoop] smengcl commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


smengcl commented on a change in pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#discussion_r466236175



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
##
@@ -244,6 +244,10 @@
   "dfs.namenode.snapshot.capture.openfiles";
   boolean DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES_DEFAULT = false;
 
+  String DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED =
+  "dfs.namenode.snapshot.trashroot.enabled";
+  boolean DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED_DEFAULT = false;
+
   String DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS =

Review comment:
   You are right. Done in 6b0a2585e837d7386fc52f58fc32e668d2440f5d.
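
For reference, a minimal sketch of how the key added in this diff is switched on, mirroring the tests quoted earlier in this thread; only the constant and its default come from the patch, the example class is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public class SnapshotTrashRootConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Defaults to false; when set to true, getTrashRoot() for a file under
    // a snapshottable directory resolves to a .Trash inside that directory
    // instead of the user's home directory (per the tests in this PR).
    conf.setBoolean(
        HdfsClientConfigKeys.DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
  }
}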





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.005.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch
>
>
> Recently one of our users was misled by the message "Unauthenticated users
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it is
> actually not an authentication issue but an authorization issue.
> Also, 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users
> find out what the problem is when accessing an HTTP endpoint.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-06 Thread GitBox


steveloughran commented on pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#issuecomment-669846914


I will review this in my Friday afternoon PR review session.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17171) Please fix CVEs by removing reference to htrace-core4

2020-08-06 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HADOOP-17171.
---
Resolution: Duplicate

Closing this as duplicate of HADOOP-15566.

> Please fix CVEs by removing reference to htrace-core4
> -
>
> Key: HADOOP-17171
> URL: https://issues.apache.org/jira/browse/HADOOP-17171
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Rodney Aaron Stainback
>Priority: Major
>
> htrace-core4 is a retired project, and even its latest version shades
> jackson-databind version 2.4.0, which has the following CVEs:
> |cve|severity|cvss|
> |CVE-2017-15095|critical|9.8|
> |CVE-2018-1000873|medium|6.5|
> |CVE-2018-14718|critical|9.8|
> |CVE-2018-5968|high|8.1|
> |CVE-2018-7489|critical|9.8|
> |CVE-2019-14540|critical|9.8|
> |CVE-2019-14893|critical|9.8|
> |CVE-2019-16335|critical|9.8|
> |CVE-2019-16942|critical|9.8|
> |CVE-2019-16943|critical|9.8|
> |CVE-2019-17267|critical|9.8|
> |CVE-2019-17531|critical|9.8|
> |CVE-2019-20330|critical|9.8|
> |CVE-2020-10672|high|8.8|
> |CVE-2020-10673|high|8.8|
> |CVE-2020-10968|high|8.8|
> |CVE-2020-10969|high|8.8|
> |CVE-2020-1|high|8.8|
> |CVE-2020-2|high|8.8|
> |CVE-2020-3|high|8.8|
> |CVE-2020-11619|critical|9.8|
> |CVE-2020-11620|critical|9.8|
> |CVE-2020-14060|high|8.1|
> |CVE-2020-14061|high|8.1|
> |CVE-2020-14062|high|8.1|
> |CVE-2020-14195|high|8.1|
> |CVE-2020-8840|critical|9.8|
> |CVE-2020-9546|critical|9.8|
> |CVE-2020-9547|critical|9.8|
> |CVE-2020-9548|critical|9.8|
>  
> Our security team is trying to block us from using Hadoop because of this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172249#comment-17172249
 ] 

Hadoop QA commented on HADOOP-17145:


| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 41s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| -1 | mvninstall | 28m 10s | root in trunk failed. |
| -1 | compile | 11m 50s | root in trunk failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 | compile | 10m 8s | root in trunk failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. |
| +1 | checkstyle | 0m 41s | trunk passed |
| +1 | mvnsite | 1m 15s | trunk passed |
| +1 | shadedclient | 15m 36s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 30s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 19s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 0 | spotbugs | 2m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 1s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 52s | the patch passed |
| -1 | compile | 12m 44s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 | javac | 12m 44s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 | compile | 10m 27s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. |
| -1 | javac | 10m 27s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. |
| -0 | checkstyle | 0m 38s | hadoop-common-project/hadoop-common: The patch generated 1 new + 84 unchanged - 1 fixed = 85 total (was 85) |
| +1 | mvnsite | 1m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 43s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 23s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | findbugs | 2m 7s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 0s |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#issuecomment-669913635


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  28m 26s |  root in trunk failed.  |
   | -1 :x: |  compile  |  13m 23s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |  10m 47s |  root in trunk failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | -0 :warning: |  checkstyle  |   2m 29s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   3m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 30s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  the patch passed  |
   | -1 :x: |  compile  |  13m 19s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  cc  |  13m 19s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  13m 19s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  17m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  cc  |  17m 21s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 129 new + 33 unchanged - 0 
fixed = 162 total (was 33)  |
   | -1 :x: |  javac  |  17m 21s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 192 new + 1756 unchanged - 0 
fixed = 1948 total (was 1756)  |
   | -0 :warning: |  checkstyle  |   2m 40s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   4m  0s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  3s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   2m 51s |  hadoop-hdfs-project/hadoop-hdfs-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 32s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 18s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  97m  9s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 274m 37s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
   |  |  Load of known null value in 
org.apache.hadoop.hdfs.DistributedFileSystem.getTrashRoot(Path)  At 
DistributedFileSystem.java:in 
org.apache.hadoop.hdfs.DistributedFileSystem.getTrashRoot(Path)  At 
DistributedFileSystem.java:[line 3292] |
   | Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2176/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apa

[GitHub] [hadoop] hadoop-yetus commented on pull request #2168: HADOOP-16202. Enhance openFile()

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-669926528


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  25m 52s |  root in trunk failed.  |
   | -1 :x: |  compile  |  12m 57s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  18m 59s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -0 :warning: |  checkstyle  |   2m 42s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 14s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 42s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  18m 42s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 210 new + 1844 unchanged 
- 0 fixed = 2054 total (was 1844)  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 27s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 33s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 39s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 168m 46s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.contract.rawlocal.TestRawlocalContractOpen |
   |   | hadoop.fs.contract.localfs.TestLocalFSContractOpen |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 3785051319ac 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c7e71a6c0be |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/7/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/7/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt

[GitHub] [hadoop] hadoop-yetus commented on pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-669930011


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
24 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 19s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  27m 13s |  root in trunk failed.  |
   | -1 :x: |  compile  |  14m 34s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  17m 46s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -0 :warning: |  checkstyle  |   2m 24s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 53s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 39s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 41s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  23m 41s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 210 new + 1839 unchanged 
- 0 fixed = 2049 total (was 1839)  |
   | +1 :green_heart: |  compile  |  20m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  20m 50s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 4 new + 1939 unchanged - 
4 fixed = 1943 total (was 1943)  |
   | -0 :warning: |  checkstyle  |   3m  5s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 8 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  3s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 42s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 184m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2149/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2149 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 3834dc4fafee 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c7e71a6c0be |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-po

[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-669930316


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  25m 33s |  root in trunk failed.  |
   | -1 :x: |  compile  |  11m 52s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  17m  9s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -0 :warning: |  checkstyle  |   2m 31s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 14s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 48s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 58s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  20m 58s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 211 new + 1840 unchanged 
- 0 fixed = 2051 total (was 1840)  |
   | +1 :green_heart: |  compile  |  18m 50s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  18m 50s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1943 unchanged - 1 
fixed = 1944 total (was 1944)  |
   | -0 :warning: |  checkstyle  |   2m 29s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   3m  5s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 35s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  findbugs  |   1m 32s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 29s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   7m  1s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 41s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 185m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.stat

[GitHub] [hadoop] hadoop-yetus commented on pull request #2168: HADOOP-16202. Enhance openFile()

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-669933296


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 19s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  26m 20s |  root in trunk failed.  |
   | -1 :x: |  compile  |  13m 47s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  17m 53s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -0 :warning: |  checkstyle  |   2m 51s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 25s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  4s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  24m 33s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 210 new + 1844 unchanged 
- 0 fixed = 2054 total (was 1844)  |
   | +1 :green_heart: |  compile  |  21m 43s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  21m 43s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 28s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m  1s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   4m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 44s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 28s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.contract.localfs.TestLocalFSContractOpen |
   |   | hadoop.fs.contract.rawlocal.TestRawlocalContractOpen |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux e35029546da3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c7e71a6c0be |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/6/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/6/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | checkstyle | 
https://ci-hadoop.a

[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-08-06 Thread Andras Bokor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-17145:
--
Attachment: HADOOP-17145.006.patch

> Unauthenticated users are not authorized to access this page message is 
> misleading in HttpServer2.java
> --
>
> Key: HADOOP-17145
> URL: https://issues.apache.org/jira/browse/HADOOP-17145
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, 
> HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, 
> HADOOP-17145.006.patch
>
>
> Recently one of the users was misled by the message "Unauthenticated users 
> are not authorized to access this page" when the user was not an admin user.
> At that point the user is authenticated but has no admin access, so it's 
> actually not an authentication issue but an authorization issue.
> Also, returning 401 as the error code would be better.
> Something like "User is unauthorized to access the page" would help users 
> find out what the problem is when accessing an HTTP endpoint.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#issuecomment-669936881


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  0s |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  25m  8s |  root in trunk failed.  |
   | -1 :x: |  compile  |  12m 40s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |  10m 40s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   2m 31s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   3m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 42s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 44s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  the patch passed  |
   | -1 :x: |  compile  |  12m 46s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  cc  |  12m 46s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  12m 46s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |  11m 48s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  cc  |  11m 48s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |  11m 48s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   2m 40s |  The patch fails to run 
checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   2m 31s |  hadoop-hdfs-project/hadoop-hdfs-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 40s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m  6s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  99m  3s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 287m 15s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
   |  |  Load of known null value in 
org.apache.hadoop.hdfs.DistributedFileSystem.getTrashRoot(Path)  At 
DistributedFileSystem.java:in 
org.apache.hadoop.hdfs.DistributedFileSystem.getTrashRoot(Path)  At 
DistributedFileSystem.java:[line 3292] |
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base:

[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466514403



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsBinding.java
##
@@ -0,0 +1,300 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.io.Serializable;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.BiFunction;
+import java.util.function.Function;
+
+import com.google.common.annotations.VisibleForTesting;
+
+import org.apache.hadoop.fs.StorageStatistics;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.IOStatisticsSource;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+
+/**
+ * Support for implementing IOStatistics interfaces.
+ */
+public final class IOStatisticsBinding {
+
+  /** Pattern used for each entry. */
+  public static final String ENTRY_PATTERN = "(%s=%s)";

Review comment:
   will make it package-private so tests can use it. Note: the entire impl 
package is tagged as private/unstable.
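   For reference, the pattern just renders one key/value pair; a toy, hypothetical example (the statistic name is invented):
   
   ```java
   public class EntryPatternSketch {
     public static void main(String[] args) {
       // ENTRY_PATTERN is "(%s=%s)": statistic key on the left, value on the right.
       System.out.println(String.format("(%s=%s)", "stream_read_operations", 42L));
       // prints: (stream_read_operations=42)
     }
   }
   ```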





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


bshashikant commented on a change in pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#discussion_r466517871



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
##
@@ -2144,4 +2146,293 @@ public void testECCloseCommittedBlock() throws 
Exception {
   LambdaTestUtils.intercept(IOException.class, "", () -> str.close());
 }
   }
+
+  @Test
+  public void testGetTrashRoot() throws IOException {
+Configuration conf = getTestConfiguration();
+conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+MiniDFSCluster cluster =
+new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+try {
+  DistributedFileSystem dfs = cluster.getFileSystem();
+  Path testDir = new Path("/ssgtr/test1/");
+  Path file0path = new Path(testDir, "file-0");
+  dfs.create(file0path);
+
+  Path trBeforeAllowSnapshot = dfs.getTrashRoot(file0path);
+  String trBeforeAllowSnapshotStr = 
trBeforeAllowSnapshot.toUri().getPath();
+  // The trash root should be in user home directory
+  String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+  assertTrue(trBeforeAllowSnapshotStr.startsWith(homeDirStr));
+
+  dfs.allowSnapshot(testDir);
+
+  Path trAfterAllowSnapshot = dfs.getTrashRoot(file0path);
+  String trAfterAllowSnapshotStr = trAfterAllowSnapshot.toUri().getPath();
+  // The trash root should now be in the snapshot root
+  String testDirStr = testDir.toUri().getPath();
+  assertTrue(trAfterAllowSnapshotStr.startsWith(testDirStr));
+
+  // Cleanup
+  dfs.disallowSnapshot(testDir);
+  dfs.delete(testDir, true);
+} finally {
+  if (cluster != null) {
+cluster.shutdown();
+  }
+}
+  }
+
+  private boolean isPathInUserHome(String pathStr, DistributedFileSystem dfs) {
+String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+return pathStr.startsWith(homeDirStr);
+  }
+
+  @Test
+  public void testGetTrashRoots() throws IOException {
+Configuration conf = getTestConfiguration();
+conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+MiniDFSCluster cluster =
+new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+try {
+  DistributedFileSystem dfs = cluster.getFileSystem();
+  Path testDir = new Path("/ssgtr/test1/");
+  Path file0path = new Path(testDir, "file-0");
+  dfs.create(file0path);
+  // Create user trash
+  Path currUserHome = dfs.getHomeDirectory();
+  Path currUserTrash = new Path(currUserHome, FileSystem.TRASH_PREFIX);
+  dfs.mkdirs(currUserTrash);
+  // Create trash inside test directory
+  Path testDirTrash = new Path(testDir, FileSystem.TRASH_PREFIX);
+  Path testDirTrashCurrUser = new Path(testDirTrash,
+  UserGroupInformation.getCurrentUser().getShortUserName());
+  dfs.mkdirs(testDirTrashCurrUser);
+
+  Collection trashRoots = dfs.getTrashRoots(false);
+  // getTrashRoots should only return 1 empty user trash in the home dir 
now
+  assertEquals(1, trashRoots.size());
+  FileStatus firstFileStatus = trashRoots.iterator().next();
+  String pathStr = firstFileStatus.getPath().toUri().getPath();
+  assertTrue(isPathInUserHome(pathStr, dfs));
+  // allUsers should not make a difference for now because we have one user
+  Collection trashRootsAllUsers = dfs.getTrashRoots(true);
+  assertEquals(trashRoots, trashRootsAllUsers);
+
+  dfs.allowSnapshot(testDir);
+
+  Collection trashRootsAfter = dfs.getTrashRoots(false);
+  // getTrashRoots should return 1 more trash root inside snapshottable dir
+  assertEquals(trashRoots.size() + 1, trashRootsAfter.size());
+  boolean foundUserHomeTrash = false;
+  boolean foundSnapDirUserTrash = false;
+  String testDirStr = testDir.toUri().getPath();
+  for (FileStatus fileStatus : trashRootsAfter) {
+String currPathStr = fileStatus.getPath().toUri().getPath();
+if (isPathInUserHome(currPathStr, dfs)) {
+  foundUserHomeTrash = true;
+} else if (currPathStr.startsWith(testDirStr)) {
+  foundSnapDirUserTrash = true;
+}
+  }
+  assertTrue(foundUserHomeTrash);
+  assertTrue(foundSnapDirUserTrash);
+  // allUsers should not make a difference for now because we have one user
+  Collection trashRootsAfterAllUsers = dfs.getTrashRoots(true);
+  assertEquals(trashRootsAfter, trashRootsAfterAllUsers);
+
+  // Create trash root for user0
+  UserGroupInformation ugi = 
UserGroupInformation.createRemoteUser("user0");
+  String user0HomeStr = DFSUtilClient.getHomeDirectory(conf, ugi);
+  Path user0Trash = new Path(user0HomeStr, FileSystem.TRASH_PREFIX);
+  dfs.mkdirs(user0Trash);
+  // allUsers flag set to false should be unaffected
+  Collection trashRootsAfter2 = dfs.getTrashRoots(false);
+  a

[GitHub] [hadoop] bshashikant commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


bshashikant commented on a change in pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#discussion_r466518339



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
##
@@ -2144,4 +2146,293 @@ public void testECCloseCommittedBlock() throws 
Exception {
   LambdaTestUtils.intercept(IOException.class, "", () -> str.close());
 }
   }
+
+  @Test
+  public void testGetTrashRoot() throws IOException {
+Configuration conf = getTestConfiguration();
+conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+MiniDFSCluster cluster =
+new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+try {
+  DistributedFileSystem dfs = cluster.getFileSystem();
+  Path testDir = new Path("/ssgtr/test1/");
+  Path file0path = new Path(testDir, "file-0");
+  dfs.create(file0path);
+
+  Path trBeforeAllowSnapshot = dfs.getTrashRoot(file0path);
+  String trBeforeAllowSnapshotStr = 
trBeforeAllowSnapshot.toUri().getPath();
+  // The trash root should be in user home directory
+  String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+  assertTrue(trBeforeAllowSnapshotStr.startsWith(homeDirStr));
+
+  dfs.allowSnapshot(testDir);
+
+  Path trAfterAllowSnapshot = dfs.getTrashRoot(file0path);
+  String trAfterAllowSnapshotStr = trAfterAllowSnapshot.toUri().getPath();
+  // The trash root should now be in the snapshot root
+  String testDirStr = testDir.toUri().getPath();
+  assertTrue(trAfterAllowSnapshotStr.startsWith(testDirStr));
+
+  // Cleanup
+  dfs.disallowSnapshot(testDir);
+  dfs.delete(testDir, true);
+} finally {
+  if (cluster != null) {
+cluster.shutdown();
+  }
+}
+  }
+
+  private boolean isPathInUserHome(String pathStr, DistributedFileSystem dfs) {
+String homeDirStr = dfs.getHomeDirectory().toUri().getPath();
+return pathStr.startsWith(homeDirStr);
+  }
+
+  @Test
+  public void testGetTrashRoots() throws IOException {
+Configuration conf = getTestConfiguration();
+conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true);
+MiniDFSCluster cluster =
+new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+try {
+  DistributedFileSystem dfs = cluster.getFileSystem();
+  Path testDir = new Path("/ssgtr/test1/");
+  Path file0path = new Path(testDir, "file-0");
+  dfs.create(file0path);
+  // Create user trash
+  Path currUserHome = dfs.getHomeDirectory();
+  Path currUserTrash = new Path(currUserHome, FileSystem.TRASH_PREFIX);
+  dfs.mkdirs(currUserTrash);
+  // Create trash inside test directory
+  Path testDirTrash = new Path(testDir, FileSystem.TRASH_PREFIX);
+  Path testDirTrashCurrUser = new Path(testDirTrash,
+  UserGroupInformation.getCurrentUser().getShortUserName());
+  dfs.mkdirs(testDirTrashCurrUser);
+
+  Collection trashRoots = dfs.getTrashRoots(false);
+  // getTrashRoots should only return 1 empty user trash in the home dir 
now
+  assertEquals(1, trashRoots.size());
+  FileStatus firstFileStatus = trashRoots.iterator().next();
+  String pathStr = firstFileStatus.getPath().toUri().getPath();
+  assertTrue(isPathInUserHome(pathStr, dfs));
+  // allUsers should not make a difference for now because we have one user
+  Collection trashRootsAllUsers = dfs.getTrashRoots(true);
+  assertEquals(trashRoots, trashRootsAllUsers);
+
+  dfs.allowSnapshot(testDir);
+
+  Collection trashRootsAfter = dfs.getTrashRoots(false);
+  // getTrashRoots should return 1 more trash root inside snapshottable dir
+  assertEquals(trashRoots.size() + 1, trashRootsAfter.size());
+  boolean foundUserHomeTrash = false;
+  boolean foundSnapDirUserTrash = false;
+  String testDirStr = testDir.toUri().getPath();
+  for (FileStatus fileStatus : trashRootsAfter) {
+String currPathStr = fileStatus.getPath().toUri().getPath();
+if (isPathInUserHome(currPathStr, dfs)) {
+  foundUserHomeTrash = true;
+} else if (currPathStr.startsWith(testDirStr)) {
+  foundSnapDirUserTrash = true;
+}
+  }
+  assertTrue(foundUserHomeTrash);
+  assertTrue(foundSnapDirUserTrash);
+  // allUsers should not make a difference for now because we have one user
+  Collection trashRootsAfterAllUsers = dfs.getTrashRoots(true);
+  assertEquals(trashRootsAfter, trashRootsAfterAllUsers);
+
+  // Create trash root for user0
+  UserGroupInformation ugi = 
UserGroupInformation.createRemoteUser("user0");
+  String user0HomeStr = DFSUtilClient.getHomeDirectory(conf, ugi);
+  Path user0Trash = new Path(user0HomeStr, FileSystem.TRASH_PREFIX);
+  dfs.mkdirs(user0Trash);
+  // allUsers flag set to false should be unaffected
+  Collection trashRootsAfter2 = dfs.getTrashRoots(false);
+  a

[jira] [Created] (HADOOP-17189) add way for s3a to recognise buckets with "." in name and switch to path access

2020-08-06 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17189:
---

 Summary: add way for s3a to recognise buckets with "." in name and 
switch to path access
 Key: HADOOP-17189
 URL: https://issues.apache.org/jira/browse/HADOOP-17189
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


# AWS has, historically, allowed buckets with '.' in their name (along with 
other non-DNS-valid chars)
# none of which work with virtual-hostname S3 clients - you have to enable 
path-style access
# which we can't do on a per-bucket basis, as the per-bucket config logic 
doesn't support buckets with '.' in the name (think about it...)
# and we can't blindly say "use path access everywhere", because all buckets 
created on/after 2020-10-01 won't work that way (a config sketch follows)
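A minimal illustration of the only switch available today, assuming the existing global {{fs.s3a.path.style.access}} option and the standard Configuration API; this shows the status quo, not the proposed fix:
{code:java}
import org.apache.hadoop.conf.Configuration;

public final class PathStyleAccessSketch {
  // Forces path-style access for every bucket the client talks to.
  // There is deliberately no per-bucket form here: a key such as
  // fs.s3a.bucket.<name>.path.style.access cannot be resolved reliably
  // when <name> itself contains '.', which is the point of this issue.
  public static Configuration withGlobalPathStyleAccess() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.path.style.access", true);
    return conf;
  }
}
{code}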




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory

2020-08-06 Thread GitBox


bshashikant commented on a change in pull request #2176:
URL: https://github.com/apache/hadoop/pull/2176#discussion_r466525092



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -516,6 +516,11 @@
   public static final int
   DFS_NAMENODE_SNAPSHOT_SKIPLIST_MAX_SKIP_LEVELS_DEFAULT = 0;
 
+  public static final String DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED =
+  "dfs.namenode.snapshot.trashroot.enabled";
+  public static final boolean DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED_DEFAULT =

Review comment:
   I guess we need to define these configs in SnapshotManager if we intend 
not to add them to hdfs-default.xml (which I would prefer). Otherwise it leads to 
the test failure in "hadoop.tools.TestHdfsConfigFields".





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17190) ITestTerasortOnS3A.test_120_terasort failing

2020-08-06 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17190:
--

 Summary: ITestTerasortOnS3A.test_120_terasort failing
 Key: HADOOP-17190
 URL: https://issues.apache.org/jira/browse/HADOOP-17190
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Mukund Thakur


[*INFO*] Running org.apache.hadoop.fs.s3a.commit.terasort.*ITestTerasortOnS3A*

[*ERROR*] *Tests* *run: 14*, *Failures: 2*, Errors: 0, *Skipped: 2*, Time 
elapsed: 110.43 s *<<< FAILURE!* - in 
org.apache.hadoop.fs.s3a.commit.terasort.*ITestTerasortOnS3A*

[*ERROR*] 
test_120_terasort[directory](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
  Time elapsed: 6.261 s  <<< FAILURE!

java.lang.AssertionError: 
terasort(s3a://mthakur-data/terasort-directory/sortin, 
s3a://mthakur-data/terasort-directory/sortout) failed expected:<0> but was:<1>

 at 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.executeStage(ITestTerasortOnS3A.java:241)

 at 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort(ITestTerasortOnS3A.java:291)

 

[*ERROR*] 
test_120_terasort[magic](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
  Time elapsed: 5.962 s  <<< FAILURE!

java.lang.AssertionError: terasort(s3a://mthakur-data/terasort-magic/sortin, 
s3a://mthakur-data/terasort-magic/sortout) failed expected:<0> but was:<1>

 at 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.executeStage(ITestTerasortOnS3A.java:241)

 at 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort(ITestTerasortOnS3A.java:291)

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466528274



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/DynamicIOStatisticsBuilder.java
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.function.ToLongFunction;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+import static com.google.common.base.Preconditions.checkState;
+
+/**
+ * Builder of Dynamic IO Statistics which serve up up longs.
+ * Instantiate through
+ * {@link IOStatisticsBinding#dynamicIOStatistics()}.
+ */
+public class DynamicIOStatisticsBuilder {
+
+  /**
+   * the instance being built up. Will be null after the (single)
+   * call to {@link #build()}.
+   */
+  private DynamicIOStatistics instance = new DynamicIOStatistics();
+
+  /**
+   * Build the IOStatistics instance.
+   * @return an instance.
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  public IOStatistics build() {
+final DynamicIOStatistics stats = activeInstance();
+// stop the builder from working any more.
+instance = null;
+return stats;
+  }
+
+
+  /**
+   * Get the statistics instance.
+   * @return the instance to build/return
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  private DynamicIOStatistics activeInstance() {
+checkState(instance != null, "Already built");
+return instance;
+  }
+
+  /**
+   * Add a new evaluator to the counter statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionCounter(String key,
+  ToLongFunction eval) {
+activeInstance().addCounterFunction(key, eval::applyAsLong);
+return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongCounter(String key,
+  AtomicLong source) {
+withLongFunctionCounter(key, s -> source.get());
+return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic int counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicIntegerCounter(String key,
+  AtomicInteger source) {
+withLongFunctionCounter(key, s -> source.get());
+return this;
+  }
+
+  /**
+   * Build a dynamic counter statistic from a
+   * {@link MutableCounterLong}.
+   * @param key key of this statistic
+   * @param source mutable long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withMutableCounter(String key,
+  MutableCounterLong source) {
+withLongFunctionCounter(key, s -> source.value());
+return this;
+  }
+
+  /**
+   * Add a new evaluator to the gauge statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionGauge(String key,

Review comment:
   no, because I want people to have the option of writing gauges wired up 
to internal state (queue length, etc.).
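   For context, a minimal sketch of that use case, assuming the builder API shown in the diff above; the statistic key and the queue field are illustrative only:
   
   ```java
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;
   
   import org.apache.hadoop.fs.statistics.IOStatistics;
   import org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding;
   
   public class QueueDepthGaugeSketch {
     private final BlockingQueue<Runnable> pendingWork = new LinkedBlockingQueue<>();
   
     // The gauge function is re-evaluated on every read, so callers always see
     // the live queue depth rather than a value captured at build time.
     public IOStatistics buildStatistics() {
       return IOStatisticsBinding.dynamicIOStatistics()
           .withLongFunctionGauge("pending_work_queue_depth", key -> pendingWork.size())
           .build();
     }
   }
   ```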





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.o

[jira] [Commented] (HADOOP-17181) ITestS3AContractUnbuffer failure -stream.read didn't return all data

2020-08-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172522#comment-17172522
 ] 

Steve Loughran commented on HADOOP-17181:
-

seeing this repeatedly, all of a sudden

> ITestS3AContractUnbuffer failure -stream.read didn't return all data
> 
>
> Key: HADOOP-17181
> URL: https://issues.apache.org/jira/browse/HADOOP-17181
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Seen 2x recently, failure in ITestS3AContractUnbuffer as not enough data came 
> back in the read. 
> The contract test assumes that stream.read() will return everything, but it 
> could be some buffering problem. Proposed: switch to ReadFully to see if it 
> is a quirk of the read/get or is something actually wrong with the production 
> code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r466540975



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/DynamicIOStatisticsBuilder.java
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.function.ToLongFunction;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.MeanStatistic;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+import static com.google.common.base.Preconditions.checkState;
+
+/**
+ * Builder of Dynamic IO Statistics which serve up up longs.
+ * Instantiate through
+ * {@link IOStatisticsBinding#dynamicIOStatistics()}.
+ */
+public class DynamicIOStatisticsBuilder {
+
+  /**
+   * the instance being built up. Will be null after the (single)
+   * call to {@link #build()}.
+   */
+  private DynamicIOStatistics instance = new DynamicIOStatistics();
+
+  /**
+   * Build the IOStatistics instance.
+   * @return an instance.
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  public IOStatistics build() {
+final DynamicIOStatistics stats = activeInstance();
+// stop the builder from working any more.
+instance = null;
+return stats;
+  }
+
+
+  /**
+   * Get the statistics instance.
+   * @return the instance to build/return
+   * @throws IllegalStateException if the builder has already been built.
+   */
+  private DynamicIOStatistics activeInstance() {
+checkState(instance != null, "Already built");
+return instance;
+  }
+
+  /**
+   * Add a new evaluator to the counter statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionCounter(String key,
+  ToLongFunction eval) {
+activeInstance().addCounterFunction(key, eval::applyAsLong);
+return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongCounter(String key,
+  AtomicLong source) {
+withLongFunctionCounter(key, s -> source.get());
+return this;
+  }
+
+  /**
+   * Add a counter statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic int counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicIntegerCounter(String key,
+  AtomicInteger source) {
+withLongFunctionCounter(key, s -> source.get());
+return this;
+  }
+
+  /**
+   * Build a dynamic counter statistic from a
+   * {@link MutableCounterLong}.
+   * @param key key of this statistic
+   * @param source mutable long counter
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withMutableCounter(String key,
+  MutableCounterLong source) {
+withLongFunctionCounter(key, s -> source.value());
+return this;
+  }
+
+  /**
+   * Add a new evaluator to the gauge statistics.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withLongFunctionGauge(String key,
+  ToLongFunction eval) {
+activeInstance().addGaugeFunction(key, eval::applyAsLong);
+return this;
+  }
+
+  /**
+   * Add a gauge statistic to dynamically return the
+   * latest value of the source.
+   * @param key key of this statistic
+   * @param source atomic long gauge
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder withAtomicLongGauge(String key,
+  AtomicLong source) {
+withLongFunctionGauge(key, s -> source.get());
+return this;
+  }
+
+  /**
+   * Add a gauge statistic to dynamically return the
+   * latest value of the source.

[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-08-06 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HADOOP-17144:

Attachment: HADOOP-17144.002.patch

> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-08-06 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172544#comment-17172544
 ] 

Hemanth Boyina commented on HADOOP-17144:
-

The LZ4 documentation states that compression is guaranteed to succeed if 
'dstCapacity' >= LZ4_compressBound(srcSize), where LZ4_compressBound is
{code:java}
LZ4_compressBound(isize) = (isize) + ((isize)/255) + 16
{code}
Updated the patch as per the above rule and changed the buffer capacity in 
Hadoop accordingly.
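As a purely illustrative aid (the method name here is ours, not Hadoop's or LZ4's), the rule translates to:
{code:java}
public final class Lz4BoundSketch {
  // Worst-case compressed size for an input of isize bytes, per the rule above;
  // matches LZ4_COMPRESSBOUND for inputs within LZ4_MAX_INPUT_SIZE.
  static int lz4CompressBound(int isize) {
    return isize + (isize / 255) + 16;
  }

  public static void main(String[] args) {
    // e.g. a 64 KiB input needs a destination buffer of at least 65,809 bytes.
    System.out.println(lz4CompressBound(64 * 1024));
  }
}
{code}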

> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina commented on pull request #2191: HDFS-15512. Remove smallBufferSize in DFSClient.

2020-08-06 Thread GitBox


hemanthboyina commented on pull request #2191:
URL: https://github.com/apache/hadoop/pull/2191#issuecomment-670062160


   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina merged pull request #2191: HDFS-15512. Remove smallBufferSize in DFSClient.

2020-08-06 Thread GitBox


hemanthboyina merged pull request #2191:
URL: https://github.com/apache/hadoop/pull/2191


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina commented on pull request #2172: HDFS-15483. Ordered snapshot deletion: Disallow rename between two snapshottable directories.

2020-08-06 Thread GitBox


hemanthboyina commented on pull request #2172:
URL: https://github.com/apache/hadoop/pull/2172#issuecomment-670076775


   the patch has to be updated, as the method isSnapshotDeletionOrdered() got 
removed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina edited a comment on pull request #2172: HDFS-15483. Ordered snapshot deletion: Disallow rename between two snapshottable directories.

2020-08-06 Thread GitBox


hemanthboyina edited a comment on pull request #2172:
URL: https://github.com/apache/hadoop/pull/2172#issuecomment-670076775


   the code has to be updated, as the method isSnapshotDeletionOrdered() got 
removed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-670090916


   Test `TestLocalFileSystem` failure happens as the new 
"BufferedIOStatisticsOutputStream" class rejects calls to Syncable APIs if the 
wrapped class doesn't support it. This is correct behaviour. The test is 
failing because Local FS doesn't support it until #2102 is in (which actually 
depends on this buffer class to work :).
   
   ```
   java.lang.UnsupportedOperationException: hflush not supported by 
org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream@1b224a47
at 
org.apache.hadoop.fs.statistics.impl.BufferedIOStatisticsOutputStream.hflush(BufferedIOStatisticsOutputStream.java:92)
at 
org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:138)
at 
org.apache.hadoop.fs.TestLocalFileSystem.testSyncable(TestLocalFileSystem.java:176)
   
   ```
   
   I could disable the test on purity grounds, but what if someone expects the local 
FS to be syncable? Adding an option to downgrade to flush(), which will be 
enabled for the local FS until #2102 is in; at that point it can be stripped, as 
Syncable will then work.
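   Roughly what that downgrade could look like; a sketch only, with an assumed boolean option, not the actual class in this PR:
   
   ```java
   import java.io.BufferedOutputStream;
   import java.io.IOException;
   import java.io.OutputStream;
   
   import org.apache.hadoop.fs.Syncable;
   
   // Illustrative stand-in for the buffered stream: when the inner stream is not
   // Syncable, either fail (today's behaviour) or quietly settle for flush().
   class DowngradingSyncableStream extends BufferedOutputStream implements Syncable {
     private final boolean downgradeSyncable;
   
     DowngradingSyncableStream(OutputStream out, boolean downgradeSyncable) {
       super(out);
       this.downgradeSyncable = downgradeSyncable;
     }
   
     @Override
     public void hflush() throws IOException {
       flush();                        // always push buffered bytes downstream
       if (out instanceof Syncable) {
         ((Syncable) out).hflush();    // full semantics when supported
       } else if (!downgradeSyncable) {
         throw new UnsupportedOperationException("hflush not supported by " + out);
       }
     }
   
     @Override
     public void hsync() throws IOException {
       flush();
       if (out instanceof Syncable) {
         ((Syncable) out).hsync();
       } else if (!downgradeSyncable) {
         throw new UnsupportedOperationException("hsync not supported by " + out);
       }
     }
   }
   ```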



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17190) ITestTerasortOnS3A.test_120_terasort failing

2020-08-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172593#comment-17172593
 ] 

Steve Loughran commented on HADOOP-17190:
-

I'm seeing this too, regularly, wonder if there's been a regression

> ITestTerasortOnS3A.test_120_terasort failing
> 
>
> Key: HADOOP-17190
> URL: https://issues.apache.org/jira/browse/HADOOP-17190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Priority: Major
>
> [*INFO*] Running org.apache.hadoop.fs.s3a.commit.terasort.*ITestTerasortOnS3A*
> [*ERROR*] *Tests* *run: 14*, *Failures: 2*, Errors: 0, *Skipped: 2*, Time 
> elapsed: 110.43 s *<<< FAILURE!* - in 
> org.apache.hadoop.fs.s3a.commit.terasort.*ITestTerasortOnS3A*
> [*ERROR*] 
> test_120_terasort[directory](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
>   Time elapsed: 6.261 s  <<< FAILURE!
> java.lang.AssertionError: 
> terasort(s3a://mthakur-data/terasort-directory/sortin, 
> s3a://mthakur-data/terasort-directory/sortout) failed expected:<0> but was:<1>
>  at 
> org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.executeStage(ITestTerasortOnS3A.java:241)
>  at 
> org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort(ITestTerasortOnS3A.java:291)
>  
> [*ERROR*] 
> test_120_terasort[magic](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
>   Time elapsed: 5.962 s  <<< FAILURE!
> java.lang.AssertionError: terasort(s3a://mthakur-data/terasort-magic/sortin, 
> s3a://mthakur-data/terasort-magic/sortout) failed expected:<0> but was:<1>
>  at 
> org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.executeStage(ITestTerasortOnS3A.java:241)
>  at 
> org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort(ITestTerasortOnS3A.java:291)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17191) ABFS: Run tests with All the auth types

2020-08-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-17191:
-

Assignee: Bilahari T H

> ABFS: Run tests with All the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17191) ABFS: Run tests with All the auth types

2020-08-06 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17191:
-

 Summary: ABFS: Run tests with All the auth types
 Key: HADOOP-17191
 URL: https://issues.apache.org/jira/browse/HADOOP-17191
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Bilahari T H
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17191) ABFS: Run tests with all the auth types

2020-08-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17191:
--
Summary: ABFS: Run tests with all the auth types  (was: ABFS: Run tests 
with All the auth types)

> ABFS: Run tests with all the auth types
> ---
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sguggilam opened a new pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class

2020-08-06 Thread GitBox


sguggilam opened a new pull request #2197:
URL: https://github.com/apache/hadoop/pull/2197


   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sguggilam commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class

2020-08-06 Thread GitBox


sguggilam commented on pull request #2197:
URL: https://github.com/apache/hadoop/pull/2197#issuecomment-670186712


   @liuml07 @steveloughran Please review



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] msamirkhan opened a new pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


msamirkhan opened a new pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] msamirkhan commented on a change in pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


msamirkhan commented on a change in pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198#discussion_r466692505



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
##
@@ -91,8 +91,8 @@
   new HashMap>();
 
   // Pre-computed list of user-limits.
-  Map> preComputedActiveUserLimit = new 
ConcurrentHashMap<>();
-  Map> preComputedAllUserLimit = new 
ConcurrentHashMap<>();
+  Map> preComputedActiveUserLimit = new 
HashMap<>();
+  Map> preComputedAllUserLimit = new 
HashMap<>();

Review comment:
   Both these maps are only ever accessed under locks, so a plain HashMap is sufficient.
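   
   A minimal sketch of the pattern relied on here, assuming every access goes through the queue's read/write lock (class and field names are illustrative, not the actual UsersManager fields):
   
   ```java
// Hedged sketch: a plain HashMap is safe when every reader and writer
// holds the guarding lock, so ConcurrentHashMap adds nothing here.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class GuardedCache {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  // The lock already serializes access; no concurrent map needed.
  private final Map<String, Long> cache = new HashMap<>();

  long get(String key) {
    lock.readLock().lock();
    try {
      return cache.getOrDefault(key, 0L);
    } finally {
      lock.readLock().unlock();
    }
  }

  void put(String key, long value) {
    lock.writeLock().lock();
    try {
      cache.put(key, value);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
   ```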





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] msamirkhan commented on a change in pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


msamirkhan commented on a change in pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198#discussion_r466692384



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
##
@@ -72,7 +72,7 @@
 
   // To detect whether there is a change in user count for every user-limit
   // calculation.
-  private AtomicLong latestVersionOfUsersState = new AtomicLong(0);
+  private long latestVersionOfUsersState = 0;

Review comment:
   The only place where latestVersionOfUsersState was being read without a lock 
was isRecomputeNeeded(). Please also see the comment on isRecomputeNeeded().





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] msamirkhan commented on a change in pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


msamirkhan commented on a change in pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198#discussion_r466692858



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
##
@@ -581,15 +581,29 @@ public Resource 
getComputedResourceLimitForAllUsers(String userName,
 return userSpecificUserLimit;
   }
 
+  protected long getLatestVersionOfUsersState() {
+readLock.lock();
+try {
+  return latestVersionOfUsersState;
+} finally {
+  readLock.unlock();
+}
+  }
+
   /*
* Recompute user-limit under following conditions: 1. cached user-limit does
* not exist in local map. 2. Total User count doesn't match with local 
cached
* version.
*/
   private boolean isRecomputeNeeded(SchedulingMode schedulingMode,
   String nodePartition, boolean isActive) {
-return (getLocalVersionOfUsersState(nodePartition, schedulingMode,
-isActive) != latestVersionOfUsersState.get());
+readLock.lock();
+try {
+  return (getLocalVersionOfUsersState(nodePartition, schedulingMode,
+  isActive) != latestVersionOfUsersState);
+} finally {
+  readLock.unlock();
+}

Review comment:
   Previously this method didn't take the read lock, but it calls 
getLocalVersionOfUsersState(), which does lock, so skipping the lock here saved 
nothing. Also, without the lock, latestVersionOfUsersState could be observed as a 
negative value while another thread is resetting it to 0 in userLimitNeedsRecompute().
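   
   A minimal sketch of the ordering concern, assuming the counter is bumped under the write lock and wrapped back to 0 on overflow (names are illustrative, not the exact UsersManager code):
   
   ```java
// Hedged sketch: readers that skip the lock could observe the transient
// negative value between the overflow and the reset to 0.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class VersionCounter {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long latestVersion = 0;

  /** Reader side: take the read lock, as getLatestVersionOfUsersState() does above. */
  long latest() {
    lock.readLock().lock();
    try {
      return latestVersion;
    } finally {
      lock.readLock().unlock();
    }
  }

  /** Writer side: bump the version, resetting to 0 if it overflows. */
  void bump() {
    lock.writeLock().lock();
    try {
      latestVersion++;
      if (latestVersion < 0) {  // overflowed: transiently negative
        latestVersion = 0;      // reset; unlocked readers could have seen < 0
      }
    } finally {
      lock.writeLock().unlock();
    }
  }
}
   ```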





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ merged pull request #2192: HADOOP-17183. ABFS: Enabling checkaccess on ABFS

2020-08-06 Thread GitBox


DadanielZ merged pull request #2192:
URL: https://github.com/apache/hadoop/pull/2192


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2197:
URL: https://github.com/apache/hadoop/pull/2197#issuecomment-670255686


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 18s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 53s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  1s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  18m 24s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 47s |  
hadoop-common-project/hadoop-common: The patch generated 1 new + 85 unchanged - 
0 fixed = 86 total (was 85)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 16s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 167m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2197 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab89126be726 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d5ccc790bd |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | whitespace | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/testReport/ |
   | Max. process+thread count | 2475 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbug

[GitHub] [hadoop] sungpeo opened a new pull request #2199: YARN-10392: Fix typos Cotext related to AMRMProxyApplicationContext

2020-08-06 Thread GitBox


sungpeo opened a new pull request #2199:
URL: https://github.com/apache/hadoop/pull/2199


   Fix the typo 'Cotext' (should be 'Context') in names related to 
AMRMProxyApplicationContext.getNMCotext().



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-670263296


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 21s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m  4s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 33s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 54s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2050 unchanged - 
1 fixed = 2050 total (was 2051)  |
   | +1 :green_heart: |  compile  |  17m 48s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 48s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 1943 unchanged - 1 
fixed = 1943 total (was 1944)  |
   | -0 :warning: |  checkstyle  |   3m 23s |  root: The patch generated 29 new 
+ 241 unchanged - 25 fixed = 270 total (was 266)  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 21s |  hadoop-common-project/hadoop-common 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 24s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   6m 59s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 39s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 199m  2s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.MeanStatistic.samples; locked 56% of time  
Unsynchronized access at MeanStatistic.java:56% of time  Unsynchronized access 
at M

[GitHub] [hadoop] hadoop-yetus commented on pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198#issuecomment-670263165


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 39s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 46s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 44s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 43s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 42s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 38s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 46 new + 533 unchanged - 10 fixed = 579 total (was 543)  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 44s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m  8s |  hadoop-yarn-server-resourcemanager in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 163m 11s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2198 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fa5859d88446 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a2610e21ed5 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/1/testReport/ |
   | Max. process+thread count | 858 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/had

[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-670263401


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 54s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 43s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 50s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 31s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 13s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2050 unchanged - 
1 fixed = 2050 total (was 2051)  |
   | +1 :green_heart: |  compile  |  19m  3s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m  2s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 1943 unchanged - 1 
fixed = 1943 total (was 1944)  |
   | -0 :warning: |  checkstyle  |   3m 28s |  root: The patch generated 29 new 
+ 241 unchanged - 25 fixed = 270 total (was 266)  |
   | +1 :green_heart: |  mvnsite  |   3m 22s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  1s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  6s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 30s |  hadoop-common-project/hadoop-common 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  5s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   8m  4s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 36s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 201m 14s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.MeanStatistic.samples; locked 56% of time  
Unsynchronized access at MeanStatistic.java:56% of time  Unsynchronized access 
at M

[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-670285487


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 52s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 47s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 14s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 39s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 12s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  7s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m  7s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 20s |  root: The patch generated 1 new 
+ 92 unchanged - 1 fixed = 93 total (was 93)  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 54s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   6m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 58s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  95m  7s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 287m  8s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 0367d0a0fe35 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d5ccc790bd |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/5/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-mul

[jira] [Updated] (HADOOP-17183) ABFS: Enable checkaccess API

2020-08-06 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17183:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ABFS: Enable checkaccess API
> 
>
> Key: HADOOP-17183
> URL: https://issues.apache.org/jira/browse/HADOOP-17183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17183) ABFS: Enable checkaccess API

2020-08-06 Thread Bilahari T H (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172793#comment-17172793
 ] 

Bilahari T H commented on HADOOP-17183:
---

Commit: 
https://github.com/apache/hadoop/commit/a2610e21ed5289323d8a6f6359477a8ceb2db2eb

> ABFS: Enable checkaccess API
> 
>
> Key: HADOOP-17183
> URL: https://issues.apache.org/jira/browse/HADOOP-17183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] msamirkhan commented on a change in pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.

2020-08-06 Thread GitBox


msamirkhan commented on a change in pull request #2198:
URL: https://github.com/apache/hadoop/pull/2198#discussion_r466814342



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
##
@@ -91,8 +90,12 @@
   new HashMap>();
 
   // Pre-computed list of user-limits.
-  Map> preComputedActiveUserLimit = new 
ConcurrentHashMap<>();
-  Map> preComputedAllUserLimit = new 
ConcurrentHashMap<>();
+  @VisibleForTesting
+  Map> preComputedActiveUserLimit =
+  new HashMap<>();
+  @VisibleForTesting
+  Map> preComputedAllUserLimit =
+  new HashMap<>();

Review comment:
   Both these maps were only being called under locks.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2199: YARN-10392: Fix typos Cotext related to AMRMProxyApplicationContext

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2199:
URL: https://github.com/apache/hadoop/pull/2199#issuecomment-670320600


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 26s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   2m 46s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   3m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 38s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   2m 38s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m  6s |  hadoop-yarn-server-nodemanager in 
the patch passed.  |
   | -1 :x: |  unit  |  96m 18s |  hadoop-yarn-server-resourcemanager in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 237m 53s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2199/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2199 |
   | JIRA Issue | YARN-10392 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 073c06fc0ae6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a2610e21ed5 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2199/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2199/1/testReport/ |
   | Max. process+thread count | 820 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager

common-issues@hadoop.apache.org

2020-08-06 Thread GitBox


lichaojacobs opened a new pull request #2200:
URL: https://github.com/apache/hadoop/pull/2200


   https://issues.apache.org/jira/browse/MAPREDUCE-7290
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



common-issues@hadoop.apache.org

2020-08-06 Thread GitBox


hadoop-yetus commented on pull request #2200:
URL: https://github.com/apache/hadoop/pull/2200#issuecomment-670355906


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 40s |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 15s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 55s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 40s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 48s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m 48s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 37s |  
hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 49 new + 
102 unchanged - 2 fixed = 151 total (was 104)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   1m 16s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   6m 48s |  hadoop-mapreduce-client-core in the patch 
passed.  |
   | -1 :x: |  unit  |   2m 31s |  hadoop-mapreduce-client-shuffle in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  93m 11s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | 
module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
 |
   |  |  Method 
org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.readByVersion(DataInput) 
seems to be useless  At ShuffleHeader.java:useless  At ShuffleHeader.java:[line 
124] |
   |  |  Method 
org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.writeByVersion(DataOutput)
 seems to be useless  At ShuffleHeader.java:useless  At 
ShuffleHeader.java:[line 147] |
   |  |  org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader$HeaderVersion 
defines compareTo(ShuffleHeader$HeaderVersion) and uses Object.equals()  At 
ShuffleHeader.java:Object.equals()  At ShuffleHeader.java:[lines 322-340] |
   | Failed junit tests | hadoop.mapreduce.task.reduce.TestFetcher |
   |   | hadoop.mapred.TestShuffleHandler |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2200/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6be1eb8e4ed0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug