[jira] [Updated] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-07 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HADOOP-16641:
--
Attachment: llap-rpc-locks.svg

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups for the same class-loader across all threads
> and makes RPC threads yield.
>  !config-get-class-by-name.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-07 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HADOOP-16641:
--
Description: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589

{code}
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
{code}

This synchronizes all lookups for the same class-loader across all threads and
makes RPC threads yield.

 !config-get-class-by-name.png! 

When reading from HDFS with good locality, this fills up the contended lock 
profile with almost no other contributors to the locking - see  
[^llap-rpc-locks.svg] 
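
One way to take the shared lock off the hot lookup path, shown here only as a
hypothetical sketch and not necessarily the fix committed for this issue, is a
ConcurrentHashMap-backed cache (note this sketch gives up the WeakHashMap's
automatic eviction of unloaded class-loaders):

{code}
// imports assumed: java.lang.ref.WeakReference,
// java.util.concurrent.ConcurrentHashMap
private static final ConcurrentHashMap<ClassLoader,
    ConcurrentHashMap<String, WeakReference<Class<?>>>> CACHE_CLASSES =
        new ConcurrentHashMap<>();

public Class<?> getClassByNameOrNull(String name) {
  ClassLoader loader = Thread.currentThread().getContextClassLoader();
  // computeIfAbsent only contends on a missing entry, not on every lookup.
  ConcurrentHashMap<String, WeakReference<Class<?>>> map =
      CACHE_CLASSES.computeIfAbsent(loader, k -> new ConcurrentHashMap<>());
  WeakReference<Class<?>> ref = map.get(name);
  Class<?> clazz = (ref == null) ? null : ref.get();
  if (clazz == null) {
    try {
      clazz = Class.forName(name, true, loader);
    } catch (ClassNotFoundException e) {
      return null;
    }
    map.put(name, new WeakReference<>(clazz));
  }
  return clazz;
}
{code}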

  was:
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589

{code}
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
{code}

This synchronizes all lookups for the same class-loader across all threads and
makes RPC threads yield.

 !config-get-class-by-name.png! 


> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups for the same class-loader across all threads
> and makes RPC threads yield.
>  !config-get-class-by-name.png! 
> When reading from HDFS with good locality, this fills up the contended lock 
> profile with almost no other contributors to the locking - see  
> [^llap-rpc-locks.svg] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-07 Thread GitBox
christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539341389
 
 
   Looks like there are some issues with Ratis? Is this related to the patch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-07 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HADOOP-16641:
--
Labels: performance  (was: )

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups for the same class-loader across all threads
> and makes RPC threads yield.
>  !config-get-class-by-name.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-07 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HADOOP-16641:
--
Component/s: conf

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
> Attachments: config-get-class-by-name.png
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups for the same class-loader across all threads
> and makes RPC threads yield.
>  !config-get-class-by-name.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-07 Thread Gopal Vijayaraghavan (Jira)
Gopal Vijayaraghavan created HADOOP-16641:
-

 Summary: RPC: Heavy contention on 
Configuration.getClassByNameOrNull 
 Key: HADOOP-16641
 URL: https://issues.apache.org/jira/browse/HADOOP-16641
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Gopal Vijayaraghavan
 Attachments: config-get-class-by-name.png

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589

{code}
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
{code}

This synchronizes all lookups for the same class-loader across all threads and
makes RPC threads yield.

 !config-get-class-by-name.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jeffsaremi commented on issue #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
jeffsaremi commented on issue #1480: HDFS-14857. FS operations fail in HA mode: 
DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#issuecomment-539308184
 
 
   I went back and revised some of the code. It looked like we were trying to
do some really important stuff right inside the performFailover() method. This
didn't feel right, and we didn't want to deal with all sorts of things that
could go wrong. Instead we're doing the minimal here -- making sure the proxy
is nullified -- and then when the retry loop calls getProxy() we deal with the
actual IP resetting.
   This was important since we already had a lazy initializer inside
AbstractNNFailoverProxyProvider and we could just reuse that.
   Also, there seemed to be some overlap between the old way of resetting the
proxy and what AbstractNNFailoverProxyProvider is doing, so we're avoiding too
much redundancy.
   Inside AbstractNNFailoverProxyProvider there are actually two spots where
this could happen:
   createProxyIfNeeded()
   and
   getProxyAddresses()

   Either one could be altered to handle the IP address reset as well; a toy
model of the overall flow follows.
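   
   As a self-contained toy model of that flow (all class and method names
invented for illustration, not the actual HDFS patch):
   
   import java.net.InetSocketAddress;
   import java.util.List;
   
   class LazyFailoverSketch<T> {
     static class ProxyInfo<T> {
       InetSocketAddress address;
       T proxy;                          // null means "re-create on next use"
     }
   
     interface ProxyFactory<T> {
       InetSocketAddress resolve(InetSocketAddress stale); // fresh DNS lookup
       T create(InetSocketAddress address);
     }
   
     private final List<ProxyInfo<T>> proxies;
     private final ProxyFactory<T> factory;
     private int current = 0;
   
     LazyFailoverSketch(List<ProxyInfo<T>> proxies, ProxyFactory<T> factory) {
       this.proxies = proxies;
       this.factory = factory;
     }
   
     // performFailover() does the minimum: invalidate and advance.
     synchronized void performFailover() {
       proxies.get(current).proxy = null;
       current = (current + 1) % proxies.size();
     }
   
     // The retry loop's getProxy() re-resolves the address lazily, which is
     // where a stale IP gets corrected.
     synchronized T getProxy() {
       ProxyInfo<T> info = proxies.get(current);
       if (info.proxy == null) {
         info.address = factory.resolve(info.address);
         info.proxy = factory.create(info.address);
       }
       return info.proxy;
     }
   }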


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Cosss7 closed pull request #1129: HDFS-14509 DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-07 Thread GitBox
Cosss7 closed pull request #1129: HDFS-14509 DN throws InvalidToken due to 
inequality of password when upgrade NN 2.x to 3.x
URL: https://github.com/apache/hadoop/pull/1129
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ChenSammi commented on a change in pull request #1469: HDDS-2034. Async RATIS pipeline creation and destroy through heartbea…

2019-10-07 Thread GitBox
ChenSammi commented on a change in pull request #1469: HDDS-2034. Async RATIS 
pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r332321239
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CreatePipelineCommandHandler.java
 ##
 @@ -0,0 +1,226 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineACKProto;
+import org.apache.hadoop.hdds.ratis.RatisHelper;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommand;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.NotLeaderException;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftGroupId;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.function.Consumer;
+import java.util.stream.Collectors;
+
+/**
+ * Handler for create pipeline command received from SCM.
+ */
+public class CreatePipelineCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(CreatePipelineCommandHandler.class);
+
+  private int invocationCount;
+  private long totalTime;
+
+  /**
+   * Constructs a createPipelineCommand handler.
+   */
+  public CreatePipelineCommandHandler() {
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command   - SCM Command
+   * @param ozoneContainer- Ozone Container.
+   * @param context   - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer ozoneContainer,
+  StateContext context, SCMConnectionManager connectionManager) {
+invocationCount++;
+final long startTime = Time.monotonicNow();
+final DatanodeDetails dn = context.getParent()
+.getDatanodeDetails();
+final CreatePipelineCommandProto createCommand =
+((CreatePipelineCommand)command).getProto();
+final PipelineID pipelineID = PipelineID.getFromProtobuf(
+createCommand.getPipelineID());
+Collection<DatanodeDetails> peers =
+createCommand.getDatanodeList().stream()
+.map(DatanodeDetails::getFromProtoBuf)
+.collect(Collectors.toList());
+
+final CreatePipelineACKProto createPipelineACK =
+CreatePipelineACKProto.newBuilder()
+.setPipelineID(createCommand.getPipelineID())
+.setDatanodeUUID(dn.getUuidString()).build();
+boolean success = 

[GitHub] [hadoop] ChenSammi commented on issue #1469: HDDS-2034. Async RATIS pipeline creation and destroy through heartbea…

2019-10-07 Thread GitBox
ChenSammi commented on issue #1469: HDDS-2034. Async RATIS pipeline creation 
and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-539294113
 
 
   > > I think the purpose of safemode is to guarantee that Ozone cluster is 
ready to provide service to Ozone client once safemode is exited.
   > 
   > @ChenSammi I agree with that. I think the problem occurs with
OneReplicaPipelineSafeModeRule. This rule makes sure that at least one datanode
in the old pipeline is reported so that reads for OPEN containers can go
through. Here I think that old pipelines need to be tracked separately.
   
   OK, I will try to separate the old ones from the new ones.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16614) Missing leveldbjni package of aarch64 platform

2019-10-07 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated HADOOP-16614:
--
Description: 
Currently, Hadoop depends on the *leveldbjni-all:1.8* package of the
*org.fusesource.leveldbjni* group, but it does not support the ARM platform.

see: https://search.maven.org/search?q=g:org.fusesource.leveldbjni

  was:Currently, Hadoop depends on the *leveldbjni-all:1.8* package, but the
ARM platform is not supported.


> Missing leveldbjni package of aarch64 platform
> --
>
> Key: HADOOP-16614
> URL: https://issues.apache.org/jira/browse/HADOOP-16614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
>
> Currently, Hadoop depends on the *leveldbjni-all:1.8* package of the
> *org.fusesource.leveldbjni* group, but it does not support the ARM platform.
> see: https://search.maven.org/search?q=g:org.fusesource.leveldbjni



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-07 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946430#comment-16946430
 ] 

Wei-Chiu Chuang commented on HADOOP-16152:
--

The compilation error doesn't look related. See HDFS-14900

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539289204
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1582 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1582 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1582/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-07 Thread GitBox
christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539286942
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332314620
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is an
+// constant time operation, where as for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I am ok with putting this change in if we can prove that we can do large
list-keys operations. You might want to borrow the DB from @nandakumar131 and
see if you can list keys with this patch, just a thought.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255281
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -645,7 +648,12 @@ public boolean isBucketEmpty(String volume, String bucket)
   @Override
   public List<OmKeyInfo> listKeys(String volumeName, String bucketName,
   String startKey, String keyPrefix, int maxKeys) throws IOException {
+
 List<OmKeyInfo> result = new ArrayList<>();
+if (maxKeys == 0) {
 
 Review comment:
   or <= 0 ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255873
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is an
+// constant time operation, where as for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   How many keys are expected in this cache? and how many in the tree ? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332256331
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is an
+// constant time operation, where as for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I feel that we are better off leaving the old code in place... where we can
read from the DB. At worst, we might have to make sure that the cache is
flushed to the DB before doing the list operation. But practically it may not
matter.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332313947
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-bucketIter = bucketTable.iterator()) {
-  KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   You can consider this comment resolved. Thanks for the explanation. I am 
leaving it open for other reviewers who might want to read this patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332313824
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-bucketIter = bucketTable.iterator()) {
-  KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   > But when we want to list all buckets in /vol2, we will iterate the
entries from the start, reach /vol2 in the cache, and once the maximum count
is reached we return from there.
   
   The architecture of the SkipList prevents us from iterating all the keys.
That is good enough; I was worried that we would walk all the entries. I
missed that we were using a skip-list based map.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
arp7 commented on a change in pull request #1589: HDDS-2244. Use new ReadWrite 
lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332313554
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For all remaining resources, only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
 
 Review comment:
   Why is that?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
arp7 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539279865
 
 
   I looked through the JDK implementation of read-write locks a couple of 
years ago. Even in non-fair mode there is prevention against starvation. HDFS 
uses non-fair mode by default and works well even for very busy Name Nodes.
   
   However we can make the lock fair for now, and evaluate making it non-fair 
later.
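   
   For reference, a minimal sketch of what the fair-mode choice looks like at
construction time, using the plain JDK API rather than the Ozone wrapper:
   
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   class FairLockSketch {
     // 'true' selects fair mode: the longest-waiting thread acquires next,
     // trading some throughput for freedom from starvation.
     private final ReentrantReadWriteLock lock =
         new ReentrantReadWriteLock(true);
   
     void read(Runnable work) {
       lock.readLock().lock();
       try {
         work.run();               // read-only section, shared among readers
       } finally {
         lock.readLock().unlock();
       }
     }
   }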


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332306087
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
 ##
 @@ -47,15 +48,22 @@
public class TableCacheImpl<CACHEKEY extends CacheKey,
    CACHEVALUE extends CacheValue> implements TableCache<CACHEKEY, CACHEVALUE>
{
 
-  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final Map<CACHEKEY, CACHEVALUE> cache;
   private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
   private ExecutorService executorService;
   private CacheCleanupPolicy cleanupPolicy;
 
 
 
   public TableCacheImpl(CacheCleanupPolicy cleanupPolicy) {
-cache = new ConcurrentHashMap<>();
+
+// As for full table cache only we need elements to be inserted in sorted
+// manner, so that list will be easy. For other we can go with Hash map.
+if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
 
 Review comment:
   We do need a solution for this; it's not very good to keep adding special
checks for cache policy NEVER. The code will become fragile.
   
   I think when we first added cache policies you proposed making it an
interface. Let's do that now -- a rough sketch of the idea is below.
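   
   A rough sketch of that interface idea (all names hypothetical, not the
eventual Ozone API): each policy supplies its own backing map, so
TableCacheImpl no longer needs to special-case NEVER.
   
   // Hypothetical sketch: the policy, not the cache, picks the map type.
   interface CacheStorePolicy<K extends Comparable<K>, V> {
     java.util.Map<K, V> createBackingMap();
   }
   
   // Full-table cache (today's NEVER): sorted, so list() iterates in order.
   class FullTableCachePolicy<K extends Comparable<K>, V>
       implements CacheStorePolicy<K, V> {
     public java.util.Map<K, V> createBackingMap() {
       return new java.util.concurrent.ConcurrentSkipListMap<>();
     }
   }
   
   // Partial cache: hash map for O(1) get(); entries evicted after DB flush.
   class PartialTableCachePolicy<K extends Comparable<K>, V>
       implements CacheStorePolicy<K, V> {
     public java.util.Map<K, V> createBackingMap() {
       return new java.util.concurrent.ConcurrentHashMap<>();
     }
   }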


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332305881
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo((((CacheKey) o).key).toString());
 
 Review comment:
   I think the correct fix is to make OzoneTokenIdentifier a Comparable, and 
then enforce that the CacheKey implements Comparable.
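   
   A sketch of that suggestion (hypothetical shape, not the merged change):
constraining the key type to Comparable removes the instanceof check and the
toString() fallback entirely.
   
   // Hypothetical type-safe CacheKey; OzoneTokenIdentifier would then need
   // to implement Comparable<OzoneTokenIdentifier> to satisfy the bound.
   public class CacheKey<KEY extends Comparable<KEY>>
       implements Comparable<CacheKey<KEY>> {
     private final KEY key;
   
     public CacheKey(KEY key) {
       this.key = key;
     }
   
     @Override
     public int compareTo(CacheKey<KEY> other) {
       return key.compareTo(other.key);   // delegate to the key's ordering
     }
   }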


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16640) WASB: Override getCanonicalServiceName() to return full url of WASB filesystem

2019-10-07 Thread Da Zhou (Jira)
Da Zhou created HADOOP-16640:


 Summary: WASB: Override getCanonicalServiceName() to return full 
url of WASB filesystem
 Key: HADOOP-16640
 URL: https://issues.apache.org/jira/browse/HADOOP-16640
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.2
Reporter: Da Zhou


HBase calls getCanonicalServiceName() to check if two FS are the same:
[https://github.com/apache/hbase/blob/10180e232ebf886c9577d77eb91ce64b51564dfc/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java#L117]

This is creating some issues for customers because the current WASB relies on
the default implementation of getCanonicalServiceName() and returns
"ip:port".

We will override getCanonicalServiceName() in WASB to return the full URI of
the filesystem, and this will be configurable; a rough sketch follows.
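
A rough sketch of the proposed override (the guarding field and its
configuration key are assumptions, not the final patch):

{code}
// Hypothetical sketch inside the WASB FileSystem implementation.
@Override
public String getCanonicalServiceName() {
  // Guarded by a (hypothetical) boolean loaded from configuration, so the
  // legacy token-service behaviour stays the default.
  if (useUriAsCanonicalServiceName) {
    return getUri().toString();           // full URI distinguishes filesystems
  }
  return super.getCanonicalServiceName(); // default "ip:port" behaviour
}
{code}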



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16640) WASB: Override getCanonicalServiceName() to return full url of WASB filesystem

2019-10-07 Thread Da Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou reassigned HADOOP-16640:


Assignee: Da Zhou

> WASB: Override getCanonicalServiceName() to return full url of WASB filesystem
> --
>
> Key: HADOOP-16640
> URL: https://issues.apache.org/jira/browse/HADOOP-16640
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> HBase calls getCanonicalServiceName() to check if two FS are the same:
> [https://github.com/apache/hbase/blob/10180e232ebf886c9577d77eb91ce64b51564dfc/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java#L117]
> This is creating some issues for customers because the current WASB relies
> on the default implementation of getCanonicalServiceName() and returns
> "ip:port".
> We will override getCanonicalServiceName() in WASB to return the full URI of
> the filesystem, and this will be configurable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
arp7 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332304634
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
 ##
 @@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+
+/**
+ * Tests OzoneManager MetadataManager.
+ */
+public class TestOmMetadataManager {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneConfiguration ozoneConfiguration;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS,
+folder.getRoot().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+  }
+  @Test
+  public void testListKeys() throws Exception {
+
+String volumeNameA = "volumeA";
+String volumeNameB = "volumeB";
+String ozoneBucket = "ozoneBucket";
+String hadoopBucket = "hadoopBucket";
+
+
+// Create volumes and buckets.
+TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
+TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
+addBucketsToCache(volumeNameA, ozoneBucket);
+addBucketsToCache(volumeNameB, hadoopBucket);
+
+
+String prefixKeyA = "key-a";
+String prefixKeyB = "key-b";
+TreeSet<String> keysASet = new TreeSet<>();
+TreeSet<String> keysBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysASet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+TreeSet<String> keysAVolumeBSet = new TreeSet<>();
+TreeSet<String> keysBVolumeBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysAVolumeBSet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBVolumeBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+// List all keys which have prefix "key-a"
+List<OmKeyInfo> omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+null, prefixKeyA, 100);
+
+Assert.assertEquals(omKeyInfoList.size(),  50);
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+}
+
+
+String startKey = prefixKeyA + 10;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+startKey = prefixKeyA + 38;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+  Assert.assertFalse(omKeyInfo.getBucketName().equals(
+

[GitHub] [hadoop] arp7 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-07 Thread GitBox
arp7 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332303770
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is an
+// constant time operation, where as for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
+  Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+  iterator.next();
+
+  String key = entry.getKey().getCacheKey();
+  OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+  // Making sure that entry in cache is not for delete key request.
+
+  if (omKeyInfo != null) {
+if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
+  cacheKeyMap.put(key, omKeyInfo);
 }
+  } else {
+deletedKeySet.add(key);
+  }
+}
+
+// Get maxKeys from DB if it has.
+
+try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>>
+ keyIter = getKeyTable().iterator()) {
+  KeyValue<String, OmKeyInfo> kv;
+  keyIter.seek(seekKey);
+  // we need to iterate maxKeys + 1 here because if skipStartKey is true,
+  // we should skip that entry and return the result.
+  while (currentCount < maxKeys + 1 && keyIter.hasNext()) {
+kv = keyIter.next();
 if (kv != null && kv.getKey().startsWith(seekPrefix)) {
-  result.add(kv.getValue());
-  currentCount++;
+
+  // Entry should not be marked for delete, consider only those
+  // entries.
+  if(!deletedKeySet.contains(kv.getKey())) {
+cacheKeyMap.put(kv.getKey(), kv.getValue());
+currentCount++;
+  }
 } else {
   // The SeekPrefix does not match any more, we can break out of the
   // loop.
   break;
 }
   }
 }
+
+// Finally DB entries and cache entries are merged, then return the count
+// of maxKeys from the sorted map.
+currentCount = 0;
+
+for (Map.Entry<String, OmKeyInfo> cacheKey : cacheKeyMap.entrySet()) {
 
 Review comment:
   The second iteration is unfortunate. We should see if there is a way to 
avoid it.
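   
   One way to avoid it, sketched under assumptions rather than taken from the
patch: both the TreeMap of cache entries and the RocksDB iterator yield keys
in ascending order, so a classic two-way merge can emit up to maxKeys results
in a single pass, letting the cache entry win when the same key appears on
both sides.
   
   // Hypothetical helper: single-pass merge of two key-sorted iterators,
   // assuming deleted keys were already filtered from the db side.
   final class SortedMergeSketch {
     static <V> java.util.List<V> merge(
         java.util.Iterator<java.util.Map.Entry<String, V>> cache,
         java.util.Iterator<java.util.Map.Entry<String, V>> db,
         int maxKeys) {
       java.util.List<V> out = new java.util.ArrayList<>();
       java.util.Map.Entry<String, V> c = cache.hasNext() ? cache.next() : null;
       java.util.Map.Entry<String, V> d = db.hasNext() ? db.next() : null;
       while (out.size() < maxKeys && (c != null || d != null)) {
         if (d == null
             || (c != null && c.getKey().compareTo(d.getKey()) <= 0)) {
           out.add(c.getValue());                     // cache wins ties
           if (d != null && c.getKey().equals(d.getKey())) {
             d = db.hasNext() ? db.next() : null;     // skip the shadowed row
           }
           c = cache.hasNext() ? cache.next() : null;
         } else {
           out.add(d.getValue());
           d = db.hasNext() ? db.next() : null;
         }
       }
       return out;
     }
   }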


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946364#comment-16946364
 ] 

Hadoop QA commented on HADOOP-16152:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m  
7s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
51s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 51s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-yarn-applications-catalog-webapp in the patch 
passed. {color} |
| {color:red}-1{color} 

[GitHub] [hadoop] hadoop-yetus commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539264930
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/1582 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1582 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1582/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jeffsaremi commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
jeffsaremi commented on a change in pull request #1480: HDFS-14857. FS 
operations fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332286495
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -61,7 +63,10 @@ public ConfiguredFailoverProxyProvider(Configuration conf, 
URI uri,
   }
 
   @Override
-  public  void performFailover(T currentProxy) {
+  public void performFailover(T currentProxy) {
+//reset the IP address in case  the stale IP was the cause for failover
+LOG.info("Resetting cached proxy: " + currentProxyIndex);
+resetProxyAddress(proxies, currentProxyIndex);
 
 Review comment:
   We don't know if the next proxy will have the same issue or not. We just
know that the current proxy has an issue, so we can't just alter the next one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332284284
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String bucket)
     }
     int currentCount = 0;
 
-    try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-        bucketIter = bucketTable.iterator()) {
-      KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-      while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-        kv = bucketIter.next();
-        // Skip the Start Bucket if needed.
-        if (kv != null && skipStartKey &&
-            kv.getKey().equals(startKey)) {
+
+    // For Bucket it is full cache, so we can just iterate in-memory table
+    // cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   To improve this API, I think we should change our cache data structure to 
one that is effective for the read, write, and list APIs alike. (Initially we 
used ConcurrentHashMap, as get() is a constant-time operation, but it has 
caused problems for the list API.) A sketch of the tradeoff is below.
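   
   A self-contained sketch of that tradeoff, using plain JDK maps as stand-ins 
for the table cache (an illustration, not the OM code):
   
   ```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class CacheTradeoffSketch {
  public static void main(String[] args) {
    ConcurrentHashMap<String, String> hash = new ConcurrentHashMap<>();
    ConcurrentSkipListMap<String, String> sorted = new ConcurrentSkipListMap<>();
    for (String k : new String[] {"/vol/buck1", "/vol/buck2", "/vol2/buck1"}) {
      hash.put(k, "bucketInfo");
      sorted.put(k, "bucketInfo");
    }
    // Constant-time point lookup works on both structures:
    System.out.println(hash.get("/vol/buck1"));
    // Prefix listing on the sorted map can jump straight to the prefix,
    // instead of scanning every entry as a hash map would require:
    sorted.tailMap("/vol/").keySet().stream()
        .filter(k -> k.startsWith("/vol/"))
        .forEach(System.out::println);
  }
}
   ```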
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332282739
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String bucket)
     }
     int currentCount = 0;
 
-    try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-        bucketIter = bucketTable.iterator()) {
-      KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-      while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-        kv = bucketIter.next();
-        // Skip the Start Bucket if needed.
-        if (kv != null && skipStartKey &&
-            kv.getKey().equals(startKey)) {
+
+    // For Bucket it is full cache, so we can just iterate in-memory table
+    // cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   There is a maxCount; when we reach that count we return immediately, so in 
that case we do not iterate all buckets in a volume.
   
   BucketTable cache is just a ConcurrentHashMap of all buckets in OM.
   
   So let's take an example: as it is a ConcurrentSkipListMap, it has items 
sorted based on the key.
   
   We have entries in the bucket table cache like below (for brevity, I 
removed the bucketInfo structures which are the values for the keys):
   /vol/buck1
   /vol/buck2
   /vol/buck3
   /vol2/bucket2
   /vol2/bucket3
   /vol2/bucket4
   
   When we want to list buckets of /vol and each call returns only 1 entry 
(maximum count), we return /vol/buck1 and then immediately return; the next 
time listBuckets is called, startKey will be /vol/buck1 and we return 
/vol/buck2 (to return this we iterate 2 entries), and so on.
   
   But when we want to list all buckets in /vol2, we will iterate the entries 
from the start, reach /vol2 in the cache, and once the maximum count is 
reached we return from there.
   
   So, to answer your question, we do not iterate the entire cache map every 
time. (But sometimes we iterate and skip entries, as shown with the vol2 
bucket listing.) A sketch of this iterate-and-skip behavior follows below.
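   
   A sketch of that iterate-and-skip behavior over a sorted cache (assumed 
shape only; the real table cache holds CacheKey/CacheValue wrappers, elided 
here for brevity):
   
   ```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class ListBucketsSketch {
  public static List<String> listBuckets(
      ConcurrentSkipListMap<String, Object> cache, String volumePrefix,
      String startKey, boolean skipStartKey, int maxNumOfBuckets) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, Object> e : cache.entrySet()) {
      String key = e.getKey();
      // Iterate-and-skip: entries are sorted, so anything before the start
      // key or outside the volume's prefix is skipped, not returned.
      if (key.compareTo(startKey) < 0 || !key.startsWith(volumePrefix)) {
        continue;
      }
      if (skipStartKey && key.equals(startKey)) {
        continue;
      }
      result.add(key);
      if (result.size() >= maxNumOfBuckets) {
        break; // maxCount reached: return immediately.
      }
    }
    return result;
  }
}
   ```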
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332281050
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+    if (Objects.equals(key, ((CacheKey) o).key)) {
+      return 0;
+    } else {
+      if (key instanceof String) {
+        return ((String) key).compareTo((String) ((CacheKey) o).key);
+      } else {
+        // If not type string, convert to string and compare.
+        return key.toString().compareTo(((CacheKey) o).key.toString());
 
 Review comment:
   When the CacheKey KEY type is not String: for BucketTable the type is 
String, for TokenTable the type is OzoneTokenIdentifier. As this is a common 
class used by all tables in OM, I added the if/else condition.
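   
   For comparison, a hypothetical type-safe alternative (a sketch only; it 
assumes every key type implements Comparable, which may not hold for 
OzoneTokenIdentifier today, hence the if/else in the actual patch):
   
   ```java
// Bounding KEY to Comparable removes the instanceof/toString fallback.
public class TypedCacheKey<KEY extends Comparable<KEY>>
    implements Comparable<TypedCacheKey<KEY>> {

  private final KEY key;

  public TypedCacheKey(KEY key) {
    this.key = key;
  }

  @Override
  public int compareTo(TypedCacheKey<KEY> other) {
    // Delegates ordering to the key type itself: String for BucketTable,
    // and any other table's key type that defines its own ordering.
    return this.key.compareTo(other.key);
  }
}
   ```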


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-10-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946329#comment-16946329
 ] 

Íñigo Goiri commented on HADOOP-16579:
--

Thanks [~weichiu] for the ping.
I'll try to deploy trunk to see if there's something broken.
We internally use Hadoop 2.9 with ZooKeeper 3.4.8 and there were not many 
issues when switching from 3.4.6.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper which come with Curator, so that there will be only a single 
> ZooKeeper version used runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, what is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop to be compatible with 
> both ZooKeeper 3.4 and 3.5.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer edited a comment on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
anuengineer edited a comment on issue #1589: HDDS-2244. Use new ReadWrite lock 
in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539237587
 
 
   > Right now ActiveLock creates ReadWrite Lock with non-fair mode. Do you 
mean, we want to create the RWLOCK with an option of fair mode. If my 
understanding is wrong, could you let me know what additional things need to be 
implemented?
   
   When you use a reader-writer lock, there is a question of fairness, whereas 
exclusive locks are first come, first served.
   
   > 
   > And also this work is mainly to improve read performance workloads, as now 
with current approach of exclusive lock all reads are serialized.
   
   I am afraid this gives so much importance to Reads that you will have your 
writes getting stalled completely. 
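   
   For reference, a minimal sketch of the fairness option on the JDK lock 
(assumed usage, not the OzoneManager code). In fair mode a waiting writer 
blocks newly arriving readers, which bounds write starvation:
   
   ```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairRwLockSketch {
  // true = fair mode: the longest-waiting thread (reader or writer) is
  // granted the lock next, so readers cannot starve a queued writer forever.
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

  public void read(Runnable body) {
    lock.readLock().lock();
    try {
      body.run();
    } finally {
      lock.readLock().unlock();
    }
  }

  public void write(Runnable body) {
    lock.writeLock().lock();
    try {
      body.run();
    } finally {
      lock.writeLock().unlock();
    }
  }
}
   ```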
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1589: HDDS-2244. Use new 
ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332270946
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+      Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   Yes, it is very confusing. But thanks for the explanation; it makes sense 
now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946301#comment-16946301
 ] 

Hadoop QA commented on HADOOP-16638:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
38s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
19s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m  
1s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m  1s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HADOOP-16638 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982422/HADOOP-16638.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  |
| uname | Linux cd9e6254e8ab 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cfba6ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16577/artifact/out/branch-compile-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16577/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16577/artifact/out/patch-compile-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16577/testReport/ |
| Max. process+thread count | 308 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16577/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: 

[GitHub] [hadoop] anuengineer commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332264370
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String bucket)
     }
     int currentCount = 0;
 
-    try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-        bucketIter = bucketTable.iterator()) {
-      KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-      while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-        kv = bucketIter.next();
-        // Skip the Start Bucket if needed.
-        if (kv != null && skipStartKey &&
-            kv.getKey().equals(startKey)) {
+
+    // For Bucket it is full cache, so we can just iterate in-memory table
+    // cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   Sorry, I was not able to make sure of this; but for each request, do we 
iterate through the whole bucket space here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332257684
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+    if (Objects.equals(key, ((CacheKey) o).key)) {
+      return 0;
+    } else {
+      if (key instanceof String) {
+        return ((String) key).compareTo((String) ((CacheKey) o).key);
+      } else {
+        // If not type string, convert to string and compare.
+        return key.toString().compareTo(((CacheKey) o).key.toString());
 
 Review comment:
   when can this happen? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
bharatviswa504 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539225586
 
 
   > I have an uber question on this patch. How do we ensure that writes will 
not be starved on a resource, since Reads allow multiple of them to get thru at 
the same time? Do we have a mechanism to avoid write starvation in place? if 
not, it is not better to keep simple locks?
   
   Right now ActiveLock creates the ReadWrite lock in non-fair mode. Do you 
mean we want to create the RW lock with an option for fair mode? If my 
understanding is wrong, could you let me know what additional things need to 
be implemented?
   
   And also, this work is mainly to improve read-heavy workloads, as with the 
current approach of an exclusive lock all reads are serialized.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1589: HDDS-2244. Use new 
ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332255791
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+      Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   Here the first resource.name prints VOLUME_LOCK/BUCKET_LOCK; the next 
resourceName prints the actual resource name. (I think it is a little 
confusing here, because the Resource class's name field is defined like that.)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1589: HDDS-2244. Use new 
ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332251450
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+      Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   I am trying to read this debug statement. Do you need to have the resource 
name twice? Once via resource.name and again via resourceName?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add 
Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332253834
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -123,10 +126,17 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
+throw new OMException("Volume not found " + volumeName,
 
 Review comment:
   Same as above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add 
Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332252751
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -117,12 +121,19 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
 
 Review comment:
   Here it should be if 
(!omMetadataManager.getVolumeTable().isExist(volumeName)) right?
   
   And also, we should pass omMetadataManagerImpl.getVolumeKey/getBucketKey, 
not the direct volumeName/bucketName.
   
   As here, if it does not exist, we should return an error?
   
   ```
 /**
  * Check if a given key exists in Metadata store.
  * (Optimization to save on data deserialization)
  * A lock on the key / bucket needs to be acquired before invoking this 
API.
  * @param key metadata key
  * @return true if the metadata store contains a key.
  * @throws IOException on Failure
  */
 boolean isExist(KEY key) throws IOException;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-07 Thread GitBox
bharatviswa504 commented on a change in pull request #1559: HDDS-1737. Add 
Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332253241
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -117,12 +121,19 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
 
 Review comment:
   And also, we can do a little optimization here: first check whether the 
bucket exists; if it does not exist, then check the volume? A sketch of this 
ordering is below.
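   
   A self-contained sketch of that ordering, with plain maps standing in for 
the OM tables (hypothetical, not the actual request code); it also folds in 
the negated isExist check from the comment above:
   
   ```java
import java.io.IOException;
import java.util.Map;

public class ExistenceCheckSketch {
  static void checkBucketThenVolume(Map<String, Object> volumeTable,
      Map<String, Object> bucketTable, String volumeKey, String bucketKey)
      throws IOException {
    // Check the bucket first: in the common case it exists, and we avoid
    // the second lookup for the volume entirely.
    if (!bucketTable.containsKey(bucketKey)) {
      if (!volumeTable.containsKey(volumeKey)) {
        throw new IOException("VOLUME_NOT_FOUND: " + volumeKey);
      }
      throw new IOException("BUCKET_NOT_FOUND: " + bucketKey);
    }
  }
}
   ```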
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] swagle commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-07 Thread GitBox
swagle commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as 
ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539217658
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1606: HDDS-2262. SLEEP_SECONDS: command not found

2019-10-07 Thread GitBox
anuengineer closed pull request #1606: HDDS-2262. SLEEP_SECONDS: command not 
found
URL: https://github.com/apache/hadoop/pull/1606
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1606: HDDS-2262. SLEEP_SECONDS: command not found

2019-10-07 Thread GitBox
anuengineer commented on issue #1606: HDDS-2262. SLEEP_SECONDS: command not 
found
URL: https://github.com/apache/hadoop/pull/1606#issuecomment-539216038
 
 
   Thank you for the contribution. I have committed this patch to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1605: HDDS-2259. Container Data Scrubber computes wrong checksum

2019-10-07 Thread GitBox
anuengineer closed pull request #1605: HDDS-2259. Container Data Scrubber 
computes wrong checksum
URL: https://github.com/apache/hadoop/pull/1605
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1605: HDDS-2259. Container Data Scrubber computes wrong checksum

2019-10-07 Thread GitBox
anuengineer commented on issue #1605: HDDS-2259. Container Data Scrubber 
computes wrong checksum
URL: https://github.com/apache/hadoop/pull/1605#issuecomment-539214958
 
 
   +1. LGTM. Thank you for fixing this very important issue. I have committed 
this patch to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16639) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor resolved HADOOP-16639.
-
Resolution: Won't Fix

Belong in HDFS project

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HADOOP-16639
> URL: https://issues.apache.org/jira/browse/HADOOP-16639
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16639) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946250#comment-16946250
 ] 

David Mollitor edited comment on HADOOP-16639 at 10/7/19 9:20 PM:
--

Belongs in HDFS project


was (Author: belugabehr):
Belong in HDFS project

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HADOOP-16639
> URL: https://issues.apache.org/jira/browse/HADOOP-16639
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15956) Use relative resource URLs across WebUI components

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-15956:

Status: Open  (was: Patch Available)

> Use relative resource URLs across WebUI components
> --
>
> Key: HADOOP-15956
> URL: https://issues.apache.org/jira/browse/HADOOP-15956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: HADOOP-15956.001.patch
>
>
> Similar to HDFS-12961 there are absolute paths used for static resources in 
> the WebUI for HDFS & KMS which can cause issues when attempting to access 
> these pages via a reverse proxy. Using relative paths in all WebUI components 
> will allow pages to render properly when using a reverse proxy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1607: HDDS-2264. Improve output of TestOzoneContainer

2019-10-07 Thread GitBox
anuengineer closed pull request #1607: HDDS-2264. Improve output of 
TestOzoneContainer
URL: https://github.com/apache/hadoop/pull/1607
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1607: HDDS-2264. Improve output of TestOzoneContainer

2019-10-07 Thread GitBox
anuengineer commented on issue #1607: HDDS-2264. Improve output of 
TestOzoneContainer
URL: https://github.com/apache/hadoop/pull/1607#issuecomment-539207650
 
 
   +1. LGTM. I have committed this patch to the trunk. Thank you for the 
contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16639) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)
David Mollitor created HADOOP-16639:
---

 Summary: Use Relative URLS in Hadoop HDFS HTTP FS
 Key: HADOOP-16639
 URL: https://issues.apache.org/jira/browse/HADOOP-16639
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: httpfs
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Status: Patch Available  (was: Open)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Flags: Patch

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch
>
>







[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Attachment: HADOOP-16638.1.patch

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch
>
>







[GitHub] [hadoop] anuengineer commented on issue #1590: HDDS-2238. Container Data Scrubber spams log in empty cluster

2019-10-07 Thread GitBox
anuengineer commented on issue #1590: HDDS-2238. Container Data Scrubber spams 
log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539204102
 
 
   Thank you for the contribution. I have committed this change to the trunk 
branch.





[GitHub] [hadoop] anuengineer closed pull request #1590: HDDS-2238. Container Data Scrubber spams log in empty cluster

2019-10-07 Thread GitBox
anuengineer closed pull request #1590: HDDS-2238. Container Data Scrubber spams 
log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1590: HDDS-2238. Container Data Scrubber spams log in empty cluster

2019-10-07 Thread GitBox
anuengineer commented on issue #1590: HDDS-2238. Container Data Scrubber spams 
log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539203108
 
 
   +1. LGTM. There is one thing that is not very clear to me: why add the container here? It is not an issue, but I am not sure I understand the benefit either.





[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Summary: Use Relative URLs in Hadoop KMS WebApps  (was: Use Relative URLs 
in Hadoop KMS)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>







[jira] [Created] (HADOOP-16638) Use Relative URLs in Hadoop KMS

2019-10-07 Thread David Mollitor (Jira)
David Mollitor created HADOOP-16638:
---

 Summary: Use Relative URLs in Hadoop KMS
 Key: HADOOP-16638
 URL: https://issues.apache.org/jira/browse/HADOOP-16638
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: kms
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor









[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-07 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16152:

Attachment: HADOOP-16152.006.patch
Status: Patch Available  (was: In Progress)

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211






[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-07 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16152:

Status: In Progress  (was: Patch Available)

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211






[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-07 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946216#comment-16946216
 ] 

Siyao Meng commented on HADOOP-16152:
-

rev 006 changelog: addresses checkstyle, whitespace and findbugs issues.

Apart from the ZK unit test failures, *TestRenameWithSnapshots#testRename2PreDescendant* is also unrelated (it failed even before this patch; some other commit probably broke it).

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.v1.patch
>
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211






[GitHub] [hadoop] hadoop-yetus commented on issue #1611: Hadoop 16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1611: Hadoop 16612 Track Azure Blob File 
System client-perceived latency
URL: https://github.com/apache/hadoop/pull/1611#issuecomment-539195753
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1087 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 793 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 51 | trunk passed |
   | -0 | patch | 79 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-azure: The patch generated 71 
new + 5 unchanged - 0 fixed = 76 total (was 5) |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 777 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 56 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 84 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3283 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1611 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8da536829358 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 382967b |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] swagle commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-07 Thread GitBox
swagle commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as 
ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539184469
 
 
   /retest





[GitHub] [hadoop] adoroszlai commented on issue #1590: HDDS-2238. Container Data Scrubber spams log in empty cluster

2019-10-07 Thread GitBox
adoroszlai commented on issue #1590: HDDS-2238. Container Data Scrubber spams 
log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539179385
 
 
   @anuengineer please review, too





[GitHub] [hadoop] jeeteshm opened a new pull request #1611: Hadoop 16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
jeeteshm opened a new pull request #1611: Hadoop 16612 Track Azure Blob File 
System client-perceived latency
URL: https://github.com/apache/hadoop/pull/1611
 
 
   Add instrumentation code to measure the ADLS Gen 2 API performance
   Add a feature switch to optionally enable this feature
   Add unit tests for correctness and performance
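   For illustration, client-perceived latency instrumentation of this general shape could look like the sketch below (names are assumed for illustration; this is not the actual HADOOP-16612 code):
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: record how long each store call takes as seen by the client,
// so the previous operation's latency can be attached to the next request.
class LatencyTracker {
  private final AtomicLong lastLatencyMs = new AtomicLong(-1);

  <V> V time(Callable<V> op) throws Exception {
    long start = System.nanoTime();
    try {
      return op.call();
    } finally {
      lastLatencyMs.set((System.nanoTime() - start) / 1_000_000L);
    }
  }

  long getAndResetLastLatency() {
    return lastLatencyMs.getAndSet(-1);
  }
}
{code}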





[GitHub] [hadoop] jeeteshm commented on issue #1569: HADOOP-16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
jeeteshm commented on issue #1569: HADOOP-16612 Track Azure Blob File System 
client-perceived latency
URL: https://github.com/apache/hadoop/pull/1569#issuecomment-539169875
 
 
   Raising a new PR because this one shows changes that don't belong to me.





[GitHub] [hadoop] jeeteshm opened a new pull request #1569: HADOOP-16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
jeeteshm opened a new pull request #1569: HADOOP-16612 Track Azure Blob File 
System client-perceived latency
URL: https://github.com/apache/hadoop/pull/1569
 
 
   Add instrumentation code to measure the ADLS Gen 2 API performance
   Add a feature switch to optionally enable this feature
   Add unit tests for correctness and performance 
   





[GitHub] [hadoop] jeeteshm closed pull request #1569: HADOOP-16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
jeeteshm closed pull request #1569: HADOOP-16612 Track Azure Blob File System 
client-perceived latency
URL: https://github.com/apache/hadoop/pull/1569
 
 
   








[GitHub] [hadoop] hadoop-yetus commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1610: HDDS-1868. Ozone pipelines should be 
marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539167261
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 945 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 48 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 25 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 27 | hadoop-hdds: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2438 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1610 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 5d4b009da9f3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9685a6c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1569: HADOOP-16612 Track Azure Blob File System client-perceived latency

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1569: HADOOP-16612 Track Azure Blob File 
System client-perceived latency
URL: https://github.com/apache/hadoop/pull/1569#issuecomment-539166965
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 12 | https://github.com/apache/hadoop/pull/1569 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1569 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1569/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on a change in pull request #1608: HDDS-2265. integration.sh may report false negative

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1608: HDDS-2265. 
integration.sh may report false negative
URL: https://github.com/apache/hadoop/pull/1608#discussion_r332190504
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
 ##
 @@ -45,6 +45,11 @@ grep -A1 'Crashed tests' "${REPORT_DIR}/output.log" \
   | cut -f2- -d' ' \
   | sort -u >> "${REPORT_DIR}/summary.txt"
 
+## Check if Maven was killed
+if grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"; then
 
 Review comment:
   So we are presuming that "Killed" will never show up in a test's output? :) I am fine with that.
   +1. LGTM.





[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-10-07 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946154#comment-16946154
 ] 

Wei-Chiu Chuang commented on HADOOP-16579:
--

Thank you, Norbert. Our unit tests don't actually run anything useful when 
you update only the pom.xml file.

We probably want to update to ZooKeeper 3.5.6, assuming it has no 
performance regression (we depend heavily on ZooKeeper for KMS and 
Router Based Federation) and stays backward compatible. We can also gradually 
get rid of netty3.

[~inigoiri] FYI, like I mentioned, RBF has a heavy dependency on ZK, so you 
should care about this.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper that comes with Curator, so that there will be only a single 
> ZooKeeper version used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop compatible with 
> both ZooKeeper 3.4 and 3.5.






[GitHub] [hadoop] anuengineer commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1586: HDDS-2240. Command 
line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r332187600
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -1097,11 +1097,34 @@ message UpdateGetS3SecretRequest {
 required string awsSecret = 2;
 }
 
+message OMServiceId {
+required string serviceID = 1;
+}
+
+/**
+  This proto is used to define the OM node Id and its ratis server state.
+*/
+message RoleInfo {
+required string omNodeID = 1;
+required string ratisServerRole = 2;
+}
+
+/**
+  This is used to get the Server States of OMs.
+*/
+message ServiceState {
+repeated RoleInfo roleInfos = 1;
+}
+
 /**
  The OM service that takes care of Ozone namespace.
 */
 service OzoneManagerService {
 // A client-to-OM RPC to send client requests to OM Ratis server
 rpc submitRequest(OMRequest)
   returns(OMResponse);
+
+// A client-to-OM RPC to get ratis server states of OMs
 
 Review comment:
   Also just wondering: is there a reason to add a new RPC and not use the submitRequest pattern -- that is, add the message to the OMRequest / OMResponse pattern?





[GitHub] [hadoop] anuengineer commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.

2019-10-07 Thread GitBox
anuengineer commented on a change in pull request #1586: HDDS-2240. Command 
line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r332186630
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -1097,11 +1097,34 @@ message UpdateGetS3SecretRequest {
 required string awsSecret = 2;
 }
 
+message OMServiceId {
+required string serviceID = 1;
+}
+
+/**
+  This proto is used to define the OM node Id and its ratis server state.
+*/
+message RoleInfo {
+required string omNodeID = 1;
+required string ratisServerRole = 2;
+}
+
+/**
+  This is used to get the Server States of OMs.
+*/
+message ServiceState {
+repeated RoleInfo roleInfos = 1;
+}
+
 /**
  The OM service that takes care of Ozone namespace.
 */
 service OzoneManagerService {
 // A client-to-OM RPC to send client requests to OM Ratis server
 rpc submitRequest(OMRequest)
   returns(OMResponse);
+
+// A client-to-OM RPC to get ratis server states of OMs
+rpc getServiceState(OMServiceId)
+  returns(ServiceState);
 }
 
 Review comment:
   What is the use case for this RPC? Is it used only from the OM HA tool, or is it something that clients need to know when communicating with the OM? If clients need this info, they already make a call named getServiceList, and this info would be more appropriate as part of that call. Then both clients and tools like the OM HA CLI can get to that information pretty easily.
   





[jira] [Assigned] (HADOOP-16637) Fix findbugs warnings in hadoop-cos

2019-10-07 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HADOOP-16637:
-

Assignee: YiSheng Lien

> Fix findbugs warnings in hadoop-cos
> ---
>
> Key: HADOOP-16637
> URL: https://issues.apache.org/jira/browse/HADOOP-16637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/cos
>Reporter: Akira Ajisaka
>Assignee: YiSheng Lien
>Priority: Major
>
> qbt report: 
> https://lists.apache.org/thread.html/ab1ea4ac6590061cfb2f89183f33f97e92da0e68e67657dbfbda862f@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
>module:hadoop-cloud-storage-project/hadoop-cos
>Redundant nullcheck of dir, which is known to be non-null in 
> org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check 
> at BufferPool.java:is known to be non-null in 
> org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check 
> at BufferPool.java:[line 66]
>org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
> expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
> At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
> CosNInputStream.java:[line 87]
>Found reliance on default encoding in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
> byte[]):in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
> byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
>Found reliance on default encoding in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
> InputStream, byte[], long):in 
> org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
> InputStream, byte[], long): new String(byte[]) At 
> CosNativeFileSystemStore.java:[line 178]
>org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
> String, String, int) may fail to clean up java.io.InputStream Obligation to 
> clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
> java.io.InputStream Obligation to clean up resource created at 
> CosNativeFileSystemStore.java:[line 252] is not discharged
> {noformat}
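For reference, the usual shape of the fix for these two findbugs patterns is sketched below (a minimal, self-contained example; class and method names are made up, and this is not the actual hadoop-cos patch):
{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

class CosFindbugsFixSketch {
  // "Found reliance on default encoding": name the charset explicitly
  // instead of calling new String(byte[]).
  static String decode(byte[] raw) {
    return new String(raw, StandardCharsets.UTF_8);
  }

  // "may fail to clean up java.io.InputStream": try-with-resources closes
  // the stream even when the upload call throws.
  static void uploadPart(File file) throws IOException {
    try (InputStream in = new FileInputStream(file)) {
      // hand `in` to the COS SDK upload call here
    }
  }
}
{code}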






[GitHub] [hadoop] hadoop-yetus commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.

2019-10-07 Thread GitBox
hadoop-yetus commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus 
scans for directories-only still does a HEAD.
URL: https://github.com/apache/hadoop/pull/1601#issuecomment-539131837
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1070 | trunk passed |
   | +1 | compile | 35 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 59 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 780 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 90 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3296 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1601 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 18c4809af70f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1a77a15 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/3/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] swagle opened a new pull request #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-07 Thread GitBox
swagle opened a new pull request #1610: HDDS-1868. Ozone pipelines should be 
marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610
 
 
   Ozone pipelines, on restart or when first created, start in the ALLOCATED state. They are moved into the OPEN state after all the datanodes in the pipeline have reported to it. However, this can potentially lead to an issue where the pipeline is still not ready to accept any incoming IO operations.
   
   The pipelines should be marked as ready only after the leader election is complete and the leader is ready to accept incoming IO.
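   A rough sketch of the gating described above (all names here are illustrative, not the actual patch):
{code}
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Illustrative only: a pipeline opens when every member has reported AND
// one of the members claims Ratis leadership, i.e. election has completed.
class PipelineReadySketch {
  enum State { ALLOCATED, OPEN }

  private final Set<UUID> expected;                    // datanodes in the pipeline
  private final Set<UUID> reported = new HashSet<>();
  private UUID leaderId;
  private State state = State.ALLOCATED;

  PipelineReadySketch(Set<UUID> members) {
    this.expected = members;
  }

  void onReport(UUID datanode, boolean isLeader) {
    reported.add(datanode);
    if (isLeader) {
      leaderId = datanode;
    }
    if (reported.containsAll(expected) && leaderId != null) {
      state = State.OPEN;                              // ready for client IO
    }
  }
}
{code}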





[GitHub] [hadoop] ashvina commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
ashvina commented on a change in pull request #1480: HDFS-14857. FS operations 
fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332153348
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -76,13 +81,7 @@ synchronized void incrementProxyIndex() {
   @Override
   public synchronized void close() throws IOException {
 for (ProxyInfo<T> proxy : proxies) {
-  if (proxy.proxy != null) {
-if (proxy.proxy instanceof Closeable) {
-  ((Closeable)proxy.proxy).close();
-} else {
-  RPC.stopProxy(proxy.proxy);
-}
-  }
+  stopProxy(proxy.proxy);
 
 Review comment:
   The behavior is changing here. Earlier, an `IOException` was thrown in case of an error; now it becomes a `RuntimeException`. I think this change is not desired.
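   For comparison, keeping the old checked-exception contract would look roughly like this (a sketch based on the code removed above, inside the same class):
{code}
protected void stopProxy(T proxy) throws IOException {
  if (proxy != null) {
    if (proxy instanceof Closeable) {
      ((Closeable) proxy).close();   // propagates IOException, as before
    } else {
      RPC.stopProxy(proxy);
    }
  }
}
{code}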





[GitHub] [hadoop] ashvina commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
ashvina commented on a change in pull request #1480: HDFS-14857. FS operations 
fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332153687
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -93,4 +92,34 @@ public synchronized void close() throws IOException {
   public boolean useLogicalURI() {
 return true;
   }
+
+  /**
+   * Resets the NameNode proxy address in case it's stale
+   */
+  protected void resetProxyAddress(List<NNProxyInfo<T>> proxies, int index) {
+try {
+  stopProxy(proxies.get(index).proxy);
 
 Review comment:
   This may result in an NPE if `performFailover` is invoked multiple times.





[GitHub] [hadoop] ashvina commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
ashvina commented on a change in pull request #1480: HDFS-14857. FS operations 
fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332153884
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -93,4 +92,34 @@ public synchronized void close() throws IOException {
   public boolean useLogicalURI() {
 return true;
   }
+
+  /**
+   * Resets the NameNode proxy address in case it's stale
+   */
+  protected void resetProxyAddress(List<NNProxyInfo<T>> proxies, int index) {
+try {
+  stopProxy(proxies.get(index).proxy);
+  InetSocketAddress oldAddress = proxies.get(index).getAddress();
 
 Review comment:
   `proxies.get(index)` can be abstracted
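   That is, something along these lines (same behavior, single lookup; a sketch only):
{code}
NNProxyInfo<T> current = proxies.get(index);
stopProxy(current.proxy);
InetSocketAddress oldAddress = current.getAddress();
{code}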





[GitHub] [hadoop] ashvina commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
ashvina commented on a change in pull request #1480: HDFS-14857. FS operations 
fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332149198
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -93,4 +92,34 @@ public synchronized void close() throws IOException {
   public boolean useLogicalURI() {
 return true;
   }
+
+  /**
+   * Resets the NameNode proxy address in case it's stale
+   */
+  protected void resetProxyAddress(List<NNProxyInfo<T>> proxies, int index) {
+try {
+  stopProxy(proxies.get(index).proxy);
+  InetSocketAddress oldAddress = proxies.get(index).getAddress();
+  InetSocketAddress address = NetUtils.createSocketAddr(
+  oldAddress.getHostName() + ":" + oldAddress.getPort());
+  LOG.debug("oldAddress {}, newAddress {}", oldAddress, address);
+  proxies.set(index, new NNProxyInfo<T>(address));
+} catch (Exception e) {
+  throw new RuntimeException("Could not refresh NN address", e);
+}
+  }
+
+  protected void stopProxy(T proxy) {
 
 Review comment:
   Please add javadoc for non-private methods.





[GitHub] [hadoop] ashvina commented on a change in pull request #1480: HDFS-14857. FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-07 Thread GitBox
ashvina commented on a change in pull request #1480: HDFS-14857. FS operations 
fail in HA mode: DataNode fails to connect to NameNode
URL: https://github.com/apache/hadoop/pull/1480#discussion_r332152664
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
 ##
 @@ -61,7 +63,10 @@ public ConfiguredFailoverProxyProvider(Configuration conf, 
URI uri,
   }
 
   @Override
-  public  void performFailover(T currentProxy) {
+  public void performFailover(T currentProxy) {
+// reset the IP address in case the stale IP was the cause for failover
+LOG.info("Resetting cached proxy: " + currentProxyIndex);
+resetProxyAddress(proxies, currentProxyIndex);
 
 Review comment:
   IIUC, this call will reset the address of the proxy that will not be used by the caller of `getProxy`. The next instruction will change the proxy index, and the new proxy may still have an old IP address. Would it be better to reset the IP of the proxy, if needed, after the proxy index has changed?
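   In other words, something like this ordering instead (a sketch reusing the methods from this patch):
{code}
@Override
public synchronized void performFailover(T currentProxy) {
  incrementProxyIndex();                 // switch to the next NameNode first
  // then re-resolve the address of the proxy we are about to hand out,
  // so a stale IP on the *new* target is refreshed as well
  resetProxyAddress(proxies, currentProxyIndex);
}
{code}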





[GitHub] [hadoop] adoroszlai commented on issue #1608: HDDS-2265. integration.sh may report false negative

2019-10-07 Thread GitBox
adoroszlai commented on issue #1608: HDDS-2265. integration.sh may report false 
negative
URL: https://github.com/apache/hadoop/pull/1608#issuecomment-539122084
 
 
   @elek please review (let me know if I should retest until we hit the case 
when integration test gets killed)





[GitHub] [hadoop] adoroszlai commented on issue #1600: HDDS-2239. Fix TestOzoneFsHAUrls

2019-10-07 Thread GitBox
adoroszlai commented on issue #1600: HDDS-2239. Fix TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-539121038
 
 
   Thanks @bharatviswa504 for reporting the problem, and for reviewing and 
committing the fix.





[GitHub] [hadoop] bharatviswa504 merged pull request #1600: HDDS-2239. Fix TestOzoneFsHAUrls

2019-10-07 Thread GitBox
bharatviswa504 merged pull request #1600: HDDS-2239. Fix TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600
 
 
   





[GitHub] [hadoop] avijayanhwx commented on issue #1604: HDDS-2254 : Fix flaky unit test TestContainerStateMachine#testRatisSnapshotRetention

2019-10-07 Thread GitBox
avijayanhwx commented on issue #1604: HDDS-2254 : Fix flaky unit test 
TestContainerStateMachine#testRatisSnapshotRetention
URL: https://github.com/apache/hadoop/pull/1604#issuecomment-539104971
 
 
   > I think flakiness could be reduced more by executing `init()` and 
`shutdown()` per test case (`@Before`, etc.). In that case you could keep the 
assertion.
   
   Yes I agree. However, I was hoping to avoid setting up a MiniOzoneCluster 
again for a small test. Let me think about it and get back.
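   For reference, the per-test lifecycle suggested above would look like this in JUnit 4 (a sketch; cluster configuration details omitted):
{code}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.junit.After;
import org.junit.Before;

public class TestContainerStateMachineSketch {
  private MiniOzoneCluster cluster;

  @Before
  public void init() throws Exception {
    // a fresh cluster per test case, so each test starts from a known state
    cluster = MiniOzoneCluster.newBuilder(new OzoneConfiguration()).build();
    cluster.waitForClusterToBeReady();
  }

  @After
  public void shutdown() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}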





[GitHub] [hadoop] avijayanhwx commented on issue #1604: HDDS-2254 : Fix flaky unit test TestContainerStateMachine#testRatisSnapshotRetention

2019-10-07 Thread GitBox
avijayanhwx commented on issue #1604: HDDS-2254 : Fix flaky unit test 
TestContainerStateMachine#testRatisSnapshotRetention
URL: https://github.com/apache/hadoop/pull/1604#issuecomment-539104507
 
 
   > Does this change also address assertion failure on line # 188?
   
   @swagle I could not reproduce any failure on L#188. I will continue to try 
different experiments.





[GitHub] [hadoop] anuengineer commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and refactor the code to the place where it used

2019-10-07 Thread GitBox
anuengineer commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place where it used
URL: https://github.com/apache/hadoop/pull/1596#issuecomment-539102507
 
 
   +1. LGTM. I will wait for others to finish commenting before committing this.




