[jira] [Assigned] (HDDS-1968) Add an endpoint in SCM to publish unhealthy/missing containers.

2019-08-20 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1968:
-

Assignee: Aravindan Vijayan

Will be moving this to v2 since it involves:
- Ability to take a RocksDB snapshot of SCM (a minimal sketch follows below)
- API to serve snapshots to Recon over RPC/HTTP
- Persist container state in the SCM RocksDB
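
A minimal sketch of the snapshot piece, assuming the org.rocksdb Java API already on SCM's classpath; the class and method names below are illustrative, not the actual SCM code:

{code:java}
import org.rocksdb.Checkpoint;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class ScmDbSnapshotSketch {

  private ScmDbSnapshotSketch() {
  }

  /**
   * Creates an on-disk checkpoint (hard-linked snapshot) of the given RocksDB
   * instance under snapshotDir. Recon could then fetch that directory over
   * rpc/http and open it read-only.
   */
  public static void createSnapshot(RocksDB db, String snapshotDir)
      throws RocksDBException {
    try (Checkpoint checkpoint = Checkpoint.create(db)) {
      checkpoint.createCheckpoint(snapshotDir);
    }
  }
}
{code}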

> Add an endpoint in SCM to publish unhealthy/missing containers.
> ---
>
> Key: HDDS-1968
> URL: https://issues.apache.org/jira/browse/HDDS-1968
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-20 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911779#comment-16911779
 ] 

Eric Yang commented on HDFS-2470:
-

[~swagle] Thank you for the patch.  Patch 08 looks good to me.  Pending Jenkins 
results.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.
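
For illustration only, a hedged sketch of what enforcing a configured permission on a storage directory could look like using plain java.nio; the DN-side key dfs.datanode.data.dir.perm exists today, while this helper and any NN-side key name are hypothetical and not the attached patch:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public final class NameDirPermissionSketch {

  private NameDirPermissionSketch() {
  }

  /** Creates the directory if needed and applies the configured permission. */
  public static void enforcePermission(String dir, String perm)
      throws IOException {
    Path path = Paths.get(dir);
    Files.createDirectories(path);
    // e.g. perm = "rwx------" (700), mirroring the DN's dfs.datanode.data.dir.perm
    Files.setPosixFilePermissions(path,
        PosixFilePermissions.fromString(perm));
  }
}
{code}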



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1873) Recon should store last successful run timestamp for each task

2019-08-20 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1873:
-

Assignee: Siddharth Wagle

> Recon should store last successful run timestamp for each task
> --
>
> Key: HDDS-1873
> URL: https://issues.apache.org/jira/browse/HDDS-1873
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Siddharth Wagle
>Priority: Major
>
> Recon should store the timestamp of the last Ozone Manager snapshot it received, along with 
> the timestamp of the last successful run for each task.
> This is important to give users a sense of how current the data they are looking at is. We 
> need this per task because some tasks might fail to run, or might take much longer to run 
> than other tasks, and this needs to be reflected in the UI for a better and more consistent 
> user experience.
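
A minimal illustration of the bookkeeping described above, with hypothetical names rather than Recon's actual task framework:

{code:java}
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Tracks the last successful run per Recon task and the last OM snapshot time. */
public class ReconTaskTimestamps {

  private final Map<String, Instant> lastSuccessfulRun = new ConcurrentHashMap<>();
  private volatile Instant lastOmSnapshotReceived = Instant.EPOCH;

  public void markTaskSuccess(String taskName) {
    lastSuccessfulRun.put(taskName, Instant.now());
  }

  public void markOmSnapshotReceived() {
    lastOmSnapshotReceived = Instant.now();
  }

  public Instant getLastSuccessfulRun(String taskName) {
    return lastSuccessfulRun.getOrDefault(taskName, Instant.EPOCH);
  }

  public Instant getLastOmSnapshotReceived() {
    return lastOmSnapshotReceived;
  }
}
{code}

Persisting this bookkeeping (e.g. in Recon's SQL store) rather than keeping it only in memory is what lets the UI show the values across restarts.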



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298271=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298271
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:34
Start Date: 20/Aug/19 22:34
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315932118
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
 ##
 @@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.RequestContext;
+
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.GROUP;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
+
+/**
+ * Helper class for ozone acls operations.
+ */
+public final class OzoneAclUtil {
+
+  private OzoneAclUtil(){
+  }
+
+  /**
+   * Helper function to get the access acl list for the current user.
+   *
+   * @param userName name of the user
+   * @param userGroups groups the user belongs to
+   * @return list of OzoneAcls
+   */
+  public static List<OzoneAcl> getAclList(String userName,
+      List<String> userGroups, ACLType userRights, ACLType groupRights) {
+
+    List<OzoneAcl> listOfAcls = new ArrayList<>();
+
+    // User ACL.
+    listOfAcls.add(new OzoneAcl(USER, userName, userRights, ACCESS));
+    if (userGroups != null) {
+      // Group ACLs of the User.
+      userGroups.forEach((group) -> listOfAcls.add(
+          new OzoneAcl(GROUP, group, groupRights, ACCESS)));
+    }
+    return listOfAcls;
+  }
+
+  /**
+   * Check whether the acl right requested for the given RequestContext
+   * exists in the provided acl list.
+   * Acl validation rules:
+   * 1. If the user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If the user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other rights, the individual bits must be set.
+   *
+   * @param acls list of acls to check against
+   * @param context request context carrying the user and the requested right
+   * @return true if the acl list contains the right requested in the context.
+   */
+  public static boolean checkAclRight(List<OzoneAcl> acls,
+      RequestContext context) throws OMException {
+    String[] userGroups = context.getClientUgi().getGroupNames();
+    String userName = context.getClientUgi().getUserName();
+    ACLType aclToCheck = context.getAclRights();
+    for (OzoneAcl a : acls) {
+      if (checkAccessInAcl(a, userGroups, userName, aclToCheck)) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  // Returns true if this acl entry applies to the user (matching user name,
+  // a matching group, or any other identity type) and grants the requested bit.
+  private static boolean checkAccessInAcl(OzoneAcl a, String[] groups,
+      String username, ACLType aclToCheck) {
+    BitSet rights = a.getAclBitSet();
+    switch (a.getType()) {
+    case USER:
+      if (a.getName().equals(username)) {
+        return checkIfAclBitIsSet(aclToCheck, rights);
+      }
+      break;
+    case GROUP:
+      for (String grp : groups) {
+        if (a.getName().equals(grp)) {
+          return checkIfAclBitIsSet(aclToCheck, rights);
+        }
+      }
+      break;
+
+    default:
+      return checkIfAclBitIsSet(aclToCheck, rights);
+    }
+    return false;
+  }
+
+  /**
+   * Check if acl right requested for given 
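
(The quoted snippet is cut off above.) A small usage sketch of the helper it introduces, assuming the getAclList signature shown in the snippet; the user and group names are made up:

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.om.helpers.OzoneAclUtil;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

public final class OzoneAclUtilUsageSketch {

  public static void main(String[] args) {
    // The user gets ALL, each group gets READ, all at ACCESS scope.
    List<OzoneAcl> acls = OzoneAclUtil.getAclList(
        "alice", Arrays.asList("hadoop", "ozone"),
        ACLType.ALL, ACLType.READ);
    System.out.println(acls.size()); // 3: one user ACL plus two group ACLs
  }
}
{code}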

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298264=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298264
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:26
Start Date: 20/Aug/19 22:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315929987
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
 ##
 @@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.RequestContext;
+
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.GROUP;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
+
+/**
+ * Helper class for ozone acls operations.
+ */
+public final class OzoneAclUtil {
+
+  private OzoneAclUtil(){
+  }
+
+  /**
+   * Helper function to get access acl list for current user.
+   *
+   * @param userName
+   * @param userGroups
+   * @return list of OzoneAcls
+   * */
+  public static List<OzoneAcl> getAclList(String userName,
+      List<String> userGroups, ACLType userRights, ACLType groupRights) {
+
+    List<OzoneAcl> listOfAcls = new ArrayList<>();
+
+// User ACL.
+listOfAcls.add(new OzoneAcl(USER, userName, userRights, ACCESS));
+if(userGroups != null) {
+  // Group ACLs of the User.
+  userGroups.forEach((group) -> listOfAcls.add(
+  new OzoneAcl(GROUP, group, groupRights, ACCESS)));
+}
+return listOfAcls;
+  }
+
+  /**
+   * Check if acl right requested for given RequestContext exist
+   * in provided acl list.
+   * Acl validation rules:
+   * 1. If the user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If the user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other individual rights individual bits should be set.
+   *
+   * @param acls
+   * @param context
+   * @return true if the acl list contains the right requested in the context.
+   * */
+  public static boolean checkAclRight(List<OzoneAcl> acls,
+  RequestContext context) throws OMException {
+String[] userGroups = context.getClientUgi().getGroupNames();
+String userName = context.getClientUgi().getUserName();
+ACLType aclToCheck = context.getAclRights();
+for (OzoneAcl a : acls) {
+  if(checkAccessInAcl(a, userGroups, userName, aclToCheck)) {
+return true;
+  }
+}
+return false;
+  }
+
+  private static boolean checkAccessInAcl(OzoneAcl a, String[] groups,
+  String username, ACLType aclToCheck) {
+BitSet rights = a.getAclBitSet();
+switch (a.getType()) {
+case USER:
+  if (a.getName().equals(username)) {
+return checkIfAclBitIsSet(aclToCheck, rights);
+  }
+  break;
+case GROUP:
+  for (String grp : groups) {
+if (a.getName().equals(grp)) {
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+  }
+  break;
+
+default:
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+return false;
+  }
+
+  /**
+   * Check if acl right requested for 

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298263=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298263
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:25
Start Date: 20/Aug/19 22:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315929710
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
 ##
 @@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.RequestContext;
+
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.GROUP;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
+
+/**
+ * Helper class for ozone acls operations.
+ */
+public final class OzoneAclUtil {
+
+  private OzoneAclUtil(){
+  }
+
+  /**
+   * Helper function to get access acl list for current user.
+   *
+   * @param userName
+   * @param userGroups
+   * @return list of OzoneAcls
+   * */
+  public static List<OzoneAcl> getAclList(String userName,
+      List<String> userGroups, ACLType userRights, ACLType groupRights) {
+
+    List<OzoneAcl> listOfAcls = new ArrayList<>();
+
+// User ACL.
+listOfAcls.add(new OzoneAcl(USER, userName, userRights, ACCESS));
+if(userGroups != null) {
+  // Group ACLs of the User.
+  userGroups.forEach((group) -> listOfAcls.add(
+  new OzoneAcl(GROUP, group, groupRights, ACCESS)));
+}
+return listOfAcls;
+  }
+
+  /**
+   * Check if acl right requested for given RequestContext exist
+   * in provided acl list.
+   * Acl validation rules:
+   * 1. If the user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If the user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other individual rights individual bits should be set.
+   *
+   * @param acls
+   * @param context
+   * @return true if the acl list contains the right requested in the context.
+   * */
+  public static boolean checkAclRight(List<OzoneAcl> acls,
+  RequestContext context) throws OMException {
+String[] userGroups = context.getClientUgi().getGroupNames();
+String userName = context.getClientUgi().getUserName();
+ACLType aclToCheck = context.getAclRights();
+for (OzoneAcl a : acls) {
+  if(checkAccessInAcl(a, userGroups, userName, aclToCheck)) {
+return true;
+  }
+}
+return false;
+  }
+
+  private static boolean checkAccessInAcl(OzoneAcl a, String[] groups,
+  String username, ACLType aclToCheck) {
+BitSet rights = a.getAclBitSet();
+switch (a.getType()) {
+case USER:
+  if (a.getName().equals(username)) {
+return checkIfAclBitIsSet(aclToCheck, rights);
+  }
+  break;
+case GROUP:
+  for (String grp : groups) {
+if (a.getName().equals(grp)) {
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+  }
+  break;
+
+default:
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+return false;
+  }
+
+  /**
+   * Check if acl right requested for 

[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-20 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911775#comment-16911775
 ] 

Chao Sun commented on HDFS-8631:


[~elgoiri]: fixed the style issue - could you take another look? thanks!

> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, 
> HDFS-8631-009.patch, HDFS-8631-010.patch
>
>
> Users are able to do quota management through the FileSystem object. The same operation 
> should be allowed through the REST API.
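
For reference, a hedged sketch of the existing programmatic path the description refers to; the WebHDFS request parameters proposed in the attached patches are not reproduced here:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public final class SetQuotaSketch {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Limit /user/alice to 1M names; leave the space quota unchanged.
      dfs.setQuota(new Path("/user/alice"),
          1_000_000L, HdfsConstants.QUOTA_DONT_SET);
    }
  }
}
{code}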



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14758) Decrease lease hard limit

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911771#comment-16911771
 ] 

Wei-Chiu Chuang commented on HDFS-14758:


Somewhat similar: what do you think about HDFS-14694? Upon close failure, the client 
recovers the lease immediately.

> Decrease lease hard limit
> -
>
> Key: HDFS-14758
> URL: https://issues.apache.org/jira/browse/HDFS-14758
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eric Payne
>Priority: Minor
>
> The hard limit is currently hard-coded to 1 hour. This also determines the NN's automatic 
> lease recovery interval. Something like 20 minutes would make more sense.
> After the 5 minute soft limit, other clients can recover the lease. If no one else takes the 
> lease away, the original client can still renew the lease within the hard limit. So even 
> after an NN full GC of 8 minutes, leases can still be valid.
> However, there is one risk in reducing the hard limit. E.g., if it is reduced to 20 minutes 
> and the NN crashes and a manual failover takes more than 20 minutes, clients will abort.
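
As context for "other clients can recover the lease", a minimal sketch of explicit lease recovery through the public DistributedFileSystem API; it illustrates the mechanism only and does not change any of the timings discussed above:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class LeaseRecoverySketch {

  /** Asks the NN to start lease recovery; returns true once the file is closed. */
  public static boolean recover(String file) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    return dfs.recoverLease(new Path(file));
  }
}
{code}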



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-20 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911769#comment-16911769
 ] 

Xiaoyu Yao commented on HDDS-1965:
--

I tried the instructions above but they do not seem to work. Searching for the file 
ScmBlockLocationTestingClient.java at 
[https://github.com/apache/hadoop/find/trunk] also shows "No matching files 
found." This reproduces on both Mac and Linux. cc: [~adoroszlai], [~anu]


 $ git checkout -- 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java
 error: pathspec 
'hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java'
 did not match any file(s) known to git

$ ls hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/
 TestBucketManagerImpl.java TestOzoneManagerHttpServer.java package-info.java 
response
 TestChunkStreams.java TestOzoneManagerStarter.java ratis
 TestKeyDeletingService.java TestS3BucketManager.java request

 

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}
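
For reference, the errors above are the standard javac rules that a public top-level class must live in a file of the same name and that a class may only be defined once per package; a minimal illustration with the leftover file's hypothetical content:

{code:java}
// Leftover file: ScmBlockLocationTestIngClient.java (note the stray capital "I").
// Because it still declares the renamed class, javac reports both errors quoted
// above: the public class does not match the file name, and the class duplicates
// the one in the correctly named ScmBlockLocationTestingClient.java.
package org.apache.hadoop.ozone.om;

public class ScmBlockLocationTestingClient {
  // body omitted
}
{code}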



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-14759:
---
Description: 
HDFS-13699 changed a debug log line into an info log line and this line is 
printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
figure out where the log line ends and where the catted file begins, especially 
when the output is sent to a tool for parsing. 

{noformat}
[ebadger@foobar bin]$ hadoop fs -cat /foo 2>/dev/null
2019-08-20 22:09:45,907 INFO  [main] sasl.SaslDataTransferClient 
(SaslDataTransferClient.java:checkTrustAndSend(230)) - SASL encryption trust 
check: localHostTrusted = false, remoteHostTrusted = false
bar
{noformat}

  was:HDFS-13699 changed a debug log line into an info log line and this line 
is printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
figure out where the log line ends and where the catted file begins, especially 
when the output is sent to a tool for parsing. 


> HDFS cat logs an info message
> -
>
> Key: HDFS-14759
> URL: https://issues.apache.org/jira/browse/HDFS-14759
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14759.001.patch
>
>
> HDFS-13699 changed a debug log line into an info log line and this line is 
> printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
> figure out where the log line ends and where the catted file begins, 
> especially when the output is sent to a tool for parsing. 
> {noformat}
> [ebadger@foobar bin]$ hadoop fs -cat /foo 2>/dev/null
> 2019-08-20 22:09:45,907 INFO  [main] sasl.SaslDataTransferClient 
> (SaslDataTransferClient.java:checkTrustAndSend(230)) - SASL encryption trust 
> check: localHostTrusted = false, remoteHostTrusted = false
> bar
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298234=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298234
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:08
Start Date: 20/Aug/19 22:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315925326
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
 ##
 @@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.RequestContext;
+
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.GROUP;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
+
+/**
+ * Helper class for ozone acls operations.
+ */
+public final class OzoneAclUtil {
+
+  private OzoneAclUtil(){
+  }
+
+  /**
+   * Helper function to get access acl list for current user.
+   *
+   * @param userName
+   * @param userGroups
+   * @return list of OzoneAcls
+   * */
+  public static List<OzoneAcl> getAclList(String userName,
+      List<String> userGroups, ACLType userRights, ACLType groupRights) {
+
+    List<OzoneAcl> listOfAcls = new ArrayList<>();
+
+// User ACL.
+listOfAcls.add(new OzoneAcl(USER, userName, userRights, ACCESS));
+if(userGroups != null) {
+  // Group ACLs of the User.
+  userGroups.forEach((group) -> listOfAcls.add(
+  new OzoneAcl(GROUP, group, groupRights, ACCESS)));
+}
+return listOfAcls;
+  }
+
+  /**
+   * Check if acl right requested for given RequestContext exist
+   * in provided acl list.
+   * Acl validation rules:
+   * 1. If the user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If the user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other individual rights individual bits should be set.
+   *
+   * @param acls
+   * @param context
+   * @return true if the acl list contains the right requested in the context.
+   * */
+  public static boolean checkAclRight(List<OzoneAcl> acls,
+  RequestContext context) throws OMException {
+String[] userGroups = context.getClientUgi().getGroupNames();
+String userName = context.getClientUgi().getUserName();
+ACLType aclToCheck = context.getAclRights();
+for (OzoneAcl a : acls) {
+  if(checkAccessInAcl(a, userGroups, userName, aclToCheck)) {
+return true;
+  }
+}
+return false;
+  }
+
+  private static boolean checkAccessInAcl(OzoneAcl a, String[] groups,
+  String username, ACLType aclToCheck) {
+BitSet rights = a.getAclBitSet();
+switch (a.getType()) {
+case USER:
+  if (a.getName().equals(username)) {
+return checkIfAclBitIsSet(aclToCheck, rights);
+  }
+  break;
+case GROUP:
+  for (String grp : groups) {
+if (a.getName().equals(grp)) {
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+  }
+  break;
+
+default:
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+return false;
+  }
+
+  /**
+   * Check if acl right requested for 

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298232=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298232
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:08
Start Date: 20/Aug/19 22:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315925066
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
 ##
 @@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.RequestContext;
+
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.GROUP;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
+import static 
org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
+
+/**
+ * Helper class for ozone acls operations.
+ */
+public final class OzoneAclUtil {
+
+  private OzoneAclUtil(){
+  }
+
+  /**
+   * Helper function to get access acl list for current user.
+   *
+   * @param userName
+   * @param userGroups
+   * @return list of OzoneAcls
+   * */
+  public static List<OzoneAcl> getAclList(String userName,
+      List<String> userGroups, ACLType userRights, ACLType groupRights) {
+
+    List<OzoneAcl> listOfAcls = new ArrayList<>();
+
+// User ACL.
+listOfAcls.add(new OzoneAcl(USER, userName, userRights, ACCESS));
+if(userGroups != null) {
+  // Group ACLs of the User.
+  userGroups.forEach((group) -> listOfAcls.add(
+  new OzoneAcl(GROUP, group, groupRights, ACCESS)));
+}
+return listOfAcls;
+  }
+
+  /**
+   * Check if acl right requested for given RequestContext exist
+   * in provided acl list.
+   * Acl validation rules:
+   * 1. If the user/group has the ALL bit set, then the user/group has all rights.
+   * 2. If the user/group has the NONE bit set, then the user/group has no rights.
+   * 3. For all other individual rights individual bits should be set.
+   *
+   * @param acls
+   * @param context
+   * @return true if the acl list contains the right requested in the context.
+   * */
+  public static boolean checkAclRight(List<OzoneAcl> acls,
+  RequestContext context) throws OMException {
+String[] userGroups = context.getClientUgi().getGroupNames();
+String userName = context.getClientUgi().getUserName();
+ACLType aclToCheck = context.getAclRights();
+for (OzoneAcl a : acls) {
+  if(checkAccessInAcl(a, userGroups, userName, aclToCheck)) {
+return true;
+  }
+}
+return false;
+  }
+
+  private static boolean checkAccessInAcl(OzoneAcl a, String[] groups,
+  String username, ACLType aclToCheck) {
+BitSet rights = a.getAclBitSet();
+switch (a.getType()) {
+case USER:
+  if (a.getName().equals(username)) {
+return checkIfAclBitIsSet(aclToCheck, rights);
+  }
+  break;
+case GROUP:
+  for (String grp : groups) {
+if (a.getName().equals(grp)) {
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+  }
+  break;
+
+default:
+  return checkIfAclBitIsSet(aclToCheck, rights);
+}
+return false;
+  }
+
+  /**
+   * Check if acl right requested for 

[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298231=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298231
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 22:06
Start Date: 20/Aug/19 22:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1316: 
HDDS-1973. Implement OM RenewDelegationToken request to use Cache and 
DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315924477
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
+
+  public OMRenewDelegationTokenRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+RenewDelegationTokenRequestProto renewDelegationTokenRequest =
+getOmRequest().getRenewDelegationTokenRequest();
+
+// Call OM to renew token
+long renewTime = ozoneManager.renewDelegationToken(
+OMPBHelper.convertToDelegationToken(
+renewDelegationTokenRequest.getToken()));
+
+RenewDelegationTokenResponseProto.Builder renewResponse =
+RenewDelegationTokenResponseProto.newBuilder();
+
+renewResponse.setResponse(org.apache.hadoop.security.proto.SecurityProtos
+.RenewDelegationTokenResponseProto.newBuilder()
+.setNewExpiryTime(renewTime));
+
+
+    // The client issues a RenewDelegationToken request; when the OM leader
+    // receives it, it renews the token. The original RenewDelegationToken
+    // request is converted to an UpdateRenewDelegationToken request that
+    // carries the token and renewal information. This updated request is
+    // then submitted to Ratis, so the token renewed by the leader is
+    // replicated across all OMs. With this approach, the original
+    // RenewDelegationToken request from the client needs no proto changes.
+
+// Create UpdateRenewDelegationTokenRequest with original request and
+// expiry time.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdatedRenewDelegationTokenRequest(
+UpdateRenewDelegationTokenRequest.newBuilder()
+.setRenewDelegationTokenRequest(renewDelegationTokenRequest)
+

[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911767#comment-16911767
 ] 

Hadoop QA commented on HDFS-14714:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 42m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 12s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978103/HDFS-14714.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7563384c6299 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4cb22cd |

[jira] [Commented] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911766#comment-16911766
 ] 

Eric Badger commented on HDFS-14759:


I have put up a patch to change the log line back to debug. However, this may 
not be the correct fix. I don't know why logging is going to stdout at all, 
regardless of the level. The correct fix might be to modify FSShell to write 
all logging to stderr. There may have been a regression there. 
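
A possible client-side stopgap, assuming log4j 1.x is on the classpath (as in current Hadoop builds) and that the logger name matches the class's package; this only hides the line and does not address where FsShell sends its logging:

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public final class QuietCatSketch {

  /** Silences the INFO line emitted by SaslDataTransferClient during reads. */
  public static void quietSaslLog() {
    Logger.getLogger(
        "org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient")
        .setLevel(Level.WARN);
  }
}
{code}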

> HDFS cat logs an info message
> -
>
> Key: HDFS-14759
> URL: https://issues.apache.org/jira/browse/HDFS-14759
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14759.001.patch
>
>
> HDFS-13699 changed a debug log line into an info log line and this line is 
> printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
> figure out where the log line ends and where the catted file begins, 
> especially when the output is sent to a tool for parsing. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-14759:
---
Attachment: HDFS-14759.001.patch

> HDFS cat logs an info message
> -
>
> Key: HDFS-14759
> URL: https://issues.apache.org/jira/browse/HDFS-14759
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14759.001.patch
>
>
> HDFS-13699 changed a debug log line into an info log line and this line is 
> printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
> figure out where the log line ends and where the catted file begins, 
> especially when the output is sent to a tool for parsing. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298220=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298220
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:59
Start Date: 20/Aug/19 21:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315922467
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   HDDS-1975 is opened for inherit default ACL for HA code path. I can rebase 
the PR once this went in to use this new method.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298220)
Time Spent: 5h 50m  (was: 5h 40m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, since we can 
> see similar logic in both the bucket manager and the key manager? For example:
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by moving 
> the common logic to AclUtils.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger reassigned HDFS-14759:
--

Assignee: Eric Badger

> HDFS cat logs an info message
> -
>
> Key: HDFS-14759
> URL: https://issues.apache.org/jira/browse/HDFS-14759
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
>
> HDFS-13699 changed a debug log line into an info log line and this line is 
> printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
> figure out where the log line ends and where the catted file begins, 
> especially when the output is sent to a tool for parsing. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Eric Badger (Jira)
Eric Badger created HDFS-14759:
--

 Summary: HDFS cat logs an info message
 Key: HDFS-14759
 URL: https://issues.apache.org/jira/browse/HDFS-14759
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Eric Badger


HDFS-13699 changed a debug log line into an info log line and this line is 
printed during {{hadoop fs -cat}} operations. This makes it very difficult to 
figure out where the log line ends and where the catted file begins, especially 
when the output is sent to a tool for parsing. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911761#comment-16911761
 ] 

Hadoop QA commented on HDFS-14358:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978114/HDFS-14358.005.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux a2605a4b2606 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 269b543 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27592/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358 (4).patch, HDFS-14358(2).patch, 
> HDFS-14358(3).patch, HDFS-14358.005.patch, HDFS14358.JPG, hdfs-14358.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911758#comment-16911758
 ] 

Hadoop QA commented on HDFS-14754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978064/HDFS-14754.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f78f743bb56 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4cb22cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27585/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27585/testReport/ |
| Max. process+thread count | 4052 (vs. ulimit 

[jira] [Commented] (HDFS-10782) Decrease memory frequent exchange of Centralized Cache Management when run balancer

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911756#comment-16911756
 ] 

Wei-Chiu Chuang commented on HDFS-10782:


Correcting myself: I reviewed the patch, and it really needs a test. It shouldn't 
be too hard to come up with one.
Would this make getBlocks() slow when the Balancer asks the NameNode for blocks? 
The patch adds a new ArrayList which can potentially grow to several million 
elements, which is ok-ish.

> Decrease memory frequent exchange of Centralized Cache Management when run 
> balancer
> ---
>
> Key: HDFS-10782
> URL: https://issues.apache.org/jira/browse/HDFS-10782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, caching
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>  Labels: patch
> Attachments: HDFS-10782-branch-2.001.patch
>
>
> CachedBlocks are currently transparent to the Balancer when the centralized 
> cache management feature is active. This makes DataNodes exchange memory 
> frequently, because the Balancer does not distinguish cached blocks from other 
> blocks, so it may trigger a large amount of cache/uncache operations.
> I think the NameNode should avoid returning cached blocks as much as possible 
> in Balancer#getBlocks.
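
As a rough sketch of the filtering this description suggests (this is not the 
attached patch; the method name and the plain Long/Map/Set types are stand-ins 
for the real BlockInfo/BlockWithLocations structures):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

class GetBlocksSketch {
  // Pick blocks for the Balancer, skipping anything that is currently cached.
  static List<Long> selectBlocksForBalancer(List<Long> candidateBlockIds,
      Map<Long, Long> blockSizes, Set<Long> cachedBlockIds, long requestedSize) {
    List<Long> picked = new ArrayList<>();
    long total = 0;
    for (long blockId : candidateBlockIds) {
      if (cachedBlockIds.contains(blockId)) {
        continue;               // cached replicas are never offered to the Balancer
      }
      picked.add(blockId);
      total += blockSizes.getOrDefault(blockId, 0L);
      if (total >= requestedSize) {
        break;                  // enough bytes gathered for this getBlocks() call
      }
    }
    return picked;
  }
}
{code}

If the new ArrayList mentioned in the comment above holds the cached block ids, a 
set-based contains() as sketched here keeps the filter at one hash lookup per 
candidate block; if it only accumulates the filtered results, the cost stays a 
single pass over the candidates.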



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14756) RBF: getQuotaUsage may ignore some folders

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911755#comment-16911755
 ] 

Hadoop QA commented on HDFS-14756:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 41s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978100/HDFS-14756.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6201a6b4168c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4cb22cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27588/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27588/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298209=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298209
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:31
Start Date: 20/Aug/19 21:31
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1316: HDDS-1973. 
Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315913008
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
+
+  public OMRenewDelegationTokenRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+RenewDelegationTokenRequestProto renewDelegationTokenRequest =
+getOmRequest().getRenewDelegationTokenRequest();
+
+// Call OM to renew token
+long renewTime = ozoneManager.renewDelegationToken(
+OMPBHelper.convertToDelegationToken(
+renewDelegationTokenRequest.getToken()));
+
+RenewDelegationTokenResponseProto.Builder renewResponse =
+RenewDelegationTokenResponseProto.newBuilder();
+
+renewResponse.setResponse(org.apache.hadoop.security.proto.SecurityProtos
+.RenewDelegationTokenResponseProto.newBuilder()
+.setNewExpiryTime(renewTime));
+
+
+// Client issues RenewDelegationToken request, when received by OM leader
+// it will renew the token. Original RenewDelegationToken request is
+// converted to UpdateRenewDelegationToken request with the token and renew
+// information. This updated request will be submitted to Ratis. In this
+// way delegation token renewd by leader, will be replicated across all
+// OMs. With this approach, original RenewDelegationToken request from
+// client does not need any proto changes.
+
+// Create UpdateRenewDelegationTokenRequest with original request and
+// expiry time.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdatedRenewDelegationTokenRequest(
+UpdateRenewDelegationTokenRequest.newBuilder()
+.setRenewDelegationTokenRequest(renewDelegationTokenRequest)
+

[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911743#comment-16911743
 ] 

Hadoop QA commented on HDFS-14699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 110 unchanged - 0 fixed = 119 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14699 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978057/HDFS-14699.00.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e38d97f63b8b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4cb22cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=298208=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298208
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:30
Start Date: 20/Aug/19 21:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523202633
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 96 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 181 | hadoop-ozone in trunk failed. |
   | -1 | compile | 76 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 126 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1687 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 458 | trunk passed |
   | 0 | spotbugs | 568 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 306 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 264 | hadoop-ozone in the patch failed. |
   | -1 | compile | 185 | hadoop-ozone in the patch failed. |
   | -1 | javac | 185 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 106 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 761 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | -1 | findbugs | 104 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 343 | hadoop-hdds in the patch failed. |
   | -1 | unit | 110 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 6625 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f8aafe161c1e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4cb22cd |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/testReport/ |
   | Max. process+thread count | 428 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to 

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298205=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298205
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:19
Start Date: 20/Aug/19 21:19
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315908368
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   A similar issue can be found in the PRs for HDDS-1909 and HDDS-1975. 
   The new helper OzoneAclUtil.inheritDefaultAcls should guarantee they are 
merged properly. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298205)
Time Spent: 5h 40m  (was: 5.5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.
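
A self-contained sketch of the helper shape proposed above (not the committed 
OzoneAclUtil API; the nested Acl class and its matching rules are simplified 
stand-ins for OzoneAcl, used only to keep the example compilable):

{code}
import java.util.List;

final class AclUtilSketch {
  // Stand-in for OzoneAcl: an entry identified by (type, name) with a rights bit mask.
  static final class Acl {
    final String type;
    final String name;
    final int rights;
    Acl(String type, String name, int rights) {
      this.type = type;
      this.name = name;
      this.rights = rights;
    }
    boolean sameSubject(Acl other) {
      return type.equals(other.type) && name.equals(other.name);
    }
  }

  /** Add newAcl unless an identical entry is already present; true if the list changed. */
  static boolean addAcl(List<Acl> existingAcls, Acl newAcl) {
    for (Acl acl : existingAcls) {
      if (acl.sameSubject(newAcl) && acl.rights == newAcl.rights) {
        return false;
      }
    }
    existingAcls.add(newAcl);
    return true;
  }

  /** Remove a matching entry; true if something was removed. */
  static boolean removeAcl(List<Acl> existingAcls, Acl toRemove) {
    return existingAcls.removeIf(acl ->
        acl.sameSubject(toRemove) && acl.rights == toRemove.rights);
  }
}
{code}

With such a helper, both the bucket manager and the key manager would call the 
same two methods on a shared List<OzoneAcl>, which is the consolidation this 
Jira tracks.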



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298188=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298188
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:02
Start Date: 20/Aug/19 21:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315901670
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   > Good point. CheckAccess does the byte array based protobuf acl to 
BitSet based acl conversion each time, which is most of the work we do for 
protobuf conversion from OzoneAclInfo to OzoneAcl. Consolidating this into the 
OzoneAcl class allows us to change the logic in a single place for optimization 
in the future.
   
   Ya, right. I think it should be fine. (That said, there is still a small 
advantage in not converting the entire protobuf object, but I am fine with the 
current approach. :))
   
   > Key creation usually needs to inherit the acls from the prefix tree, which 
requires an efficient merge of the supplied acls with the acls from the 
in-memory prefix tree. It is currently not done properly; tracked by HDDS-1913.
   
   HDDS-1913 is for fixing the OzoneBucket and RpcClient APIs for acls. Is there 
another Jira to fix the above issue?
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298188)
Time Spent: 5h 20m  (was: 5h 10m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14758) Decrease lease hard limit

2019-08-20 Thread Eric Payne (Jira)
Eric Payne created HDFS-14758:
-

 Summary: Decrease lease hard limit
 Key: HDFS-14758
 URL: https://issues.apache.org/jira/browse/HDFS-14758
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eric Payne


The hard limit is currently hard-coded to 1 hour. It also determines the NN's 
automatic lease recovery interval. Something like 20 minutes would make more 
sense.

After the 5-minute soft limit, other clients can recover the lease. If no one 
else takes the lease away, the original client can still renew the lease within 
the hard limit. So even after an 8-minute NN full GC, leases can still be valid.

However, there is one risk in reducing the hard limit: if it is reduced to, say, 
20 minutes, and the NN crashes and the manual failover takes more than 20 
minutes, clients will abort.
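
To make the trade-off concrete, a small illustrative check (the limit values are 
the ones discussed in this issue, not constants read from the HDFS code, and the 
method is hypothetical):

{code}
import java.util.concurrent.TimeUnit;

class LeaseLimitSketch {
  static final long SOFT_LIMIT_MS = TimeUnit.MINUTES.toMillis(5);
  static final long CURRENT_HARD_LIMIT_MS = TimeUnit.HOURS.toMillis(1);
  static final long PROPOSED_HARD_LIMIT_MS = TimeUnit.MINUTES.toMillis(20);

  // A writer that could not renew for pauseMs keeps its lease only while under the
  // hard limit (assuming no other client recovered it after the soft limit expired).
  static boolean leaseStillHeld(long pauseMs, long hardLimitMs) {
    return pauseMs < hardLimitMs;
  }

  public static void main(String[] args) {
    long nnFullGc = TimeUnit.MINUTES.toMillis(8);
    long slowFailover = TimeUnit.MINUTES.toMillis(25);
    System.out.println(leaseStillHeld(nnFullGc, CURRENT_HARD_LIMIT_MS));      // true
    System.out.println(leaseStillHeld(nnFullGc, PROPOSED_HARD_LIMIT_MS));     // true: 8 min < 20 min
    System.out.println(leaseStillHeld(slowFailover, PROPOSED_HARD_LIMIT_MS)); // false: clients abort
  }
}
{code}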



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298180=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298180
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 20:52
Start Date: 20/Aug/19 20:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1316: 
HDDS-1973. Implement OM RenewDelegationToken request to use Cache and 
DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315896339
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
+
+  public OMRenewDelegationTokenRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+RenewDelegationTokenRequestProto renewDelegationTokenRequest =
+getOmRequest().getRenewDelegationTokenRequest();
+
+// Call OM to renew token
+long renewTime = ozoneManager.renewDelegationToken(
+OMPBHelper.convertToDelegationToken(
+renewDelegationTokenRequest.getToken()));
+
+RenewDelegationTokenResponseProto.Builder renewResponse =
+RenewDelegationTokenResponseProto.newBuilder();
+
+renewResponse.setResponse(org.apache.hadoop.security.proto.SecurityProtos
+.RenewDelegationTokenResponseProto.newBuilder()
+.setNewExpiryTime(renewTime));
+
+
+// Client issues RenewDelegationToken request, when received by OM leader
+// it will renew the token. Original RenewDelegationToken request is
+// converted to UpdateRenewDelegationToken request with the token and renew
+// information. This updated request will be submitted to Ratis. In this
+// way delegation token renewd by leader, will be replicated across all
+// OMs. With this approach, original RenewDelegationToken request from
+// client does not need any proto changes.
+
+// Create UpdateRenewDelegationTokenRequest with original request and
+// expiry time.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdatedRenewDelegationTokenRequest(
+UpdateRenewDelegationTokenRequest.newBuilder()
+.setRenewDelegationTokenRequest(renewDelegationTokenRequest)
+

[jira] [Commented] (HDFS-14741) RBF: RecoverLease should be return false when the file is open in multiple destination

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911715#comment-16911715
 ] 

Hadoop QA commented on HDFS-14741:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 12s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978059/HDFS-14741-trunk-006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 46a58be366c5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4cb22cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27583/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27583/testReport/ |
| Max. process+thread count | 1623 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-20 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-14706:
-
Attachment: HDFS-14706.004.patch

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that its length is between zero 
> and 7 bytes (too short for a complete header), then the header is incomplete. 
> In BlockSender.java the logic checks whether the metafile length is at least 
> the size of the header; if it is not, it does not raise an error, but instead 
> returns a NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means the 
> corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption, as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the DataNode Volume Scanner to exit, with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
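
The fix this issue asks for amounts to treating a too-short meta file as 
corruption instead of falling back to a NULL checksum. A minimal sketch of that 
guard (illustrative only, not the attached patch; the class and method names are 
made up for the example):

{code}
import java.io.IOException;

final class MetaHeaderGuardSketch {
  // 7-byte header: 2-byte version plus the 5-byte checksum descriptor (type + bytesPerChecksum).
  static final int HEADER_LEN = 7;

  // Refuse to serve a block whose meta file cannot hold a complete header, so the
  // block is reported corrupt and re-replicated rather than read without verification.
  static void checkMetaLength(long metaFileLength, String blockName) throws IOException {
    if (metaFileLength < HEADER_LEN) {
      throw new IOException("Corrupt meta header for " + blockName
          + ": meta file length " + metaFileLength + " is shorter than "
          + HEADER_LEN + " bytes");
    }
  }
}
{code}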



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298149=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298149
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 20:30
Start Date: 20/Aug/19 20:30
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1316: HDDS-1973. 
Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315886059
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
+
+  public OMRenewDelegationTokenRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+RenewDelegationTokenRequestProto renewDelegationTokenRequest =
+getOmRequest().getRenewDelegationTokenRequest();
+
+// Call OM to renew token
+long renewTime = ozoneManager.renewDelegationToken(
+OMPBHelper.convertToDelegationToken(
+renewDelegationTokenRequest.getToken()));
+
+RenewDelegationTokenResponseProto.Builder renewResponse =
+RenewDelegationTokenResponseProto.newBuilder();
+
+renewResponse.setResponse(org.apache.hadoop.security.proto.SecurityProtos
+.RenewDelegationTokenResponseProto.newBuilder()
+.setNewExpiryTime(renewTime));
+
+
+// Client issues RenewDelegationToken request, when received by OM leader
+// it will renew the token. Original RenewDelegationToken request is
+// converted to UpdateRenewDelegationToken request with the token and renew
+// information. This updated request will be submitted to Ratis. In this
+// way the delegation token renewed by the leader will be replicated across all
+// OMs. With this approach, original RenewDelegationToken request from
+// client does not need any proto changes.
+
+// Create UpdateRenewDelegationTokenRequest with original request and
+// expiry time.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdatedRenewDelegationTokenRequest(
+UpdateRenewDelegationTokenRequest.newBuilder()
+.setRenewDelegationTokenRequest(renewDelegationTokenRequest)
+

[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298146
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 20:24
Start Date: 20/Aug/19 20:24
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1316: HDDS-1973. 
Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315886059
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
+
+  public OMRenewDelegationTokenRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+RenewDelegationTokenRequestProto renewDelegationTokenRequest =
+getOmRequest().getRenewDelegationTokenRequest();
+
+// Call OM to renew token
+long renewTime = ozoneManager.renewDelegationToken(
+OMPBHelper.convertToDelegationToken(
+renewDelegationTokenRequest.getToken()));
+
+RenewDelegationTokenResponseProto.Builder renewResponse =
+RenewDelegationTokenResponseProto.newBuilder();
+
+renewResponse.setResponse(org.apache.hadoop.security.proto.SecurityProtos
+.RenewDelegationTokenResponseProto.newBuilder()
+.setNewExpiryTime(renewTime));
+
+
+// Client issues RenewDelegationToken request, when received by OM leader
+// it will renew the token. Original RenewDelegationToken request is
+// converted to UpdateRenewDelegationToken request with the token and renew
+// information. This updated request will be submitted to Ratis. In this
+// way the delegation token renewed by the leader will be replicated across all
+// OMs. With this approach, original RenewDelegationToken request from
+// client does not need any proto changes.
+
+// Create UpdateRenewDelegationTokenRequest with original request and
+// expiry time.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdatedRenewDelegationTokenRequest(
+UpdateRenewDelegationTokenRequest.newBuilder()
+.setRenewDelegationTokenRequest(renewDelegationTokenRequest)
+

[jira] [Work logged] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1973?focusedWorklogId=298143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298143
 ]

ASF GitHub Bot logged work on HDDS-1973:


Author: ASF GitHub Bot
Created on: 20/Aug/19 20:18
Start Date: 20/Aug/19 20:18
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1316: HDDS-1973. 
Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#discussion_r315883423
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
 ##
 @@ -0,0 +1,164 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.security;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import 
org.apache.hadoop.ozone.om.response.security.OMRenewDelegationTokenResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.RenewDelegationTokenResponseProto;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateRenewDelegationTokenRequest;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import 
org.apache.hadoop.security.proto.SecurityProtos.RenewDelegationTokenRequestProto;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle RenewDelegationToken Request.
+ */
+public class OMRenewDelegationTokenRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMGetDelegationTokenRequest.class);
 
 Review comment:
   This should be OMRenewDelegationTokenRequest.class
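 
   A minimal sketch of the corrected declaration, using the slf4j Logger and 
   LoggerFactory already imported in this file:
 
       private static final Logger LOG =
           LoggerFactory.getLogger(OMRenewDelegationTokenRequest.class);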
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298143)
Time Spent: 0.5h  (was: 20m)

> Implement OM RenewDelegationToken request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1973
> URL: https://issues.apache.org/jira/browse/HDDS-1973
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Implement OM RenewDelegationToken request to use OM Cache, double buffer.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-08-20 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14358:
-
Attachment: (was: HDFS-14358.005.patch)

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358 (4).patch, HDFS-14358(2).patch, 
> HDFS-14358(3).patch, HDFS-14358.005.patch, HDFS14358.JPG, hdfs-14358.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-08-20 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14358:
-
Attachment: HDFS-14358.005.patch

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358 (4).patch, HDFS-14358(2).patch, 
> HDFS-14358(3).patch, HDFS-14358.005.patch, HDFS14358.JPG, hdfs-14358.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-08-20 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14754:
-
Attachment: HDFS-14754.002.patch

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Attachments: HDFS-14754.001.patch, HDFS-14754.002.patch
>
>
> Using EC RS-3-2, 6 DN 
> We came across a scenario where, in an EC block group of 5 blocks, the same 
> block was replicated thrice and two blocks went missing.
> The duplicated replicas were not being deleted and the missing blocks could 
> not be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13541) NameNode Port based selective encryption

2019-08-20 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13541:
--
Labels: release-blocker  (was: )

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-3.1.001.patch, 
> HDFS-13541-branch-3.2.001.patch, HDFS-13541-branch-3.2.002.patch, NameNode 
> Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the location of the client relative to the 
> cluster. Specifically, for clients from outside of the data center, it is 
> required by regulation that all traffic must be encrypted. But for clients 
> within the same data center, unencrypted connections are preferred to avoid 
> the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, based on which HADOOP-10335 
> introduced WhitelistBasedResolver, which solves the same problem. However, we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach 
> of running RPC on two ports on the NameNode; the two ports will enforce 
> encrypted and unencrypted connections respectively, and the subsequent 
> DataNode access will simply follow the same encrypted/unencrypted behaviour*. 
> Then, by blocking the unencrypted port on the datacenter firewall, we can 
> completely block unencrypted external access.
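
A minimal, hypothetical sketch of the idea in the description: choose the SASL 
QOP from the ingress port a connection arrived on, so the externally reachable 
port forces privacy ("auth-conf") while the in-datacenter port allows plain 
authentication ("auth"). The class, method and wiring below are illustrative 
assumptions, not the API added by this JIRA.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; names are assumptions.
public class PortBasedQopSelector {
  private final int encryptedPort;   // assumed: the NameNode RPC port that must force encryption

  public PortBasedQopSelector(int encryptedPort) {
    this.encryptedPort = encryptedPort;
  }

  /** SASL properties for a connection that arrived on the given server port. */
  public Map<String, String> serverPropertiesFor(int ingressPort) {
    Map<String, String> props = new HashMap<>();
    // "javax.security.sasl.qop" is the standard SASL QOP property key.
    props.put("javax.security.sasl.qop",
        ingressPort == encryptedPort ? "auth-conf" : "auth");
    return props;
  }
}
{code}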



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13541) NameNode Port based selective encryption

2019-08-20 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13541:
--
Target Version/s: 2.10.0, 3.3.0  (was: 3.3.0)

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-3.1.001.patch, 
> HDFS-13541-branch-3.2.001.patch, HDFS-13541-branch-3.2.002.patch, NameNode 
> Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the location of the client relative to the 
> cluster. Specifically, for clients from outside of the data center, it is 
> required by regulation that all traffic must be encrypted. But for clients 
> within the same data center, unencrypted connections are preferred to avoid 
> the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, based on which HADOOP-10335 
> introduced WhitelistBasedResolver, which solves the same problem. However, we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach 
> of running RPC on two ports on the NameNode; the two ports will enforce 
> encrypted and unencrypted connections respectively, and the subsequent 
> DataNode access will simply follow the same encrypted/unencrypted behaviour*. 
> Then, by blocking the unencrypted port on the datacenter firewall, we can 
> completely block unencrypted external access.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911692#comment-16911692
 ] 

Hudson commented on HDFS-14311:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-14311. Multi-threading conflict at layoutVersion when loading block 
(weichiu: rev 4cb22cd867a9295efc815dc95525b5c3e5960ea6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion in the block pool storage loading process.
> It will cause this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> 

[jira] [Commented] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911691#comment-16911691
 ] 

Hudson commented on HDFS-13201:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-13201. Fix prompt message in testPolicyAndStateCantBeNull. (weichiu: rev 
aa6995fde289719e0b300e11568c5e68c36b5d05)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/protocol/TestErasureCodingPolicyInfo.java


> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911689#comment-16911689
 ] 

Hudson commented on HDFS-14729:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-14729. Upgrade Bootstrap and jQuery versions used in HDFS UIs. (sunilg: 
rev bd9246232123416201eb8c257b3cd8ab0ad32664)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-editable.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/bootstrap.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.ttf
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/bootstrap-editable.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/npm.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.min.css
* (delete) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.svg
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.svg
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.woff2
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.woff
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.3.1.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/bootstrap.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.eot
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.min.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.eot
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.ttf
* (edit) LICENSE.txt
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/bootstrap.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/npm.js
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff2
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-editable.css
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.css
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (delete) 

[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-20 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911687#comment-16911687
 ] 

Siddharth Wagle commented on HDFS-2470:
---

In 08 I removed the root dir permission setting.
Regarding setting the default for StorageDirectory, I am not sure we should do 
that, since it could break some dependency somewhere else, as stated earlier 
with short-circuit reads.
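
For illustration only, a minimal sketch of enforcing a directory mode with plain 
java.nio; the actual patch goes through Hadoop's Storage/FsPermission plumbing, 
and the class and method names below are assumptions:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public final class NameDirPermissionCheck {
  private NameDirPermissionCheck() { }

  /** Ensure a dfs.namenode.name.dir directory carries the expected mode, e.g. "rwx------". */
  public static void enforce(String dir, String expectedMode) throws IOException {
    Path path = Paths.get(dir);
    Set<PosixFilePermission> expected = PosixFilePermissions.fromString(expectedMode);
    if (!Files.getPosixFilePermissions(path).equals(expected)) {
      Files.setPosixFilePermissions(path, expected);
    }
  }
}
{code}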

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911686#comment-16911686
 ] 

Hadoop QA commented on HDFS-14358:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDFS-14358 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14358 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27584/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358 (4).patch, HDFS-14358(2).patch, 
> HDFS-14358(3).patch, HDFS-14358.005.patch, HDFS14358.JPG, hdfs-14358.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-20 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDFS-2470:
--
Attachment: HDFS-2470.08.patch

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911683#comment-16911683
 ] 

Hadoop QA commented on HDFS-14706:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 10s{color} 
| {color:red} root generated 1 new + 1472 unchanged - 0 fixed = 1473 total (was 
1472) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 34s{color} | {color:orange} root: The patch generated 1 new + 370 unchanged 
- 6 fixed = 371 total (was 376) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}242m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978052/HDFS-14706.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911658#comment-16911658
 ] 

Chen Liang commented on HDFS-14726:
---

Sorry for the delay on getting back to this. Post v002 patch to include what 
Erik suggested.

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch, 
> HDFS-14726-branch-2.002.patch
>
>
> HDFS-10519 has been backported to branch-2. However HDFS-10519 introduced an 
> incompatibility issue between NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. This field was introduced as a 
> required field, so if the JN and NN are not on the same version, it will run 
> into a missing-field exception. Although currently we can get around this by 
> making sure the JN always gets upgraded properly before the NN, we can potentially fix this 
> incompatibility by changing the field to optional. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14726) Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519

2019-08-20 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14726:
--
Attachment: HDFS-14726-branch-2.002.patch

> Fix JN incompatibility issue in branch-2 due to backport of HDFS-10519
> --
>
> Key: HDFS-14726
> URL: https://issues.apache.org/jira/browse/HDFS-14726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 2.10.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14726-branch-2.001.patch, 
> HDFS-14726-branch-2.002.patch
>
>
> HDFS-10519 has been backported to branch-2. However HDFS-10519 introduced an 
> incompatibility issue between NN and JN due to the new protobuf field 
> {{committedTxnId}} in {{HdfsServer.proto}}. This field was introduced as a 
> required field, so if the JN and NN are not on the same version, it will run 
> into a missing-field exception. Although currently we can get around this by 
> making sure the JN always gets upgraded properly before the NN, we can potentially fix this 
> incompatibility by changing the field to optional. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298104
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 18:47
Start Date: 20/Aug/19 18:47
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315848058
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List getAcls() {
+  public List getAcls() {
 
 Review comment:
   Good point. CheckAccess does the byte-array-based protobuf acl to BitSet-based 
acl conversion each time, which is most of the work we do for the protobuf 
conversion from OzoneAclInfo to OzoneAcl. Consolidating this into the OzoneAcl 
class allows us to change the logic in a single place for future optimization. 
   
   Key creation usually needs to inherit the acls from the prefix tree, which 
requires an efficient merge of the supplied acls with the acls from the 
in-memory prefix tree. This is currently not done properly and is tracked by HDDS-1913. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298104)
Time Spent: 5h 10m  (was: 5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884.
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also address above comment to move 
> common logic to AclUtils.
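
A minimal sketch of the proposed helper, assuming the existing 
org.apache.hadoop.ozone.OzoneAcl type and simple equality-based de-duplication 
(the real OzoneAclUtil may merge rights for an existing principal instead):

{code:java}
import java.util.List;

import org.apache.hadoop.ozone.OzoneAcl;

// Sketch only: shared add/remove helpers usable by both bucket and key managers.
public final class OzoneAclUtil {
  private OzoneAclUtil() { }

  /** Adds newAcl if an equal entry is not already present; returns true if the list changed. */
  public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl) {
    if (existingAcls == null || newAcl == null || existingAcls.contains(newAcl)) {
      return false;
    }
    return existingAcls.add(newAcl);
  }

  /** Removes the given acl; returns true if the list changed. */
  public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl acl) {
    return existingAcls != null && acl != null && existingAcls.remove(acl);
  }
}
{code}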



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14582) Failed to start DN with ArithmeticException when NULL checksum used

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911634#comment-16911634
 ] 

Wei-Chiu Chuang commented on HDFS-14582:


+1 committing the patch

> Failed to start DN with ArithmeticException when NULL checksum used
> ---
>
> Key: HDFS-14582
> URL: https://issues.apache.org/jira/browse/HDFS-14582
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14582.001.patch
>
>
> {code}
> Caused by: java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(BlockPoolSlice.java:823)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addReplicaToReplicasMap(BlockPoolSlice.java:627)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:702)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice$AddReplicaProcessor.compute(BlockPoolSlice.java:1047)
> at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> {code}
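
A minimal sketch of the kind of guard the trace implies, assuming a DataChecksum 
named checksum and a long blockFileLen are in scope inside 
validateIntegrityAndSetLength (illustration only, not the attached patch):

{code:java}
// NULL checksums have a per-chunk checksum size of 0, so the check that divides
// by the checksum size must be skipped instead of throwing ArithmeticException.
if (checksum.getChecksumType() == DataChecksum.Type.NULL
    || checksum.getChecksumSize() == 0) {
  return blockFileLen;   // nothing to validate against the meta file
}
{code}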



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10782) Decrease memory frequent exchange of Centralized Cache Management when run balancer

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911624#comment-16911624
 ] 

Wei-Chiu Chuang commented on HDFS-10782:


Not familiar with CCM.
Could you also provide a patch for trunk?
Given this is a memory optimization, no test is needed.

> Decrease memory frequent exchange of Centralized Cache Management when run 
> balancer
> ---
>
> Key: HDFS-10782
> URL: https://issues.apache.org/jira/browse/HDFS-10782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer  mover, caching
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>  Labels: patch
> Attachments: HDFS-10782-branch-2.001.patch
>
>
> CachedBlocks are currently transparent to the Balancer when the centralized 
> cache management feature is active. This makes DataNodes exchange memory 
> frequently, because the Balancer does not distinguish CachedBlocks from other 
> blocks, so moving them may trigger a large amount of cache/uncache ops.
> I think the namenode should avoid returning CachedBlocks as much as possible 
> in Balancer#getBlocks.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=298085&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298085
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 20/Aug/19 18:11
Start Date: 20/Aug/19 18:11
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-523132564
 
 
   Thanks @vivekratnavel  for working on this. The changes look good to me 
overall with two issues:
   
   1. The integration test failure testSecureOmInitFailures is related: the 
component name needs to be passed into getKeyLocation().
   
   2. The key/cert location change also needs further changes for secure GRPC, 
as the GRPC client/server (the DNs) currently call getKeyLocation() without 
giving a component name. When we move the DN keys under ".../dn/keys", they 
will no longer be able to find the keys under ".../keys". 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298085)
Time Spent: 1h 50m  (was: 1h 40m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911596#comment-16911596
 ] 

Wei-Chiu Chuang commented on HDFS-12749:


Not much progress -- I'll commit 005 patch later today.

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749-trunk.004.patch, 
> HDFS-12749-trunk.005.patch, HDFS-12749.001.patch
>
>
> Now our cluster has thousands of DNs and millions of files and blocks. When 
> the NN restarts, its load is very high.
> After the NN restart, the DN will call the BPServiceActor#reRegister method to 
> register. But the register RPC will get an IOException since the NN is busy 
> dealing with Block Reports. The exception is caught at BPServiceActor#processCommand.
> Next is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the Block 
> Report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLogInterrupts(1000, "connecting to server");
>   }
> }
> 
> LOG.info("Block pool " + this + " successfully registered with NN");
> bpos.registrationSucceeded(this, bpRegistration);
> // random short delay - helps scatter the BR from all DNs
> scheduler.scheduleBlockReport(dnConf.initialBlockReportDelay);
>   }

[jira] [Updated] (HDFS-14665) HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14665:
---
Fix Version/s: 3.1.3

> HttpFS: LISTSTATUS response is missing HDFS-specific fields
> ---
>
> Key: HDFS-14665
> URL: https://issues.apache.org/jira/browse/HDFS-14665
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
>
> WebHDFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "accessTime": 0,
> "blockSize": 0,
> "childrenNum": 0,
> "fileId": 16395,
> "group": "hadoop",
> "length": 0,
> "modificationTime": 1563893395614,
> "owner": "mapred",
> "pathSuffix": "logs",
> "permission": "1777",
> "replication": 0,
> "storagePolicy": 0,
> "type": "DIRECTORY"
>   }
> ]
>   }
> }
> {code}
> HttpFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "pathSuffix": "logs",
> "type": "DIRECTORY",
> "length": 0,
> "owner": "mapred",
> "group": "hadoop",
> "permission": "1777",
> "accessTime": 0,
> "modificationTime": 1563893395614,
> "blockSize": 0,
> "replication": 0
>   }
> ]
>   }
> }
> {code}
> You can see the same LISTSTATUS request to HttpFS is missing 3 fields:
> {code}
> "childrenNum" (should only be none 0 for directories)
> "fileId"
> "storagePolicy"
> {code}
> The same applies to LISTSTATUS_BATCH, which might be using the same 
> underlying calls to compose the response.
> Root cause:
> [toJsonInner|https://github.com/apache/hadoop/blob/17e8cf501b384af93726e4f2e6f5e28c6e3a8f65/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L120]
>  didn't serialize the HDFS-specific keys from FileStatus.
> Also may file another Jira to align the order of the keys in the responses.
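
A sketch of the kind of change the root cause points at, assuming a JSON map 
named json and a FileStatus named fileStatus inside toJsonInner (not the merged 
PR itself):

{code:java}
// Emit the HDFS-specific fields when the status actually is an HdfsFileStatus.
if (fileStatus instanceof HdfsFileStatus) {
  HdfsFileStatus hdfsStatus = (HdfsFileStatus) fileStatus;
  json.put("childrenNum", hdfsStatus.getChildrenNum());
  json.put("fileId", hdfsStatus.getFileId());
  json.put("storagePolicy", hdfsStatus.getStoragePolicy());
}
{code}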



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14665) HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14665:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

The PR was merged so I am resolving this one. Thanks [~smeng]!

> HttpFS: LISTSTATUS response is missing HDFS-specific fields
> ---
>
> Key: HDFS-14665
> URL: https://issues.apache.org/jira/browse/HDFS-14665
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
>
> WebHDFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "accessTime": 0,
> "blockSize": 0,
> "childrenNum": 0,
> "fileId": 16395,
> "group": "hadoop",
> "length": 0,
> "modificationTime": 1563893395614,
> "owner": "mapred",
> "pathSuffix": "logs",
> "permission": "1777",
> "replication": 0,
> "storagePolicy": 0,
> "type": "DIRECTORY"
>   }
> ]
>   }
> }
> {code}
> HttpFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "pathSuffix": "logs",
> "type": "DIRECTORY",
> "length": 0,
> "owner": "mapred",
> "group": "hadoop",
> "permission": "1777",
> "accessTime": 0,
> "modificationTime": 1563893395614,
> "blockSize": 0,
> "replication": 0
>   }
> ]
>   }
> }
> {code}
> You can see the same LISTSTATUS request to HttpFS is missing 3 fields:
> {code}
> "childrenNum" (should only be non-zero for directories)
> "fileId"
> "storagePolicy"
> {code}
> The same applies to LISTSTATUS_BATCH, which might be using the same 
> underlying calls to compose the response.
> Root cause:
> [toJsonInner|https://github.com/apache/hadoop/blob/17e8cf501b384af93726e4f2e6f5e28c6e3a8f65/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L120]
>  didn't serialize the HDFS-specific keys from FileStatus.
> Also may file another Jira to align the order of the keys in the responses.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911594#comment-16911594
 ] 

Ayush Saxena commented on HDFS-14714:
-

Thanks [~zhangchen] for the patch.

{code:java}
+assertNotEquals("There should be some corrupt blocks", 0,
+routerStat.getCorruptBlocks());
+assertEquals(
+"The router stats result should equal to the sum of subcluter's stats",
+routerStat.getCorruptBlocks(),
+stats.get(0).getCorruptBlocks() + stats.get(1).getCorruptBlocks());
{code}

Seems assertNotEquals isn't required: when we are checking against the exact 
value, a not-equal-to-0 check doesn't add anything. Also, for assertEquals the 
expected value should come before the actual one.

For the assertions we have hard-coded values like 2; these depend on the number 
of nameservices, so if the number of namespaces changes in the future this test 
would need modifications. Maybe we can avoid hard-coding the values and derive 
the expected results from the number of namespaces instead.
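An illustrative version of both suggestions, reusing the routerStat and stats variables from the snippet above and assuming stats holds one ReplicatedBlockStats per subcluster (this is not the attached patch):

{code:java}
// Expected value first, and the expectation is derived by summing over all
// subclusters instead of hard-coding stats.get(0) + stats.get(1).
long expectedCorrupt = 0;
for (ReplicatedBlockStats nsStat : stats) {
  expectedCorrupt += nsStat.getCorruptBlocks();
}
assertEquals("The router stats should equal the sum of the subclusters' stats",
    expectedCorrupt, routerStat.getCorruptBlocks());
{code}

Summing over the whole list keeps the assertion valid even if the number of namespaces in the test cluster changes later.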

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch, HDFS-14714.004.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}
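For context, a router-side implementation would presumably fan the call out to every nameservice and merge the per-namespace counters. A minimal sketch of just the merge step, assuming the per-namespace results have already been collected; the RBF invocation plumbing and the ReplicatedBlockStats constructor are omitted because their exact signatures are not shown here:

{code:java}
// Sketch only: merge per-nameservice ReplicatedBlockStats by summing counters.
// Only getCorruptBlocks() is shown; the remaining counters would be summed the
// same way before building the aggregated result to return to the client.
long totalCorruptBlocks(Collection<ReplicatedBlockStats> perNamespaceStats) {
  long total = 0;
  for (ReplicatedBlockStats nsStats : perNamespaceStats) {
    total += nsStats.getCorruptBlocks();
  }
  return total;
}
{code}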



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14311:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks all! I pushed the commit to branch-2, branch-2.9 and branch-2.8 as well.

> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion while loading the block pool storage.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> root cause:
> BlockPoolSliceStorage instance is shared for all storage locations recover 
> transition. In BlockPoolSliceStorage.doTransition, it will read the old 
> layoutVersion from local storage, compare with current DataNode version, then 

[jira] [Updated] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14311:
---
Fix Version/s: 2.9.3
   2.8.6
   2.10.0

> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion while loading the block pool storage.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> root cause:
> BlockPoolSliceStorage instance is shared for all storage locations recover 
> transition. In BlockPoolSliceStorage.doTransition, it will read the old 
> layoutVersion from local storage, compare with current DataNode version, then 
> do upgrade. In doUpgrade, add the transition work as a sub-thread, the 

[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911583#comment-16911583
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/7/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux e0a6ef064c5d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 

[jira] [Commented] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911582#comment-16911582
 ] 

Wei-Chiu Chuang commented on HDFS-14311:


+1. The failed tests don't reproduce for me locally. Pushed the rev 2 patch to 
trunk, branch-3.2 and branch-3.1.

> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion while loading the block pool storage.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> root cause:
> BlockPoolSliceStorage instance is shared for all storage locations recover 
> transition. In BlockPoolSliceStorage.doTransition, it will read the old 
> layoutVersion from local storage, compare with current DataNode version, then 
> do upgrade. In doUpgrade, 

[jira] [Updated] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14311:
---
Fix Version/s: 3.1.3
   3.2.1
   3.3.0

> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion while loading the block pool storage.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> root cause:
> BlockPoolSliceStorage instance is shared for all storage locations recover 
> transition. In BlockPoolSliceStorage.doTransition, it will read the old 
> layoutVersion from local storage, compare with current DataNode version, then 
> do upgrade. In doUpgrade, add the transition work as a sub-thread, the 
> transition work will 
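The kind of race being described can be illustrated with hypothetical names (this is not the actual DataNode code): one mutable storage-info object is shared between the loop that loads each storage directory and the upgrade work it hands off to other threads, so a later directory's layout version can be observed by an earlier directory's upgrade.

{code:java}
// Minimal illustration of the shared-state hazard; all names are made up.
class SharedStorageInfo {
  volatile int layoutVersion;
}

class UpgradeRaceSketch {
  private final SharedStorageInfo shared = new SharedStorageInfo();

  void loadAllDirectories(int[] onDiskLayoutVersions) {
    for (int onDisk : onDiskLayoutVersions) {
      shared.layoutVersion = onDisk;             // read from this directory
      new Thread(() -> upgrade(shared)).start(); // async upgrade sees whatever is current
    }
  }

  private void upgrade(SharedStorageInfo info) {
    // By the time this runs, layoutVersion may already belong to another directory.
    System.out.println("Upgrading with layoutVersion=" + info.layoutVersion);
  }
}
{code}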

[jira] [Updated] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13201:
---
Fix Version/s: 3.1.3
   3.2.1

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13201:
---
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks!

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911574#comment-16911574
 ] 

Wei-Chiu Chuang commented on HDFS-13201:


+1

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-13201:
--

Assignee: chencan

> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911570#comment-16911570
 ] 

Chen Zhang commented on HDFS-14714:
---

Thanks [~elgoiri] for your review and comments; the code in the unit test looks 
much cleaner now. Updated the patch to v4.

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch, HDFS-14714.004.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14714:
--
Attachment: HDFS-14714.004.patch

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch, HDFS-14714.004.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14757) TestBalancerRPCDelay.testBalancerRPCDelay failed

2019-08-20 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDFS-14757:
--

 Summary: TestBalancerRPCDelay.testBalancerRPCDelay failed
 Key: HDFS-14757
 URL: https://issues.apache.org/jira/browse/HDFS-14757
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Wei-Chiu Chuang


{noformat}
Error Message
Unfinished stubbing detected here:
-> at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.spyFSNamesystem(TestBalancer.java:1948)

E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
 1. missing thenReturn()
 2. although stubbed methods may return mocks, you cannot inline mock creation 
(mock()) call inside a thenReturn method (see issue 53)
Stacktrace
org.mockito.exceptions.misusing.UnfinishedStubbingException: 

Unfinished stubbing detected here:
-> at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.spyFSNamesystem(TestBalancer.java:1948)

E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
 1. missing thenReturn()
 2. although stubbed methods may return mocks, you cannot inline mock creation 
(mock()) call inside a thenReturn method (see issue 53)

at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.spyFSNamesystem(TestBalancer.java:1957)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:811)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerRPCDelay(TestBalancer.java:1976)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerRPCDelay.testBalancerRPCDelay(TestBalancerRPCDelay.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}
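For reference, a common way to hit this exception is stubbing a spy with when(...): the real method runs while the stubbing is still open, and if it touches another mock, Mockito reports the stubbing as unfinished. A minimal, self-contained sketch of the pitfall and the usual workaround; the Counter class is made up and has nothing to do with TestBalancer:

{code:java}
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.when;

public class SpyStubbingSketch {
  static class Counter {
    long total() { return 0L; }
  }

  public static void main(String[] args) {
    Counter spied = spy(new Counter());

    // Risky on a spy: when(...) invokes the real total() while the stubbing is
    // still open; if that real method touches another mock, Mockito throws an
    // UnfinishedStubbingException like the one above.
    when(spied.total()).thenReturn(42L);

    // Safer pattern for spies: doReturn(...) never invokes the real method.
    doReturn(42L).when(spied).total();

    System.out.println(spied.total()); // prints 42
  }
}
{code}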



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14714:
--
Attachment: (was: HDFS-14714.004.patch)

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14714:
--
Attachment: HDFS-14714.004.patch

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch, HDFS-14714.004.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-08-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911565#comment-16911565
 ] 

Ayush Saxena commented on HDFS-14739:
-

Thanks [~xuzq_zander] for confirming. I checked the first issue and I am able 
to reproduce it.

Something like this fixes it for me. Please give it a check and see whether it 
works for you, or whether something better is possible.
{code:java}
String childPath;
if (src.equals(Path.SEPARATOR)) {
  childPath = Path.SEPARATOR + child;
} else {
  childPath = src + Path.SEPARATOR + child;
}
HdfsFileStatus dirStatus = getMountPointStatus(childPath, 0, date);
{code}


The second one also seems genuine. I guess we need to suppress the exception 
rather than throwing {{Default Ns not enabled}} back to the client; we should 
have behaviour similar to the NameNode's when there are no entries.

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Priority: Major
> Attachments: image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1* 
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead 
> of throwing an IOException when dfs.federation.router.default.nameservice.enable 
> is false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14311:
---
Summary: Multi-threading conflict at layoutVersion when loading block pool 
storage  (was: multi-threading conflict at layoutVersion when loading block 
pool storage)

> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at 
> StorageInfo.layoutVersion while loading the block pool storage.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 
> {panel}
>  
> root cause:
> BlockPoolSliceStorage instance is shared for all storage locations recover 
> transition. In BlockPoolSliceStorage.doTransition, it will read the old 
> layoutVersion from local storage, compare with current DataNode version, then 
> do upgrade. In doUpgrade, add the 

[jira] [Updated] (HDFS-14756) RBF: getQuotaUsage may ignore some folders

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14756:
--
Attachment: HDFS-14756.001.patch

> RBF: getQuotaUsage may ignore some folders
> --
>
> Key: HDFS-14756
> URL: https://issues.apache.org/jira/browse/HDFS-14756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14756.001.patch
>
>
> {{getValidQuotaLocations}} wants to filter out duplicate subfolders, but it uses 
> the wrong method to determine the parent folder. With this logic, if we have 2 
> mount points like /miui and /miuiads, then /miuiads will be ignored.
> {code:java}
> private List<RemoteLocation> getValidQuotaLocations(String path)
>     throws IOException {
>   final List<RemoteLocation> locations = getQuotaRemoteLocations(path);
>   // NameService -> Locations
>   ListMultimap<String, RemoteLocation> validLocations =
>       ArrayListMultimap.create();
>   for (RemoteLocation loc : locations) {
>     final String nsId = loc.getNameserviceId();
>     final Collection<RemoteLocation> dests = validLocations.get(nsId);
>     // Ensure the paths in the same nameservice is different.
>     // Do not include parent-child paths.
>     boolean isChildPath = false;
>     for (RemoteLocation d : dests) {
>       if (StringUtils.startsWith(loc.getDest(), d.getDest())) {
>         isChildPath = true;
>         break;
>       }
>     }
>     if (!isChildPath) {
>       validLocations.put(nsId, loc);
>     }
>   }
>   return Collections
>       .unmodifiableList(new ArrayList<>(validLocations.values()));
> }
> {code}
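A stricter parent/child check is sketched below. It only illustrates the idea (require the prefix to end at a path-separator boundary), and is not necessarily what the attached patch does:

{code:java}
// Illustrative helper: treat "child" as a descendant of "parent" only when the
// prefix ends at a path-component boundary, so /miuiads is NOT a child of /miui.
static boolean isParentOf(String parent, String child) {
  if (child.equals(parent)) {
    return true;
  }
  String prefix = parent.endsWith("/") ? parent : parent + "/";
  return child.startsWith(prefix);
}
{code}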



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14756) RBF: getQuotaUsage may ignore some folders

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14756:
--
Status: Patch Available  (was: Open)

Uploaded initial patch

> RBF: getQuotaUsage may ignore some folders
> --
>
> Key: HDFS-14756
> URL: https://issues.apache.org/jira/browse/HDFS-14756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14756.001.patch
>
>
> {{getValidQuotaLocations}} wants to filter out duplicate subfolders, but it uses 
> the wrong method to determine the parent folder. With this logic, if we have 2 
> mount points like /miui and /miuiads, then /miuiads will be ignored.
> {code:java}
> private List<RemoteLocation> getValidQuotaLocations(String path)
>     throws IOException {
>   final List<RemoteLocation> locations = getQuotaRemoteLocations(path);
>   // NameService -> Locations
>   ListMultimap<String, RemoteLocation> validLocations =
>       ArrayListMultimap.create();
>   for (RemoteLocation loc : locations) {
>     final String nsId = loc.getNameserviceId();
>     final Collection<RemoteLocation> dests = validLocations.get(nsId);
>     // Ensure the paths in the same nameservice is different.
>     // Do not include parent-child paths.
>     boolean isChildPath = false;
>     for (RemoteLocation d : dests) {
>       if (StringUtils.startsWith(loc.getDest(), d.getDest())) {
>         isChildPath = true;
>         break;
>       }
>     }
>     if (!isChildPath) {
>       validLocations.put(nsId, loc);
>     }
>   }
>   return Collections
>       .unmodifiableList(new ArrayList<>(validLocations.values()));
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298026=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298026
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 16:49
Start Date: 20/Aug/19 16:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315794716
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   Thank you for the info. Here we are only modifying ACLs for add/remove/set 
Acl, and I think that can be done on the protobuf structures, like it was done 
before in OmKeyInfo.java (add/remove/setAcl). Keeping it that way, we avoid 
unnecessary protobuf conversions.
   If you feel this is the way to do it, I am okay with it.
   
   And this will help not only ACLs but bucket/key creation as well, since right 
now Bucket/KeyInfo does a protobuf -> internal Ozone object conversion. That can 
also be avoided. (So, on each key creation we don't need to convert the ACLs set 
during key creation from proto to OzoneAcl objects.)
   
   > And this will help not only ACLs but bucket/key creation as well, since right 
   > now Bucket/KeyInfo does a protobuf -> internal Ozone object conversion. That can 
   > also be avoided. (So, on each key creation we don't need to convert the ACLs set 
   > during key creation from proto to OzoneAcl objects.)
   
   Because currently OmKeyInfo maintains direct protobuf structures, and even 
checkAccess uses the List<OzoneAclInfo> to check access, I am not sure what 
additional changes this requires, as this PR is changing from proto to OzoneAcl 
in OmKeyInfo.java. (My point is to do the vice-versa.)
   
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298026)
Time Spent: 4h 50m  (was: 4h 40m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu comment on HDDS-1884
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also address above comment to move 
> common logic to AclUtils.
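A minimal sketch of what such a utility could look like, assuming plain OzoneAcl equality is enough for de-duplication; the real implementation may instead need to merge the rights bits for an existing grantee:

{code:java}
import java.util.List;

import org.apache.hadoop.ozone.OzoneAcl;

// Sketch only: the real OzoneAclUtil may merge rights for an existing grantee
// rather than relying purely on OzoneAcl#equals.
public final class OzoneAclUtilSketch {
  private OzoneAclUtilSketch() {
  }

  public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl) {
    if (existingAcls == null || newAcl == null || existingAcls.contains(newAcl)) {
      return false; // nothing to add
    }
    return existingAcls.add(newAcl);
  }

  public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl) {
    if (existingAcls == null || newAcl == null) {
      return false;
    }
    return existingAcls.remove(newAcl);
  }
}
{code}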



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298027=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298027
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 16:49
Start Date: 20/Aug/19 16:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315794716
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   Thank you for the info. Here we are only modifying ACLs for add/remove/set 
Acl, and I think that can be done on the protobuf structures, like it was done 
before in OmKeyInfo.java (add/remove/setAcl). Keeping it that way, we avoid 
unnecessary protobuf conversions.
   If you feel this is the way to do it, I am okay with it.
   
   And this will help not only ACLs but bucket/key creation as well, since right 
now Bucket/KeyInfo does a protobuf -> internal Ozone object conversion. That can 
also be avoided. (So, on each key creation we don't need to convert the ACLs set 
during key creation from proto to OzoneAcl objects.)
   
   > And this will help not only ACLs but bucket/key creation as well, since right 
   > now Bucket/KeyInfo does a protobuf -> internal Ozone object conversion. That can 
   > also be avoided. (So, on each key creation we don't need to convert the ACLs set 
   > during key creation from proto to OzoneAcl objects.)
   
   Because currently OmKeyInfo maintains direct protobuf structures, and even 
checkAccess uses the `List<OzoneAclInfo>` to check access, I am not sure what 
additional changes this requires, as this PR is changing from proto to OzoneAcl 
in OmKeyInfo.java. (My point is to do the vice-versa.)
   
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298027)
Time Spent: 5h  (was: 4h 50m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=298021=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298021
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 20/Aug/19 16:46
Start Date: 20/Aug/19 16:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1263: 
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r315794716
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   Thank you for the info. Here we are only modifying ACLs for add/remove/set 
Acl, and I think we can perform that directly on the protobuf structures, like 
it was done before in OmKeyInfo.java (add/remove/setAcl). Keeping it this way 
avoids unnecessary protobuf conversions.
   If you feel this is the way to do it, I am okay with it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298021)
Time Spent: 4h 40m  (was: 4.5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> This Jira is created based on @xiaoyu's comment on HDDS-1884:
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example,
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OMBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-08-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911540#comment-16911540
 ] 

Hadoop QA commented on HDFS-14564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m  
9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  7s{color} 
| 

[jira] [Commented] (HDFS-14666) When using nfs3, the renaming fails when the target file exists.

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911542#comment-16911542
 ] 

Wei-Chiu Chuang commented on HDFS-14666:


Ah, I see what you're saying. Looks like a bug. I am not familiar enough with 
native file systems to tell whether the right fix is to always overwrite. Is 
there a Linux syscall spec or NFS protocol spec that we can reference?
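
For reference, POSIX rename(2) and the NFSv3 RENAME procedure both replace an existing target, so one possible direction is for the gateway to request an overwriting rename. A hedged sketch of that idea follows; it assumes the DFSClient overload that takes Options.Rename flags and does not claim to be the right fix for every case raised above:

{code:java}
// Hedged sketch, not the committed fix: ask HDFS for an overwriting rename so
// the behaviour matches POSIX/NFS semantics; directory and edge cases are not
// handled here.
import java.io.IOException;

import org.apache.hadoop.fs.Options;
import org.apache.hadoop.hdfs.DFSClient;

final class OverwritingRename {
  private OverwritingRename() { }

  static void rename(DFSClient dfsClient, String src, String dst)
      throws IOException {
    // Maps to ClientProtocol#rename2 with the OVERWRITE flag, which replaces
    // an existing destination instead of throwing FileAlreadyExistsException.
    dfsClient.rename(src, dst, Options.Rename.OVERWRITE);
  }
}
{code}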

> When using nfs3, the renaming fails when the target file exists.
> 
>
> Key: HDFS-14666
> URL: https://issues.apache.org/jira/browse/HDFS-14666
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: 1563945191461.jpg, 1563945214054.jpg, 
> HDFS-14666.001.patch
>
>
> mount -t nfs -o vers=3,proto=tcp,nolock  127.0.0.1:/  /home/test/nfs3test/
> cd  /home/test/nfs3test/
> echo "1">1.txt
> echo "2">2.txt
> mv 1.txt 2.txt
> mv prompts to confirm the overwrite; answer y,
> but the rename fails.
> log:
>  
> org.apache.hadoop.fs.FileAlreadyExistsException: rename destination /2.txt 
> already exists
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.validateOverwrite(FSDirRenameOp.java:542)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.unprotectedRenameTo(FSDirRenameOp.java:383)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:296)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:246)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2924)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename2(NameNodeRpcServer.java:1052)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename2(ClientNamenodeProtocolServerSideTranslatorPB.java:657)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1731)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>  at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>  at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1574)
>  at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.rename(RpcProgramNfs3.java:1400)
>  at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.rename(RpcProgramNfs3.java:1328)
>  at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:2259)
>  at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:188)
>  at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>  at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>  at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
>  at 
> org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:133)
>  at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>  at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>  at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>  at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>  at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>  at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>  at 
> 

[jira] [Created] (HDFS-14756) RBF: getQuotaUsage may ignore some folders

2019-08-20 Thread Chen Zhang (Jira)
Chen Zhang created HDFS-14756:
-

 Summary: RBF: getQuotaUsage may ignore some folders
 Key: HDFS-14756
 URL: https://issues.apache.org/jira/browse/HDFS-14756
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Zhang
Assignee: Chen Zhang


{{getValidQuotaLocations}} wants to filter out duplicate subfolders, but it uses 
the wrong method to determine the parent folder. With this logic, if we have 2 
mount points like /miui and /miuiads, then /miuiads will be ignored.
{code:java}
private List<RemoteLocation> getValidQuotaLocations(String path)
    throws IOException {
  final List<RemoteLocation> locations = getQuotaRemoteLocations(path);

  // NameService -> Locations
  ListMultimap<String, RemoteLocation> validLocations =
      ArrayListMultimap.create();

  for (RemoteLocation loc : locations) {
    final String nsId = loc.getNameserviceId();
    final Collection<RemoteLocation> dests = validLocations.get(nsId);

    // Ensure the paths in the same nameservice is different.
    // Do not include parent-child paths.
    boolean isChildPath = false;

    for (RemoteLocation d : dests) {
      if (StringUtils.startsWith(loc.getDest(), d.getDest())) {
        isChildPath = true;
        break;
      }
    }

    if (!isChildPath) {
      validLocations.put(nsId, loc);
    }
  }

  return Collections
      .unmodifiableList(new ArrayList<>(validLocations.values()));
}
{code}
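
A plain startsWith on the destination strings is what makes /miuiads look like a child of /miui. A hedged illustration of a boundary-aware check (only an illustration of the general shape, not the committed patch):

{code:java}
// Illustrative helper only: d is a parent of loc when the prefix match ends at
// a path-separator boundary, so /miuiads is not a child of /miui while
// /miui/ads still is.
final class QuotaPathCheck {
  private QuotaPathCheck() { }

  static boolean isParentOf(String parent, String child) {
    if (!child.startsWith(parent)) {
      return false;
    }
    return child.equals(parent)                   // same path
        || parent.equals("/")                     // root is a parent of everything
        || child.charAt(parent.length()) == '/';  // match ends on a separator
  }
}
{code}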



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14637) Namenode may not replicate blocks to meet the policy after enabling upgradeDomain

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911529#comment-16911529
 ] 

Wei-Chiu Chuang commented on HDFS-14637:


Excellent work! I had a quick glance at the patch and I think I now understand 
the gist of it.

Will try to get another look at this patch later today.

> Namenode may not replicate blocks to meet the policy after enabling 
> upgradeDomain
> -
>
> Key: HDFS-14637
> URL: https://issues.apache.org/jira/browse/HDFS-14637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14637.001.patch, HDFS-14637.002.patch, 
> HDFS-14637.003.patch, HDFS-14637.004.patch, HDFS-14637.005.patch
>
>
> After changing the network topology or placement policy on a cluster and 
> restarting the namenode, the namenode will scan all blocks on the cluster at 
> startup, and check if they meet the current placement policy. If they do not, 
> they are added to the replication queue and the namenode will arrange for 
> them to be replicated to ensure the placement policy is used.
> If you start with a cluster with no UpgradeDomain and then enable 
> UpgradeDomain, then on restart the NN does notice that all the blocks violate 
> the placement policy and adds them to the replication queue. I believe there 
> are some issues in the logic that prevent the blocks from replicating, 
> depending on the setup:
> With UD enabled but no racks configured, and possibly on a 2-rack cluster, 
> the queued replication work never makes any progress, because 
> blockManager.validateReconstructionWork() checks whether the new 
> replica increases the number of racks, and if it does not, it skips it and 
> tries again later.
> {code:java}
> DatanodeStorageInfo[] targets = rw.getTargets();
> if ((numReplicas.liveReplicas() >= requiredRedundancy) &&
> (!isPlacementPolicySatisfied(block)) ) {
>   if (!isInNewRack(rw.getSrcNodes(), targets[0].getDatanodeDescriptor())) {
> // No use continuing, unless a new rack in this case
> return false;
>   }
>   // mark that the reconstruction work is to replicate internal block to a
>   // new rack.
>   rw.setNotEnoughRack();
> }
> {code}
> Additionally, in blockManager.scheduleReconstruction() there is some logic 
> that sets the number of new replicas required to one if the live replicas >= 
> requiredRedundancy:
> {code:java}
> int additionalReplRequired;
> if (numReplicas.liveReplicas() < requiredRedundancy) {
>   additionalReplRequired = requiredRedundancy - numReplicas.liveReplicas()
>   - pendingNum;
> } else {
>   additionalReplRequired = 1; // Needed on a new rack
> }{code}
> With UD, it is possible for 2 new replicas to be needed to meet the block 
> placement policy, if all existing replicas are on nodes with the same domain. 
> For traditional '2 rack redundancy', only 1 new replica would ever have been 
> needed in this scenario.
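
To make the last point concrete, here is a hedged, standalone illustration of the counting argument (plain Java, not BlockManager code; the minDomains knob is an assumption used only for the example):

{code:java}
// Why more than one extra replica can be needed once upgrade domains count.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class UpgradeDomainMath {
  static int additionalReplicasNeeded(List<String> liveReplicaDomains,
      int requiredRedundancy, int minDomains) {
    Set<String> distinctDomains = new HashSet<>(liveReplicaDomains);
    int forRedundancy =
        Math.max(0, requiredRedundancy - liveReplicaDomains.size());
    int forDomains = Math.max(0,
        Math.min(minDomains, requiredRedundancy) - distinctDomains.size());
    return Math.max(forRedundancy, forDomains);
  }

  public static void main(String[] args) {
    // Three live replicas, all in upgrade domain "ud1": redundancy is already
    // satisfied, yet two more replicas are needed to reach three domains.
    System.out.println(additionalReplicasNeeded(
        Arrays.asList("ud1", "ud1", "ud1"), 3, 3));   // prints 2
  }
}
{code}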



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2019-08-20 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911526#comment-16911526
 ] 

Chen Zhang commented on HDFS-13709:
---

Got it, thanks [~jojochuang] for your explanation.

> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13709.002.patch, HDFS-13709.003.patch, 
> HDFS-13709.004.patch, HDFS-13709.005.patch, HDFS-13709.patch
>
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
> a bad disk track may cause data loss.
> For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
> occurs on A's replica data, and someday B and C crash at the same time, the NN 
> will try to replicate data from A but fail. This block is corrupt now, but no 
> one knows, because the NN thinks there is at least 1 healthy replica and keeps 
> trying to replicate it.
> When reading a replica which has data on a bad track, the OS will return an 
> EIO error. If the DN reports the bad block as soon as it gets an EIO, we can 
> find this case ASAP and try to avoid data loss.
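
A hedged sketch of the reporting idea; the interfaces below are illustrative stand-ins, not the actual DataNode types the patch touches:

{code:java}
// Illustration only: if the read during a block transfer fails with an I/O
// error that looks like a media error (EIO), report the local replica as bad
// so the NameNode stops counting it as healthy.
import java.io.IOException;

interface ReplicaReader { void read(String blockId) throws IOException; }
interface BadBlockReporter { void reportBadBlock(String blockId); }

final class EioAwareTransfer {
  static void readWithBadBlockReporting(String blockId, ReplicaReader reader,
      BadBlockReporter reporter) throws IOException {
    try {
      reader.read(blockId);
    } catch (IOException e) {
      String msg = e.getMessage();
      // Linux surfaces a bad track as EIO ("Input/output error").
      if (msg != null && msg.contains("Input/output error")) {
        reporter.reportBadBlock(blockId);
      }
      throw e;
    }
  }
}
{code}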



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911519#comment-16911519
 ] 

Íñigo Goiri commented on HDFS-14583:


For this JIRA, the unit test is good.
However, I would take the opportunity to extend the coverage here.
Let's test a couple of these values here and the full toString().

> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Attachments: HDFS-14583-trunk-0001.patch, HDFS-14583-trunk-002.patch
>
>
> FileStatus#toString() will throw IllegalArgumentException; the stack and 
> error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> Test Code like this:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  
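
Along the lines of the coverage suggestion above, a hedged sketch of what an extended test could look like (the builder usage mirrors the snippet in the description; the exact assertions are assumptions about the final test, not the committed patch):

{code:java}
// Sketch only: check a couple of individual values and then the full
// toString(), which should no longer throw once the empty-symlink case is
// handled.
@Test
public void testHdfsFileStatusToString() throws IOException {
  HdfsFileStatus status = new HdfsFileStatus.Builder()
      .replication(1)
      .blocksize(1024)
      .perm(new FsPermission((short) 777))
      .owner("owner")
      .group("group")
      .symlink(new byte[0])
      .path(new byte[0])
      .fileId(1010)
      .isdir(true)
      .build();
  assertEquals("owner", status.getOwner());
  assertEquals("group", status.getGroup());
  assertNotNull(status.toString());
}
{code}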



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14747) RBF: IsFileClosed should be return false when the file is open in multiple destination

2019-08-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911514#comment-16911514
 ] 

Íñigo Goiri commented on HDFS-14747:


I would do them separately; each one has a proper unit test for coverage, so we 
are better off this way.
For the test, I'm not very keen on having a test that succeeds through 2 
different paths; it should either assert false or check the exception, not both.

> RBF: IsFileClosed should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14747
> URL: https://issues.apache.org/jira/browse/HDFS-14747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14747-trunk-001.patch
>
>
> *IsFileClosed* should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 but is being written, and ns1 doesn't have this file.
> In this case *IsFileClosed* should return false instead of throwing 
> FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14741) RBF: RecoverLease should be return false when the file is open in multiple destination

2019-08-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911512#comment-16911512
 ] 

Íñigo Goiri commented on HDFS-14741:


The test looks good now.
Can we update the comments there?
For the comments in the test, no need to mention the test, just say:
{code}
Test recoverLease when the result is false.
{code}
I would also simplify the comment inside:
{code}
// ns0 returns false and ns1 throws FileNotFoundException
{code}

> RBF: RecoverLease should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14741
> URL: https://issues.apache.org/jira/browse/HDFS-14741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14741-trunk-001.patch, HDFS-14741-trunk-002.patch, 
> HDFS-14741-trunk-003.patch, HDFS-14741-trunk-004.patch, 
> HDFS-14741-trunk-005.patch, HDFS-14741-trunk-006.patch
>
>
> RecoverLease should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 but is being written, and ns1 doesn't have this file.
> In this case *recoverLease* should return false instead of throwing 
> FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14396) Failed to load image from FSImageFile when downgrade from 3.x to 2.x

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911506#comment-16911506
 ] 

Wei-Chiu Chuang commented on HDFS-14396:


Ok. Makes sense. +1 from me.

> Failed to load image from FSImageFile when downgrade from 3.x to 2.x
> 
>
> Key: HDFS-14396
> URL: https://issues.apache.org/jira/browse/HDFS-14396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14396.001.patch, HDFS-14396.002.patch
>
>
> After fixing HDFS-13596, we tried to downgrade from 3.x to 2.x, but the 
> namenode can't start because an exception occurs. The message follows:
> {code:java}
> 2019-01-23 17:22:18,730 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Failed to load image from 
> FSImageFile(file=/data1/hadoopdata/hadoop-namenode/current/fsimage_0025310,
>  cpktTxId=00
> 25310)
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:869)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:742)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:673)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:672)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:839)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1517)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1583)
> 2019-01-23 17:22:19,023 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: Failed to load FSImage file, see error(s) above for more 
> info.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> {code}
> This issue occurs because the 3.x namenode saves the image with EC fields 
> during the upgrade.
> This Jira tries to fix it.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14747) RBF: IsFileClosed should be return false when the file is open in multiple destination

2019-08-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14747:
---
Summary: RBF: IsFileClosed should be return false when the file is open in 
multiple destination  (was: RBF:IsFileClosed should be return false when the 
file is open in multiple destination)

> RBF: IsFileClosed should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14747
> URL: https://issues.apache.org/jira/browse/HDFS-14747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14747-trunk-001.patch
>
>
> *IsFileClosed* should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 but is being written, and ns1 doesn't have this file.
> In this case *IsFileClosed* should return false instead of throwing 
> FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911503#comment-16911503
 ] 

Íñigo Goiri commented on HDFS-14714:


A couple minor comments:
* In the assertEquals in line 1412, we should have the expected value first.
* When we extract FSNamesystem, we could extract BlockManager  from it.
* I would do all the extracts in the top of the for loop.
* Add a general comment to testGetRelicatedBlockStats explaining how you 
generate a corrupted block.


> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}
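
A hedged sketch of one way the Router could implement it, fanning the call out to all nameservices. The invokeConcurrent usage follows the usual RBF pattern and mergeStats() is a hypothetical helper; neither is the committed patch:

{code:java}
// Sketch only: ask every nameservice for its ReplicatedBlockStats and merge
// the per-namespace counters into a federation-wide view. mergeStats() is a
// hypothetical helper that sums the individual fields.
public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
  rpcServer.checkOperation(NameNode.OperationCategory.READ);

  RemoteMethod method = new RemoteMethod("getReplicatedBlockStats");
  Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
  Map<FederationNamespaceInfo, ReplicatedBlockStats> perNamespace =
      rpcClient.invokeConcurrent(
          nss, method, true, false, ReplicatedBlockStats.class);

  // Sum low-redundancy, corrupt, missing, missing-with-replication-one and
  // pending-deletion counters across namespaces.
  return mergeStats(perNamespace.values());
}
{code}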



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2019-08-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911504#comment-16911504
 ] 

Wei-Chiu Chuang commented on HDFS-13709:


bq. In which case we need to backport the patch to branch-2? Usually the bugfix 
and some critical improvements?
At this point, most patches should go into branch-2 too, except for features not 
in Hadoop 2.x (erasure coding).
I would only put critical fixes into branch-2.8, though. It's a quite stable 
release, and the code has diverged quite a lot, so the effort is non-trivial.
bq. Some people open a new Jira to backport to branch-2, some update a new 
patch in the same Jira, which is better in the practice?
Either way. I think if the jira was initially in branch-3, but after a while 
people want to add it to branch-2, then it's better to use a new jira. If the 
patch applies with no conflict or only a trivial conflict, then the same jira 
can be used.

> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13709.002.patch, HDFS-13709.003.patch, 
> HDFS-13709.004.patch, HDFS-13709.005.patch, HDFS-13709.patch
>
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
> a bad disk track may cause data loss.
> For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
> occurs on A's replica data, and someday B and C crash at the same time, the NN 
> will try to replicate data from A but fail. This block is corrupt now, but no 
> one knows, because the NN thinks there is at least 1 healthy replica and keeps 
> trying to replicate it.
> When reading a replica which has data on a bad track, the OS will return an 
> EIO error. If the DN reports the bad block as soon as it gets an EIO, we can 
> find this case ASAP and try to avoid data loss.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1596?focusedWorklogId=297981=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-297981
 ]

ASF GitHub Bot logged work on HDDS-1596:


Author: ASF GitHub Bot
Created on: 20/Aug/19 16:00
Start Date: 20/Aug/19 16:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #861: HDDS-1596. Create 
service endpoint to download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#issuecomment-523080779
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | -1 | mvninstall | 134 | hadoop-ozone in trunk failed. |
   | -1 | compile | 51 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 55 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 721 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 194 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 98 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | -1 | mvninstall | 134 | hadoop-ozone in the patch failed. |
   | -1 | compile | 51 | hadoop-ozone in the patch failed. |
   | -1 | javac | 51 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 27 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 624 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | the patch passed |
   | -1 | findbugs | 108 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-hdds in the patch passed. |
   | -1 | unit | 107 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3809 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml yamllint shellcheck shelldocs 
|
   | uname | Linux 2e0bec708a51 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 094d736 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Commented] (HDFS-14755) [Dynamometer] Hadoop-2 DataNode fail to start

2019-08-20 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911483#comment-16911483
 ] 

Erik Krogen commented on HDFS-14755:


Interesting! That's good to know. I'll move specific review comments onto the 
PR, in that case.

> [Dynamometer] Hadoop-2 DataNode fail to start
> -
>
> Key: HDFS-14755
> URL: https://issues.apache.org/jira/browse/HDFS-14755
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> When using a fsimage of Hadoop-2 with hadoop-dynamometer, datanodes fail to 
> start with the following error.
> {noformat}
> Exception in thread "main" java.lang.IllegalAccessError: tried to access 
> method 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.getUri()Ljava/net/URI; 
> from class org.apache.hadoop.tools.dynamometer.SimulatedDataNodes
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(SimulatedDataNodes.java:113)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.main(SimulatedDataNodes.java:88)
> ./start-component.sh: line 317: kill: (9876) - No such process
> {noformat}
> The cause of this error is an incompatibility of StorageLocation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14755) [Dynamometer] Hadoop-2 DataNode fail to start

2019-08-20 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911480#comment-16911480
 ] 

Takanobu Asanuma commented on HDFS-14755:
-

Hi [~xkrogen]. Thanks for your review and providing the information!

Actually, when using hadoop-dynamometer (latest trunk) with the patch, I have 
confirmed that an HDP-2.6 (hadoop-2.7.3) cluster runs successfully, though I 
haven't confirmed workload jobs yet.

> [Dynamometer] Hadoop-2 DataNode fail to start
> -
>
> Key: HDFS-14755
> URL: https://issues.apache.org/jira/browse/HDFS-14755
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> When using a fsimage of Hadoop-2 with hadoop-dynamometer, datanodes fail to 
> start with the following error.
> {noformat}
> Exception in thread "main" java.lang.IllegalAccessError: tried to access 
> method 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.getUri()Ljava/net/URI; 
> from class org.apache.hadoop.tools.dynamometer.SimulatedDataNodes
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(SimulatedDataNodes.java:113)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.main(SimulatedDataNodes.java:88)
> ./start-component.sh: line 317: kill: (9876) - No such process
> {noformat}
> The cause of this error is an incompatibility of StorageLocation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14722) RBF: GetMountPointStatus should return mountTable information when getFileInfoAll throw IOException

2019-08-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911472#comment-16911472
 ] 

Ayush Saxena commented on HDFS-14722:
-

Thanx [~xuzq_zander] for the patch. Do we need to add multiple entries for the 
test? I guess adding one entry itself should work.

For the user part, can we use {{LambdaTestUtils.doAs(..)}} rather than 
setting/unsetting the login user?

> RBF: GetMountPointStatus should return mountTable information when 
> getFileInfoAll throw IOException
> ---
>
> Key: HDFS-14722
> URL: https://issues.apache.org/jira/browse/HDFS-14722
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14722-trunk-001.patch, HDFS-14722-trunk-002.patch, 
> HDFS-14722-trunk-003.patch, HDFS-14722-trunk-bug-discuss.patch
>
>
> When getFileInfoAll throws an IOException, we should return the mountTable 
> information instead of the superuser information.
> The code looks like:
> {code:java}
> // RouterClientProtocol#getMountPointStatus
> try {
>   String mName = name.startsWith("/") ? name : "/" + name;
>   MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
>   MountTable entry = mountTable.getMountPoint(mName);
>   if (entry != null) {
> RemoteMethod method = new RemoteMethod("getFileInfo",
> new Class[] {String.class}, new RemoteParam());
> HdfsFileStatus fInfo = getFileInfoAll(
> entry.getDestinations(), method, mountStatusTimeOut);
> if (fInfo != null) {
>   permission = fInfo.getPermission();
>   owner = fInfo.getOwner();
>   group = fInfo.getGroup();
>   childrenNum = fInfo.getChildrenNum();
> } else {
>   permission = entry.getMode();
>   owner = entry.getOwnerName();
>   group = entry.getGroupName();
> }
>   }
> } catch (IOException e) {
>   LOG.error("Cannot get mount point: {}", e.getMessage());
> }
> {code}
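
One possible shape of that change, sketched directly against the snippet above (the surrounding fields and helpers are taken from the quoted code; the reordering is the only new part, and it is an assumption rather than the committed fix):

{code:java}
// Sketch only: seed the values from the mount-table entry first, so an
// IOException from getFileInfoAll leaves those values in place instead of
// falling back to the superuser defaults.
try {
  String mName = name.startsWith("/") ? name : "/" + name;
  MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
  MountTable entry = mountTable.getMountPoint(mName);
  if (entry != null) {
    permission = entry.getMode();
    owner = entry.getOwnerName();
    group = entry.getGroupName();

    RemoteMethod method = new RemoteMethod("getFileInfo",
        new Class[] {String.class}, new RemoteParam());
    HdfsFileStatus fInfo = getFileInfoAll(
        entry.getDestinations(), method, mountStatusTimeOut);
    if (fInfo != null) {
      permission = fInfo.getPermission();
      owner = fInfo.getOwner();
      group = fInfo.getGroup();
      childrenNum = fInfo.getChildrenNum();
    }
  }
} catch (IOException e) {
  LOG.error("Cannot get mount point: {}", e.getMessage());
}
{code}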



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14276) [SBN read] Reduce tailing overhead

2019-08-20 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911469#comment-16911469
 ] 

Erik Krogen commented on HDFS-14276:


Hi [~ayushtkn], I just took a look at v01, LGTM. +1

> [SBN read] Reduce tailing overhead
> --
>
> Key: HDFS-14276
> URL: https://issues.apache.org/jira/browse/HDFS-14276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Affects Versions: 3.3.0
> Environment: Hardware: 4-node cluster, each node has 4 core, Xeon 
> 2.5Ghz, 25GB memory.
> Software: CentOS 7.4, CDH 6.0 + Consistent Reads from Standby, Kerberos, SSL, 
> RPC encryption + Data Transfer Encryption.
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14276-01.patch, HDFS-14276.000.patch, Screen Shot 
> 2019-02-12 at 10.51.41 PM.png, Screen Shot 2019-02-14 at 11.50.37 AM.png
>
>
> When Observer sets {{dfs.ha.tail-edits.period}} = {{0ms}}, it tails edit log 
> continuously in order to fetch the latest edits, but there is a lot of 
> overhead in doing so.
> Critically, edit log tailer should _not_ update NameDirSize metric every 
> time. It has nothing to do with fetching edits, and it involves lots of 
> directory space calculation.
> Profiler suggests a non-trivial chunk of time is spent for nothing.
> Other than this, the biggest overhead is in the communication to 
> serialize/deserialize messages to/from JNs. I am looking for ways to reduce 
> the cost because it's burning 30% of my CPU time even when the cluster is 
> idle.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911459#comment-16911459
 ] 

Chen Zhang commented on HDFS-14714:
---

Thanks [~ayushtkn] for your review, {{DFSTestUtil.waitCorruptReplicas(...)}} 
works here.

Uploaded patch v3

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-20 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14714:
--
Attachment: HDFS-14714.003.patch

> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring.
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2019-08-20 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911452#comment-16911452
 ] 

Erik Krogen edited comment on HDFS-13977 at 8/20/19 3:22 PM:
-

Thanks for taking a look [~shv] and [~jojochuang]!

{quote}
I understand this becomes a problem when people stop using 
EditLogFileOutputStream along with QuorumOutputStream. When they use both 
FileOutputStream would trigger the sync, which is performed uniformly for all 
output streams both File and Quorum. But if one uses QuorumOutputStream only, 
then sync is never triggered as you discovered.
{quote}
Yes, great point. This could help to explain why this issue hasn't been more 
widespread.

{quote}
Could you please give more details about the test. I understand the mocked 
spyLoggers counts number of calls to sendEdits(). With your fix sendEdits() is 
called only once on each of the three streams. Why is it called 5 times when 
sync is never enforced (shouldForceSync() = false)?
{quote}
With my fix reverted from {{QuorumOutputStream}}, {{sendEdits()}} is never 
called. There are, however, 5 _total interactions_ with each mock, which you 
may be confusing with interactions to {{sendEdits()}}. See the test failure 
output with the fix reverted:
{code}
Wanted but not invoked:
asyncLogger.sendEdits(
1L,
1L,
,

);
-> at 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit.testFSEditLogAutoSyncToQuorumStream(TestQuorumJournalManagerUnit.java:255)

However, there were exactly 5 interactions with this mock:
asyncLogger.getJournalState();
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournalState(AsyncLoggerSet.java:212)

asyncLogger.newEpoch(1L);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.newEpoch(AsyncLoggerSet.java:231)

asyncLogger.setEpoch(1L);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.setEpoch(AsyncLoggerSet.java:66)

asyncLogger.startLogSegment(1L, -65);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.startLogSegment(AsyncLoggerSet.java:240)

asyncLogger.startLogSegment(1L, -65);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.startLogSegment(AsyncLoggerSet.java:240)
{code}
There were 0 calls to {{sendEdits()}}, but 5 other calls on the mock.

Edit: Also attached v001 patch fixing the checkstyle issue. I just removed the 
locally created {{conf}} and used the one from the test setup.


was (Author: xkrogen):
Thanks for taking a look [~shv] and [~jojochuang]!

{quote}
I understand this becomes a problem when people stop using 
EditLogFileOutputStream along with QuorumOutputStream. When they use both 
FileOutputStream would trigger the sync, which is performed uniformly for all 
output streams both File and Quorum. But if one uses QuorumOutputStream only, 
then sync is never triggered as you discovered.
{quote}
Yes, great point. This could help to explain why this issue hasn't been more 
widespread.

{quote}
Could you please give more details about the test. I understand the mocked 
spyLoggers counts number of calls to sendEdits(). With your fix sendEdits() is 
called only once on each of the three streams. Why is it called 5 times when 
sync is never enforced (shouldForceSync() = false)?
{quote}
With my fix reverted from {{QuorumOutputStream}}, {{sendEdits()}} is never 
called. There are, however, 5 _total interactions_ with each mock, which you 
may be confusing with interactions to {{sendEdits()}}. See the test failure 
output with the fix reverted:
{code}
Wanted but not invoked:
asyncLogger.sendEdits(
1L,
1L,
,

);
-> at 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit.testFSEditLogAutoSyncToQuorumStream(TestQuorumJournalManagerUnit.java:255)

However, there were exactly 5 interactions with this mock:
asyncLogger.getJournalState();
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournalState(AsyncLoggerSet.java:212)

asyncLogger.newEpoch(1L);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.newEpoch(AsyncLoggerSet.java:231)

asyncLogger.setEpoch(1L);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.setEpoch(AsyncLoggerSet.java:66)

asyncLogger.startLogSegment(1L, -65);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.startLogSegment(AsyncLoggerSet.java:240)

asyncLogger.startLogSegment(1L, -65);
-> at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.startLogSegment(AsyncLoggerSet.java:240)
{code}
There were 0 calls to {{sendEdits()}}, but 5 other calls on the mock.

> NameNode can kill itself if it tries to send too many txns to a QJM 
> simultaneously
> --
>
> Key: HDFS-13977
> URL: https://issues.apache.org/jira/browse/HDFS-13977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  
