[GitHub] [hadoop-ozone] prashantpogde commented on issue #783: HDDS-2976. Recon throws error while trying to get snapshot over https

2020-04-10 Thread GitBox
prashantpogde commented on issue #783: HDDS-2976. Recon throws error while 
trying to get snapshot over https
URL: https://github.com/apache/hadoop-ozone/pull/783#issuecomment-612322340
 
 
   Done





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
bharatviswa504 commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406973632
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -106,6 +106,13 @@ protected UserVolumeInfo addVolumeToOwnerList(UserVolumeInfo volumeList,
   objectID = volumeList.getObjectID();
 }
 
+// Sanity check, a user should not own same volume twice
+// TODO: May want to remove this due to perf if user owns a lot of volumes.
+if (prevVolList.contains(volume)) {
 
 Review comment:
   Can we document this behavior, so that the end user knows about it?
   





[jira] [Updated] (HDDS-3368) Ozone filesystem jar should not include webapps folder

2020-04-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3368:
-
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Ozone filesystem jar should not include webapps folder
> --
>
> Key: HDDS-3368
> URL: https://issues.apache.org/jira/browse/HDDS-3368
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> hadoop-ozone-filesystem-lib-current jar includes webapps folder of hdds 
> datanode. 
> This should not be included in the filesystem jar.






[jira] [Reopened] (HDDS-3291) Write operation when both OM followers are shutdown

2020-04-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reopened HDDS-3291:
--

This has been reverted, as this is causing an issue for Hadoop applications.

> Write operation when both OM followers are shutdown
> ---
>
> Key: HDDS-3291
> URL: https://issues.apache.org/jira/browse/HDDS-3291
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Steps taken:
> --
> 1. In an OM HA environment, shut down both OM followers.
> 2. Start a PUT key operation.
> The PUT key operation hangs.
> Cluster details : 
> https://quasar-vwryte-1.quasar-vwryte.root.hwx.site:7183/cmf/home
> Snippet of OM log on LEADER:
> {code:java}
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,250 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 INFO 

[jira] [Updated] (HDDS-3377) Remove guava 26.0-android jar

2020-04-10 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-3377:
--
Description: 
I missed this during HDDS-3000

guava-26.0-android is not used, but if it's in the classpath (it is copied explicitly 
in the pom file), the JVM could potentially load this one and cause a runtime error.

{noformat}
$ find . -name guava*
./hadoop-ozone/ozonefs-lib-legacy/target/classes/libs/META-INF/maven/com.google.guava/guava
./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-26.0-android.jar
./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-28.2-jre.jar
{noformat}

  was:
I missed this during HDDS-3000

guava-26.0-android is not used but if it's in the classpath (copied explicitly 
in pom file), it could potentially load this one and cause runtime error.


> Remove guava 26.0-android jar
> -
>
> Key: HDDS-3377
> URL: https://issues.apache.org/jira/browse/HDDS-3377
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I missed this during HDDS-3000
> guava-26.0-android is not used but if it's in the classpath (copied 
> explicitly in pom file), it could potentially load this one and cause runtime 
> error.
> {noformat}
> $ find . -name guava*
> ./hadoop-ozone/ozonefs-lib-legacy/target/classes/libs/META-INF/maven/com.google.guava/guava
> ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-26.0-android.jar
> ./hadoop-ozone/dist/target/ozone-0.4.0.7.1.1.0-SNAPSHOT/share/ozone/lib/guava-28.2-jre.jar
> {noformat}
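
A quick way to check which guava flavor the JVM actually picked up at runtime (an illustrative snippet added here, not part of the original report):

{code:java}
// Prints the location of the jar that provided Guava's ImmutableList, which
// reveals whether the android or the jre flavor won on the classpath.
public class GuavaProbe {
  public static void main(String[] args) {
    System.out.println(com.google.common.collect.ImmutableList.class
        .getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}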






[GitHub] [hadoop-ozone] vivekratnavel commented on issue #783: HDDS-2976. Recon throws error while trying to get snapshot over https

2020-04-10 Thread GitBox
vivekratnavel commented on issue #783: HDDS-2976. Recon throws error while 
trying to get snapshot over https
URL: https://github.com/apache/hadoop-ozone/pull/783#issuecomment-612170412
 
 
   @prashantpogde Can you rebase this branch with the current master, so we can 
get a clean CI run? Thanks!





[jira] [Commented] (HDDS-3377) Remove guava 26.0-android jar

2020-04-10 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080857#comment-17080857
 ] 

Wei-Chiu Chuang commented on HDDS-3377:
---

Looking at git history, this guava was added in HDDS-1382. [~elek] thoughts?

> Remove guava 26.0-android jar
> -
>
> Key: HDDS-3377
> URL: https://issues.apache.org/jira/browse/HDDS-3377
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I missed this during HDDS-3000
> guava-26.0-android is not used but if it's in the classpath (copied 
> explicitly in pom file), it could potentially load this one and cause runtime 
> error.






[jira] [Updated] (HDDS-3377) Remove guava 26.0-android jar

2020-04-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3377:
-
Labels: pull-request-available  (was: )

> Remove guava 26.0-android jar
> -
>
> Key: HDDS-3377
> URL: https://issues.apache.org/jira/browse/HDDS-3377
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>
> I missed this during HDDS-3000
> guava-26.0-android is not used but if it's in the classpath (copied 
> explicitly in pom file), it could potentially load this one and cause runtime 
> error.






[GitHub] [hadoop-ozone] jojochuang opened a new pull request #808: HDDS-3377. Remove guava 26.0-android jar.

2020-04-10 Thread GitBox
jojochuang opened a new pull request #808: HDDS-3377. Remove guava 26.0-android 
jar.
URL: https://github.com/apache/hadoop-ozone/pull/808
 
 
   ## What changes were proposed in this pull request?
   
   Do not explicitly copy guava 26.0-android to the lib/ directory, because we 
already depend on guava 28.2-jre.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3377
   
   ## How was this patch tested?
   
   (Please explain how this patch was tested. Ex: unit tests, manual tests)
   (If this patch involves UI changes, please attach a screen-shot; otherwise, 
remove this)
   





[jira] [Assigned] (HDDS-3377) Remove guava 26.0-android jar

2020-04-10 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDDS-3377:
-

Assignee: Wei-Chiu Chuang

> Remove guava 26.0-android jar
> -
>
> Key: HDDS-3377
> URL: https://issues.apache.org/jira/browse/HDDS-3377
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> I missed this during HDDS-3000
> guava-26.0-android is not used but if it's in the classpath (copied 
> explicitly in pom file), it could potentially load this one and cause runtime 
> error.






[jira] [Created] (HDDS-3377) Remove guava 26.0-android jar

2020-04-10 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDDS-3377:
-

 Summary: Remove guava 26.0-android jar
 Key: HDDS-3377
 URL: https://issues.apache.org/jira/browse/HDDS-3377
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Wei-Chiu Chuang


I missed this during HDDS-3000

guava-26.0-android is not used, but if it's in the classpath (it is copied explicitly 
in the pom file), the JVM could potentially load this one and cause a runtime error.






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #804: HDDS-3368. Ozone filesystem jar should not include webapps folder

2020-04-10 Thread GitBox
bharatviswa504 merged pull request #804: HDDS-3368. Ozone filesystem jar should 
not include webapps folder
URL: https://github.com/apache/hadoop-ozone/pull/804
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #804: HDDS-3368. Ozone filesystem jar should not include webapps folder

2020-04-10 Thread GitBox
bharatviswa504 commented on issue #804: HDDS-3368. Ozone filesystem jar should 
not include webapps folder
URL: https://github.com/apache/hadoop-ozone/pull/804#issuecomment-612164785
 
 
   Thank You @vivekratnavel for the contribution.





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
smengcl commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406888217
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -106,6 +106,13 @@ protected UserVolumeInfo addVolumeToOwnerList(UserVolumeInfo volumeList,
   objectID = volumeList.getObjectID();
 }
 
+// Sanity check, a user should not own same volume twice
+// TODO: May want to remove this due to perf if user owns a lot of volumes.
+if (prevVolList.contains(volume)) {
+  throw new IOException("Invalid operation: User " + owner +
+  " is about to own a same volume " + volume + " twice!" +
+  " Check for DB consistency error.");
+}
 
 // Add the new volume to the list
 prevVolList.add(volume);
 
 Review comment:
   Will do. Thanks :)





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
smengcl commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406888013
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -106,6 +106,13 @@ protected UserVolumeInfo addVolumeToOwnerList(UserVolumeInfo volumeList,
   objectID = volumeList.getObjectID();
 }
 
+// Sanity check, a user should not own same volume twice
+// TODO: May want to remove this due to perf if user owns a lot of volumes.
+if (prevVolList.contains(volume)) {
 
 Review comment:
   I was also thinking of adding a new type of exception. But now I think 
@dineshchitlangia's suggestion might be better -- handle it silently and maybe 
log a warning.
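
   A minimal sketch of that warn-and-skip variant (my illustration only: the 
surrounding OM request classes are reduced to a plain helper here, and the 
logger field is an assumption):

   ```java
   import java.util.List;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Sketch: tolerate the duplicate instead of failing the whole request.
   public final class OwnerListSanityCheck {
     private static final Logger LOG =
         LoggerFactory.getLogger(OwnerListSanityCheck.class);

     private OwnerListSanityCheck() { }

     /** Adds volume to the owner's volume list unless it is already present. */
     static void addVolumeOnce(List<String> prevVolList, String owner,
         String volume) {
       if (prevVolList.contains(volume)) {
         LOG.warn("User {} already owns volume {}; skipping duplicate add. "
             + "This may indicate a DB consistency problem.", owner, volume);
         return;
       }
       prevVolList.add(volume);
     }
   }
   ```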





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
smengcl commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406887127
 
 

 ##
 File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerSetOwner.java
 ##
 @@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+/**
+ * Test OzoneManager list volume operation under combinations of configs.
+ */
+public class TestOzoneManagerSetOwner {
+
+  @Rule
+  public Timeout timeout = new Timeout(120_000);
+
+  private UserGroupInformation loginUser;
+
+  @Before
+  public void init() throws Exception {
+loginUser = UserGroupInformation.getLoginUser();
+  }
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   */
+  private MiniOzoneCluster startCluster(boolean aclEnabled) throws Exception {
+
+OzoneConfiguration conf = new OzoneConfiguration();
+String clusterId = UUID.randomUUID().toString();
+String scmId = UUID.randomUUID().toString();
+String omId = UUID.randomUUID().toString();
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, "user1");
+conf.setInt(OZONE_SCM_RATIS_PIPELINE_LIMIT, 10);
+
+// Use native impl here, default impl doesn't do actual checks
+conf.set(OZONE_ACL_AUTHORIZER_CLASS, OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+// Note: OM doesn't support live config reloading
+conf.setBoolean(OZONE_ACL_ENABLED, aclEnabled);
+
 
 Review comment:
   Yes, I acknowledge that. This integration test is a quick hack of the test 
from https://github.com/apache/hadoop-ozone/pull/696, since I discovered this 
issue while debugging that PR. It is just a POC for now. I will figure out a 
way to write this in `TestOMVolumeSetOwnerRequest`.





[jira] [Created] (HDDS-3376) Exclude unwanted jars from Ozone Filesystem jar

2020-04-10 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-3376:


 Summary: Exclude unwanted jars from Ozone Filesystem jar
 Key: HDDS-3376
 URL: https://issues.apache.org/jira/browse/HDDS-3376
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian


This is a follow-up Jira to HDDS-3368 to clean up unwanted jars, like jackson, 
that are being packaged with the Ozone Filesystem jar.






[jira] [Updated] (HDDS-3375) S3A failing complete multipart upload with Ozone S3

2020-04-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3375:
-
Status: Patch Available  (was: Open)

> S3A failing complete multipart upload with Ozone S3
> ---
>
> Key: HDDS-3375
> URL: https://issues.apache.org/jira/browse/HDDS-3375
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> javax.xml.bind.UnmarshalException: unexpected element (uri:"", 
> local:"CompleteMultipartUpload"). Expected elements are 
> <{http://s3.amazonaws.com/doc/2006-03-01/}CompleteMultipartUpload>,<{http://s3.amazonaws.com/doc/2006-03-01/}Part>
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:744)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:262)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:257)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:124)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1149)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:574)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:556)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:168)
> at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:374)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:613)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3132)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:852)
> {code}
> It seems the namespace http://s3.amazonaws.com/doc/2006-03-01/ is expected on 
> the element, but that namespace is not defined in the class 
> CompleteMultipartUploadRequest.
> Reported by [~sammichen]






[jira] [Updated] (HDDS-3093) Allow forced overwrite of local file

2020-04-10 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-3093:

Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Allow forced overwrite of local file
> 
>
> Key: HDDS-3093
> URL: https://issues.apache.org/jira/browse/HDDS-3093
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone sh key get}} refuses to overwrite an existing local file.  I would like 
> to add a {{--force}} flag (default: false) to allow overriding this behavior, 
> making it easier to repeatedly get a key without having to delete it locally 
> first.
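
For illustration, a minimal sketch of how such a flag could look with picocli (hypothetical names, not the actual Ozone CLI code):

{code:java}
import java.io.File;
import java.io.IOException;
import picocli.CommandLine.Option;

// Hypothetical illustration of the requested behavior; the class and method
// names are assumptions, not the real Ozone shell code.
public class KeyGetForceExample {

  @Option(names = {"--force", "-f"},
      description = "Overwrite the local destination file if it already exists",
      defaultValue = "false")
  private boolean force;

  void checkDestination(File dest) throws IOException {
    // Without --force, keep the old behavior: refuse to overwrite.
    if (dest.exists() && !force) {
      throw new IOException(dest + " exists. Pass --force to overwrite it.");
    }
  }
}
{code}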






[GitHub] [hadoop-ozone] dineshchitlangia merged pull request #800: HDDS-3093. Allow forced overwrite of local file

2020-04-10 Thread GitBox
dineshchitlangia merged pull request #800: HDDS-3093. Allow forced overwrite of 
local file
URL: https://github.com/apache/hadoop-ozone/pull/800
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
bharatviswa504 commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406866823
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -106,6 +106,13 @@ protected UserVolumeInfo addVolumeToOwnerList(UserVolumeInfo volumeList,
   objectID = volumeList.getObjectID();
 }
 
+// Sanity check, a user should not own same volume twice
+// TODO: May want to remove this due to perf if user owns a lot of volumes.
+if (prevVolList.contains(volume)) {
 
 Review comment:
   Instead of returning an error like ACCESS_DENIED, I am thinking we could 
return a code saying that this user is already the owner of this volume.
   ACCESS_DENIED does not look appropriate here.
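   
   A rough sketch of that idea (VOLUME_ALREADY_OWNED is a hypothetical result 
code here; a real patch would have to add it to OMException.ResultCodes and the 
matching protobuf enum):
   
   ```java
   import java.util.List;
   import org.apache.hadoop.ozone.om.exceptions.OMException;

   // Sketch: fail the duplicate add with a dedicated, descriptive result code
   // instead of ACCESS_DENIED.
   public final class DuplicateOwnerCheck {

     private DuplicateOwnerCheck() { }

     static void rejectDuplicate(List<String> prevVolList, String owner,
         String volume) throws OMException {
       if (prevVolList.contains(volume)) {
         throw new OMException("User " + owner + " already owns volume "
             + volume,
             OMException.ResultCodes.VOLUME_ALREADY_OWNED); // hypothetical code
       }
     }
   }
   ```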
   
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
bharatviswa504 commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406867227
 
 

 ##
 File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerSetOwner.java
 ##
 @@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.UUID;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+/**
+ * Test OzoneManager list volume operation under combinations of configs.
+ */
+public class TestOzoneManagerSetOwner {
+
+  @Rule
+  public Timeout timeout = new Timeout(120_000);
+
+  private UserGroupInformation loginUser;
+
+  @Before
+  public void init() throws Exception {
+loginUser = UserGroupInformation.getLoginUser();
+  }
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   */
+  private MiniOzoneCluster startCluster(boolean aclEnabled) throws Exception {
+
+OzoneConfiguration conf = new OzoneConfiguration();
+String clusterId = UUID.randomUUID().toString();
+String scmId = UUID.randomUUID().toString();
+String omId = UUID.randomUUID().toString();
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, "user1");
+conf.setInt(OZONE_SCM_RATIS_PIPELINE_LIMIT, 10);
+
+// Use native impl here, default impl doesn't do actual checks
+conf.set(OZONE_ACL_AUTHORIZER_CLASS, OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+// Note: OM doesn't support live config reloading
+conf.setBoolean(OZONE_ACL_ENABLED, aclEnabled);
+
 
 Review comment:
   We don't need an IT test to test this behavior. We have a UT, 
TestOMVolumeSetOwnerRequest, which we can use to cover this case.





[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
dineshchitlangia commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406866343
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -143,6 +143,11 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
 
   oldOwner = omVolumeArgs.getOwnerName();
 
+  if (oldOwner.equals(newOwner)) {
+throw new OMException("Owner of volume " + volume + " is already " +
+newOwner, OMException.ResultCodes.ACCESS_DENIED);
+  }
+
 
 Review comment:
   Like the previous comment, we can make a similar change here too.
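   
   For illustration, the shape of that change (a sketch only; the response 
plumbing of `validateAndUpdateCache` is omitted, and the logger field is an 
assumption):
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Sketch: treat a no-op owner change as success with a warning instead of
   // throwing ACCESS_DENIED.
   public final class SetOwnerNoOpCheck {
     private static final Logger LOG =
         LoggerFactory.getLogger(SetOwnerNoOpCheck.class);

     private SetOwnerNoOpCheck() { }

     /** Returns true when the setOwner request should short-circuit. */
     static boolean isNoOpOwnerChange(String volume, String oldOwner,
         String newOwner) {
       if (oldOwner.equals(newOwner)) {
         LOG.warn("Owner of volume {} is already {}; treating setOwner as a"
             + " no-op.", volume, newOwner);
         return true;
       }
       return false;
     }
   }
   ```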





[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
dineshchitlangia commented on a change in pull request #806: HDDS-3374. 
OMVolumeSetOwnerRequest doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806#discussion_r406865345
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -106,6 +106,13 @@ protected UserVolumeInfo addVolumeToOwnerList(UserVolumeInfo volumeList,
   objectID = volumeList.getObjectID();
 }
 
+// Sanity check, a user should not own same volume twice
+// TODO: May want to remove this due to perf if user owns a lot of volumes.
+if (prevVolList.contains(volume)) {
+  throw new IOException("Invalid operation: User " + owner +
+  " is about to own a same volume " + volume + " twice!" +
+  " Check for DB consistency error.");
+}
 
 // Add the new volume to the list
 prevVolList.add(volume);
 
 Review comment:
   Instead of throwing an exception here, I was wondering if we could perform 
the subsequent "add new volume to list" only if !prevVolList.contains(volume), 
and complement it with a WARN log.
   
   ```suggestion
   // Avoid adding a user to the same volume twice
   if (!prevVolList.contains(volume)) {
     // Add the new volume to the list
     prevVolList.add(volume);
     UserVolumeInfo newVolList = UserVolumeInfo.newBuilder()
         .setObjectID(objectID)
         .setUpdateID(txID)
         .addAllVolumeNames(prevVolList).build();
   }
   ```





[jira] [Resolved] (HDDS-3372) Delete HISTORY.txt

2020-04-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3372.
--
Fix Version/s: 0.6.0
   Resolution: Fixed

> Delete HISTORY.txt
> --
>
> Key: HDDS-3372
> URL: https://issues.apache.org/jira/browse/HDDS-3372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During HDDS-2294, the old file was not deleted.
> This Jira aims to remove the old file.






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #805: HDDS-3372. Delete HISTORY.txt

2020-04-10 Thread GitBox
bharatviswa504 merged pull request #805: HDDS-3372. Delete HISTORY.txt
URL: https://github.com/apache/hadoop-ozone/pull/805
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #805: HDDS-3372. Delete HISTORY.txt

2020-04-10 Thread GitBox
bharatviswa504 commented on issue #805: HDDS-3372. Delete HISTORY.txt
URL: https://github.com/apache/hadoop-ozone/pull/805#issuecomment-612132851
 
 
   Thank You @dineshchitlangia for the contribution





[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #807: HDDS-3375. S3A failing complete multipart upload with Ozone S3.

2020-04-10 Thread GitBox
bharatviswa504 opened a new pull request #807: HDDS-3375. S3A failing complete 
multipart upload with Ozone S3.
URL: https://github.com/apache/hadoop-ozone/pull/807
 
 
   ## What changes were proposed in this pull request?
   
   S3A fails the complete multipart upload request because the namespace is 
missing in the request.
   
   The S3 documentation also shows the namespace in the request; in our code 
the namespace is already set on the response, and is only missing on the 
request.
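   
   For context, a minimal sketch of how the namespace can be declared on a JAXB 
request class so unmarshalling accepts the namespaced XML (illustrative only, 
not the exact patch):
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   import javax.xml.bind.annotation.XmlAccessType;
   import javax.xml.bind.annotation.XmlAccessorType;
   import javax.xml.bind.annotation.XmlElement;
   import javax.xml.bind.annotation.XmlRootElement;

   // Declaring the S3 namespace on the root element lets JAXB unmarshal
   // <CompleteMultipartUpload xmlns="http://s3.amazonaws.com/doc/2006-03-01/">.
   @XmlAccessorType(XmlAccessType.FIELD)
   @XmlRootElement(name = "CompleteMultipartUpload",
       namespace = "http://s3.amazonaws.com/doc/2006-03-01/")
   public class CompleteMultipartUploadRequest {

     @XmlElement(name = "Part",
         namespace = "http://s3.amazonaws.com/doc/2006-03-01/")
     private List<Part> partList = new ArrayList<>();

     /** One completed part: its number and the ETag returned on upload. */
     @XmlAccessorType(XmlAccessType.FIELD)
     public static class Part {
       @XmlElement(name = "PartNumber",
           namespace = "http://s3.amazonaws.com/doc/2006-03-01/")
       private int partNumber;

       @XmlElement(name = "ETag",
           namespace = "http://s3.amazonaws.com/doc/2006-03-01/")
       private String eTag;
     }
   }
   ```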
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3375
   
   ## How was this patch tested?
   
   Existing tests should cover this.
   





[jira] [Updated] (HDDS-3375) S3A failing complete multipart upload with Ozone S3

2020-04-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3375:
-
Labels: pull-request-available  (was: )

> S3A failing complete multipart upload with Ozone S3
> ---
>
> Key: HDDS-3375
> URL: https://issues.apache.org/jira/browse/HDDS-3375
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> javax.xml.bind.UnmarshalException: unexpected element (uri:"", 
> local:"CompleteMultipartUpload"). Expected elements are 
> <{http://s3.amazonaws.com/doc/2006-03-01/}CompleteMultipartUpload>,<{http://s3.amazonaws.com/doc/2006-03-01/}Part>
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:744)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:262)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:257)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:124)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1149)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:574)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:556)
> at 
> com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:168)
> at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:374)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:613)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3132)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:852)
> {code}
> It seems the namespace http://s3.amazonaws.com/doc/2006-03-01/ is expected on 
> the element, but that namespace is not defined in the class 
> CompleteMultipartUploadRequest.
> Reported by [~sammichen]






[jira] [Created] (HDDS-3375) S3A failing complete multipart upload with Ozone S3

2020-04-10 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3375:


 Summary: S3A failing complete multipart upload with Ozone S3
 Key: HDDS-3375
 URL: https://issues.apache.org/jira/browse/HDDS-3375
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham



{code:java}
javax.xml.bind.UnmarshalException: unexpected element (uri:"", 
local:"CompleteMultipartUpload"). Expected elements are 
<{http://s3.amazonaws.com/doc/2006-03-01/}CompleteMultipartUpload>,<{http://s3.amazonaws.com/doc/2006-03-01/}Part>
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:744)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:262)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:257)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:124)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1149)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:574)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:556)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:168)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:374)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:613)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3132)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:852)

{code}


It seems the namespace http://s3.amazonaws.com/doc/2006-03-01/ is expected on 
the element, but that namespace is not defined in the class 
CompleteMultipartUploadRequest.

Reported by [~sammichen]












[GitHub] [hadoop-ozone] smengcl commented on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl commented on issue #696: HDDS-3056. Allow users to list volumes they 
have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-612020402
 
 
   I just posted https://github.com/apache/hadoop-ozone/pull/806, might be 
related to the issue.





[jira] [Updated] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3374:
-
Description: 
OMVolumeSetOwnerRequest doesn't seem to check if the user is already the owner.
If the user is already the owner, the request shouldn't proceed to the update 
logic; otherwise the resulting volume list for that user in {{UserVolumeInfo}} 
would have a duplicate volume entry, as demonstrated in the test case.

-It also doesn't seem to remove the volume from the UserVolumeInfo from the 
previous owner.- Checked 
[here|https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L152-L153].

[~bharat]

  was:
1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
previous owner.

[~bharat]


> OMVolumeSetOwnerRequest doesn't check if user is already the owner
> --
>
> Key: HDDS-3374
> URL: https://issues.apache.org/jira/browse/HDDS-3374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OMVolumeSetOwnerRequest doesn't seem to check if the user is already the 
> owner.
> If the user is already the owner, the request shouldn't proceed to the update 
> logic; otherwise the resulting volume list for that user in {{UserVolumeInfo}} 
> would have a duplicate volume entry, as demonstrated in the test case.
> -It also doesn't seem to remove the volume from the UserVolumeInfo from the 
> previous owner.- Checked 
> [here|https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L152-L153].
> [~bharat]






[jira] [Updated] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3374:
-
Status: Patch Available  (was: Open)

> OMVolumeSetOwnerRequest doesn't check if user is already the owner
> --
>
> Key: HDDS-3374
> URL: https://issues.apache.org/jira/browse/HDDS-3374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
> 2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
> previous owner.
> [~bharat]






[jira] [Updated] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3374:
-
Labels: pull-request-available  (was: )

> OMVolumeSetOwnerRequest doesn't check if user is already the owner
> --
>
> Key: HDDS-3374
> URL: https://issues.apache.org/jira/browse/HDDS-3374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> 1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
> 2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
> previous owner.
> [~bharat]






[GitHub] [hadoop-ozone] smengcl opened a new pull request #806: HDDS-3374. OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread GitBox
smengcl opened a new pull request #806: HDDS-3374. OMVolumeSetOwnerRequest 
doesn't check if user is already the owner
URL: https://github.com/apache/hadoop-ozone/pull/806
 
 
   ## What changes were proposed in this pull request?
   
   1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
   2. It also doesn't seem to remove the volume from the UserVolumeInfo from 
the previous owner.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3374
   
   ## How was this patch tested?
   
   The test case (`setOwner` twice on the same volume with the same user) 
should pass.





[jira] [Updated] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3374:
-
Description: 
1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
previous owner.

[~bharat]

  was:
1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
previous owner.


> OMVolumeSetOwnerRequest doesn't check if user is already the owner
> --
>
> Key: HDDS-3374
> URL: https://issues.apache.org/jira/browse/HDDS-3374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> 1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
> 2. It also doesn't seem to remove the volume from the UserVolumeInfo from the 
> previous owner.
> [~bharat]






[jira] [Created] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if the user is already the owner

2020-04-10 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-3374:


 Summary: OMVolumeSetOwnerRequest doesn't check if the user is 
already the owner
 Key: HDDS-3374
 URL: https://issues.apache.org/jira/browse/HDDS-3374
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Siyao Meng
Assignee: Siyao Meng


1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
2. It also doesn't seem to remove the volume from the UserVolumeInfo of the previous owner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3374) OMVolumeSetOwnerRequest doesn't check if user is already the owner

2020-04-10 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3374:
-
Summary: OMVolumeSetOwnerRequest doesn't check if user is already the owner 
 (was: OMVolumeSetOwnerRequest doesn't check if the user is already the owner)

> OMVolumeSetOwnerRequest doesn't check if user is already the owner
> --
>
> Key: HDDS-3374
> URL: https://issues.apache.org/jira/browse/HDDS-3374
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> 1. OMVolumeSetOwnerRequest doesn't check if the user is already the owner
> 2. It also doesn't seem to remove the volume from the UserVolumeInfo of the
> previous owner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes 
they have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout in the new integration test I added.
   The symptom is that the tests succeed if I run each test case separately, but fail on the **second** test when I run all tests together.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, `setOwner()` on the OM would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time (the list already has the volume entry) and **succeed**, which is very weird.
   The result is a **malformed** list in `UserVolumeInfo` for the user; see the `prevVolList` variable in the screenshot below:
   https://user-images.githubusercontent.com/50227127/78987769-cc2eb780-7ae3-11ea-9dc7-544b3783c667.png
   
   This eventually causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and time out because of how `VolumeIterator` and [`OmMetadataManagerImpl#listVolumes`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L825-L828) work.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup at the end of each test case, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 
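   
   To make the failure mode concrete, here is a small self-contained demo of one plausible mechanism for the hang. Assumption on my part: the listing seeks to the start key inclusively, skips the first match, and the caller feeds the last returned name back in as the next start key; the real `VolumeIterator`/`listVolumes` logic may differ in detail.
   
   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;
   
   // Demo only: shows how a duplicated entry in the owner's volume list can
   // keep a seek-and-skip pagination loop from ever advancing.
   public class DuplicateEntryLoopDemo {
   
     // Seek to startKey (inclusive), skip the start key itself, then return
     // up to 'limit' following names.
     static List<String> listVolumes(List<String> vols, String startKey,
         int limit) {
       List<String> out = new ArrayList<>();
       boolean skipped = (startKey == null); // nothing to skip on first page
       for (String v : vols) {
         if (startKey != null && v.compareTo(startKey) < 0) {
           continue; // still seeking
         }
         if (!skipped) {
           skipped = true; // skip the start key itself
           continue;
         }
         out.add(v);
         if (out.size() == limit) {
           break;
         }
       }
       return out;
     }
   
     public static void main(String[] args) {
       // Malformed owner list with a duplicated entry, as in the screenshot.
       List<String> vols = Arrays.asList("vol1", "vol1");
       String startKey = null;
       int pages = 0;
       List<String> batch;
       // The pages guard is only here so the demo terminates; the equivalent
       // it.hasNext() loop in the test has no such guard and hangs.
       while (!(batch = listVolumes(vols, startKey, 1)).isEmpty() && pages < 5) {
         pages++;
         startKey = batch.get(batch.size() - 1); // stays "vol1" forever
         System.out.println("page " + pages + ": " + batch);
       }
     }
   }
   ```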


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes 
they have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout in the new integration test I added.
   The symptom is that the tests succeed if I run each test case separately, but fail on the **second** test when I run all tests together.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, `setOwner()` on the OM would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time (the list already has the volume entry) and **succeed**, which is very weird.
   The result is a **malformed** list in `UserVolumeInfo` for the user; see the `prevVolList` variable in the screenshot below:
   https://user-images.githubusercontent.com/50227127/78987769-cc2eb780-7ae3-11ea-9dc7-544b3783c667.png
   
   This eventually causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and time out because of how `VolumeIterator` and [`OmMetadataManagerImpl#listVolumes`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L825-L828) work.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes 
they have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout in the new integration test I added.
   The symptom is that the tests succeed if I run each test case separately, but fail on the **second** test when I run all tests together.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, the `setOwner()` call on the OM side would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time and **succeed**, which is very weird.
   The result is a **malformed** list in `UserVolumeInfo` for the user; see the `prevVolList` variable in the screenshot below:
   https://user-images.githubusercontent.com/50227127/78987769-cc2eb780-7ae3-11ea-9dc7-544b3783c667.png
   
   This eventually causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and time out because of how `VolumeIterator` and [`OmMetadataManagerImpl#listVolumes`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L825-L828) work.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes 
they have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, the `setOwner()` call on the OM side would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time and **succeed**, which is very weird.
   The result is a **malformed** list in `UserVolumeInfo` for the user; see the `prevVolList` variable in the screenshot below:
   https://user-images.githubusercontent.com/50227127/78987769-cc2eb780-7ae3-11ea-9dc7-544b3783c667.png
   
   This eventually causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and time out because of how `VolumeIterator` and [`OmMetadataManagerImpl#listVolumes`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L825-L828) work.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl edited a comment on issue #696: HDDS-3056. Allow users to list volumes 
they have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, the `setOwner()` call on the OM side would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time and **succeed**, which is very weird.
   The result is a **malformed** list in `UserVolumeInfo` for the user; see the `prevVolList` variable in the screenshot below:
   https://user-images.githubusercontent.com/50227127/78987769-cc2eb780-7ae3-11ea-9dc7-544b3783c667.png
   
   This causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and eventually time out.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on issue #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-10 Thread GitBox
smengcl commented on issue #696: HDDS-3056. Allow users to list volumes they 
have access to, and optionally allow all users to list all volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#issuecomment-611991079
 
 
   I was able to dig into the root cause of the timeout.
   
   It turns out that when a mini ozone cluster launches for a second time in the **same** test class, the `setOwner()` call on the OM side would [add the same volume to the owner list](https://github.com/apache/hadoop-ozone/blob/80e9f0a7238953e41b06d22f0419f04ab31d4212/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java#L156-L158) for a second time and **succeed**, which is very weird. This causes `testAclDisabledListAllDisallowed` to get stuck in the `it.hasNext()` infinite loop and eventually time out.
   
   I am able to confirm my discovery by setting a breakpoint inside 
`addVolumeToOwnerList()`.
   
   If I run only `testAclDisabledListAllDisallowed` directly in IntelliJ, the test case just passes. This makes the problem weirder, because I do call the shutdown function in `MiniOzoneClusterImpl` to do the cleanup, and it did [delete the temp directory for the entire cluster](https://github.com/apache/hadoop-ozone/blob/e2ebbf874d5e33565b27a24a02cfb4cee6330ea1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java#L392). In theory this should have performed the cleanup work.
   
   My questions:
   
   1. Is there some other in-memory cache (`TableCache`) that is accidentally persisted across mini clusters (i.e. not fully cleaned up in `MiniOzoneClusterImpl`)? If so, we just need to fix the test utility.
   
   2. Or could the [`userTable`](https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L140) be flushed by mistake? That would be a major bug (outside the scope of this jira) that should be fixed.
   
   Pinging for some help @bharatviswa504 @elek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3135) Enable test added in HDDS-3084 when blocking issues are resolved

2020-04-10 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell resolved HDDS-3135.
-
Fix Version/s: 0.6.0
   Resolution: Fixed

> Enable test added in HDDS-3084 when blocking issues are resolved
> 
>
> Key: HDDS-3135
> URL: https://issues.apache.org/jira/browse/HDDS-3135
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once the blocking issues HDDS-3107 and HDDS-3116 are resolved, the test added
> by HDDS-3084 should be enabled by renaming
> "hadoop-ozone/dist/src/main/compose/ozone-topology/hdds-3084.sh" to "test.sh".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3135) Enable test added in HDDS-3084 when blocking issues are resolved

2020-04-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3135:
-
Labels: pull-request-available  (was: )

> Enable test added in HDDS-3084 when blocking issues are resolved
> 
>
> Key: HDDS-3135
> URL: https://issues.apache.org/jira/browse/HDDS-3135
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> Once the blocking issues HDDS-3107 and HDDS-3116 are resolved, the test added
> by HDDS-3084 should be enabled by renaming
> "hadoop-ozone/dist/src/main/compose/ozone-topology/hdds-3084.sh" to "test.sh".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel merged pull request #790: HDDS-3135. Enable topology acceptance test added in HDDS-3084 to read data when racks stopped

2020-04-10 Thread GitBox
sodonnel merged pull request #790: HDDS-3135. Enable topology acceptance test 
added in HDDS-3084 to read data when racks stopped
URL: https://github.com/apache/hadoop-ozone/pull/790
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on issue #790: HDDS-3135. Enable topology acceptance test added in HDDS-3084 to read data when racks stopped

2020-04-10 Thread GitBox
sodonnel commented on issue #790: HDDS-3135. Enable topology acceptance test 
added in HDDS-3084 to read data when racks stopped
URL: https://github.com/apache/hadoop-ozone/pull/790#issuecomment-611950792
 
 
   Thanks for the review. I will go ahead and merge this one now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on issue #668: HDDS-3139. Pipeline placement should max out pipeline usage

2020-04-10 Thread GitBox
sodonnel commented on issue #668: HDDS-3139. Pipeline placement should max out 
pipeline usage
URL: https://github.com/apache/hadoop-ozone/pull/668#issuecomment-611950545
 
 
   Thanks for the update. It's a holiday weekend here in Europe, so it will probably be Tuesday before I get to look at this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng commented on issue #668: HDDS-3139. Pipeline placement should max out pipeline usage

2020-04-10 Thread GitBox
timmylicheng commented on issue #668: HDDS-3139. Pipeline placement should max 
out pipeline usage
URL: https://github.com/apache/hadoop-ozone/pull/668#issuecomment-611927572
 
 
   > > @sodonnel I rebased the code with minor conflicts in the test class, but the test won't pass. I took a close look and made some changes. But I realized the issue I mentioned in the last comment about how to leverage chooseNodeFromTopology. Want to hear your thoughts.
   > 
   > I think one problem is this line:
   > 
   > ```
   > datanodeDetails = nodes.stream().findAny().get();
   > ```
   > 
   > The findAny method does not seem to return a random entry - so the same 
node is returned until it uses up its pipeline allocation.
   > 
   > I am also not sure about the limit calculation in getLowerLoadNodes:
   > 
   > ```
   >  int limit = nodes.size() * heavyNodeCriteria
   > / HddsProtos.ReplicationFactor.THREE.getNumber();
   > ```
   > 
   > Adding debug output, I found that this method starts to return an empty list when there are still available nodes to handle the pipeline.
   > 
   > Also in `filterViableNodes()`, via the `meetCriteria()` method, nodes above the heavy-load limit are already filtered out, so you are guaranteed that your healthy node list contains only nodes with the capacity to take another pipeline. So I wonder why we need to filter the nodes further.
   > 
   > > But I realize the issue that I mention in the last comment about how to 
leverage with chooseNodeFromTopology.
   > 
   > There seems to be some inconsistency in how we pick the nodes (not just in 
this PR, but in the wider code). Eg in `chooseNodeBasedOnRackAwareness()` we 
don't call into NetworkTopology(), but instead we use the 
`getNetworkLocation()` method on the `DatanodeDetails` object to find nodes 
that do not match the anchor's location.
   > 
   > Then later in `chooseNodeFromNetworkTopology()` we try to find a node whose location is equal to the anchor's, and that is where we call into `networkTopology.chooseRandom()`. Could we not avoid that call, and avoid generating a new list of nodes, and do something similar to `chooseNodeBasedOnRackAwareness()`, using the `getNetworkLocation()` method to find matching nodes? That would probably be more efficient than the current implementation.
   > 
   > As we are also then able to re-use the same list of healthy nodes 
everywhere without more filtering, maybe we could sort that list once by 
pipeline count in filterViableNodes or meetCriteria and then later always pick 
the node with the lowest load, filling the nodes up that way.
   > 
   > I hope this comment makes sense as it is very long.
   
   @sodonnel Thanks for the consideration. I've updated the patch according to 
your example.
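   
   For reference, a rough sketch of the "sort once by load, then fill from the lightest node" idea (my own illustration with an assumed `getPipelineCount()` accessor, not the actual `PipelinePlacementPolicy` code). Unlike `Stream#findAny()`, which on a sequential stream tends to return the same first element repeatedly, this keeps the placement load-aware:
   
   ```java
   import java.util.Comparator;
   import java.util.List;
   import java.util.stream.Collectors;
   
   // Sketch only: assumes a node abstraction that can report how many
   // pipelines it currently participates in.
   final class LowestLoadPicker {
   
     interface NodeWithLoad {
       String getUuid();
       int getPipelineCount();
     }
   
     // Sort the already-filtered healthy nodes once, lightest load first.
     static List<NodeWithLoad> sortByLoad(List<NodeWithLoad> healthyNodes) {
       return healthyNodes.stream()
           .sorted(Comparator.comparingInt(NodeWithLoad::getPipelineCount))
           .collect(Collectors.toList());
     }
   
     // Heavy nodes were already filtered out (e.g. in meetCriteria()), so the
     // head of the sorted list is guaranteed to have capacity for another
     // pipeline.
     static NodeWithLoad pickLowestLoad(List<NodeWithLoad> sortedNodes) {
       return sortedNodes.get(0);
     }
   }
   ```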


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3373) Intermittent failure in TestDnRatisLogParser

2020-04-10 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-3373:
--

 Summary: Intermittent failure in TestDnRatisLogParser
 Key: HDDS-3373
 URL: https://issues.apache.org/jira/browse/HDDS-3373
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Attila Doroszlai
 Attachments: 
TEST-org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.xml, 
org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser-output.txt, 
org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.txt

{code:title=https://github.com/apache/hadoop-ozone/pull/783/checks?check_run_id=576054872}
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.987 
s <<< FAILURE! - in org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser
[ERROR] 
testRatisLogParsing(org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser)  
Time elapsed: 18.882 s  <<< FAILURE!
java.lang.AssertionError
  ...
  at 
org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.testRatisLogParsing(TestDnRatisLogParser.java:75)
{code}

CC [~msingh]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3373) Intermittent failure in TestDnRatisLogParser

2020-04-10 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3373:
---
Attachment: TEST-org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.xml
org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.txt
org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser-output.txt

> Intermittent failure in TestDnRatisLogParser
> 
>
> Key: HDDS-3373
> URL: https://issues.apache.org/jira/browse/HDDS-3373
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Priority: Major
> Attachments: 
> TEST-org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.xml, 
> org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser-output.txt, 
> org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.txt
>
>
> {code:title=https://github.com/apache/hadoop-ozone/pull/783/checks?check_run_id=576054872}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 18.987 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser
> [ERROR] 
> testRatisLogParsing(org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser)  
> Time elapsed: 18.882 s  <<< FAILURE!
> java.lang.AssertionError
>   ...
>   at 
> org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.testRatisLogParsing(TestDnRatisLogParser.java:75)
> {code}
> CC [~msingh]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3372) Delete HISTORY.txt

2020-04-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3372:
-
Labels: pull-request-available  (was: )

> Delete HISTORY.txt
> --
>
> Key: HDDS-3372
> URL: https://issues.apache.org/jira/browse/HDDS-3372
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>
> During HDDS-2294, the old file was not deleted.
> This Jira aims to remove the old file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] dineshchitlangia opened a new pull request #805: HDDS-3372. Delete HISTORY.txt

2020-04-10 Thread GitBox
dineshchitlangia opened a new pull request #805: HDDS-3372. Delete HISTORY.txt
URL: https://github.com/apache/hadoop-ozone/pull/805
 
 
   ## What changes were proposed in this pull request?
   
   Deleted old HISTORY.txt missed during HDDS-2294
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3372
   
   ## How was this patch tested?
   
   Visual check.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3372) Delete HISTORY.txt

2020-04-10 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-3372:
---

 Summary: Delete HISTORY.txt
 Key: HDDS-3372
 URL: https://issues.apache.org/jira/browse/HDDS-3372
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


During HDDS-2294, the old file was not deleted.

This Jira aims to remove the old file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org