[jira] [Work logged] (HDDS-2032) Ozone client should retry writes in case of any ratis/stateMachine exceptions

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2032?focusedWorklogId=313754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313754
 ]

ASF GitHub Bot logged work on HDDS-2032:


Author: ASF GitHub Bot
Created on: 17/Sep/19 15:40
Start Date: 17/Sep/19 15:40
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #1420: HDDS-2032. Ozone 
client should retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#issuecomment-532278299
 
 
   Thanks for working on this @bshashikant. There are some conflicts with this 
patch; can you please rebase?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313754)
Time Spent: 1h  (was: 50m)

> Ozone client should retry writes in case of any ratis/stateMachine exceptions
> -
>
> Key: HDDS-2032
> URL: https://issues.apache.org/jira/browse/HDDS-2032
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, the Ozone client retries writes on a different pipeline or 
> container only for certain specific exceptions. If it sees an exception such 
> as DISK_FULL, CONTAINER_UNHEALTHY, or any corruption, it simply aborts the 
> write. In general, every such exception should be retriable in the Ozone 
> client, and for specific exceptions it should take additional action, such 
> as excluding certain containers or pipelines while retrying, or informing 
> SCM of a corrupt replica.
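As a rough illustration of the classification the description asks for, here is a minimal sketch. All names below are hypothetical, not the actual Ozone client API:

{code:java}
// Minimal sketch of exception classification for client-side retries.
// ContainerError, Decision and classify() are illustrative names only.
public final class RetrySketch {

  enum ContainerError { DISK_FULL, CONTAINER_UNHEALTHY, CORRUPTION, OTHER }

  enum Decision { RETRY, RETRY_EXCLUDE_CONTAINER, RETRY_EXCLUDE_PIPELINE }

  static Decision classify(ContainerError error) {
    switch (error) {
      case DISK_FULL:
      case CONTAINER_UNHEALTHY:
        // Retry, but exclude the container that reported the problem.
        return Decision.RETRY_EXCLUDE_CONTAINER;
      case CORRUPTION:
        // Retry elsewhere and, per the issue, report the corrupt replica to SCM.
        return Decision.RETRY_EXCLUDE_PIPELINE;
      default:
        // Per the issue, such exceptions should be retriable by default.
        return Decision.RETRY;
    }
  }
}
{code}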



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?focusedWorklogId=313731&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313731
 ]

ASF GitHub Bot logged work on HDDS-2136:


Author: ASF GitHub Bot
Created on: 17/Sep/19 14:50
Start Date: 17/Sep/19 14:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1460: HDDS-2136. OM 
block allocation metric not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460#issuecomment-532255847
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 908 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 172 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-ozone: The patch generated 2 new + 352 
unchanged - 2 fixed = 354 total (was 354) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 280 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3350 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1460 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 22d091fc3ff9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f90731 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1460/1/testReport/ |
   | Max. process+thread count | 400 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-mu

[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2141:

Attachment: total-new.png

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=313729&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313729
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 17/Sep/19 14:45
Start Date: 17/Sep/19 14:45
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1462: HDDS-2141. Missing 
total number of operations
URL: https://github.com/apache/hadoop/pull/1462#issuecomment-532253800
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313729)
Time Spent: 20m  (was: 10m)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2141:

Status: Patch Available  (was: In Progress)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2141:
-
Labels: pull-request-available  (was: )

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png
>
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=313728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313728
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 17/Sep/19 14:44
Start Date: 17/Sep/19 14:44
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1462: HDDS-2141. 
Missing total number of operations
URL: https://github.com/apache/hadoop/pull/1462
 
 
   ## What changes were proposed in this pull request?
   
   Sum the request counts and use the sum as a fallback when the explicit 
"Ops" metric is not available.
   
   https://issues.apache.org/jira/browse/HDDS-2141
   
   ## How was this patch tested?
   
   Ran `ozonefs` robot test, checked OM metrics page.
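
A minimal sketch of the fallback idea described above (illustrative names only; the actual change is in the OM metrics page):

```java
import java.util.Map;

// Sketch: prefer the explicit "Ops" counter when present; otherwise fall
// back to the sum of the individual request counters.
final class OpsFallback {
  static long totalOps(Map<String, Long> requestCounters, Long explicitOps) {
    if (explicitOps != null) {
      return explicitOps; // explicit "Ops" metric is available
    }
    long sum = 0;
    for (long count : requestCounters.values()) {
      sum += count; // derive the total from the per-request counts
    }
    return sum;
  }
}
```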
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313728)
Remaining Estimate: 0h
Time Spent: 10m

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: missing_total.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931516#comment-16931516
 ] 

Arpit Agarwal commented on HDDS-2132:
-

[~shashikant], do you know when the BCSID will be absent in RocksDB? Is it 
written lazily?

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=313710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313710
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 17/Sep/19 14:08
Start Date: 17/Sep/19 14:08
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325188832
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3StorageType.java
 ##
 @@ -52,4 +52,12 @@ public static S3StorageType getDefault() {
     return STANDARD;
   }
 
+  public static S3StorageType fromReplicationType(
+      ReplicationType replicationType) {
+    if (replicationType == ReplicationType.STAND_ALONE) {
+      return S3StorageType.REDUCED_REDUNDANCY;
 
 Review comment:
   Sounds reasonable, but it requires a protobuf change (the factor is not 
available from the message). It seems to be a bigger change, but I added it to 
this patch (sorry, now it's another commit to review :-P )
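
For context, a self-contained sketch of the mapping under review. This is hedged: the enums below are illustrative stand-ins, and the committed version may differ once the replication factor is part of the protobuf message:

```java
// Illustrative stand-ins for the real Ozone/HDDS enums.
enum ReplicationType { RATIS, STAND_ALONE, CHAINED }
enum S3StorageType { STANDARD, REDUCED_REDUNDANCY }

final class S3StorageTypeMapping {
  static S3StorageType fromReplicationType(ReplicationType replicationType) {
    if (replicationType == ReplicationType.STAND_ALONE) {
      // A single-replica write maps to S3's REDUCED_REDUNDANCY class.
      return S3StorageType.REDUCED_REDUNDANCY;
    }
    // Everything else (e.g. RATIS) maps to STANDARD.
    return S3StorageType.STANDARD;
  }
}
```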
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313710)
Time Spent: 9h 50m  (was: 9h 40m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of the in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=313701&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313701
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 13:50
Start Date: 17/Sep/19 13:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1459: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1459#issuecomment-532229923
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1180 | trunk passed |
   | +1 | compile | 1045 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 87 | trunk passed |
   | +1 | shadedclient | 750 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 71 | trunk passed |
   | 0 | spotbugs | 131 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 129 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 48 | the patch passed |
   | +1 | compile | 999 | the patch passed |
   | +1 | javac | 999 | the patch passed |
   | -0 | checkstyle | 49 | hadoop-common-project/hadoop-common: The patch 
generated 1 new + 16 unchanged - 1 fixed = 17 total (was 17) |
   | +1 | mvnsite | 82 | the patch passed |
   | +1 | shellcheck | 5 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 72 | the patch passed |
   | +1 | findbugs | 140 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 572 | hadoop-common in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6390 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.cli.TestCLI |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1459/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1459 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux fe4de712107b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f90731 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1459/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1459/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1459/1/testReport/ |
   | Max. process+thread count | 1377 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1459/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313701)
Time Spent: 20m  (was: 10m)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>  

[jira] [Work logged] (HDDS-2016) Add option to enforce gdpr in Bucket Create command

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2016?focusedWorklogId=313696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313696
 ]

ASF GitHub Bot logged work on HDDS-2016:


Author: ASF GitHub Bot
Created on: 17/Sep/19 13:24
Start Date: 17/Sep/19 13:24
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1458: HDDS-2016. 
Add option to enforce gdpr in Bucket Create command.
URL: https://github.com/apache/hadoop/pull/1458#issuecomment-532219058
 
 
   failures unrelated to patch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313696)
Time Spent: 0.5h  (was: 20m)

> Add option to enforce gdpr in Bucket Create command
> ---
>
> Key: HDDS-2016
> URL: https://issues.apache.org/jira/browse/HDDS-2016
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> End-to-end flow where a user can enforce GDPR for a bucket at creation time 
> only.
> Add/update audit logs, as this will be a useful action for compliance 
> purposes.
> Add docs to show usage.
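A purely hypothetical sketch of the intended creation-time-only behavior; the metadata key "gdprEnabled" and all names here are illustrative, not the patch's actual API:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record the GDPR flag as bucket metadata at creation
// time and reject later changes.
final class GdprBucketSketch {
  private final Map<String, String> metadata = new HashMap<>();
  private boolean created;

  void create(boolean enforceGdpr) {
    metadata.put("gdprEnabled", String.valueOf(enforceGdpr)); // illustrative key
    created = true; // after this point the flag is immutable
  }

  void setGdprEnabled(boolean value) {
    if (created) {
      // GDPR enforcement can only be chosen at bucket creation time.
      throw new IllegalStateException("GDPR flag is creation-time only");
    }
    metadata.put("gdprEnabled", String.valueOf(value));
  }
}
{code}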



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=313690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313690
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 17/Sep/19 13:03
Start Date: 17/Sep/19 13:03
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on issue #1455: HDDS-2137 : 
OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-532210953
 
 
   > @virajjasani I am not sure what you need exactly. hadoop-ozone always 
   > uses hadoop-hdds (common, client) as a dependency.
   > 
   > The alternative CI build is not triggered because we have a 
   > [whitelist](https://github.com/elek/argo-ozone/commit/6bd2ebe1ae78c55ae1c5b2e6c30b407443fb1b13) 
   > to allow PR builds
   
   Thanks @elek 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313690)
Time Spent: 1h 20m  (was: 1h 10m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource-name verification 
> method that checks whether the bucket/volume name is a valid DNS name.
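A minimal sketch of the deduplication, assuming (as the PR title suggests) that OzoneUtils simply delegates to the HddsClientUtils implementation; signatures and the validation rules below are simplified:

{code:java}
// Sketch: keep one implementation and delegate to it.
final class OzoneUtilsSketch {
  static void verifyResourceName(String resName) {
    // Single shared DNS-name check.
    HddsClientUtilsSketch.verifyResourceName(resName);
  }
}

final class HddsClientUtilsSketch {
  static void verifyResourceName(String resName) {
    if (resName == null || resName.length() < 3 || resName.length() > 63) {
      // Illustrative subset of the DNS-name rules only.
      throw new IllegalArgumentException("Invalid bucket/volume name");
    }
  }
}
{code}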



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=313691&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313691
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 17/Sep/19 13:03
Start Date: 17/Sep/19 13:03
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on issue #1455: HDDS-2137 : 
OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-532210953
 
 
   > @virajjasani I am not sure what you need exactly. hadoop-ozone always 
   > uses hadoop-hdds (common, client) as a dependency.
   > 
   > The alternative CI build is not triggered because we have a 
   > [whitelist](https://github.com/elek/argo-ozone/commit/6bd2ebe1ae78c55ae1c5b2e6c30b407443fb1b13) 
   > to allow PR builds
   
   Thanks @elek for whitelisting
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313691)
Time Spent: 1.5h  (was: 1h 20m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource-name verification 
> method that checks whether the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=313684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313684
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 17/Sep/19 13:00
Start Date: 17/Sep/19 13:00
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1455: HDDS-2137 : OzoneUtils 
to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-532209903
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313684)
Time Spent: 1h 10m  (was: 1h)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource-name verification 
> method that checks whether the bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2135) OM Metric mismatch (MultipartUpload failures)

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931393#comment-16931393
 ] 

Hudson commented on HDDS-2135:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17312 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17312/])
HDDS-2135. OM Metric mismatch (MultipartUpload failures) (elek: rev 
f3de1417873f05a0bc4842210a0c7e8f6ec82cce)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java


> OM Metric mismatch (MultipartUpload failures)
> -
>
> Key: HDDS-2135
> URL: https://issues.apache.org/jira/browse/HDDS-2135
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{incNumCommitMultipartUploadPartFails()}} increments 
> {{numInitiateMultipartUploadFails}} instead of the counter for commit 
> failures.
> https://github.com/apache/hadoop/blob/85b1c728e4ed22f03db255f5ef34a2a79eb20d52/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java#L310-L312
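The mismatch boils down to a copy-paste slip of the following shape (a simplified sketch with plain longs; the real OMMetrics class wires these counters through the metrics2 library):

{code:java}
class OMMetricsSketch {
  private long numInitiateMultipartUploadFails;
  private long numCommitMultipartUploadPartFails;

  // Before the fix: bumped the initiate-failure counter by mistake.
  public void incNumCommitMultipartUploadPartFailsBuggy() {
    numInitiateMultipartUploadFails++;
  }

  // After the fix: bumps the commit-failure counter it is named after.
  public void incNumCommitMultipartUploadPartFails() {
    numCommitMultipartUploadPartFails++;
  }
}
{code}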



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931391#comment-16931391
 ] 

Hudson commented on HDDS-2120:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17312 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17312/])
HDDS-2120. Remove hadoop classes from ozonefs-current jar (elek: rev 
3a549cea193fef0a2386f6c932a48ca2c66ab89a)
* (edit) hadoop-ozone/ozonefs-lib-current/pom.xml


> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. The current 
> jar is designed to work only with exactly the same hadoop version that was 
> used for compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is 
> not necessary: the jar is expected to be used in an environment where 
> exactly the same hadoop classes are already present. They can be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931394#comment-16931394
 ] 

Hudson commented on HDDS-2117:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17312 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17312/])
HDDS-2117. ContainerStateMachine#writeStateMachineData times out. (ljain: rev 
7f9073132dcc9db157a6792635d2ed099f2ef0d2)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java


> ContainerStateMachine#writeStateMachineData times out
> -
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the precondition check below fails 
> when two writeChunk requests get executed in parallel, and the runtime 
> exception thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}
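One way to see the race, as a hedged sketch (not necessarily the committed fix): both parallel writeChunk calls pass the {{!containsKey()}} test before either inserts, so the second {{checkArgument}} throws. An idempotent {{putIfAbsent}} closes that window:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class Container2BCSIDMapSketch {
  private final ConcurrentMap<Long, Long> container2BCSIDMap =
      new ConcurrentHashMap<>();

  void addContainer(long containerID, long bcsID) {
    // Instead of Preconditions.checkArgument(!map.containsKey(containerID)),
    // tolerate a concurrent duplicate registration.
    container2BCSIDMap.putIfAbsent(containerID, bcsID);
  }
}
{code}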



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931387#comment-16931387
 ] 

YiSheng Lien commented on HDDS-730:
---

Thanks [~elek] for the idea.

It seems better than the older idea.

I will create the OzoneFsShell and make it possible soon :)
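A rough sketch of that idea, assuming a hypothetical overridable usage-prefix hook in {{FsShell}} (the hook the patch actually adds may be named differently):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;

// Sketch: subclass FsShell so the usage banner says "ozone fs"
// instead of "hadoop fs".
public class OzoneFsShellSketch extends FsShell {
  public OzoneFsShellSketch(Configuration conf) {
    super(conf);
  }

  // Hypothetical hook; assumes FsShell exposes its usage prefix for override.
  protected String getUsagePrefix() {
    return "Usage: ozone fs [generic options]";
  }
}
{code}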

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931387#comment-16931387
 ] 

YiSheng Lien edited comment on HDDS-730 at 9/17/19 12:39 PM:
-

Thanks [~elek] for the idea.

It seems better than my older idea.

I will create the OzoneFsShell and make it possible soon :)


was (Author: cxorm):
Thanks [~elek] the idea.

It seems better than the older idea, 

I will create the OzoneFsShell, and make it possible soon :)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2142:

Status: Patch Available  (was: In Progress)

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931381#comment-16931381
 ] 

Hadoop QA commented on HDFS-14850:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 35m 54s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 24m 2s | trunk passed |
| +1 | compile | 0m 26s | trunk passed |
| +1 | checkstyle | 0m 24s | trunk passed |
| +1 | mvnsite | 0m 35s | trunk passed |
| +1 | shadedclient | 11m 32s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 36s | trunk passed |
| +1 | javadoc | 0m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 24s | the patch passed |
| +1 | compile | 0m 20s | the patch passed |
| +1 | javac | 0m 20s | the patch passed |
| -0 | checkstyle | 0m 17s | hadoop-hdfs-project/hadoop-hdfs-httpfs: The patch generated 1 new + 48 unchanged - 0 fixed = 49 total (was 48) |
| +1 | mvnsite | 0m 28s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 37s | the patch passed |
| +1 | javadoc | 0m 20s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 4m 21s | hadoop-hdfs-httpfs in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 93m 35s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.http.server.TestHttpFSServerWebServerWithRandomSecret |
|   | hadoop.fs.http.server.TestHttpFSServerWebServer |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14850 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12980488/HDFS-14850.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d47172b8a3ec 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f90731 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/27893/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HD

[jira] [Work logged] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?focusedWorklogId=313662&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313662
 ]

ASF GitHub Bot logged work on HDDS-2142:


Author: ASF GitHub Bot
Created on: 17/Sep/19 12:16
Start Date: 17/Sep/19 12:16
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1461: HDDS-2142. OM 
metrics mismatch (abort multipart request)
URL: https://github.com/apache/hadoop/pull/1461#issuecomment-532194305
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313662)
Time Spent: 20m  (was: 10m)

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?focusedWorklogId=313661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313661
 ]

ASF GitHub Bot logged work on HDDS-2142:


Author: ASF GitHub Bot
Created on: 17/Sep/19 12:15
Start Date: 17/Sep/19 12:15
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1461: HDDS-2142. 
OM metrics mismatch (abort multipart request)
URL: https://github.com/apache/hadoop/pull/1461
 
 
   ## What changes were proposed in this pull request?
   
   * Fix wrong call to increment `AbortMultipartUploadFails`
   * Add missing call to increment `AbortMultipartUploads`
   
   https://issues.apache.org/jira/browse/HDDS-2142
   
   ## How was this patch tested?
   
   Ran `ozones3` acceptance test, verified OM metrics page.
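
The invariant behind the fix, as a small illustrative sketch with hypothetical names: every abort request bumps the request counter, and only failed ones bump the failure counter, so failures can no longer exceed requests.

```java
// Illustrative sketch only; the real counters live in OMMetrics.
class AbortMetricsSketch {
  private long abortMultipartUploads;
  private long abortMultipartUploadFails;

  void onAbortMultipartUpload(boolean failed) {
    abortMultipartUploads++;          // missing increment added by the patch
    if (failed) {
      abortMultipartUploadFails++;    // previously bumped a wrong counter
    }
  }
}
```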
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313661)
Remaining Estimate: 0h
Time Spent: 10m

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2142:
-
Labels: pull-request-available  (was: )

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2142:

Attachment: abort_multipart-new.png

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: abort_multipart-new.png, abort_multipart.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-17 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931365#comment-16931365
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~eyang] [~aajisaka] Thanks for the review comments.

[~eyang] 1. I have tried a new filter initializer, 
{{HttpFSAuthenticationFilterInitializer}}, which adds the filter 
{{HttpFSAuthenticationFilter}} and initializes the filter configs, and this 
works fine. But most of the test cases related to {{HttpFSServerWebServer}} 
(e.g. {{TestHttpFSServer}}) require more changes: they do not use 
{{HttpServer2}}, so the filter initializers are not called; instead they use a 
test Jetty server with {{HttpFSServerWebApp}}, which fails because the filter 
won't have any configs.

Please let me know if we can handle this in a separate improvement Jira.

2. I have changed {{HttpFSAuthenticationFilter$getConfiguration}} to honor the 
{{hadoop.http.authentication}} configs, which are overridden by the 
{{httpfs.authentication}} configs.

Have attached [^HDFS-14845-002.patch] with changes to ignore 
{{AuthenticationFilterInitializer}} and (2).
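
A hedged sketch of the override order described in (2); property iteration is simplified, and the real logic lives in {{HttpFSAuthenticationFilter#getConfiguration}}:

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

final class AuthConfMergeSketch {
  // Defaults come from hadoop.http.authentication.*; httpfs.authentication.*
  // wins because it is applied second.
  static Configuration merge(Configuration conf) {
    Configuration merged = new Configuration(false);
    copyWithPrefix(conf, "hadoop.http.authentication.", merged);
    copyWithPrefix(conf, "httpfs.authentication.", merged);
    return merged;
  }

  private static void copyWithPrefix(Configuration src, String prefix,
      Configuration dst) {
    for (Map.Entry<String, String> e : src) {
      if (e.getKey().startsWith(prefix)) {
        dst.set(e.getKey().substring(prefix.length()), e.getValue());
      }
    }
  }
}
{code}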

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=313651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313651
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 17/Sep/19 12:02
Start Date: 17/Sep/19 12:02
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325127443
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 ##
 @@ -18,15 +18,21 @@
 
 package org.apache.hadoop.ozone.om.helpers;
 
-import java.util.ArrayList;
 import java.util.List;
 
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+
 /**
- * List of in-flight MU upoads.
+ * List of in-flight MPU uploads.
  */
 public class OmMultipartUploadList {
 
-  private List uploads = new ArrayList<>();
+  private ReplicationType replicationType;
+
+  private ReplicationFactor replicationFactor;
 
 Review comment:
   Yes, it was added during one of the experiments. I removed it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313651)
Time Spent: 9h 40m  (was: 9.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931348#comment-16931348
 ] 

Hadoop QA commented on HDFS-14768:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 171 unchanged - 1 fixed = 171 total (was 172) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980475/HDFS-14768.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9a70ff74886f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e54977f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27892/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27892/testReport/ |
| Max. process+thre

[jira] [Work logged] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?focusedWorklogId=313641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313641
 ]

ASF GitHub Bot logged work on HDDS-2110:


Author: ASF GitHub Bot
Created on: 17/Sep/19 11:42
Start Date: 17/Sep/19 11:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1448: HDDS-2110. 
Arbitrary file can be downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-532184015
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 162 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 14 new + 61 
unchanged - 6 fixed = 75 total (was 67) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 251 | hadoop-hdds in the patch passed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3183 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1448 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3aa3555d0a32 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f3de141 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/testReport/ |
   | Max. process+thread count | 518 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to 

[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-17 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-002.patch

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch
>
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-6524:
--
Attachment: HDFS-6524.005(2).patch

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option: the block replication factor. Consider a cluster with 
> only a two-replica setting, or a Reed-Solomon encoding solution with one 
> replica. Capping retries by the replica count helps to reduce the long 
> tail latency.
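
For illustration, a minimal sketch of the proposed capping; the accessor 
names follow the client code paths but should be read as an assumption, not 
the attached patch:

{code:java}
// Sketch only: cap the retry budget by the replicas actually known for the
// block instead of always using the configured maximum of 3.
int configuredRetries = dfsClient.getConf().getMaxBlockAcquireFailures();
int replicaCount = block.getLocations().length;  // e.g. 2 on a 2-replica file
int effectiveRetries = Math.min(configuredRetries, replicaCount);
{code}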



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-6524:
--
Attachment: (was: HDFS-6524.005.patch)

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option: the block replication factor. Consider a cluster with 
> only a two-replica setting, or a Reed-Solomon encoding solution with one 
> replica. Capping retries by the replica count helps to reduce the long 
> tail latency.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-6524:
--
Attachment: HDFS-6524.005.patch

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005.patch, 
> HDFS-6524.005.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option: the block replication factor. Consider a cluster with 
> only a two-replica setting, or a Reed-Solomon encoding solution with one 
> replica. Capping retries by the replica count helps to reduce the long 
> tail latency.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931339#comment-16931339
 ] 

Lisheng Sun commented on HDFS-14795:


 hi [~elgoiri], should we commit this patch to trunk? Thank you.

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock does not use a throttler.
> I think it is necessary to throttle block writes, adding a throttler in 
> the PIPELINE_SETUP_APPEND_RECOVERY or PIPELINE_SETUP_STREAMING_RECOVERY 
> stages.
> The default throttler value is still null.
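
For illustration, a minimal sketch of wiring a throttler into the quoted 
call; the recovery-stage check and the 64 MB/s rate are illustrative 
assumptions, not proposed defaults:

{code:java}
// Sketch only: pass a DataTransferThrottler instead of null for recovery
// stages; the bandwidth below is an illustrative placeholder.
DataTransferThrottler throttler =
    (stage == BlockConstructionStage.PIPELINE_SETUP_APPEND_RECOVERY
        || stage == BlockConstructionStage.PIPELINE_SETUP_STREAMING_RECOVERY)
        ? new DataTransferThrottler(64L * 1024 * 1024)  // 64 MB/s
        : null;
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
    mirrorAddr, throttler, targets, false);
{code}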



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931336#comment-16931336
 ] 

Lisheng Sun commented on HDFS-14844:


hi [~elgoiri], could you help continue the review? Thank you.

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch
>
>
> details for HDFS-14820



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2142 started by Doroszlai, Attila.
---
> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Attachments: abort_multipart.png
>
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2142:

Attachment: abort_multipart.png

> OM metrics mismatch (abort multipart request)
> -
>
> Key: HDDS-2142
> URL: https://issues.apache.org/jira/browse/HDDS-2142
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Attachments: abort_multipart.png
>
>
> AbortMultipartUpload failure count can be higher than request count.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2142) OM metrics mismatch (abort multipart request)

2019-09-17 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2142:
---

 Summary: OM metrics mismatch (abort multipart request)
 Key: HDDS-2142
 URL: https://issues.apache.org/jira/browse/HDDS-2142
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


AbortMultipartUpload failure count can be higher than request count.
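
A minimal sketch of the pairing rule the counters should follow; the method 
and field names are illustrative:

{code:java}
// Sketch only: count the request on every attempt, so the failure counter
// can never exceed the request counter.
public void abortMultipartUpload(OmKeyArgs args) throws IOException {
  metrics.incNumAbortMultipartUploads();        // request, counted always
  try {
    keyManager.abortMultipartUpload(args);
  } catch (IOException ex) {
    metrics.incNumAbortMultipartUploadFails();  // failure, paired with above
    throw ex;
  }
}
{code}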



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931330#comment-16931330
 ] 

Viraj Jasani commented on HDDS-2137:


Thanks [~elek]

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single verifyResourceName() 
> method that checks whether the bucket/volume name is a valid DNS name.
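
For illustration, a minimal sketch of the deduplication, with OzoneUtils 
delegating to the HddsClientUtils implementation:

{code:java}
// Sketch only: keep a single implementation and delegate to it.
public static void verifyResourceName(String resName)
    throws IllegalArgumentException {
  HddsClientUtils.verifyResourceName(resName);
}
{code}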



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2116) Create SCMAllocationManager as background thread for pipeline creation

2019-09-17 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931329#comment-16931329
 ] 

Li Cheng commented on HDDS-2116:


I agree on the multi-thread part. [~swagle], what other functions do you 
think SCMAllocationManager should have? Or is SCMAllocationManager still 
needed?

> Create SCMAllocationManager as background thread for pipeline creation
> --
>
> Key: HDDS-2116
> URL: https://issues.apache.org/jira/browse/HDDS-2116
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Priority: Major
>
> SCMAllocationManager can be leveraged to get a candidate set of datanodes 
> based on placement policies. It should also make the pipeline creation 
> process asynchronous and multi-threaded. This should be done when we 
> encounter a performance bottleneck in pipeline creation.
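
For discussion, a minimal sketch of the background, multi-threaded shape 
being proposed; every name here is illustrative, since SCMAllocationManager 
does not exist yet:

{code:java}
// Sketch only: create pipelines from a background pool instead of inline.
ScheduledExecutorService allocator = Executors.newScheduledThreadPool(4);
allocator.scheduleWithFixedDelay(() -> {
  try {
    pipelineManager.createPipeline(
        ReplicationType.RATIS, ReplicationFactor.THREE);
  } catch (IOException e) {
    LOG.warn("Background pipeline creation failed", e);
  }
}, 0, 30, TimeUnit.SECONDS);
{code}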



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-17 Thread Lokesh Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-2117:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ContainerStateMachine#writeStateMachineData times out
> -
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails 
> when two writeChunk calls get executed in parallel, and the runtime 
> exception thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}
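
For context, a minimal sketch of the race and an atomic alternative; 
illustrative only, not the fix that was merged:

{code:java}
// Sketch only: two parallel writeChunk calls can both pass a containsKey
// check before either inserts, so the second checkArgument trips. An
// atomic putIfAbsent removes the window between check and insert.
Long previous = container2BCSIDMap.putIfAbsent(containerID, bcsID);
if (previous != null) {
  // already registered by the concurrent writeChunk; not a fatal error
}
{code}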



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?focusedWorklogId=313630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313630
 ]

ASF GitHub Bot logged work on HDDS-2117:


Author: ASF GitHub Bot
Created on: 17/Sep/19 11:20
Start Date: 17/Sep/19 11:20
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on issue #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430#issuecomment-532177452
 
 
   @bshashikant Thanks for working on the PR! I have merged the commit to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313630)
Time Spent: 1h 50m  (was: 1h 40m)

> ContainerStateMachine#writeStateMachineData times out
> -
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails 
> when two writeChunk calls get executed in parallel, and the runtime 
> exception thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?focusedWorklogId=313629&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313629
 ]

ASF GitHub Bot logged work on HDDS-2117:


Author: ASF GitHub Bot
Created on: 17/Sep/19 11:19
Start Date: 17/Sep/19 11:19
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313629)
Time Spent: 1h 40m  (was: 1.5h)

> ContainerStateMachine#writeStateMachineData times out
> -
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails 
> when two writeChunk calls get executed in parallel, and the runtime 
> exception thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2133) TestOzoneContainer is failing

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2133:

Fix Version/s: 0.5.0

> TestOzoneContainer is failing
> -
>
> Key: HDDS-2133
> URL: https://issues.apache.org/jira/browse/HDDS-2133
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.1, 0.5.0
>
>
> {{TestOzoneContainer}} is failing with the following exception
> {noformat}
> [ERROR] 
> testBuildContainerMap(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 2.031 s  <<< FAILURE!
> java.lang.AssertionError: expected:<10> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBuildContainerMap(TestOzoneContainer.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Doroszlai, Attila (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931324#comment-16931324
 ] 

Doroszlai, Attila commented on HDDS-2132:
-

Thanks [~nandakumar131] for picking it for 0.4.1.

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2133) TestOzoneContainer is failing

2019-09-17 Thread Doroszlai, Attila (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931323#comment-16931323
 ] 

Doroszlai, Attila commented on HDDS-2133:
-

https://github.com/apache/hadoop/commit/e54977f888e
https://github.com/apache/hadoop/commit/0fb42e514be

> TestOzoneContainer is failing
> -
>
> Key: HDDS-2133
> URL: https://issues.apache.org/jira/browse/HDDS-2133
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.1
>
>
> {{TestOzoneContainer}} is failing with the following exception
> {noformat}
> [ERROR] 
> testBuildContainerMap(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 2.031 s  <<< FAILURE!
> java.lang.AssertionError: expected:<10> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBuildContainerMap(TestOzoneContainer.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2132:
--
Fix Version/s: 0.5.0

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-17 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931322#comment-16931322
 ] 

Nanda kumar commented on HDDS-2111:
---

Thanks [~anu].
Back-ported this to ozone-0.4.1.

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Consider a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the attack vector is also not logged on 
> the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> <script>document.write(window.location.href.replace("static/", ""))</script> 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-17 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2114:
--
Fix Version/s: 0.4.1

> Rename does not preserve non-explicitly created interim directories
> ---
>
> Key: HDDS-2114
> URL: https://issues.apache.org/jira/browse/HDDS-2114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Lokesh Jain
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
> Attachments: demonstrative_test.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I am attaching a patch that adds a test that demonstrates the problem.
> The scenario comes from the way Hive implements ACID transactions with the 
> ORC table format, but the test is reduced to the simplest possible code 
> that reproduces the issue.
> The scenario:
>  * Given a 3 level directory structure, where the top level directory was 
> explicitly created, and the interim directory is implicitly created (for 
> example either by creating a file with create("/top/interim/file") or by 
> creating a directory with mkdirs("top/interim/dir"))
>  * When the leaf is moved out from the implicitly created directory making 
> this directory an empty directory
>  * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
> called on the interim directory.
> The expected behaviour:
> after the directory becomes empty, it should still be part of the file 
> system; moreover, an empty FileStatus array should be returned when 
> listStatus is called on it, and a valid FileStatus object should be 
> returned when getFileStatus is called on it.
>  
>  
> As this issue is present with Hive, and as this is how a FileSystem is 
> expected to work, this seems to be at least a critical issue to me; please 
> feel free to change the priority if needed.
> Also please note that if the interim directory is explicitly created with 
> mkdirs("top/interim") before creating the leaf, the issue does not appear.
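
For reference, a minimal sketch of the failing sequence against the generic 
FileSystem API; the paths are illustrative:

{code:java}
// Sketch of the scenario from the description.
FileSystem fs = FileSystem.get(conf);
fs.mkdirs(new Path("/top"));                       // explicit top level
fs.create(new Path("/top/interim/file")).close();  // interim created implicitly
fs.rename(new Path("/top/interim/file"), new Path("/top/file"));
// expected: empty FileStatus[]; actual: FileNotFoundException
FileStatus[] children = fs.listStatus(new Path("/top/interim"));
{code}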



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2132:

Fix Version/s: 0.4.1

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-17 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2111:
--
Fix Version/s: 0.4.1

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Consider a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the attack vector is also not logged on 
> the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> <script>document.write(window.location.href.replace("static/", ""))</script> 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2133) TestOzoneContainer is failing

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved HDDS-2133.
-
Fix Version/s: 0.4.1
 Assignee: Doroszlai, Attila
   Resolution: Fixed

> TestOzoneContainer is failing
> -
>
> Key: HDDS-2133
> URL: https://issues.apache.org/jira/browse/HDDS-2133
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
> Fix For: 0.4.1
>
>
> {{TestOzoneContainer}} is failing with the following exception
> {noformat}
> [ERROR] 
> testBuildContainerMap(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
>   Time elapsed: 2.031 s  <<< FAILURE!
> java.lang.AssertionError: expected:<10> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBuildContainerMap(TestOzoneContainer.java:143)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2136:

Status: Patch Available  (was: In Progress)

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2136:
-
Labels: pull-request-available  (was: )

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: allocation_failures.png, allocations.png
>
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?focusedWorklogId=313624&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313624
 ]

ASF GitHub Bot logged work on HDDS-2136:


Author: ASF GitHub Bot
Created on: 17/Sep/19 11:05
Start Date: 17/Sep/19 11:05
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1460: HDDS-2136. OM 
block allocation metric not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460#issuecomment-532173047
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313624)
Time Spent: 20m  (was: 10m)

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2136) OM block allocation metric not paired with its failures

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2136?focusedWorklogId=313623&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313623
 ]

ASF GitHub Bot logged work on HDDS-2136:


Author: ASF GitHub Bot
Created on: 17/Sep/19 11:05
Start Date: 17/Sep/19 11:05
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1460: HDDS-2136. 
OM block allocation metric not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460
 
 
   ## What changes were proposed in this pull request?
   
   Rename metric member variables for block allocation count and its failures 
to have the same prefix to ensure that they appear in the same chart.  I think 
`BlockAllocations` is both a bit simpler than `BlockAllocateCalls` and more 
consistent with other metrics.
   
   https://issues.apache.org/jira/browse/HDDS-2136
   
   ## How was this patch tested?
   
   Ran acceptance tests, checked OM Metrics web page.
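
As a rough illustration of the naming pattern described above (field and method names are assumptions based on the issue, not the exact patch), Hadoop's metrics2 library derives the metric name from the annotated field, so a shared prefix keeps the success/failure pair in the same chart:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "Ozone Manager Metrics", context = "dfs")
public class OMMetricsSketch {
  // A shared "BlockAllocation" prefix groups the two counters together
  // on the metrics page.
  @Metric private MutableCounterLong numBlockAllocations;
  @Metric private MutableCounterLong numBlockAllocationFails;

  public void incNumBlockAllocations() { numBlockAllocations.incr(); }
  public void incNumBlockAllocationFails() { numBlockAllocationFails.incr(); }
}
{code}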
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313623)
Remaining Estimate: 0h
Time Spent: 10m

> OM block allocation metric not paired with its failures
> ---
>
> Key: HDDS-2136
> URL: https://issues.apache.org/jira/browse/HDDS-2136
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Attachments: allocation_failures.png, allocations.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Block allocation count and block allocation failure count are shown in 
> separate graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931301#comment-16931301
 ] 

Elek, Marton commented on HDDS-730:
---

Thanks @YiSheng Lien for the patch. It looks good to me, but it will be available 
only from Hadoop 3.3.

What do you think about extending FsShell (OzoneFsShell extends FsShell), 
overriding getUsagePrefix, and changing the main class of the ozone fs command 
to OzoneFsShell?

Would it be possible?
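
A minimal sketch of that idea, assuming getUsagePrefix is overridable as the comment suggests (the main-method wiring is simplified; FsShell#main does additional setup):

{code:java}
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class OzoneFsShell extends FsShell {
  @Override
  protected String getUsagePrefix() {
    // Replaces the "Usage: hadoop fs [generic options]" prefix.
    return "Usage: ozone fs [generic options]";
  }

  public static void main(String[] argv) throws Exception {
    System.exit(ToolRunner.run(new OzoneFsShell(), argv));
  }
}
{code}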



> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2135) OM Metric mismatch (MultipartUpload failures)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2135?focusedWorklogId=313614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313614
 ]

ASF GitHub Bot logged work on HDDS-2135:


Author: ASF GitHub Bot
Created on: 17/Sep/19 10:48
Start Date: 17/Sep/19 10:48
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1453: HDDS-2135. OM 
Metric mismatch (MultipartUpload failures)
URL: https://github.com/apache/hadoop/pull/1453#issuecomment-532167731
 
 
   Thanks @bharatviswa504 for reviewing the fix, and @elek for reviewing and 
committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313614)
Time Spent: 1h 20m  (was: 1h 10m)

> OM Metric mismatch (MultipartUpload failures)
> -
>
> Key: HDDS-2135
> URL: https://issues.apache.org/jira/browse/HDDS-2135
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{incNumCommitMultipartUploadPartFails()}} increments 
> {{numInitiateMultipartUploadFails}} instead of the counter for commit 
> failures.
> https://github.com/apache/hadoop/blob/85b1c728e4ed22f03db255f5ef34a2a79eb20d52/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java#L310-L312
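
A hedged sketch of the one-line fix the description implies (the counter wiring below is illustrative, not the committed patch):

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

public class OMMetricsFixSketch {
  private final MetricsRegistry registry = new MetricsRegistry("OMMetrics");
  private final MutableCounterLong numInitiateMultipartUploadFails =
      registry.newCounter("NumInitiateMultipartUploadFails", "", 0L);
  private final MutableCounterLong numCommitMultipartUploadPartFails =
      registry.newCounter("NumCommitMultipartUploadPartFails", "", 0L);

  public void incNumCommitMultipartUploadPartFails() {
    // The bug: this method previously incremented
    // numInitiateMultipartUploadFails instead.
    numCommitMultipartUploadPartFails.incr();
  }
}
{code}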



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931283#comment-16931283
 ] 

Steve Loughran commented on HDDS-2139:
--

we've been discussing removing databind altogether if we can, because its 
ability to deserialize into objects makes it a CVE factory. How much use does 
ozone make of it?

> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/ latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2135) OM Metric mismatch (MultipartUpload failures)

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2135?focusedWorklogId=313611&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313611
 ]

ASF GitHub Bot logged work on HDDS-2135:


Author: ASF GitHub Bot
Created on: 17/Sep/19 10:42
Start Date: 17/Sep/19 10:42
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1453: HDDS-2135. OM 
Metric mismatch (MultipartUpload failures)
URL: https://github.com/apache/hadoop/pull/1453
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313611)
Time Spent: 1h 10m  (was: 1h)

> OM Metric mismatch (MultipartUpload failures)
> -
>
> Key: HDDS-2135
> URL: https://issues.apache.org/jira/browse/HDDS-2135
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{incNumCommitMultipartUploadPartFails()}} increments 
> {{numInitiateMultipartUploadFails}} instead of the counter for commit 
> failures.
> https://github.com/apache/hadoop/blob/85b1c728e4ed22f03db255f5ef34a2a79eb20d52/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java#L310-L312



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2135) OM Metric mismatch (MultipartUpload failures)

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2135:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> OM Metric mismatch (MultipartUpload failures)
> -
>
> Key: HDDS-2135
> URL: https://issues.apache.org/jira/browse/HDDS-2135
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{incNumCommitMultipartUploadPartFails()}} increments 
> {{numInitiateMultipartUploadFails}} instead of the counter for commit 
> failures.
> https://github.com/apache/hadoop/blob/85b1c728e4ed22f03db255f5ef34a2a79eb20d52/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java#L310-L312



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2141) Missing total number of operations

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2141 started by Doroszlai, Attila.
---
> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Attachments: missing_total.png
>
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?focusedWorklogId=313607&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313607
 ]

ASF GitHub Bot logged work on HDDS-2120:


Author: ASF GitHub Bot
Created on: 17/Sep/19 10:31
Start Date: 17/Sep/19 10:31
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1434: HDDS-2120. Remove 
hadoop classes from ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313607)
Time Spent: 1h 40m  (was: 1.5h)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version that is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already present. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2120:
---
Fix Version/s: (was: 0.4.1)
   0.5.0

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version that is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already present. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2120:
---
Fix Version/s: 0.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version that is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already present. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931269#comment-16931269
 ] 

Zhao Yi Ming commented on HDFS-14768:
-

[~gjhkael] Thanks for your new patch! With only the new UT 
*testDecommissionWithBusyNode* added, the UT fails in run mode. However, when I 
add breakpoints and try to debug the problem, the UT passes every time. This 
confuses me a lot: in my understanding a stable reproduction should fail the 
same way in both run mode and debug mode. Could you give some help? Thanks a lot!

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> The policy is RS-6-3-1024K and the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8], we decommission 
> indices [3,4], and we increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it becomes larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, liveBlockIndices is 
> [0,1,2,3,4,5,7,8] and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to the target datanodes.
> When the datanode gets the task, it builds targetIndices from 
> liveBlockIndices and the target length. The code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] always stays 0, its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using source indices [0,1,2,3,4,5] to reconstruct target indices [6,0] 
> triggers the ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 240000)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>

[jira] [Work logged] (HDDS-2120) Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2120?focusedWorklogId=313600&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313600
 ]

ASF GitHub Bot logged work on HDDS-2120:


Author: ASF GitHub Bot
Created on: 17/Sep/19 10:20
Start Date: 17/Sep/19 10:20
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1434: HDDS-2120. Remove hadoop 
classes from ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-532159207
 
 
   Thanks for the review @arp7 and @ajayydv.
   
   The test failures are not related. I am merging it to trunk right now...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313600)
Time Spent: 1.5h  (was: 1h 20m)

> Remove hadoop classes from ozonefs-current jar
> --
>
> Key: HDDS-2120
> URL: https://issues.apache.org/jira/browse/HDDS-2120
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have two kinds of ozone file system jars: current and legacy. current is 
> designed to work only with exactly the same hadoop version that is used for 
> compilation (3.2 as of now).
> But as of now the hadoop classes are included in the current jar, which is not 
> necessary, as the jar is expected to be used in an environment where the 
> hadoop classes (exactly the same hadoop classes) are already present. They can 
> be excluded.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?focusedWorklogId=313599&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313599
 ]

ASF GitHub Bot logged work on HDDS-2110:


Author: ASF GitHub Bot
Created on: 17/Sep/19 10:15
Start Date: 17/Sep/19 10:15
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1448: HDDS-2110. Arbitrary 
file can be downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-532157442
 
 
   I made it safer (strict validation of the file name based on the original 
pattern). Now the HTTP headers are also safe (until now we printed the file 
name into the header even if it contained a newline character).
   
   And we don't need to suppress any findbugs warnings.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313599)
Time Spent: 1h  (was: 50m)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  
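
A minimal sketch of the kind of strict file-name validation such a fix calls for (the pattern and wiring are assumptions, not the committed change):

{code:java}
import java.io.IOException;
import java.util.regex.Pattern;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ProfileDownloadSketch {
  // Allow only plain file names: no path separators that could escape
  // OUTPUT_DIR, and no newlines that could corrupt HTTP headers.
  private static final Pattern FILE_NAME_PATTERN =
      Pattern.compile("[a-zA-Z0-9._\\-]+");

  protected void doGetDownload(String fileName, HttpServletRequest req,
      HttpServletResponse resp) throws IOException {
    if (fileName == null || !FILE_NAME_PATTERN.matcher(fileName).matches()) {
      resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid file name");
      return;
    }
    // Only after validation is it safe to resolve the name below OUTPUT_DIR.
  }
}
{code}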



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Status: Patch Available  (was: In Progress)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=313586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313586
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 17/Sep/19 09:57
Start Date: 17/Sep/19 09:57
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #1459: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1459
 
 
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313586)
Remaining Estimate: 0h
Time Spent: 10m

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-730:

Labels: newbie pull-request-available  (was: newbie)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2044) Remove 'ozone' from the recon module names.

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2044?focusedWorklogId=313585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313585
 ]

ASF GitHub Bot logged work on HDDS-2044:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:56
Start Date: 17/Sep/19 09:56
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1381: 
HDDS-2044. Remove 'ozone' from the recon module names.
URL: https://github.com/apache/hadoop/pull/1381
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313585)
Time Spent: 50m  (was: 40m)

> Remove 'ozone' from the recon module names.
> ---
>
> Key: HDDS-2044
> URL: https://issues.apache.org/jira/browse/HDDS-2044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently the module names are "ozone-recon" and "ozone-recon-codegen". To 
> make them consistent with the other modules, they need to be changed to 
> "recon" and "recon-codegen".



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=313583&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313583
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:55
Start Date: 17/Sep/19 09:55
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1455: HDDS-2137 : OzoneUtils 
to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-532150505
 
 
   @virajjasani I am not sure what exactly you need. hadoop-ozone always uses 
hadoop-hdds (common, client) as a dependency.
   
   The alternative CI builds are not triggered because we have a 
[whitelist](https://github.com/elek/argo-ozone/commit/6bd2ebe1ae78c55ae1c5b2e6c30b407443fb1b13)
 to allow PR builds.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313583)
Time Spent: 1h  (was: 50m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share the resource-name verification 
> method, which checks whether a bucket/volume name is a valid DNS name.
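
A hedged sketch of the deduplication the PR title suggests (the package and signatures below are assumptions based on the issue, not the merged code):

{code:java}
import org.apache.hadoop.hdds.scm.client.HddsClientUtils;

// One shared implementation lives in HddsClientUtils; OzoneUtils delegates.
public final class OzoneUtils {
  private OzoneUtils() { }

  public static void verifyResourceName(String resName)
      throws IllegalArgumentException {
    HddsClientUtils.verifyResourceName(resName);
  }
}
{code}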



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931254#comment-16931254
 ] 

YiSheng Lien commented on HDDS-730:
---

The attachments are a demo on my computer.

!ozone-cli-fs.png!

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png, ozone-cli-fs.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?focusedWorklogId=313582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313582
 ]

ASF GitHub Bot logged work on HDDS-2137:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:53
Start Date: 17/Sep/19 09:53
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1455: HDDS-2137 : OzoneUtils 
to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-532149669
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313582)
Time Spent: 50m  (was: 40m)

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share the resource-name verification 
> method, which checks whether a bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: ozone-cli-fs.png

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: ozone-cli-fs-withnonexist.png

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png, 
> ozone-cli-fs-withnonexist.png, ozone-cli-fs.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-17 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2141:

Attachment: missing_total.png

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
> Attachments: missing_total.png
>
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: (was: Screenshot from 2019-09-17 17-48-30.png)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: Screenshot from 2019-09-17 17-49-29.png
Screenshot from 2019-09-17 17-48-30.png

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-730:
--
Attachment: (was: Screenshot from 2019-09-17 17-49-29.png)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931252#comment-16931252
 ] 

YiSheng Lien commented on HDDS-730:
---

The attachments are a demo !Screenshot from 2019-09-17 17-48-30.png! on my machine.

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: Screenshot from 2019-09-17 17-48-30.png, Screenshot from 
> 2019-09-17 17-49-29.png, image-2018-10-24-17-15-39-097.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-17 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931249#comment-16931249
 ] 

YiSheng Lien commented on HDDS-730:
---

Thanks [~ssulav] for the report.

I have fixed the files related to the issue.

We can use the fix once the hadoop dependency version in the ozone pom is 
changed to 3.3.0 in the future.

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Attachments: image-2018-10-24-17-15-39-097.png
>
>
> ozone fs cli help/usage page contains Usage: hadoop fs [ generic options ] 
> I believe the usage should be updated.
> Check line 3 of screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-17 Thread Istvan Fajth (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931246#comment-16931246
 ] 

Istvan Fajth commented on HDDS-2114:


I have tested the fix on a real cluster with Hive, and the workflow that had 
failed and led us to this issue works properly now that the fix is applied. 
Thank you [~ljain] for working on it, and thanks [~msingh] for the review and 
commit!

> Rename does not preserve non-explicitly created interim directories
> ---
>
> Key: HDDS-2114
> URL: https://issues.apache.org/jira/browse/HDDS-2114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Lokesh Jain
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: demonstrative_test.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I am attaching a patch that adds a test that demonstrates the problem.
> The scenario comes from the way Hive implements ACID transactions 
> with the ORC table format, but the test is reduced to the simplest possible 
> code that reproduces the issue.
> The scenario:
>  * Given a 3 level directory structure, where the top level directory was 
> explicitly created, and the interim directory is implicitly created (for 
> example either by creating a file with create("/top/interim/file") or by 
> creating a directory with mkdirs("top/interim/dir"))
>  * When the leaf is moved out from the implicitly created directory making 
> this directory an empty directory
>  * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
> called on the interim directory.
> The expected behaviour:
> after the directory becomes empty, the directory should still be part of 
> the file system; moreover, an empty FileStatus array should be returned when 
> listStatus is called on it, and a valid FileStatus object should be 
> returned when getFileStatus is called on it.
>  
>  
> As this issue is present with Hive, and as this is how a FileSystem is 
> expected to work, this seems to be at least a critical issue as far as I can 
> see; please feel free to change the priority if needed.
> Also please note that, if the interim directory is explicitly created with 
> mkdirs("top/interim") before creating the leaf, then the issue does not 
> appear.
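
A compact sketch of the scenario above using the FileSystem API (the paths and filesystem setup are illustrative; the attached patch contains the real test):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InterimDirSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration()); // assumes o3fs default
    fs.mkdirs(new Path("/top"));                         // explicit top level
    fs.create(new Path("/top/interim/file")).close();    // implicit interim dir
    fs.rename(new Path("/top/interim/file"), new Path("/top/file"));
    // Expected: an empty listing; reported: FileNotFoundException instead.
    FileStatus[] children = fs.listStatus(new Path("/top/interim"));
    System.out.println("children: " + children.length);
  }
}
{code}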



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Attachment: HDFS-14850.001.patch
Status: Patch Available  (was: Open)

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch
>
>
> {code:java}
> @Override
> public Configuration getFileSystemConfiguration() {
>   Configuration conf = new Configuration(true);
>   ConfigurationUtils.copy(serviceHadoopConf, conf);
>   conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
>   // Force-clear server-side umask to make HttpFS match WebHDFS behavior
>   conf.set(FsPermission.UMASK_LABEL, "000");
>   return conf;
> }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently creates a new 
> Configuration.
> This is unnecessary and affects performance. I think it only needs to create 
> the Configuration once, in FileSystemAccessService#init, and have 
> FileSystemAccessService#getFileSystemConfiguration return it.
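
A minimal sketch of the proposed caching (the field and helper names are
assumptions for illustration; HDFS-14850.001.patch is the authoritative
change):

{code:java}
// Build the Configuration once during service initialization and reuse it.
// Callers must treat the returned object as read-only; if they mutate it,
// a defensive per-call copy is still required.
private Configuration fileSystemConf;   // hypothetical cached field

private void initFileSystemConfiguration() {   // called once from init()
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.fileSystemConf = conf;
}

@Override
public Configuration getFileSystemConfiguration() {
  return fileSystemConf;   // no per-call allocation on the hot path
}
{code}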



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931223#comment-16931223
 ] 

Elek, Marton commented on HDDS-2137:


Thanks for working on this issue, [~vjasani]. I added you to the HDDS project 
in Jira as a contributor, so now you can be assigned any HDDS issue.

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource-name verification 
> method that checks whether a bucket/volume name is a valid DNS name.
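
One possible shape of the deduplication (a sketch only; the delegation
direction and javadoc are assumptions, while the method name comes from the
issue title):

{code:java}
public final class OzoneUtils {
  private OzoneUtils() { }

  /**
   * Delegates to the single shared implementation in HddsClientUtils instead
   * of keeping a second copy of the DNS-name validation logic.
   */
  public static void verifyResourceName(String resName)
      throws IllegalArgumentException {
    HddsClientUtils.verifyResourceName(resName);
  }
}
{code}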



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2137) HddsClientUtils and OzoneUtils have duplicate verifyResourceName()

2019-09-17 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2137:
--

Assignee: Viraj Jasani

> HddsClientUtils and OzoneUtils have duplicate verifyResourceName()
> --
>
> Key: HDDS-2137
> URL: https://issues.apache.org/jira/browse/HDDS-2137
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HddsClientUtils and OzoneUtils can share a single resource-name verification 
> method that checks whether a bucket/volume name is a valid DNS name.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931221#comment-16931221
 ] 

Hudson commented on HDDS-2098:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17311 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17311/])
HDDS-2098 : Ozone shell command prints out ERROR when the log4j file … (bharat: 
rev 4f6708103859dac1b1d25180a4b16727d2aaca63)
* (edit) hadoop-ozone/common/src/main/bin/ozone


> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at java.io.FileInputStream.<init>(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
>   at 
> org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.<clinit>(Shell.java:35)
> log4j:ERROR Ignoring configuration file 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> log4j:WARN No appenders could be found for logger 
> (io.jaegertracing.thrift.internal.senders.ThriftSenderFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {
>   "metadata" : { },
>   "name" : "vol-test-putfile-1567740142",
>   "admin" : "root",
>   "owner" : "root",
>   "creationTime" : 1567740146501,
>   "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   }, {
> "type" : "GROUP",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   } ],
>   "quota" : 1152921504606846976
> }
> {code}
> *Fix*
> When the log4j file is not present, logging should default to the console.
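
A sketch of the console-fallback idea expressed in log4j 1.x terms
(illustrative only: the committed fix lives in the
hadoop-ozone/common/src/main/bin/ozone launcher script, not in Java code):

{code:java}
import java.io.File;
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.PropertyConfigurator;

public final class ShellLogInit {
  public static void configure(String propsPath) {
    // e.g. /etc/ozone/conf/ozone-shell-log4j.properties
    File props = new File(propsPath);
    if (props.exists()) {
      PropertyConfigurator.configure(props.getAbsolutePath());
    } else {
      // No config file: install a ConsoleAppender on the root logger
      // instead of spraying log4j:ERROR noise into the shell output.
      BasicConfigurator.configure();
    }
  }
}
{code}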



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931220#comment-16931220
 ] 

Hudson commented on HDDS-2114:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17311 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17311/])
HDDS-2114. Rename does not preserve non-explicitly created interim (msingh: rev 
292bce7908bf4830c793a3f4e80376819c038379)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java


> Rename does not preserve non-explicitly created interim directories
> ---
>
> Key: HDDS-2114
> URL: https://issues.apache.org/jira/browse/HDDS-2114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Lokesh Jain
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: demonstrative_test.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I am attaching a patch that adds a test that demonstrates the problem.
> The scenario comes from the way Hive implements ACID transactions with the 
> ORC table format, but the test is reduced to the simplest possible code that 
> reproduces the issue.
> The scenario:
>  * Given a 3-level directory structure, where the top-level directory was 
> explicitly created and the interim directory was implicitly created (for 
> example, either by creating a file with create("/top/interim/file") or by 
> creating a directory with mkdirs("top/interim/dir"))
>  * When the leaf is moved out of the implicitly created directory, making 
> this directory an empty directory
>  * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
> called on the interim directory.
> The expected behaviour:
> after the directory becomes empty, it should still be part of the file 
> system; moreover, an empty FileStatus array should be returned when 
> listStatus is called on it, and a valid FileStatus object should be returned 
> when getFileStatus is called on it.
>  
> As this issue is present with Hive, and as this is how a FileSystem is 
> expected to work, this seems to be an at least critical issue to me; please 
> feel free to change the priority if needed.
> Also please note that, if the interim directory is explicitly created with 
> mkdirs("top/interim") before creating the leaf, then the issue does not 
> appear.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931222#comment-16931222
 ] 

Hudson commented on HDDS-2132:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17311 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17311/])
HDDS-2132. TestKeyValueContainer is failing (#1457). (shashikant: rev 
e54977f888e1a855e9f88b9fa41e0c8794bd0881)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java


> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?focusedWorklogId=313572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313572
 ]

ASF GitHub Bot logged work on HDDS-2132:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:09
Start Date: 17/Sep/19 09:09
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1457: HDDS-2132. 
TestKeyValueContainer is failing
URL: https://github.com/apache/hadoop/pull/1457#issuecomment-532133273
 
 
   Thanks @bshashikant for reviewing and merging it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313572)
Time Spent: 1h 10m  (was: 1h)

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For

[jira] [Updated] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-2132:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~adoroszlai] for the contribution. I have committed this.

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?focusedWorklogId=313569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313569
 ]

ASF GitHub Bot logged work on HDDS-2132:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:01
Start Date: 17/Sep/19 09:01
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1457: HDDS-2132. 
TestKeyValueContainer is failing
URL: https://github.com/apache/hadoop/pull/1457#issuecomment-532130357
 
 
   Thanks @adoroszlai for workin on this. The patch looks good. I am +1 on the 
change.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313569)
Time Spent: 50m  (was: 40m)

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-un

[jira] [Work logged] (HDDS-2132) TestKeyValueContainer is failing

2019-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2132?focusedWorklogId=313570&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-313570
 ]

ASF GitHub Bot logged work on HDDS-2132:


Author: ASF GitHub Bot
Created on: 17/Sep/19 09:01
Start Date: 17/Sep/19 09:01
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1457: HDDS-2132. 
TestKeyValueContainer is failing
URL: https://github.com/apache/hadoop/pull/1457
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 313570)
Time Spent: 1h  (was: 50m)

> TestKeyValueContainer is failing
> 
>
> Key: HDDS-2132
> URL: https://issues.apache.org/jira/browse/HDDS-2132
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Nanda kumar
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{TestKeyValueContainer}} is failing with the following exception 
> {noformat}
> [ERROR] 
> testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
>   Time elapsed: 0.173 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2141) Missing total number of operations

2019-09-17 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2141:
---

 Summary: Missing total number of operations
 Key: HDDS-2141
 URL: https://issues.apache.org/jira/browse/HDDS-2141
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931153#comment-16931153
 ] 

guojh commented on HDFS-14768:
--

[~zhaoyim] Try the new patch.

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> We suppose a file's block indices are [0,1,2,3,4,5,6,7,8], and we decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding task to each target datanode.
> When a datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {                      // no live replica at index i
>       if (reconstructor.getBlockLen(i) > 0) {  // block i is non-empty
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build the target indices [6,0] will trigger 
> the ISA-L bug: block index 6's data is corrupted (all data is zero).
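
To make the failure mode concrete, here is a short standalone trace of the
loop above under the reported conditions (an editorial sketch using the values
from this report, not part of the attached patches):

{code:java}
import java.util.Arrays;
import java.util.BitSet;

public class TargetIndicesTrace {
  public static void main(String[] args) {
    int dataBlkNum = 6, parityBlkNum = 3;            // RS-6-3
    BitSet liveBitSet = new BitSet();
    for (int idx : new int[]{0, 1, 2, 3, 4, 5, 7, 8}) {
      liveBitSet.set(idx);                           // liveBlockIndices from the report
    }
    short[] targetIndices = new short[2];            // two targets were requested
    int m = 0;
    for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
      if (!liveBitSet.get(i) && m < targetIndices.length) {
        targetIndices[m++] = (short) i;              // only i == 6 is missing
      }
    }
    // Prints [6, 0]: the second slot is never assigned, keeps its default 0,
    // and the datanode reconstructs targets [6, 0] even though block 0 is
    // alive, which is the mismatch that corrupts the rebuilt block.
    System.out.println(Arrays.toString(targetIndices));
  }
}
{code}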
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   bm.getDatanodeManager().removeDatanode(datanodeDescriptor);
>   //assertNull(checkFile(dfs, ecFil

[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread guojh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guojh updated HDFS-14768:
-
Attachment: HDFS-14768.002.patch

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> We suppose a file's block indices are [0,1,2,3,4,5,6,7,8], and we decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding task to each target datanode.
> When a datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {                      // no live replica at index i
>       if (reconstructor.getBlockLen(i) > 0) {  // block i is non-empty
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build the target indices [6,0] will trigger 
> the ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   bm.getDatanodeManager().removeDatanode(datanodeDescriptor);
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure dec

[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14768:

Attachment: (was: HDFS-14768.jpg)

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.jpg, 
> guojh_UT_after_deomission.txt, guojh_UT_before_deomission.txt, 
> zhaoyiming_UT_after_deomission.txt, zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> We suppose a file's block indices are [0,1,2,3,4,5,6,7,8], and we decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding task to each target datanode.
> When a datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {                      // no live replica at index i
>       if (reconstructor.getBlockLen(i) > 0) {  // block i is non-empty
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build the target indices [6,0] will trigger 
> the ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   bm.getDatanodeManager().removeDatanode(datanodeDescriptor);
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure decommiss

[jira] [Updated] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14768:

Attachment: HDFS-14768.jpg

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.jpg, 
> guojh_UT_after_deomission.txt, guojh_UT_before_deomission.txt, 
> zhaoyiming_UT_after_deomission.txt, zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> We suppose a file's block indices are [0,1,2,3,4,5,6,7,8], and we decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding task to each target datanode.
> When a datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {                      // no live replica at index i
>       if (reconstructor.getBlockLen(i) > 0) {  // block i is non-empty
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build the target indices [6,0] will trigger 
> the ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   bm.getDatanodeManager().removeDatanode(datanodeDescriptor);
>   //assertNull(checkFile(dfs, ecFile, 9, decommisionNodes, numDNs));
>   // Ensure decommissioned datan

[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corruption when they are reconstruct.

2019-09-17 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931139#comment-16931139
 ] 

Zhao Yi Ming commented on HDFS-14768:
-

[~gjhkael] The checkstyle check did not pass; please correct it. Thanks!

Also, I fetched the latest trunk code and added only the UT 
*testDecommissionWithBusyNode*; without your fix, the UT still passed.

!HDFS-14768.jpg!

 

> In some cases, erasure blocks are corruption  when they are reconstruct.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.jpg, 
> guojh_UT_after_deomission.txt, guojh_UT_before_deomission.txt, 
> zhaoyiming_UT_after_deomission.txt, zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> We suppose a file's block indices are [0,1,2,3,4,5,6,7,8], and we decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices are 
> [0,1,2,3,4,5,7,8], and the block counters are Live: 7, Decommission: 2.
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it assigns an 
> erasure-coding task to each target datanode.
> When a datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {                      // no live replica at index i
>       if (reconstructor.getBlockLen(i) > 0) {  // block i is non-empty
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first 6 index blocks, i.e. 
> [0,1,2,3,4,5].
> Using the indices [0,1,2,3,4,5] to build the target indices [6,0] will trigger 
> the ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that can stably reproduce it.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSI
