[jira] [Commented] (HDDS-684) Fix HDDS-4 branch after HDDS-490 and HADOOP-15832

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657748#comment-16657748
 ] 

Hadoop QA commented on HDDS-684:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 1 
unchanged - 1 fixed = 2 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Updated] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-704:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~anu] for the review. I've committed the patch to the feature branch. 

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-704-HDDS-4.001.patch
>
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}






[jira] [Comment Edited] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657500#comment-16657500
 ] 

Xiaoyu Yao edited comment on HDDS-704 at 10/20/18 4:07 AM:
---

bcpkix-jdk15on depends on bcprov-jdk15on.

One way to fix this is to exclude bcprov-jdk15on when bringing in the 
bcpkix-jdk15on dependency. 

The other way is to simply drop the explicit dependency on bcprov-jdk15on and 
depend only on bcpkix-jdk15on. 

Since we don't have a specific version requirement on bcprov-jdk15on, I'm 
inclined toward the second approach and will post a patch shortly. 
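
For illustration, a minimal sketch of the two options in hadoop-hdds-common's 
pom.xml (coordinates are taken from the enforcer error quoted below; the 
surrounding POM structure is assumed):

{code:xml}
<!-- Option 1 (sketch): keep the explicit bcprov-jdk15on:1.54 dependency and
     exclude the transitive copy that bcpkix-jdk15on pulls in. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>1.54</version>
  <exclusions>
    <exclusion>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Option 2 (sketch): drop the explicit bcprov-jdk15on dependency entirely
     and rely on the version that bcpkix-jdk15on brings in transitively. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>1.54</version>
</dependency>
{code}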
 


was (Author: xyao):
bcpkix-jdk15on depends on bcprov-jdk15on.

Once way to fix this is to exclude bcprov-jdk15on when bringing in 
bcpkix-jdk15o dependency. 

The other way is to simply drop the explicit dependency on bcprov-jdk15on and 
depend only on bcpkix-jdk15on. 

Since we don't have specific version requirement on bcprov-jdk15on, I incline 
the second approach and I will post a patch shortly. 
 

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-704-HDDS-4.001.patch
>
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}






[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657702#comment-16657702
 ] 

Ayush Saxena commented on HDFS-14004:
-

Thanks [~elgoiri] and [~jojochuang].
I have updated the patch to v2 as per the suggestion. :)

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch, HDFS-14004-02.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/






[jira] [Updated] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14004:

Attachment: HDFS-14004-02.patch

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch, HDFS-14004-02.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/






[jira] [Commented] (HDDS-544) Unconditional wait findbug warning from ReplicationSupervisor

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657636#comment-16657636
 ] 

Hadoop QA commented on HDDS-544:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdds/container-service: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdds/container-service generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 4 
unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-544 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944824/HDDS-544.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dec26645d186 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f069d38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1465/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1465/artifact/out/diff-checkstyle-hadoop-hdds_container-service.txt
 |
| javadoc | 

[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657634#comment-16657634
 ] 

Íñigo Goiri commented on HDFS-12284:


While testing this on Windows, I found that for some reason the RPC server is 
not happy when we try to connect using 0.0.0.0.
For this reason we need to add:
{code}
conf.set(DFS_ROUTER_RPC_BIND_HOST_KEY, "localhost");
{code}
to set a particular bind host for the RPC server so that we can connect.
I'll post an updated patch with this.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.






[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657628#comment-16657628
 ] 

Hudson commented on HDFS-13983:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15278 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15278/])
HDFS-13983. TestOfflineImageViewer crashes in windows. Contributed by 
(inigoiri: rev f069d38c8d3c0bfa91b70a60e4e556ec708fc411)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13893-with-patch-intellij-idea.JPG, 
> HDFS-13893-with-patch-mvn.JPG, 
> HDFS-13893-with-patch-without-sysout-close-intellij-idea.JPG, 
> HDFS-13893-without-patch-intellij-idea.JPG, HDFS-13893-without-patch-mvn.JPG, 
> HDFS-13983-01.patch, HDFS-13983-02.patch, HDFS-13983-03.patch
>
>
> TestOfflineImageViewer crashes on Windows because the OfflineImageViewer 
> REVERSEXML processor tries to delete the output file and re-create the same 
> stream that has already been created.
> There are also unclosed RandomAccessFiles for the input files, which 
> prevents the files from being deleted.






[jira] [Commented] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657621#comment-16657621
 ] 

Hadoop QA commented on HDDS-524:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
6s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
52s{color} | {color:red} Docker failed to build yetus/hadoop:date2018-10-20. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944828/HDDS-524-docker-hadoop-runner.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1466/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-runner.001.patch, 
> HDDS-524-docker-hadoop-runner.002.patch
>
>
> {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of log4j.properties is root instead of hadoop. For this reason we 
> can't use the images for acceptance tests, as the launcher script can't 
> overwrite the log4j properties based on the environment variables.
> The same is true for
> {code}
> docker run -it apache/hadoop:3 ls -lah /opt/hadoop/etc/hadoop
> {code}






[jira] [Commented] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657607#comment-16657607
 ] 

Dinesh Chitlangia commented on HDDS-524:


[~elek] - Attached patch 002, which implements the second approach:
{quote}To fix the owner of log4j.properties with executing the chown from the 
Dockerfile.
{quote}
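
A minimal sketch of what that looks like in the Dockerfile (the path and 
ownership are taken from the listing quoted below; the rest of the Dockerfile 
is assumed):

{code}
# Sketch only: fix the ownership of log4j.properties so the launcher
# script, which runs as the hadoop user, can overwrite it at startup.
RUN chown hadoop:users /opt/hadoop/etc/hadoop/log4j.properties
{code}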

> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-runner.001.patch, 
> HDDS-524-docker-hadoop-runner.002.patch
>
>
> {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of log4j.properties is root instead of hadoop. For this reason we 
> can't use the images for acceptance tests, as the launcher script can't 
> overwrite the log4j properties based on the environment variables.
> The same is true for
> {code}
> docker run -it apache/hadoop:3 ls -lah /opt/hadoop/etc/hadoop
> {code}






[jira] [Updated] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-19 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-524:
---
Attachment: HDDS-524-docker-hadoop-runner.002.patch

> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-runner.001.patch, 
> HDDS-524-docker-hadoop-runner.002.patch
>
>
> {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of log4j.properties is root instead of hadoop. For this reason we 
> can't use the images for acceptance tests, as the launcher script can't 
> overwrite the log4j properties based on the environment variables.
> The same is true for
> {code}
> docker run -it apache/hadoop:3 ls -lah /opt/hadoop/etc/hadoop
> {code}






[jira] [Updated] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13983:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~vinayrpet] for the patch and [~surmountian] and [~ayushtkn] for the 
review.
Committed to trunk.

> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13893-with-patch-intellij-idea.JPG, 
> HDFS-13893-with-patch-mvn.JPG, 
> HDFS-13893-with-patch-without-sysout-close-intellij-idea.JPG, 
> HDFS-13893-without-patch-intellij-idea.JPG, HDFS-13893-without-patch-mvn.JPG, 
> HDFS-13983-01.patch, HDFS-13983-02.patch, HDFS-13983-03.patch
>
>
> TestOfflineImageViewer crashes on Windows because the OfflineImageViewer 
> REVERSEXML processor tries to delete the output file and re-create the same 
> stream that has already been created.
> There are also unclosed RandomAccessFiles for the input files, which 
> prevents the files from being deleted.






[jira] [Comment Edited] (HDDS-544) Unconditional wait findbug warning from ReplicationSupervisor

2018-10-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657577#comment-16657577
 ] 

Arpit Agarwal edited comment on HDDS-544 at 10/19/18 11:52 PM:
---

Thanks [~anu]. Missed a few changes in the earlier patches.

v03 patch
- fix build break in DatanodeStateMachine
- Improve InterruptedException handling in {{stop}} routine.
- A few more javadoc comments.


was (Author: arpitagarwal):
Thanks [~anu]. Missed a few changes in the uploaded patch.

v03 patch
- fix build break in DatanodeStateMachine
- Improve InterruptedException handling in {{stop}} routine.
- A few more javadoc comments.

> Unconditional wait findbug warning from ReplicationSupervisor
> -
>
> Key: HDDS-544
> URL: https://issues.apache.org/jira/browse/HDDS-544
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-544.01.patch, HDDS-544.02.patch, HDDS-544.03.patch
>
>
> We have a findbug warning in ReplicationSupervisor:
> {code}
>  Multithreaded correctness Warnings
> Code  Warning
> UWUnconditional wait in 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$Worker.run()
>   
> Details
> UW_UNCOND_WAIT: Unconditional wait
> This method contains a call to java.lang.Object.wait() which is not guarded 
> by conditional control flow.  The code should verify that condition it 
> intends to wait for is not already satisfied before calling wait; any 
> previous notifications will be ignored
> {code}
> This issue is to fix it.
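
For context, the guarded-wait pattern FindBugs expects looks roughly like the 
following sketch (the lock, condition, and task names are hypothetical, not 
the actual ReplicationSupervisor code):

{code:java}
// Sketch only: wait() is guarded by its condition and re-checked in a
// loop, and an interrupt during shutdown restores the interrupt flag.
private final Object lock = new Object();
private boolean taskQueued = false;  // hypothetical condition

public void run() {
  synchronized (lock) {
    while (!taskQueued) {            // guard: never wait unconditionally
      try {
        lock.wait();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve interrupt status
        return;
      }
    }
    taskQueued = false;
    // ... pick up and process the replication task ...
  }
}
{code}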






[jira] [Commented] (HDDS-544) Unconditional wait findbug warning from ReplicationSupervisor

2018-10-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657577#comment-16657577
 ] 

Arpit Agarwal commented on HDDS-544:


Thanks [~anu]. Missed a few changes in the uploaded patch.

v03 patch
- fix build break in DatanodeStateMachine
- Improve InterruptedException handling in {{stop}} routine.
- A few more javadoc comments.

> Unconditional wait findbug warning from ReplicationSupervisor
> -
>
> Key: HDDS-544
> URL: https://issues.apache.org/jira/browse/HDDS-544
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-544.01.patch, HDDS-544.02.patch, HDDS-544.03.patch
>
>
> We have a findbug warning in ReplicationSupervisor:
> {code}
>  Multithreaded correctness Warnings
> Code  Warning
> UWUnconditional wait in 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$Worker.run()
>   
> Details
> UW_UNCOND_WAIT: Unconditional wait
> This method contains a call to java.lang.Object.wait() which is not guarded 
> by conditional control flow.  The code should verify that condition it 
> intends to wait for is not already satisfied before calling wait; any 
> previous notifications will be ignored
> {code}
> This issue is to fix it.






[jira] [Updated] (HDDS-544) Unconditional wait findbug warning from ReplicationSupervisor

2018-10-19 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-544:
---
Attachment: HDDS-544.03.patch

> Unconditional wait findbug warning from ReplicationSupervisor
> -
>
> Key: HDDS-544
> URL: https://issues.apache.org/jira/browse/HDDS-544
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-544.01.patch, HDDS-544.02.patch, HDDS-544.03.patch
>
>
> We have a findbug warning in ReplicationSupervisor:
> {code}
>  Multithreaded correctness Warnings
> Code  Warning
> UWUnconditional wait in 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$Worker.run()
>   
> Details
> UW_UNCOND_WAIT: Unconditional wait
> This method contains a call to java.lang.Object.wait() which is not guarded 
> by conditional control flow.  The code should verify that condition it 
> intends to wait for is not already satisfied before calling wait; any 
> previous notifications will be ignored
> {code}
> This issue is to fix it.






[jira] [Commented] (HDDS-706) Invalid Getting Started docker-compose YAML

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657571#comment-16657571
 ] 

Anu Engineer commented on HDDS-706:
---

cc: [~elek]

> Invalid Getting Started docker-compose YAML
> ---
>
> Key: HDDS-706
> URL: https://issues.apache.org/jira/browse/HDDS-706
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Andrew Grande
>Priority: Major
> Attachments: docker-compose.yaml
>
>
> Consistent indentation is critical to the YAML file structure. The page here 
> lists a docker-compose file which is invalid.
> Here's the type of error one gets:
> {noformat}
> > docker-compose up -d
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>  in "./docker-compose.yaml", line 5, column 12{noformat}
> I'm attaching a fixed YAML file; please ensure the Getting Started page 
> preserves the correct indentation and formatting.
>  






[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657564#comment-16657564
 ] 

Wei-Chiu Chuang commented on HDFS-14004:


Thanks [~ayushtkn] for root-causing the test failure. Good job, because I 
couldn't reproduce it locally.

The whole point of the test is to examine the exact sequence in which a 
client issues recovery, then closes the file, and the DN completes recovery 
and reports back to the NN. Prior to the HDFS-10240 fix, the NN would 
increment the genstamp when the DN reported back, even though the file had 
already been closed, causing corruption (because of the genstamp mismatch). 
After the fix, the NN rejects closing of the file if the file is under 
recovery.

If you let the IBR continue, the exact sequence demonstrated above can't be 
guaranteed, because the NN may receive the block report of the recovered 
block before the client requests closing of the file. I feel solution #2 is 
more appropriate given the scenario under test.

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/






[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-19 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657552#comment-16657552
 ] 

Siyao Meng commented on HDFS-13996:
---

I have figured out a way to inherit the WebHDFS configuration. But I'm 
wondering whether there is a reason we should have a separate config for 
HttpFS in httpfs-site.xml, i.e. "HTTPFS_ACL_PERMISSION_PATTERN_DEFAULT" and 
"HTTPFS_ACL_PERMISSION_PATTERN_KEY".

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Previously, HDFS-11421 made WebHDFS' ACLs RegEx configurable, but it is 
> not yet configurable in HttpFS. For now, the HttpFS ACL permission pattern 
> is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.






[jira] [Work started] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-19 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13996 started by Siyao Meng.
-
> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Previously, HDFS-11421 made WebHDFS' ACLs RegEx configurable, but it is 
> not yet configurable in HttpFS. For now, the HttpFS ACL permission pattern 
> is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.






[jira] [Created] (HDDS-706) Invalid Getting Started docker-compose YAML

2018-10-19 Thread Andrew Grande (JIRA)
Andrew Grande created HDDS-706:
--

 Summary: Invalid Getting Started docker-compose YAML
 Key: HDDS-706
 URL: https://issues.apache.org/jira/browse/HDDS-706
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: website
Reporter: Andrew Grande
 Attachments: docker-compose.yaml

Consistent indentation is critical to the YAML file structure. The page here 
lists a docker-compose file which is invalid.

Here's the type of error one gets:
{noformat}

> docker-compose up -d
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
 in "./docker-compose.yaml", line 5, column 12{noformat}
I'm attaching a fixed YAML file; please ensure the Getting Started page 
preserves the correct indentation and formatting.
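
To illustrate the failure mode (service and key names below are hypothetical, 
not the actual Ozone compose file): a mapping key indented deeper than its 
sibling makes the parser treat it as scalar text, and the stray ":" then 
triggers exactly this error.

{noformat}
# Broken (sketch): "command" is indented one space deeper than "image".
datanode:
  image: apache/hadoop-runner
   command: ["/opt/hadoop/bin/ozone", "datanode"]

# Fixed (sketch): sibling keys aligned at the same indentation.
datanode:
  image: apache/hadoop-runner
  command: ["/opt/hadoop/bin/ozone", "datanode"]
{noformat}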

 






[jira] [Updated] (HDDS-706) Invalid Getting Started docker-compose YAML

2018-10-19 Thread Andrew Grande (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grande updated HDDS-706:
---
Attachment: docker-compose.yaml

> Invalid Getting Started docker-compose YAML
> ---
>
> Key: HDDS-706
> URL: https://issues.apache.org/jira/browse/HDDS-706
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Andrew Grande
>Priority: Major
> Attachments: docker-compose.yaml
>
>
> Consistent indentation is critical to the YAML file structure. The page here 
> lists a docker-compose file which is invalid.
> Here's the type of error one gets:
> {noformat}
> > docker-compose up -d
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>  in "./docker-compose.yaml", line 5, column 12{noformat}
> I'm attaching a fixed YAML file; please ensure the Getting Started page 
> preserves the correct indentation and formatting.
>  






[jira] [Commented] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657545#comment-16657545
 ] 

Hadoop QA commented on HDDS-704:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 18m 
52s{color} | {color:red} root in HDDS-4 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-704 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944811/HDDS-704-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux b480d1d8a65c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / a782444 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1464/artifact/out/branch-mvninstall-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1464/testReport/ |
| Max. process+thread count | 470 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1464/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>   

[jira] [Work started] (HDDS-643) Parse Authorization header in a separate filter

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-643 started by Bharat Viswanadham.
---
> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is created from an HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.






[jira] [Assigned] (HDDS-643) Parse Authorization header in a separate filter

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-643:
---

Assignee: Bharat Viswanadham

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is created from an HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.






[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread Xiao Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657528#comment-16657528
 ] 

Xiao Liang commented on HDFS-13983:
---

+1 for [^HDFS-13983-03.patch]

> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HDFS-13893-with-patch-intellij-idea.JPG, 
> HDFS-13893-with-patch-mvn.JPG, 
> HDFS-13893-with-patch-without-sysout-close-intellij-idea.JPG, 
> HDFS-13893-without-patch-intellij-idea.JPG, HDFS-13893-without-patch-mvn.JPG, 
> HDFS-13983-01.patch, HDFS-13983-02.patch, HDFS-13983-03.patch
>
>
> TestOfflineImageViewer crashes on Windows because the OfflineImageViewer 
> REVERSEXML processor tries to delete the output file and re-create the same 
> stream that has already been created.
> There are also unclosed RandomAccessFiles for the input files, which 
> prevents the files from being deleted.






[jira] [Commented] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657527#comment-16657527
 ] 

Hadoop QA commented on HDDS-705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/s3gateway generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-ozone/s3gateway |
|  |  Load of known null value in 
org.apache.hadoop.ozone.s3.endpoint.EndpointBase.parseUsername(HttpHeaders)  At 
EndpointBase.java:in 
org.apache.hadoop.ozone.s3.endpoint.EndpointBase.parseUsername(HttpHeaders)  At 
EndpointBase.java:[line 188] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944809/HDDS-705.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4c8d687ae9ee 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00254d7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1463/artifact/out/new-findbugs-hadoop-ozone_s3gateway.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1463/testReport/ |
| Max. process+thread count | 340 

[jira] [Commented] (HDDS-615) ozone-dist should depend on hadoop-ozone-file-system

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657524#comment-16657524
 ] 

Bharat Viswanadham commented on HDDS-615:
-

[~elek]

I think we can commit this after changing the Ozone personality in Yetus, so 
that we can be sure this patch solves the problem.
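
The fix itself presumably declares the missing artifact as a Maven dependency 
of the dist module, roughly like this sketch for hadoop-ozone/dist/pom.xml 
(the artifactId is inferred from the jar name in the log quoted below; the 
actual patch may differ):

{code:xml}
<!-- Sketch: force hadoop-ozone-dist to build after the ozonefs module by
     declaring the jar that the dist script copies as a dependency. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-filesystem</artifactId>
</dependency>
{code}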

> ozone-dist should depend on hadoop-ozone-file-system
> 
>
> Key: HDDS-615
> URL: https://issues.apache.org/jira/browse/HDDS-615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-615.001.patch
>
>
> In the Yetus build of HDDS-523 the build of the dist project was failed:
> {code:java}
> Mon Oct  8 14:16:06 UTC 2018
> cd /testptch/hadoop/hadoop-ozone/dist
> /usr/bin/mvn -Phdds 
> -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
> -DskipTests -fae clean install -DskipTests=true -Dmaven.javadoc.skip=true 
> -Dcheckstyle.skip=true -Dfindbugs.skip=true
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Apache Hadoop Ozone Distribution 0.3.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-ozone-dist 
> ---
> [INFO] Deleting /testptch/hadoop/hadoop-ozone/dist (includes = 
> [dependency-reduced-pom.xml], excludes = [])
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-ozone-dist 
> ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: /testptch/hadoop/hadoop-ozone/dist/target/test-dir
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-ozone-dist ---
> [INFO] 
> [INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
> cp: cannot stat 
> '/testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar':
>  No such file or directory
> Current directory /testptch/hadoop/hadoop-ozone/dist/target
> $ rm -rf ozone-0.3.0-SNAPSHOT
> $ mkdir ozone-0.3.0-SNAPSHOT
> $ cd ozone-0.3.0-SNAPSHOT
> $ cp -p /testptch/hadoop/LICENSE.txt .
> $ cp -p /testptch/hadoop/NOTICE.txt .
> $ cp -p /testptch/hadoop/README.txt .
> $ mkdir -p ./share/hadoop/mapreduce
> $ mkdir -p ./share/hadoop/ozone
> $ mkdir -p ./share/hadoop/hdds
> $ mkdir -p ./share/hadoop/yarn
> $ mkdir -p ./share/hadoop/hdfs
> $ mkdir -p ./share/hadoop/common
> $ mkdir -p ./share/ozone/web
> $ mkdir -p ./bin
> $ mkdir -p ./sbin
> $ mkdir -p ./etc
> $ mkdir -p ./libexec
> $ cp -r /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/conf 
> etc/hadoop
> $ cp 
> /testptch/hadoop/hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties 
> etc/hadoop
> $ cp /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
> bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd 
> bin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone bin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.cmd
>  libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
>  libexec/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/ozone-config.sh 
> libexec/
> $ cp -r /testptch/hadoop/hadoop-ozone/common/src/main/shellprofile.d libexec/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
>  sbin/
> $ cp 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/bin/workers.sh 
> sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/start-ozone.sh sbin/
> $ cp /testptch/hadoop/hadoop-ozone/common/src/main/bin/stop-ozone.sh sbin/
> $ mkdir -p ./share/hadoop/ozonefs
> $ cp 
> /testptch/hadoop/hadoop-ozone/ozonefs/target/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
>  ./share/hadoop/ozonefs/hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
> Failed!
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 7.832 s
> [INFO] Finished at: 2018-10-08T14:16:16+00:00
> [INFO] Final Memory: 33M/625M
> [INFO] 
> 
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (dist) on project hadoop-ozone-dist: Command execution failed. Process exited 
> with an error: 1 

[jira] [Commented] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657522#comment-16657522
 ] 

Anu Engineer commented on HDDS-704:
---

+1, Thanks for fixing this.

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-704-HDDS-4.001.patch
>
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657515#comment-16657515
 ] 

Anu Engineer commented on HDDS-120:
---

Looks good, thanks for sharing. cc: [~jnp]

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-10-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657511#comment-16657511
 ] 

Dinesh Chitlangia commented on HDDS-120:


[~xyao], [~anu] - here is an early look at the entries in the DN Audit Log.
{noformat}
2018-10-19 18:09:32,321 | INFO | DNAudit | user=null | ip=null | op=WRITE_CHUNK 
{blockId=org.apache.hadoop.hdds.client.BlockID@28ca254f[containerID=2,localID=100924586143252489],
 
chunkInfo=ChunkInfo{chunkName='b93ffed60e643485c0b5dddcfd54abae_stream_c715b4af-5872-49f1-a830-892ca6f296c6_chunk_1,
 offset=0, len=10240}} | ret=SUCCESS | 

2018-10-19 18:09:32,390 | INFO | DNAudit | user=null | ip=null | op=PUT_BLOCK 
{blockData={blockID='org.apache.hadoop.hdds.client.BlockID@53ff5cb3[containerID=2,localID=100924586143252484],
 metadata={TYPE=KEY}, 
chunks=[ChunkInfo{chunkName='6e6edd4c89ed1253d76c9ffdabb362dc_stream_8bb110b3-2d66-43f2-87e4-a2b100d98801_chunk_1,
 offset=0, len=10240}]}} | ret=SUCCESS | 
{noformat}
I will log another Jira to add the user/ip via gRPC, as that piece depends on 
other work.

Please review these sample entries and let me know what you think.
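
For reference, a minimal, self-contained sketch of how entries in this 
"user=... | ip=... | op=... {params} | ret=..." shape could be produced with 
Log4j 2 (the logger name "DNAudit" and the helper below are illustrative 
assumptions, not the actual Ozone audit API):
{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class DnAuditSketch {
  // Hypothetical dedicated audit logger; the real wiring may differ.
  private static final Logger AUDIT = LogManager.getLogger("DNAudit");

  // Formats one entry in the shape shown in the samples above.
  static String auditEntry(String user, String ip, String op,
      String params, String result) {
    return String.format("user=%s | ip=%s | op=%s %s | ret=%s |",
        user, ip, op, params, result);
  }

  public static void main(String[] args) {
    // user/ip are null for now, matching the samples; a follow-up Jira
    // will wire them in from gRPC.
    AUDIT.info(auditEntry(null, null, "WRITE_CHUNK",
        "{blockId=..., chunkInfo=...}", "SUCCESS"));
  }
}
{code}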

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-704:

Attachment: HDDS-704-HDDS-4.001.patch

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-704-HDDS-4.001.patch
>
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-704:

Status: Patch Available  (was: Open)

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-704-HDDS-4.001.patch
>
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657500#comment-16657500
 ] 

Xiaoyu Yao commented on HDDS-704:
-

bcpkix-jdk15on depends on bcprov-jdk15on.

One way to fix this is to exclude bcprov-jdk15on when bringing in the 
bcpkix-jdk15on dependency. 

The other way is to simply drop the explicit dependency on bcprov-jdk15on and 
depend only on bcpkix-jdk15on. 

Since we don't have a specific version requirement on bcprov-jdk15on, I am 
inclined toward the second approach and will post a patch shortly. 
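
For illustration, the first approach would look roughly like the following 
pom.xml fragment (a sketch only; the actual patch may differ):
{code:xml}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>1.54</version>
  <exclusions>
    <!-- Avoid pulling in a second, conflicting bcprov-jdk15on version. -->
    <exclusion>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}
The second approach simply deletes the explicit bcprov-jdk15on dependency 
entry and lets bcpkix-jdk15on pull in its bcprov-jdk15on transitively.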
 

> Fix the Dependency convergence issue on HDDS-4
> --
>
> Key: HDDS-704
> URL: https://issues.apache.org/jira/browse/HDDS-704
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths 
> to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcprov-jdk15on:1.54
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.bouncycastle:bcpkix-jdk15on:1.54
> +-org.bouncycastle:bcprov-jdk15on:1.60
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-19 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reopened HDFS-12026:


Reopening.
I think it is a blocker for release 3.2.
If there is no progress on this, I would recommend reverting it -- potentially 
the entire HDFS-8707 branch; I didn't check how much of it relies on this 
change.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-19 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12026:
---
Priority: Blocker  (was: Major)
Target Version/s: 3.2.0

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flag:
> -std=c++11
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657493#comment-16657493
 ] 

Anu Engineer commented on HDDS-705:
---

+1, pending jenkins. 

> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.
>  
> The documentation shows the key name together with the bucket, but when tried 
> against the AWS S3 endpoint it shows just the key name; this was found using 
> mitmproxy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657480#comment-16657480
 ] 

Arpit Agarwal commented on HDDS-702:


OK, I didn't realize the snapshot repos were being auto-updated. I retract my 
objection.

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In the current form of the project, Ozone uses the in-tree snapshot version 
> of Hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the Hadoop jars, which could be 
> independent from the in-tree Hadoop.
> 1. By using an already released Hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of Hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested if only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378, and 
> previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657474#comment-16657474
 ] 

Bharat Viswanadham edited comment on HDDS-702 at 10/19/18 9:47 PM:
---

[~elek]

Thanks for the info; it makes sense to keep the parent project as 
hadoop-project.

One more observation: when I tried to build with the command below

mvn clean install -T 6 -Pdist -Phdds -DskipTests -Dmaven.javadoc.skip=true -am 
-pl :hadoop-ozone-dist

I see the warning below, although the build still succeeds:

[WARNING] The requested profile "hdds" could not be activated because it does 
not exist.

[~arpitagarwal]

The scenario you have mentioned should not happen, as in the pom.xml we have 
provided the snapshot repos from which we should get the Hadoop snapshot jars.

 


was (Author: bharatviswa):
[~elek]

Thanks for the info; it makes sense to keep the parent project as 
hadoop-project.

[~arpitagarwal]

The scenario you have mentioned should not happen, as in the pom.xml we have 
provided the snapshot repos from which we should get the Hadoop snapshot jars.

 

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In the current form of the project, Ozone uses the in-tree snapshot version 
> of Hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the Hadoop jars, which could be 
> independent from the in-tree Hadoop.
> 1. By using an already released Hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of Hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested if only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378, and 
> previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657474#comment-16657474
 ] 

Bharat Viswanadham commented on HDDS-702:
-

[~elek]

Thanks for the info; it makes sense to keep the parent project as 
hadoop-project.

[~arpitagarwal]

The scenario you have mentioned should not happen, as in the pom.xml we have 
provided the snapshot repos from which we should get the Hadoop snapshot jars.

 

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In the current form of the project, Ozone uses the in-tree snapshot version 
> of Hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the Hadoop jars, which could be 
> independent from the in-tree Hadoop.
> 1. By using an already released Hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of Hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested if only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378, and 
> previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Description: 
[https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The resource you requested does not exist</Message>
  <Resource>/mybucket/myfoto.jpg</Resource>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>
{code}

Right now in the code we print the resource as "bucket" or "key" instead of 
the actual resource name.

The documentation shows the key name together with the bucket, but when tried 
against the AWS S3 endpoint it shows just the key name; this was found using 
mitmproxy.

  was:
[https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The resource you requested does not exist</Message>
  <Resource>/mybucket/myfoto.jpg</Resource>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>
{code}

Right now in the code we print the resource as "bucket" or "key" instead of 
the actual resource name.


> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.
>  
> The documentation shows the key name together with the bucket, but when tried 
> against the AWS S3 endpoint it shows just the key name; this was found using 
> mitmproxy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Description: 
[https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The resource you requested does not exist</Message>
  <Resource>/mybucket/myfoto.jpg</Resource>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>
{code}

Right now in the code we print the resource as "bucket" or "key" instead of 
the actual resource name.

  was:
[https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The resource you requested does not exist</Message>
  <Resource>/mybucket/myfoto.jpg</Resource>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>

Right now in the code we print the resource as "bucket" or "key" instead of 
the actual resource name.


> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Target Version/s: 0.3.0

> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Status: Patch Available  (was: In Progress)

> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-705:

Attachment: HDDS-705.00.patch

> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-705.00.patch
>
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-705 started by Bharat Viswanadham.
---
> OS3Exception resource name should be the actual resource name
> -
>
> Key: HDDS-705
> URL: https://issues.apache.org/jira/browse/HDDS-705
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> [https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
>  
> Right now in the code we print the resource as "bucket" or "key" instead of 
> the actual resource name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-705) OS3Exception resource name should be the actual resource name

2018-10-19 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-705:
---

 Summary: OS3Exception resource name should be the actual resource 
name
 Key: HDDS-705
 URL: https://issues.apache.org/jira/browse/HDDS-705
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


[https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html]

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchKey</Code>
  <Message>The resource you requested does not exist</Message>
  <Resource>/mybucket/myfoto.jpg</Resource>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>

Right now in the code we print the resource as "bucket" or "key" instead of 
the actual resource name.
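
For illustration, the fix amounts to reporting the concrete resource from the 
request instead of a fixed label; a rough, self-contained sketch (the class 
and helper names here are assumptions, not the actual Ozone s3 gateway code):
{code:java}
// Sketch only: demonstrates choosing the resource string the way AWS S3
// reports it -- the actual key (or bucket) name from the request, not a
// fixed label such as "key" or "bucket".
public class S3ResourceNameSketch {

  static String resourceFor(String bucket, String key) {
    // Observed against the real AWS endpoint: a missing key reports just
    // the key name; fall back to the bucket name for bucket errors.
    return key != null ? key : bucket;
  }

  public static void main(String[] args) {
    System.out.println(resourceFor("mybucket", "myfoto.jpg")); // myfoto.jpg
    System.out.println(resourceFor("mybucket", null));         // mybucket
  }
}
{code}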



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657454#comment-16657454
 ] 

Anu Engineer commented on HDDS-676:
---

1. 
{code}
message GetBlockRequestProto {
  required DatanodeBlockID blockID = 1;
  *required* uint64 blockCommitSequenceId = 2;
}
{code}
A question rather than a comment: if we make this a required field, and we get 
to a datanode and want to read a block -- say a case where we say "give me any 
version of the data you have" -- how does that work? Should we make this 
optional? This also means that if you have written a block using 0.2.1, then 
we cannot read it via this protocol. It is an Alpha release, so we don't have 
to be strictly backward compatible; just a thought.


2. More a question of code style: should we have the bcsId inside the blockID 
or not, in the BlockManager interface?

3. BlockManagerImpl.java#putBlock 
{code}
long blockCommitSequenceId = data.getBlockCommitSequenceId();
long blockCommitSequenceIdValue = getBlockCommitSequenceId(db);
{code}

My apologies, but this code gives me a headache. I understand the context and 
the why, but a function with the variables *blockCommitSequenceId*, 
*blockCommitSequenceIdValue*, and *blockCommitSequenceIdKey* causes a stack 
overflow for me :(

4. BlockManagerImpl.java#getBlock: Line 155
I am asking more to make sure that I understand this correctly. We look up the 
max container bcsid, and then later read the block's bcsid. I am guessing we 
do this because we are counting on the containerBCSId being cached, so 
checking it will not cause real disk I/O and therefore makes it easier for us 
to fail faster?
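
If I read it right, the intent is something like the following sketch (my 
reading only, with illustrative names -- not the actual BlockManagerImpl code):
{code:java}
import java.io.IOException;

// Sketch of the fail-fast ordering discussed above.
class GetBlockSketch {

  long getBlock(long containerBCSId, long requestedBcsId) throws IOException {
    // 1. Check the (cached) container-level max BCSID first: no disk I/O,
    //    so an impossible request fails immediately.
    if (containerBCSId < requestedBcsId) {
      throw new IOException("Unable to find the block with bcsID "
          + requestedBcsId + "; container max bcsID is " + containerBCSId);
    }
    // 2. Only then do the expensive DB read for the block itself.
    return readBlockFromDb(requestedBcsId);
  }

  long readBlockFromDb(long bcsId) {
    return bcsId; // placeholder for the real DB lookup
  }
}
{code}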

5. ContainerProtcolCalls.java#readSmallFile:Line 394
{code}
// by default, set the bcsId to be 0
.setBlockCommitSequenceId(0);
{code}

This does not match the earlier logic in putBlock. I am presuming the 
putSmallFile call was committed via Ratis; please correct me if I am wrong.


{code}
// default blockCommitSequenceId for any block is 0. If the putBlock
// request is not coming via Ratis (for test scenarios), it will be 0.
// In such cases, we should overwrite the block as well.
if (blockCommitSequenceId != 0) {
  if (blockCommitSequenceId <= blockCommitSequenceIdValue) {
    // Since the blockCommitSequenceId stored in the db is greater than
    // or equal to the blockCommitSequenceId to be updated, it means the
    // putBlock transaction is reapplied in the ContainerStateMachine on
    // restart. It also implies that the given block must already exist
    // in the db. Just log and return.
    LOG.warn("blockCommitSequenceId " + blockCommitSequenceIdValue
        + " in the Container Db is greater than the supplied value "
        + blockCommitSequenceId + ". Ignoring it");
    return data.getSize();
  }
}
{code}


6. XceiverClientGrpc.java#sendCommand: can we please separate out trying the 
different datanodes from connecting and reading the data? That makes this into 
two functions: one that connects and reads, and another which tries all nodes 
till we get a successful read -- see the sketch below.
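
A minimal sketch of the suggested split (DatanodeDetails and Response are 
placeholder types here, not the real Ozone classes):
{code:java}
import java.io.IOException;
import java.util.List;

class SendCommandSketch {
  static class DatanodeDetails { String host; }
  static class Response { }

  // One function that only connects to a single datanode and reads.
  Response readFromDatanode(DatanodeDetails dn) throws IOException {
    // connect, send the request, return the reply (elided in this sketch)
    throw new IOException("connect/read elided in this sketch");
  }

  // Another that tries each replica until one read succeeds.
  Response sendCommand(List<DatanodeDetails> datanodes) throws IOException {
    IOException last = null;
    for (DatanodeDetails dn : datanodes) {
      try {
        return readFromDatanode(dn);
      } catch (IOException e) {
        last = e; // remember the failure and fail over to the next replica
      }
    }
    throw new IOException("Request failed on all datanodes", last);
  }
}
{code}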

There is also an issue that is not taken care of in this code path: what 
happens if a block is deleted? If a datanode says it doesn't have the block, 
do we still try all 3 datanodes? I am presuming we don't have to deal with it 
since it is an edge case.

7. Nit: the fact that we have to maintain dnIndex and size might be a good 
indication that we need an old-school for loop at line 154 instead of a 
for-each loop.


8. Nit: XceiverClientRatis#watchForCommit -- if this is a test-only function, 
perhaps move it to a test helper file?

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch
>
>
> With the BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The client 
> should verify the BCSID of the container which has the data block, which 
> should always be greater than or equal to the BCSID of the block to be read, 
> and the existing block BCSID should exactly match that of the block to be 
> read. As a part of this, the client can try to read from a replica with a 
> supplied BCSID and fail over to the next one in case the block does not 
> exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657453#comment-16657453
 ] 

Hadoop QA commented on HDDS-703:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-703 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944801/HDDS-703.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux a683faaa83fe 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e2cecb6 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 411 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/docs U: hadoop-ozone/docs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1462/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-703.001.patch
>
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try 
> to browse the documentation, the side nav bar is not visible. It is needed 
> for effective navigation of the Ozone documentation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657451#comment-16657451
 ] 

Arpit Agarwal commented on HDDS-702:


I like the idea a lot, but I am hesitant to commit it with a snapshot 
dependency because the build experience will become unpredictable. E.g., if I 
just try to compile Ozone in trunk without 3.2.1-SNAPSHOT artifacts in my 
local Maven repo, then the build will fail.

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In the current form of the project, Ozone uses the in-tree snapshot version 
> of Hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the Hadoop jars, which could be 
> independent from the in-tree Hadoop.
> 1. By using an already released Hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of Hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested if only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378, and 
> previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657443#comment-16657443
 ] 

Elek, Marton commented on HDDS-702:
---

bq. why can't we remove the parent from hadoop-hdds and hadoop-ozone pom.xml

We can, but we have useful conventions in hadoop-project such as the 
checkstyle/findbugs/enforcement plugins. As we follow the main hadoop 
conventions, it was easier. But we can change it if we have more differences 
later.

bq. I think we can commit this patch, as when Hadoop 3.2 gets released the only 
thing we need to change is to change hadoop.version to 3.2. (Not sure if any 
additional needs to be done)

One additional minor change: with the change to 3.2.0 we can remove the 
snapshot repository references. I added them to the hadoop-ozone/hadoop-hdds 
projects as they are required to download the snapshot parent (which also 
contains the snapshot repository entry, but that is only available after 
download). 3.2.0 will be synced with central and can work without the 
repository entries.

bq.  Should we wait until Apache Hadoop 3.2.0 is released so we can point to a 
non-snapshot version?

I agree with [~bharatviswa] that we can commit it earlier. Even now we use 
snapshot versions from hadoop, it's just 3.3.0-SNAPSHOT. Better to start this 
approach earlier: hopefully we can include it in 0.3.0 (with hadoop 3.2.0).

But I have no problem if you prefer to wait for 3.2.0. 

ps: I modified the configuration of the maven enforcer plugin in the patch. I 
have a very detailed comment in HDDS-691 about this change.

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In the current form of the project, Ozone uses the in-tree snapshot version 
> of Hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the Hadoop jars, which could be 
> independent from the in-tree Hadoop.
> 1. By using an already released Hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of Hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested if only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378, and 
> previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657427#comment-16657427
 ] 

Íñigo Goiri commented on HDFS-14004:


Pausing the IBR should be fine then ([^HDFS-14004-01.patch]).
The rest of the test, with the heartbeat triggering, seems to check the full 
logic.
I'd like to get feedback from [~jojochuang] though.

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657420#comment-16657420
 ] 

Hadoop QA commented on HDFS-14004:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 15m 
57s{color} | {color:red} Docker failed to build yetus/hadoop:4b8c2b1. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14004 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944772/HDFS-14004-01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25320/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657405#comment-16657405
 ] 

Íñigo Goiri commented on HDFS-13983:


The tests for [^HDFS-13983-03.patch] run with no problems here:
https://builds.apache.org/job/PreCommit-HDFS-Build/25248/testReport/org.apache.hadoop.hdfs.tools.offlineImageViewer/

I haven't seen TestPersistentStoragePolicySatisfier failing before, but I 
don't see how it would be related.
+1

> TestOfflineImageViewer crashes in windows
> -
>
> Key: HDFS-13983
> URL: https://issues.apache.org/jira/browse/HDFS-13983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HDFS-13893-with-patch-intellij-idea.JPG, 
> HDFS-13893-with-patch-mvn.JPG, 
> HDFS-13893-with-patch-without-sysout-close-intellij-idea.JPG, 
> HDFS-13893-without-patch-intellij-idea.JPG, HDFS-13893-without-patch-mvn.JPG, 
> HDFS-13983-01.patch, HDFS-13983-02.patch, HDFS-13983-03.patch
>
>
> TestOfflineImageViewer crashes on Windows because OfflineImageViewer 
> REVERSEXML tries to delete the output file and re-create the same stream 
> which is already created.
> Also, there are unclosed RAFs for the input files, which blocks the files 
> from being deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657404#comment-16657404
 ] 

Anu Engineer commented on HDDS-703:
---

+1, I have tested the given link from various devices. Thanks for an extremely 
fast response.

> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-703.001.patch
>
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try 
> to browse the documentation, the side nav bar is not visible. It is needed 
> for effective navigation of the Ozone documentation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-703:
--
Status: Patch Available  (was: Open)

I changed the hamburger menu to open the sidebar instead of the navbar, and I 
copied the ASF links to the end of the sidebar (only for the mobile render).

You can test it from this (temporary) page:
https://kv.anzix.net/odip/

It's enough to resize your desktop browser to a small window. Compare it with:
https://hadoop.apache.org/ozone/docs/0.2.1-alpha/

> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-703.001.patch
>
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try 
> to browse the documentation, the side nav bar is not visible. It is needed 
> for effective navigation of the Ozone documentation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-703:
--
Attachment: HDDS-703.001.patch

> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-703.001.patch
>
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try 
> to browse the documentation, the side nav bar is not visible. It is needed 
> for effective navigation of the Ozone documentation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-703:
-

Assignee: Elek, Marton

> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Assignee: Elek, Marton
>Priority: Major
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try 
> to browse the documentation, the side nav bar is not visible. It is needed 
> for effective navigation of the Ozone documentation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-691) Dependency convergence error for org.apache.hadoop:hadoop-annotations

2018-10-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-691:
--
Affects Version/s: 0.2.1
 Target Version/s: 0.3.0

> Dependency convergence error for org.apache.hadoop:hadoop-annotations
> -
>
> Key: HDDS-691
> URL: https://issues.apache.org/jira/browse/HDDS-691
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: HDDS-691_20181018.patch, HDDS-691_20181019.patch
>
>
> {code}
> [WARNING] 
> Dependency convergence error for 
> org.apache.hadoop:hadoop-annotations:3.3.0-SNAPSHOT paths to dependency are:
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.3.0-20181017.235917-140
> +-org.apache.hadoop:hadoop-annotations:3.3.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.3.0-20181017.235917-140
> +-org.apache.hadoop:hadoop-annotations:3.3.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-annotations:3.3.0-20181017.235840-140
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-704) Fix the Dependency convergence issue on HDDS-4

2018-10-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-704:
---

 Summary: Fix the Dependency convergence issue on HDDS-4
 Key: HDDS-704
 URL: https://issues.apache.org/jira/browse/HDDS-704
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


{code}
Dependency convergence error for org.bouncycastle:bcprov-jdk15on:1.54 paths to 
dependency are:
+-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
  +-org.bouncycastle:bcprov-jdk15on:1.54
and
+-org.apache.hadoop:hadoop-hdds-common:0.4.0-SNAPSHOT
  +-org.bouncycastle:bcpkix-jdk15on:1.54
+-org.bouncycastle:bcprov-jdk15on:1.60

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability. See above detailed error message.
{code}
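A convergence failure like this is usually resolved by pinning one version so
that all dependency paths agree. A hypothetical pom.xml fragment (illustrative
only; the actual patch may take a different approach):

{code:xml}
<!-- Illustrative only: pin a single bouncycastle version so that both
     dependency paths converge on the same artifact version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
      <version>1.60</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}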



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657373#comment-16657373
 ] 

Bharat Viswanadham commented on HDDS-693:
-

Hi [~elek]

Patch overall LGTM.

*I have one comment:*
 # We only override read(); we don't override read(byte[] b, int off, int
len). Because of that, we might see slow PUT operations with
SignedInputStream, since every byte goes through a separate read() call.
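A minimal sketch of the kind of bulk override meant here (class and field
names are hypothetical, not from the patch):

{code:java}
// Illustrative sketch only (not the patch): add a bulk read override so every
// byte does not go through a separate read() call. Names are hypothetical.
public class SignedInputStreamSketch extends java.io.InputStream {
  private final java.io.InputStream wrapped;

  public SignedInputStreamSketch(java.io.InputStream wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public int read() throws java.io.IOException {
    return wrapped.read();                 // existing single-byte path
  }

  @Override
  public int read(byte[] b, int off, int len) throws java.io.IOException {
    // Bulk delegation; a real implementation must also stop at the
    // chunk-signature boundaries of the signed stream.
    return wrapped.read(b, off, len);
  }
}
{code}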

> Support multi-chunk signatures in s3g PUT object endpoint
> -
>
> Key: HDDS-693
> URL: https://issues.apache.org/jira/browse/HDDS-693
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute s3a unit tests with our s3 gateway and in 
> ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for 
> path 's3a://buckettest/test' since it is a file.
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that the size is not zero (as it should be
> for a directory entry) but 86. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it's uploaded with a multi-chunk signature.
> In case of the header
> x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD, the body is special:
> multiple signed chunks follow each other, with additional signature lines.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add an initial support for this.
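Per the linked AWS documentation, each chunk is framed by a header line of the
form {{<hex-size>;chunk-signature=<signature>}}, followed by the payload bytes
and a CRLF. A minimal sketch of stripping that framing (illustrative only;
signature verification and error handling are omitted):

{code:java}
// Illustrative sketch only (not the patch): strip the chunk headers and keep
// only the payload bytes.
class ChunkedPayloadSketch {
  static void copyPayload(java.io.InputStream in, java.io.OutputStream out)
      throws java.io.IOException {
    java.io.DataInputStream din = new java.io.DataInputStream(in);
    String header;
    // DataInputStream.readLine is deprecated but good enough for a sketch.
    while ((header = din.readLine()) != null && !header.isEmpty()) {
      int size = Integer.parseInt(header.split(";")[0].trim(), 16);
      if (size == 0) {
        break;                    // final, empty chunk terminates the body
      }
      byte[] payload = new byte[size];
      din.readFully(payload);     // chunk payload follows the header line
      out.write(payload);
      din.readLine();             // consume the CRLF after the payload
    }
  }
}
{code}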



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657372#comment-16657372
 ] 

Ayush Saxena commented on HDFS-14004:
-

{quote}Can we prevent the block from completing without pausing the IBR
{quote}
To my understanding, no: if the block is supposed to be completed and the IBR
is not paused, it will get completed for sure.

But for the last block, the one we want to prevent from getting completed, I
think it wouldn't be in a "supposed to be completed" state right after hflush,
so the IBR should not do any harm.

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14004:

Status: Patch Available  (was: Open)

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13983) TestOfflineImageViewer crashes in windows

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657349#comment-16657349
 ] 

Hadoop QA commented on HDFS-13983:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 33 unchanged - 4 fixed = 33 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13983 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944759/HDFS-13983-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 931caa1ab0f5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9aebafd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25319/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25319/testReport/ |
| Max. process+thread count | 

[jira] [Commented] (HDFS-14010) Pass correct DF usage to ReservedSpaceCalculator builder

2018-10-19 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657347#comment-16657347
 ] 

Lukas Majercak commented on HDFS-14010:
---

002.patch LGTM. Maybe add a warn log when usage == null, and a comment
explaining the unit test.

> Pass correct DF usage to ReservedSpaceCalculator builder
> 
>
> Key: HDFS-14010
> URL: https://issues.apache.org/jira/browse/HDFS-14010
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Minor
> Attachments: HDFS-14010.001.patch, HDFS-14010.002.patch
>
>
> In FsVolumeImpl's constructor, we currently pass the raw DF usage argument
> straight through to ReservedSpaceCalculator.Builder. This can cause issues
> if the usage is replaced inside the constructor.
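In other words, dependent objects must be built from the final field rather
than from the raw constructor argument; a generic illustration (not the actual
FsVolumeImpl code):

{code:java}
// Generic illustration (not the actual FsVolumeImpl code): when a constructor
// may replace an argument, dependent objects must be built from the final
// field, not from the raw parameter.
class VolumeSketch {
  private final Object usage;       // stands in for the DF instance
  private final Object reserved;    // stands in for ReservedSpaceCalculator

  VolumeSketch(Object usageArg) {
    this.usage = (usageArg != null) ? usageArg : new Object(); // replaced here
    // Correct: use this.usage. Passing usageArg instead would hand the
    // builder a null/stale reference whenever the argument was replaced.
    this.reserved = buildCalculator(this.usage);
  }

  private static Object buildCalculator(Object usage) {
    return usage;                   // placeholder for the real builder call
  }
}
{code}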



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-703:
--
Description: 
If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try
to browse the documentation, the side nav bar is not visible. This is needed
for effective navigation of the Ozone documentation.

 

  was:If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and
try to browse the documentation, the side nav bar is not visible. This is
needed for effective navigation of the Ozone documentation. Thanks to
[~aperepel] for reporting this issue.


> Ozone docs does not render correctly on a Mobile Device
> ---
>
> Key: HDDS-703
> URL: https://issues.apache.org/jira/browse/HDDS-703
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Andrew Grande
>Priority: Major
>
> If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try
> to browse the documentation, the side nav bar is not visible. This is needed
> for effective navigation of the Ozone documentation.
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-703) Ozone docs does not render correctly on a Mobile Device

2018-10-19 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-703:
-

 Summary: Ozone docs does not render correctly on a Mobile Device
 Key: HDDS-703
 URL: https://issues.apache.org/jira/browse/HDDS-703
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.2.1
Reporter: Andrew Grande


If you connect to [https://hadoop.apache.org/ozone/docs/0.2.1-alpha/] and try
to browse the documentation, the side nav bar is not visible. This is needed
for effective navigation of the Ozone documentation. Thanks to [~aperepel] for
reporting this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-679) Add query parameter to the constructed query in VirtualHostStyleFilter

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657310#comment-16657310
 ] 

Hudson commented on HDDS-679:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15274 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15274/])
HDDS-679. Add query parameter to the constructed query in (aengineer: rev 
d7b012e5600fa19b330d61a2572499f14fe9bb61)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestVirtualHostStyleFilter.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java


> Add query parameter to the constructed query in VirtualHostStyleFilter
> --
>
> Key: HDDS-679
> URL: https://issues.apache.org/jira/browse/HDDS-679
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-679.00.patch
>
>
> org.apache.hadoop.ozone.s3.VirtualHostStyleFilter supports virtual host style 
> bucket addresses (eg. http://bucket.localhost/ instead of 
> http://localhost/bucket)
> It can be activated by setting ozone.s3g.domain.name to the domain name.
> Based on the configuration it recreates the URL of the request before the 
> request is processed. 
> Unfortunately during this recreation the query part of the URL is lost. (eg 
> http://bucket.localhost/?prefix=/ will be converted to 
> http://localhost/bucket)
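For illustration only: conceptually the fix must carry the query string over
when the URL is rebuilt. A minimal sketch with java.net.URI (the restored host
is hypothetical; the actual patch operates on the JAX-RS request):

{code:java}
// Illustrative only: rebuild the URL for virtual-host-style addressing while
// preserving the query part that was previously dropped.
class VirtualHostRewriteSketch {
  static java.net.URI rewrite(java.net.URI original, String bucket)
      throws java.net.URISyntaxException {
    return new java.net.URI(
        original.getScheme(),
        null,                               // userInfo
        "localhost",                        // hypothetical host minus "bucket."
        original.getPort(),
        "/" + bucket + original.getPath(),  // bucket moves into the path
        original.getQuery(),                // the part that used to be lost
        original.getFragment());
  }
}
{code}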



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-544) Unconditional wait findbug warning from ReplicationSupervisor

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657302#comment-16657302
 ] 

Anu Engineer commented on HDDS-544:
---

[~elek] There is a compilation failure with this patch. Please take a look when 
you get a chance. Thanks
{code:java}
r-service: Compilation failure
[ERROR] 
/Users/aengineer/apache/hadoop/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java:[311,19]
 cannot find symbol{code}

> Unconditional wait findbug warning from ReplicationSupervisor
> -
>
> Key: HDDS-544
> URL: https://issues.apache.org/jira/browse/HDDS-544
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-544.01.patch, HDDS-544.02.patch
>
>
> We have a findbug warning in ReplicationSupervisor:
> {code}
>  Multithreaded correctness Warnings
> Code  Warning
> UWUnconditional wait in 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$Worker.run()
>   
> Details
> UW_UNCOND_WAIT: Unconditional wait
> This method contains a call to java.lang.Object.wait() which is not guarded 
> by conditional control flow.  The code should verify that condition it 
> intends to wait for is not already satisfied before calling wait; any 
> previous notifications will be ignored
> {code}
> This issue is to fix it.
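The standard remedy for UW_UNCOND_WAIT is to re-check the awaited condition in
a loop around wait(), under the same monitor. A generic sketch of the pattern
(not the actual ReplicationSupervisor change):

{code:java}
// Generic guarded-wait pattern that satisfies the findbugs rule: the
// condition is checked before and after every wait(), under the same lock.
class TaskQueueSketch {
  private final java.util.Deque<Runnable> tasks = new java.util.ArrayDeque<>();

  synchronized Runnable take() throws InterruptedException {
    while (tasks.isEmpty()) {   // guard: never wait unconditionally
      wait();                   // releases the monitor until notify/notifyAll
    }
    return tasks.poll();
  }

  synchronized void add(Runnable task) {
    tasks.add(task);
    notifyAll();                // wake any waiting worker
  }
}
{code}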



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-10-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-361:
--
Status: Open  (was: Patch Available)

[~ljain] I am cancelling this patch until you get time to update it. That
keeps it out of the review queues.

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-361.001.patch, HDDS-361.002.patch
>
>
> As part of OM performance improvement we used Tables for storing a particular 
> type of key value pair in the rocks db. This Jira aims to use Tables for 
> separating block keys and deletion transactions in the container db.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-679) Add query parameter to the constructed query in VirtualHostStyleFilter

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657285#comment-16657285
 ] 

Anu Engineer edited comment on HDDS-679 at 10/19/18 7:04 PM:
-

[~elek] Thanks for root causing this issue. [~bharatviswa] Thanks for fixing 
this issue. I have committed this patch to the trunk and ozone-0.3 branches.


was (Author: anu):
[~bharatviswa] Thanks for root causing and fixing this issue. I have committed 
this patch to the trunk and ozone-0.3 branches.

> Add query parameter to the constructed query in VirtualHostStyleFilter
> --
>
> Key: HDDS-679
> URL: https://issues.apache.org/jira/browse/HDDS-679
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-679.00.patch
>
>
> org.apache.hadoop.ozone.s3.VirtualHostStyleFilter supports virtual host style 
> bucket addresses (eg. http://bucket.localhost/ instead of 
> http://localhost/bucket)
> It can be activated by setting ozone.s3g.domain.name to the domain name.
> Based on the configuration it recreates the URL of the request before the 
> request is processed. 
> Unfortunately during this recreation the query part of the URL is lost. (eg 
> http://bucket.localhost/?prefix=/ will be converted to 
> http://localhost/bucket)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-679) Add query parameter to the constructed query in VirtualHostStyleFilter

2018-10-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-679:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~bharatviswa] Thanks for root causing and fixing this issue. I have committed 
this patch to the trunk and ozone-0.3 branches.

> Add query parameter to the constructed query in VirtualHostStyleFilter
> --
>
> Key: HDDS-679
> URL: https://issues.apache.org/jira/browse/HDDS-679
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-679.00.patch
>
>
> org.apache.hadoop.ozone.s3.VirtualHostStyleFilter supports virtual host style 
> bucket addresses (eg. http://bucket.localhost/ instead of 
> http://localhost/bucket)
> It can be activated by setting ozone.s3g.domain.name to the domain name.
> Based on the configuration it recreates the URL of the request before the 
> request is processed. 
> Unfortunately during this recreation the query part of the URL is lost. (eg 
> http://bucket.localhost/?prefix=/ will be converted to 
> http://localhost/bucket)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657284#comment-16657284
 ] 

Hudson commented on HDDS-621:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15273 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15273/])
HDDS-621. ozone genconf improvements. Contributed by Dinesh Chitlangia. (arp: 
rev c456d6b3a5061b99141869f07dc1820ec96b7a67)
* (edit) hadoop-ozone/docs/content/Settings.md
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genconf/GenerateOzoneRequiredConfigurations.java


> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-612) Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock fails with ChillModePrecheck exception

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657283#comment-16657283
 ] 

Hudson commented on HDDS-612:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15273 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15273/])
HDDS-612. Even after setting hdds.scm.chillmode.enabled to false, SCM (arp: rev 
dc2740804330f555dd3262b1db33add7c7ab4ff4)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestScmChillMode.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMChillModeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/ChillModePrecheck.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java


> Even after setting hdds.scm.chillmode.enabled to false, SCM allocateblock 
> fails with ChillModePrecheck exception
> 
>
> Key: HDDS-612
> URL: https://issues.apache.org/jira/browse/HDDS-612
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-612.001.patch, HDDS-612.002.patch, 
> HDDS-612.003.patch
>
>
> {code:java}
> 2018-10-09 23:11:58,047 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 0 on 9863, call Call#70 Retry#0 
> org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock 
> from 172.27.56.9:53442
> org.apache.hadoop.hdds.scm.exceptions.SCMException: ChillModePrecheck failed 
> for allocateBlock
> at 
> org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:38)
> at 
> org.apache.hadoop.hdds.scm.server.ChillModePrecheck.check(ChillModePrecheck.java:30)
> at org.apache.hadoop.hdds.scm.ScmUtils.preCheck(ScmUtils.java:42)
> at 
> org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:191)
> at 
> org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6255)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
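For context, the setting in question lives in ozone-site.xml; with this fix,
the precheck should honor it:

{code:xml}
<!-- ozone-site.xml: disable SCM chill mode, which allocateBlock should
     respect once this issue is fixed. -->
<property>
  <name>hdds.scm.chillmode.enabled</name>
  <value>false</value>
</property>
{code}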



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-658) Implement s3 bucket list backend call and use it from rest endpoint

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657269#comment-16657269
 ] 

Bharat Viswanadham edited comment on HDDS-658 at 10/19/18 6:48 PM:
---

[~elek]

For this jira, you are proposing an implementation in OM which takes the
username and returns the volume for that user if it exists, or else returns
"NO VOLUME for user" (where we can check this and just return an empty
response). Let me know if my understanding is correct.


was (Author: bharatviswa):
[~elek]

For this jira, you are proposing an implementation in OM which takes the
username and returns the volume for that user if it exists, or else returns
"NO VOLUME for user" (where we can map this to just return an empty response).
Let me know if my understanding is correct.

> Implement s3 bucket list backend call and use it from rest endpoint
> ---
>
> Key: HDDS-658
> URL: https://issues.apache.org/jira/browse/HDDS-658
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-657 provides very basic functionality for listing buckets. There are
> two problems there:
>  # It repeats the username -> volume name mapping convention.
>  # It doesn't work if the volume doesn't exist (no s3 buckets created, yet).
> The proper solution is to do the same on the server side:
>  # Use the existing naming convention in OM.
>  # Return an empty list in case the value is missing.
> It requires an additional rpc call to the om.
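A hedged sketch of the server-side shape being discussed (the class, the map,
and the naming convention are all illustrative, not the OM API):

{code:java}
// Hypothetical sketch of the behavior under discussion; the map stands in for
// OM metadata and the naming convention is illustrative, not the OM API.
class S3BucketListSketch {
  private final java.util.Map<String, java.util.List<String>> volumes =
      new java.util.HashMap<>();

  java.util.List<String> listS3Buckets(String userName) {
    String volumeName = "s3" + userName;  // assumed username -> volume mapping
    java.util.List<String> buckets = volumes.get(volumeName);
    // A missing volume just means no s3 buckets were created yet:
    // return an empty list rather than an error.
    return buckets != null ? buckets : java.util.Collections.emptyList();
  }
}
{code}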



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-658) Implement s3 bucket list backend call and use it from rest endpoint

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657269#comment-16657269
 ] 

Bharat Viswanadham commented on HDDS-658:
-

[~elek]

For this jira, you are proposing an implementation in OM which takes the
username and returns the volume for that user if it exists, or else returns
"NO VOLUME for user" (where we can map this to just return an empty response).
Let me know if my understanding is correct.

> Implement s3 bucket list backend call and use it from rest endpoint
> ---
>
> Key: HDDS-658
> URL: https://issues.apache.org/jira/browse/HDDS-658
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-657 provides very basic functionality for listing buckets. There are
> two problems there:
>  # It repeats the username -> volume name mapping convention.
>  # It doesn't work if the volume doesn't exist (no s3 buckets created, yet).
> The proper solution is to do the same on the server side:
>  # Use the existing naming convention in OM.
>  # Return an empty list in case the value is missing.
> It requires an additional rpc call to the om.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-621) ozone genconf improvements

2018-10-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657264#comment-16657264
 ] 

Dinesh Chitlangia commented on HDDS-621:


[~arpitagarwal] Thanks for review and commit.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-679) Add query parameter to the constructed query in VirtualHostStyleFilter

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657261#comment-16657261
 ] 

Anu Engineer commented on HDDS-679:
---

+1, I will commit this soon.

> Add query parameter to the constructed query in VirtualHostStyleFilter
> --
>
> Key: HDDS-679
> URL: https://issues.apache.org/jira/browse/HDDS-679
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-679.00.patch
>
>
> org.apache.hadoop.ozone.s3.VirtualHostStyleFilter supports virtual host style 
> bucket addresses (eg. http://bucket.localhost/ instead of 
> http://localhost/bucket)
> It can be activated by setting ozone.s3g.domain.name to the domain name.
> Based on the configuration it recreates the URL of the request before the 
> request is processed. 
> Unfortunately during this recreation the query part of the URL is lost. (eg 
> http://bucket.localhost/?prefix=/ will be converted to 
> http://localhost/bucket)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657258#comment-16657258
 ] 

Anu Engineer commented on HDDS-702:
---

I have verified that acceptance tests pass with this patch. Thx.

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In its current form, the project uses the in-tree snapshot version of
> hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the hadoop jars which could be
> independent from the in-tree hadoop.
> 1. By using an already released hadoop (such as hadoop-3.1) we can upload
> the ozone jar files to the maven repository without pseudo-releasing the
> hadoop snapshot dependencies. (In the current form it's not possible without
> also uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of hadoop, the build could be faster and the
> yetus builds could be simplified (it's very easy to identify the projects
> which should be checked/tested if only the hdds/ozone projects are part of
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version,
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378,
> and previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-621) ozone genconf improvements

2018-10-19 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-621:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

+1

Thanks [~dineshchitlangia]. I've committed this.

> ozone genconf improvements
> --
>
> Key: HDDS-621
> URL: https://issues.apache.org/jira/browse/HDDS-621
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-621.001.patch, HDDS-621.002.patch, 
> HDDS-621.003.patch
>
>
> A few potential improvements to genconf:
>  # -Path should be optional :default to current config directory 
> _etc/hadoop_.-
>  # genconf silently overwrites existing _ozone-site.xml_. It should never do 
> so.
>  # The generated config file should have _ozone.enabled = true_.
>  # -Have a {{pseudo}} option to generate configs for starting pseudo-cluster. 
> This should be useful for quick dev-testing.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13994) Improve DataNode BlockSender waitForMinLength

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657242#comment-16657242
 ] 

Hudson commented on HDFS-13994:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15272 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15272/])
HDFS-13994. Improve DataNode BlockSender waitForMinLength. Contributed 
(inigoiri: rev 8b64fbab1a4c7d65a5daf515c2d170d6a2fd4917)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplicaInPipeline.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


> Improve DataNode BlockSender waitForMinLength
> -
>
> Key: HDFS-13994
> URL: https://issues.apache.org/jira/browse/HDFS-13994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13994.1.patch, HDFS-13994.2.patch, 
> HDFS-13994.3.patch, HDFS-13994.4.patch, HDFS-13994.5.patch
>
>
> {code:java|title=BlockSender.java}
>   private static void waitForMinLength(ReplicaInPipeline rbw, long len)
>   throws IOException {
> // Wait for 3 seconds for rbw replica to reach the minimum length
> for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException ie) {
> throw new IOException(ie);
>   }
> }
> long bytesOnDisk = rbw.getBytesOnDisk();
> if (bytesOnDisk < len) {
>   throw new IOException(
>   String.format("Need %d bytes, but only %d bytes available", len,
>   bytesOnDisk));
> }
>   }
>  {code}
> It is not very efficient to poll for status in this way.  Instead, use 
> {{notifyAll}} within the {{ReplicaInPipeline}} to notify the caller when the 
> replica has reached a certain size.
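A generic sketch of that wait/notify direction (illustrative only, not the
committed patch):

{code:java}
// Illustrative wait/notify alternative to the 100 ms polling loop: the writer
// notifies whenever bytesOnDisk grows, and the reader waits with a deadline.
class ReplicaSketch {
  private long bytesOnDisk;

  synchronized void setBytesOnDisk(long n) {
    bytesOnDisk = n;
    notifyAll();                       // wake readers waiting for min length
  }

  synchronized void waitForMinLength(long len, long timeoutMs)
      throws java.io.IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (bytesOnDisk < len) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        throw new java.io.IOException(String.format(
            "Need %d bytes, but only %d bytes available", len, bytesOnDisk));
      }
      wait(remaining);                 // no busy polling
    }
  }
}
{code}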



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9872) HDFS bytes-default configurations should accept multiple size units

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657243#comment-16657243
 ] 

Hudson commented on HDFS-9872:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15272 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15272/])
HDFS-9872. HDFS bytes-default configurations should accept multiple size 
(inigoiri: rev 88cce32551e6d52fd1c5a5bfd6c41499bf6ab1ab)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReservedSpaceCalculator.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java


> HDFS bytes-default configurations should accept multiple size units
> ---
>
> Key: HDFS-9872
> URL: https://issues.apache.org/jira/browse/HDFS-9872
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-9872.001.patch, HDFS-9872.002.patch, 
> HDFS-9872.003.patch, HDFS-9872.004.patch
>
>
> In HDFS-1314 and HDFS-9842, some configurations were made to accept
> multiple size units in a friendly way, so that 134217728 can also be written
> as 128m, or 8048 can be replaced by 8k. In some configurations the value is
> large; for example, the default value of
> {{dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold}}
> is 10g, equal to 10737418240. Obviously, the raw number is not convenient to
> read or transform. So we should make the remaining HDFS bytes-default
> configurations, which currently take only a plain byte count, accept
> multiple size units in the same friendly way.
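As an illustration, after this change a value like the threshold above could
be written with a unit suffix (hypothetical snippet; such properties are read
through Hadoop's byte-size parsing, e.g. {{Configuration.getLongBytes}}):

{code:xml}
<!-- hdfs-site.xml: "10g" instead of the raw 10737418240 -->
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name>
  <value>10g</value>
</property>
{code}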



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657225#comment-16657225
 ] 

Íñigo Goiri commented on HDFS-14004:


The whole point of this test is to keep a lease open and then close it.
Can we prevent the block from completing without pausing the IBR?

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12355) Webhdfs needs to support encryption zones.

2018-10-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12355:
---
Issue Type: New Feature  (was: Bug)

> Webhdfs needs to support encryption zones.
> --
>
> Key: HDFS-12355
> URL: https://issues.apache.org/jira/browse/HDFS-12355
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
>
> Will create a sub tasks.
> 1. Add fsserverdefaults to {{NamenodeWebhdfsMethods}}.
> 2. Return File encryption info in {{GETFILESTATUS}} call from 
> {{NamenodeWebhdfsMethods}}
> 3. Adding {{CryptoInputStream}} and {{CryptoOutputStream}} to InputStream and 
> OutputStream.
> 4. {{WebhdfsFilesystem}} needs to acquire kms delegation token from kms 
> servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12355) Webhdfs needs to support encryption zones.

2018-10-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657206#comment-16657206
 ] 

Wei-Chiu Chuang commented on HDFS-12355:


The ship has sailed for 3.2.0. Retargeting to 3.3.0.

> Webhdfs needs to support encryption zones.
> --
>
> Key: HDFS-12355
> URL: https://issues.apache.org/jira/browse/HDFS-12355
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
>
> Will create a sub tasks.
> 1. Add fsserverdefaults to {{NamenodeWebhdfsMethods}}.
> 2. Return File encryption info in {{GETFILESTATUS}} call from 
> {{NamenodeWebhdfsMethods}}
> 3. Adding {{CryptoInputStream}} and {{CryptoOutputStream}} to InputStream and 
> OutputStream.
> 4. {{WebhdfsFilesystem}} needs to acquire kms delegation token from kms 
> servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-701) Support key multi-delete

2018-10-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657207#comment-16657207
 ] 

Hudson commented on HDDS-701:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15271 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15271/])
HDDS-701. Support key multi-delete. Contributed by Elek, Marton. (aengineer: 
rev 9aebafd2da44fd048d201b6ea5a043d7dda3dad9)
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/MultiDeleteRequest.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/HeaderPreprocessor.java
* (add) hadoop-ozone/dist/src/main/smoketest/s3/objectmultidelete.robot
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectMultiDelete.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/MultiDeleteResponse.java


> Support key multi-delete
> 
>
> Key: HDDS-701
> URL: https://issues.apache.org/jira/browse/HDDS-701
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-701.001.patch
>
>
> The s3a unit tests use multi-delete for object deletion (see 
> https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html)
> To get meaningful results we need a basic implementation of multi-delete.
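For reference, the request body defined by the AWS API linked above looks like
this (keys are examples):

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<Delete>
  <Quiet>true</Quiet>
  <Object><Key>key1</Key></Object>
  <Object><Key>key2</Key></Object>
</Delete>
{code}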



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12355) Webhdfs needs to support encryption zones.

2018-10-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12355:
---
Target Version/s: 3.3.0  (was: 3.2.0)

> Webhdfs needs to support encryption zones.
> --
>
> Key: HDFS-12355
> URL: https://issues.apache.org/jira/browse/HDFS-12355
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
>
> Will create a sub tasks.
> 1. Add fsserverdefaults to {{NamenodeWebhdfsMethods}}.
> 2. Return File encryption info in {{GETFILESTATUS}} call from 
> {{NamenodeWebhdfsMethods}}
> 3. Adding {{CryptoInputStream}} and {{CryptoOutputStream}} to InputStream and 
> OutputStream.
> 4. {{WebhdfsFilesystem}} needs to acquire kms delegation token from kms 
> servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14004:

Attachment: HDFS-14004-01.patch

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657197#comment-16657197
 ] 

Ayush Saxena commented on HDFS-14004:
-

bq.  I don't know if removing the block report pause will break the test 
scenario.

IIUC, pauseIBR was added to prevent the block from getting into the completed
state. If it gets completed, we wouldn't get that exception message. But in
general the block here changes to complete only when a new block is requested
(i.e. when the second block is requested), and the last block, the one we are
checking, generally gets completed only when close is called (which we don't
call). If there is some other scenario which could complete it, then falling
back to option 2, as [~knanasi] already said, is the safest of all.

But in practical deployments it wouldn't be a usual scenario for just one
block to be written while IBRs are paused; IBRs are never on pause there.

Triggered v1 to check whether the results on my local machine are in sync
with Jenkins.

[~elgoiri] Any suggestions from your side?

> TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk
> ---
>
> Key: HDFS-14004
> URL: https://issues.apache.org/jira/browse/HDFS-14004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14004-01.patch
>
>
> Reference
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657195#comment-16657195
 ] 

Bharat Viswanadham commented on HDDS-702:
-

Sorry, looking more into the patch I got a question: why can't we remove the
parent from the hadoop-hdds and hadoop-ozone pom.xml?

{code:xml}
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-project</artifactId>
  <version>3.2.1-SNAPSHOT</version>
</parent>
{code}

And declare the versions of the jars which we are using in these modules in
{{dependencyManagement}}? Anyway, when we are releasing Hadoop, we are
removing these modules completely.

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In its current form, the project uses the in-tree snapshot version of
> hadoop (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the hadoop jars which could be
> independent from the in-tree hadoop.
> 1. By using an already released hadoop (such as hadoop-3.1) we can upload
> the ozone jar files to the maven repository without pseudo-releasing the
> hadoop snapshot dependencies. (In the current form it's not possible without
> also uploading a custom, ozone flavour of hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of hadoop, the build could be faster and the
> yetus builds could be simplified (it's very easy to identify the projects
> which should be checked/tested if only the hdds/ozone projects are part of
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version,
> because:
> 1) we have no more proto file dependency between hdds and hdfs (HDDS-378,
> and previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-691) Dependency convergence error for org.apache.hadoop:hadoop-annotations

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657173#comment-16657173
 ] 

Hadoop QA commented on HDDS-691:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
52m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-691 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944741/HDDS-691_20181019.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 69bac8d18e47 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b22651e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1461/testReport/ |
| Max. process+thread count | 436 (vs. ulimit of 1) |
| modules | C: hadoop-project hadoop-hdds/common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1461/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Dependency convergence error for org.apache.hadoop:hadoop-annotations
> 

[jira] [Updated] (HDFS-9872) HDFS bytes-default configurations should accept multiple size units

2018-10-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-9872:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the patch.
Committed to trunk.

> HDFS bytes-default configurations should accept multiple size units
> ---
>
> Key: HDFS-9872
> URL: https://issues.apache.org/jira/browse/HDFS-9872
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-9872.001.patch, HDFS-9872.002.patch, 
> HDFS-9872.003.patch, HDFS-9872.004.patch
>
>
> In HDFS-1314 and HDFS-9842, some configurations were made to accept multiple 
> size units, so that 134217728 can also be written as 128m, or 8192 as 8k. In 
> some configurations the value will be large, like 
> {{dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold}},
>  whose default value is 10g, equal to 10737418240. Obviously, it's not 
> convenient to convert such values by hand. So we could make more of the hdfs 
> bytes-default configurations, which currently take no size unit, accept 
> multiple size units.
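
With this change, a property like the one above can be set with a unit suffix 
directly; a minimal sketch (property name from the description above, value 
illustrative):

{code:xml}
<!-- hdfs-site.xml: "10g" is parsed as 10737418240 bytes -->
<property>
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name>
  <value>10g</value>
</property>
{code}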



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13994) Improve DataNode BlockSender waitForMinLength

2018-10-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13994:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~belugabehr] for the patch.
Committed to trunk.

> Improve DataNode BlockSender waitForMinLength
> -
>
> Key: HDFS-13994
> URL: https://issues.apache.org/jira/browse/HDFS-13994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-13994.1.patch, HDFS-13994.2.patch, 
> HDFS-13994.3.patch, HDFS-13994.4.patch, HDFS-13994.5.patch
>
>
> {code:java|title=BlockSender.java}
>   private static void waitForMinLength(ReplicaInPipeline rbw, long len)
>   throws IOException {
> // Wait for 3 seconds for rbw replica to reach the minimum length
> for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException ie) {
> throw new IOException(ie);
>   }
> }
> long bytesOnDisk = rbw.getBytesOnDisk();
> if (bytesOnDisk < len) {
>   throw new IOException(
>   String.format("Need %d bytes, but only %d bytes available", len,
>   bytesOnDisk));
> }
>   }
>  {code}
> It is not very efficient to poll for status in this way.  Instead, use 
> {{notifyAll}} within the {{ReplicaInPipeline}} to notify the caller when the 
> replica has reached a certain size.
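
A minimal sketch of the wait/notify approach suggested above, using a 
hypothetical tracker object rather than the real {{ReplicaInPipeline}} API:

{code:java|title=Wait/notify sketch (hypothetical API)}
class ReplicaBytesTracker {
  private long bytesOnDisk;

  // Writer side: called after data has been flushed to disk.
  synchronized void onBytesWritten(long newBytesOnDisk) {
    bytesOnDisk = newBytesOnDisk;
    notifyAll(); // wake any reader blocked in waitForMinLength()
  }

  // Reader side: block until at least len bytes are on disk, or time out.
  synchronized void waitForMinLength(long len, long timeoutMs)
      throws java.io.IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (bytesOnDisk < len) { // loop guards against spurious wakeups
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        throw new java.io.IOException(String.format(
            "Need %d bytes, but only %d bytes available", len, bytesOnDisk));
      }
      wait(remaining); // releases the lock; woken by notifyAll()
    }
  }
}
{code}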



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13994) Improve DataNode BlockSender waitForMinLength

2018-10-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13994:
---
Summary: Improve DataNode BlockSender waitForMinLength  (was: DataNode 
BlockSender waitForMinLength)

> Improve DataNode BlockSender waitForMinLength
> -
>
> Key: HDFS-13994
> URL: https://issues.apache.org/jira/browse/HDFS-13994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13994.1.patch, HDFS-13994.2.patch, 
> HDFS-13994.3.patch, HDFS-13994.4.patch, HDFS-13994.5.patch
>
>
> {code:java|title=BlockSender.java}
>   private static void waitForMinLength(ReplicaInPipeline rbw, long len)
>   throws IOException {
> // Wait for 3 seconds for rbw replica to reach the minimum length
> for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException ie) {
> throw new IOException(ie);
>   }
> }
> long bytesOnDisk = rbw.getBytesOnDisk();
> if (bytesOnDisk < len) {
>   throw new IOException(
>   String.format("Need %d bytes, but only %d bytes available", len,
>   bytesOnDisk));
> }
>   }
>  {code}
> It is not very efficient to poll for status in this way.  Instead, use 
> {{notifyAll}} within the {{ReplicaInPipeline}} to notify the caller when the 
> replica has reached a certain size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657156#comment-16657156
 ] 

Bharat Viswanadham edited comment on HDDS-702 at 10/19/18 5:31 PM:
---

+1 LGTM.

I think we can commit this patch; when Hadoop 3.2 gets released, the only 
thing we need to change is hadoop.version to 3.2. (Not sure if anything 
additional needs to be done.)
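
For reference, a hedged sketch of the kind of one-line change that would mean 
in the pom (the hadoop.version property is named above; its exact location in 
the build may differ):

{code:xml}
<properties>
  <!-- illustrative: bump once Hadoop 3.2 is released -->
  <hadoop.version>3.2.0</hadoop.version>
</properties>
{code}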

 

I have tried it out and was able to build successfully: 

mvn clean install -T 6 -Pdist -Phdds -DskipTests -Dmaven.javadoc.skip=true -am 
-pl :hadoop-ozone-dist


was (Author: bharatviswa):
+1 LGTM.

I think we can commit this patch; when Hadoop 3.2 gets released, the only 
thing we need to change is hadoop.version to 3.2. (Not sure if anything 
additional needs to be done.)

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In its current form, ozone uses the in-tree snapshot version of hadoop 
> (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the hadoop jars, independent of the 
> in-tree hadoop.
> 1. By using an already released hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone-flavoured hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested when only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we no longer have a proto file dependency between hdds and hdfs (HDDS-378, 
> and previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-702) Used fixed/external version from hadoop jars in hdds/ozone projects

2018-10-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657156#comment-16657156
 ] 

Bharat Viswanadham commented on HDDS-702:
-

+1 LGTM.

I think we can commit this patch; when Hadoop 3.2 gets released, the only 
thing we need to change is hadoop.version to 3.2. (Not sure if anything 
additional needs to be done.)

> Used fixed/external version from hadoop jars in hdds/ozone projects
> ---
>
> Key: HDDS-702
> URL: https://issues.apache.org/jira/browse/HDDS-702
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-702.001.patch
>
>
> In its current form, ozone uses the in-tree snapshot version of hadoop 
> (hadoop 3.3.0-SNAPSHOT as of now).
> I propose to use a fixed version of the hadoop jars, independent of the 
> in-tree hadoop.
> 1. By using an already released hadoop (such as hadoop-3.1) we can upload the 
> ozone jar files to the maven repository without pseudo-releasing the hadoop 
> snapshot dependencies. (In the current form it's not possible without also 
> uploading a custom, ozone-flavoured hadoop-common/hadoop-hdfs.)
> 2. By using a fixed version of hadoop, the build could be faster and the 
> yetus builds could be simplified (it's very easy to identify the projects 
> which should be checked/tested when only the hdds/ozone projects are part of 
> the build: we can do full builds/tests all the time).
> After the previous work it's possible to switch to a fixed hadoop version, 
> because:
> 1) we no longer have a proto file dependency between hdds and hdfs (HDDS-378, 
> and previous work by Mukul and Nanda)
> 2) we don't need to depend on the in-tree hadoop-project-dist (HDDS-447)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657154#comment-16657154
 ] 

Íñigo Goiri commented on HDFS-14011:


Thanks [~ajisakaa] for taking this; it has been a TODO for a while.
The new unit test looks good.

For the new failures, there must be something basic that we need to change; 
can you take a look?

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14011.01.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the mount point's 
> information; therefore, 'hdfs dfs -ls' on a directory that includes a mount 
> point returns incorrect information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9872) HDFS bytes-default configurations should accept multiple size units

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657152#comment-16657152
 ] 

Íñigo Goiri commented on HDFS-9872:
---

+1 on  [^HDFS-9872.004.patch].
Committing shortly to trunk.

> HDFS bytes-default configurations should accept multiple size units
> ---
>
> Key: HDFS-9872
> URL: https://issues.apache.org/jira/browse/HDFS-9872
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-9872.001.patch, HDFS-9872.002.patch, 
> HDFS-9872.003.patch, HDFS-9872.004.patch
>
>
> In HDFS-1314 and HDFS-9842, some configurations were made to accept multiple 
> size units, so that 134217728 can also be written as 128m, or 8192 as 8k. In 
> some configurations the value will be large, like 
> {{dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold}},
>  whose default value is 10g, equal to 10737418240. Obviously, it's not 
> convenient to convert such values by hand. So we could make more of the hdfs 
> bytes-default configurations, which currently take no size unit, accept 
> multiple size units.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-701) Support key multi-delete

2018-10-19 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-701:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for the contribution. I have committed this to trunk and 
ozone-0.3. I had to rebase the patch onto the top of trunk since there were 
some conflicts in the bucket endpoint includes. Please make sure it looks 
correct when you get a chance.

> Support key multi-delete
> 
>
> Key: HDDS-701
> URL: https://issues.apache.org/jira/browse/HDDS-701
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-701.001.patch
>
>
> The s3a unit tests use multi-delete for object deletion (see 
> https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html)
> To get meaningful results we need to have a basic implementation of 
> multi-delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9872) HDFS bytes-default configurations should accept multiple size units

2018-10-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657132#comment-16657132
 ] 

Hadoop QA commented on HDFS-9872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 374 
unchanged - 2 fixed = 374 total (was 376) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-9872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944728/HDFS-9872.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |